Podcast: AI in HR: What’s the Worst That Could Happen? (Live from SMA)
Live from the Southern Management Association conference in Greenville, SC, Frank hosts a special episode of The Busyness Paradox with guest Dr. Julie Hancock, Director of the People Center at the University of North Texas (https://cob.unt.edu/mgmt/tpc/index.html). With Paul “replaced by AI” for the day, the conversation explores the intersection of artificial intelligence and human resources—and what happens when algorithms meet employee lives.
They tackle big questions:
• Should we actually worry when AI isn’t involved in hiring?
• Can algorithms remove bias—or just learn new ones from us?
• Does AI improve HR decisions, or simply automate bad ones faster?
• What skills will the next generation need to thrive alongside AI?
• How do we protect the “human” side of human resources?
Along the way, they dive into:
• The dangers of AI hallucinations (including fake meta-analyses and citations).
• Students using ChatGPT in creative — and not-so-creative — ways.
• HR’s reputation as the “Grim Reaper” and how AI could make that worse.
• Whether robots will eventually achieve citizenship, feelings, or just vacuum better.
• Why prompt engineering may become the next big job field.
This episode blends humor, real-world HR insight, and a live audience eager to jump into the discussion. If you care about work, people, ethics, or the future of HR, this one’s for you.
Come visit us at busynessparadox.com to see episode transcripts, blog posts and other content while you’re there!
Untitled – October 25, 2025
00:00:00 Frank Butler: Hello, busybodies. Welcome to another episode of The Busyness Paradox. I am Frank Butler, here with Paul Harvey. Paul. Paul. I, I darn it, I think AI might have fired him. No. In all seriousness, though, Paul has a family situation to attend to, so he’s not able to attend. This is a live episode at the Southern Management Association conference in Greenville, South Carolina. Yay! This is an exciting time for us. It’s an exciting episode. We’re going to be talking about AI and its impact on the human resources area. Um, and we have a special guest this year. It is Julie Hancock, who is the director of the People Center at the University of North Texas, and she’s also an associate professor there. Julie, thank you so much for being our guest. Thanks so much, Frank and Paul, for having me. Um, like Frank said, I’m the director of the People Center. What we are is an academic center that focuses on HR-related things within the UNT community. That runs from student experience to our corporate outreach, um, bringing in alums and corporate partners, making sure that they are connected with our students, and also, of course, why we’re all here at the Southern Management Association conference: in addition to the fun party that it is, um, being able to coordinate our research in a way that is useful for practitioners. Yeah, that sounds really important, and like important work. And I’m sure the, uh, Dallas area appreciates all the hard work you guys do. If you want, go check them out. We’ll drop a link to the People Center in the show notes. Uh, they are doing some very cool things, but I’m sure you would be happy to work with others that are not in the Dallas area. Absolutely, yes. And I’d also like to highlight and thank my colleague, Maria Gavrilovna Aguilar, for being here today. She is our Director of Student Experience. Wonderful. And thank you for being here.
We do have a great audience today, and, uh, we will open up the floor to some questions in a little bit. Uh, since Paul Harvey has been dismissed by the AI HR that The Busyness Paradox has employed, um, he did send some parting questions, uh, for Julie. And so, uh, the first one that he sent was: there’s not a lot of enthusiasm for the idea of AI bots making decisions about things like hiring, firing, and promotions. But given our well-documented susceptibility to things like halo effects, reverse halo effects, primacy effects, recency effects, anchoring bias (you can tell Paul had a lot of time to figure this out), confirmation bias, affinity bias, and so on when evaluating job candidates and employees, should we be more concerned when an AI bot is not involved in the process? I think that’s a really great question, and it’s something that I’ve recently had the opportunity to chat about with some HR professionals in the metroplex. A couple of weeks ago, we hosted an event, um, through the People Center and had Dr. David Swanigan, who is the editor of the Machine Learning journal. Um, and he came in and did a talk on how AI is being created by IT and then implemented, and, you know, where HR needs to step in. And one of the conversations we did have was about those biases. Um, it’s also a conversation I’ve had a lot with my doctoral students and other researchers. So it’s getting a lot of attention and discussion from both a practice and a research perspective. And, you know, I think one of the things to consider is that these algorithms and the code that goes into them, um, they are initiated by humans, right? And so we have to pay attention to the fact that there can still be biases.
And as the algorithms continue to look through resumes and other selection mechanisms, we have to consider the fact that they may actually develop some biases, even though we think that they’re kind of benign in that way. But they develop patterns, right? So if they’re seeing, um, you know, that certain organizations are hiring from a pool of applicants a certain demographic or something like that, certain qualifications from certain schools, things like that, then those biases can get put into those algorithms over time. So I think it’s, you know, taking a step back to say: if I was an applicant applying for a job, am I going to feel more encouraged or discouraged by the fact that there is not an actual person making some of those decisions? Right? And I think for a long time, even without AI the way we currently see it today, um, ten, fifteen years ago, when people started using it for, you know, culling resumes and things like that, people were saying, oh, well, you have to craft it for the AI, you have to put in these certain keywords and these certain phrases in your cover letter to make sure that you get past that initial screening stage. And I think that can create some great opportunities to free up some time for HR professionals. But I think it also raises the question: where is the line where that, you know, creates problems versus enthusiasm surrounding AI being a part of that process? And if we could guarantee, like, hey, we don’t have any biases anymore because we’re using this AI technology, and we can guarantee that, then I think that would be wonderful. However, I don’t think that we’re at a point now with AI. I mean, I think it’s coming, but at this point that’s not possible. I think that’s actually extremely interesting, because you do still hear about how AI can be manipulated in certain ways to, I guess, generate these biases that people have.
I mean, you know, you’ve got to figure that it’s learning from us and all the content we’ve put out there. And I remember Microsoft running into this problem on Twitter, or now X or whatever it’s called. Who knows? Um, but I remember that very clearly, that it apparently would turn very racist very quickly. And so that shows you how much AI is still a work in progress. Right. And, um, I think we’ve heard about this a lot from the tech world. We always talk about garbage in, garbage out, right? And there’s a lot of garbage out there that will do that. And I think that’s an interesting point to think about. Now, just as an aside here, I know we had a colleague at UTC. Um, he has since left us to go elsewhere, into greener pastures. But, uh, he did some research on how the combination of AI and human expertise made better decisions: AI tended to make better decisions than the person alone, but together they were substantially better. And that’s kind of cool research. His context was looking at Samsung’s theme park in South Korea. I don’t know how many of you all know that Samsung has a giant park, like Disney, essentially. And, uh, the experts were trying to predict what the daily traffic would be for the park, and AI was doing it too. It was something like: humans got it right seventy-two, seventy-three percent of the time, AI was at like eighty-five percent, but together it was like ninety-four percent. So I think, you know, obviously human intervention here is great to have. Yeah, absolutely. Otherwise we’re going to end up in, somebody made the reference to the Terminator movie, right, which I’ve never actually seen. But, um, I think it’s the idea that we want to make sure that there’s still this human touch. I mean, it is human resources, right? We’re in human resources. There is this element of humanity that we’d like to still be able to tap into.
And just like many other areas, you know, we’ve been using tools for selection for a long, long time. And so over time, you do research on those. You hone them, both in practice and theoretically, to make sure that they align with the overall job roles that we’re trying to fill. And if we aren’t putting our own human touch on that, then it just kind of leads us into a Wild Robot situation. So, um, I love that point that if you have them working together, if we can work with the AI and harness it to get the best talent. One of the topics that came up: one of the members that was at this event said, well, I don’t want to be the person implementing this in HR and be the bad person again because we’re implementing AI. I don’t want to be that bad person that’s saying, like, oh, we’re bringing it in. And the kind of counterargument to that was: well, if HR is not putting their hands into it and getting into it, and then IT people are bringing it in, then we have this disconnect of, okay, this is how it works. IT wants to make sure that it works and that it hopefully isn’t producing those biases and things like that, like, how can we get the best product out there? But if HR can’t come in and implement that in a way that is going to make the culture thrive, then I think that leads to a lot of other issues that we can see with, you know, implementing AI into not just selection but a variety of other aspects of HR, many of which can be very helpful and, you know, lessen time, increase efficiency. But if our employees are just seeing us putting out, you know, AI to do what we consider to be a human job, then that creates a culture, I think, of disconnect, and I think it creates a lot of stress and fear of job loss. And people may go look for greener pastures, right? To see, well, I may want to leave this particular profession because I think my job is going to be taken by AI.
So what is it that I can, you know, do instead? And so they may leave, and that may not be what your organization is going to be using AI for. It may be just to streamline and be more efficient, but that may not necessarily mean that you’re having to remove jobs. Right. So yeah, I think that’s an interesting take there. You know, we are seeing a lot more layoffs happening, and you’re hearing a lot of companies’ CEOs saying it’s related to AI replacing some of these jobs. And sometimes I feel like it’s a statement they’re making to justify perhaps poor performance as it is, or upcoming poor performance, and they’re just saying, oh, we can replace them with AI, versus it sometimes actually being AI. But, you know, it’s an interesting time we’re living in from a technology transition standpoint. And, uh, you know, I think we all deal with this differently because, you know, as you say, it’s like, if AI can replace my job, right, um, that becomes an interesting thing. I was just having a conversation with somebody the other day, and we were talking about, well, you know, what we’ve really got to do, being in higher education, is teach our students the competencies so they can, again, kind of like with this idea of the human and the AI working together, be better. If they can be replaced by AI, that probably means that they’re not using the competencies and continuing this lifelong learning to help leverage AI as a tool, and use it as a tool versus a replacement. And I think that’s what we’re seeing right now, is that some people are maybe relying too much on AI versus their own personal development. So do you think there’s something that we can do, or that organizations can do, to help make it so the AI is still a tool versus a replacement? Yeah, I think that starts with everyone in this room, honestly.
Um, you know, I’ve had conversations with our board members about this because, you know, what we’re doing at universities, in addition to the research and the other things that we do, is, you know, our goal is to teach students how to go into the world and be successful talent wherever it is that they land. And I think that we see this because certain aspects of AI, like ChatGPT and Copilot, these are all newer developments. And I think as faculty, maybe we’re not as comfortable with it, because what we see and what we interpret is sometimes students using this in ways that are perhaps not as ethical. Right. Like, I am going to be very frustrated if I get a paper submitted by a student that is fully written by AI. The worst. It is the worst. And it takes a lot of time from us to, you know, police that. But in talking to some of our board members, who are all very prominent HR professionals in the metroplex and beyond, um, one of the things that they’re wanting to see, and that I think is very pertinent to this conversation, is how we as faculty can teach our students how to utilize these things in a way that’s creating efficiency and that is ethical. But part of the problem there is, I think we have no real boundary conditions to look at it through. Right? We are all kind of trying to figure out, well, what is the ethical way to utilize AI? Like, what is that gray line, um, between using it to help us do our jobs better, to help us, you know, um, do references for a paper, for example, or to help us come up with an idea for a training program that would benefit our new employees, our socialization process, or things like that, versus using it to generate ideas that we’re now going to call our own. And I think that starts with us as faculty.
And so we’re actually putting together a panel to come out and have a discussion with our faculty, because, you know, I’m sure everyone in this room has faculty that are really excited about AI and maybe encouraging students to use it in the classroom, trying to teach to the point where it’s like, okay, these are the ways that you’re allowed to use it, and being very clear. But many universities don’t necessarily have a specific policy on that, because we’re all navigating it. And then, I think, you know, we have faculty who are very anti-AI. They’re not going to allow it in the classroom. If you use it at all, even for references or something like that, that is not acceptable in their class. And so it’s like, how do we create and teach them if we’re not able to, or willing to, go out and figure out the parameters around its use? Yeah, I think that’s a challenge a lot of us face. I’m in the camp that I want my students to use AI, but it’s got to be, what’s that idea, responsible use. I don’t want them to use it for answering questions on a test or writing papers, because I want them to develop the underlying competencies. But at the same time, I do want them to learn how to use it so it can help enhance them in their jobs. And I struggle with what’s the right way to do that, and I think we all do. I think that’s one of the challenges we face going forward, and it’ll be interesting to see how that evolves. Now, Paul’s other question. He’s got two more, but I’m going to go with this one here. Uh, with AI bots showing some ability to handle sensitive human interactions, for example, serving as financial advisors and apparently romantic partners. That’s funny. Uh, do you see a role for AI in the more delicate realms of HR, like employee conflict and dispute mediation? I think that’s kind of a hard one, because on the one hand, it seems like a great option, right?
Because you can craft through AI a discussion that meets all legal requirements for what you have to consider in kind of a, maybe, more tumultuous situation. You can ensure that it maybe says the right thing. But again, I think it comes back to that first question: where’s the line with that human response? And so, you know, can you teach AI to have compassion? I mean, I say please and thank you to ChatGPT. Does it matter? Probably not. I do the same thing. Great, I’m glad I’m not alone. Um, but, you know, at what point do we get into Wild Robot territory, where it’s like, now we have a sentient being that is able to make these choices? And is that a plus or is that not a plus? And so, when you’re navigating, it’s already hard enough to navigate those difficult conversations. Um, so maybe there’s a way that, again, kind of coming back to what you said, Frank, about partnering with that technology, maybe to help develop a script that’s more appropriate, and maybe even having some options, you know, if it goes this way, some decision-path processes. But just to have a bot come in and try to have some of those conversations, unless it’s maybe email, you know, I think that becomes really challenging when you take away that human compassion and human empathy portion. That’s interesting, the decision-path thing. I hadn’t really thought about that, but that makes a lot of sense. Giving you options and, again, helping enhance, so you can have that. I mean, it’s the blank-page problem, right? That’s one of the things I love about AI: it solves that blank-page problem for me. I feed it a prompt, it gives me something, and then I can start honing it from there based on what I like and don’t like. And, um, but yeah, that’s actually kind of cool. I like that thought. But I also think about it from this perspective, too. There are so many of us who maybe like to avoid conflict, generally. Right.
And, you know, I keep having to tell myself, because I am definitely more on the conflict-avoidance side, mostly. There are some times I have no problem getting in somebody’s ear and kind of giving it to them, but most of the time I don’t like it. It makes me, you know, anxious and what have you. But I feel like there’s a way to use AI to help you with that conflict process. You know, I haven’t tried it yet, but maybe it helps you stay a little more composed, or takes some of the anxiety out of those situations. Because, I mean, I think this is going to be true for a lot of management, a lot of even, you know, HR, you know, having to deal with a stressful situation. You know, the world’s your oyster right now there, don’t you think? I do. And I think, you know, it’s interesting. I think that you can have it as maybe a planning tool. If you know that you’re going to have to have a conversation that’s difficult, you can maybe use it in that planning process, right, and have some of those scripts available. But what if you’re just at work? I mean, we still all have to have those skill sets, right? Because if you’re just at work going about your day and there is an issue as a manager that you have to deal with, you can’t just be like, excuse me, ChatGPT, how do I handle this question? How do I deal with these two employees that are, you know, having an issue? Or how do I deal with this machine that’s broken down, and now my staff can’t come in to work? Like, what do I do with this conflict? I mean, some things are more time sensitive. And so, you know, I’m not sure how the AI technology can help in that way, other than if you’re carrying something around or if there’s some other mechanism for implementation to help. I feel like this is where things like, what is it, the Meta glasses come in, where it’s always on, always recording, always seeing, and now it’s in your ear.
And, you know, we start drooling and becoming, we’re all in WALL-E now. Exactly, that’s what I was thinking. Um, that is, that’s funny. Uh, so here’s Paul’s other question. Now, this is Paul, and I love Reddit, but he said: members of the r/humanresources subreddit often express confidence that their jobs are at much lower risk of AI replacement than those in other business fields. Is that truth or wishful thinking? That’s a really good question, too. Good job, Paul, on these questions. I wonder if he used AI for them. I actually had that thought, too. Um, we’ll just assume that maybe he used it for the blank-page problem and then crafted it and integrated his own human touch. Um, so, you know, that was part of the conversation we had a couple of weeks ago at this event. And, you know, before going into that, I think many of us, maybe in the last year or two with this, you know, implementation of ChatGPT, I think a lot of people have had concerns about where their jobs are going. I mean, I think as academics, we have this struggle too, because it can write papers, right? And so if it’s not caught, what is stopping people from submitting to journals and whatnot? Right. So I think there’s a broad group of people who are very concerned about this and how it’s going to impact our work. And I will say that, having heard this machine learning expert talk about this, I felt a lot more confident about HR, because I think that’s where it comes back to: HR is going to be the implementation mechanism and the adoption crew for successful AI in organizations. And so yes, maybe some jobs are, you know, um, I don’t want to say eliminated, but I don’t have a better synonym at this time. Um, you know, streamlined. Maybe some departments are streamlined.
I feel like there’s a lot of opportunity to, again, put that human touch with the AI to make sure that, you know, we’re sending the signal that it is a tool that can be used, rather than something that we should be afraid of taking all of our jobs. And I think, I mean, I was marginally alive in the eighties, but, you know, when computers came around, I think this was the same conversation that was happening then: computers are going to take all of our jobs. And I think we can see that that has not been the case. I mean, in some cases it was. But I think we all use computers every single day, and they’re to our benefit. Well, mostly. So I think, as it evolves, we’re going to find different job roles that we haven’t thought about yet. And I think HR is going to have to be a prime source for how we navigate implementing those jobs into the organization. And if they do take that up, we’re going to be the department that brings in, you know, our AI technology and makes sure that it’s implemented correctly, makes sure that our culture doesn’t have this significant shift, that there isn’t this fear of job losses, and that we don’t increase our turnover because people are scared. Then I think there’s a real opportunity for HR to really put a nice touch on this, a human touch on this. Um, so hopefully not. I think it’s also too early to really determine solidly what jobs will be impacted. Yeah, I think that’s the real challenge we’re running into: we know that jobs are going to get impacted in certain cases. But, as you said, much like computers replaced certain jobs, we saw an evolution. How do you fill in these other areas that now need to be filled in because of this advancement in technology? And I’m not sure if we’ve seen any trends there. If somebody in here knows of trends, jobs that you’re seeing come up as a result of AI, I’d be curious to hear that.
But one that one of our board members threw out last year, and now I’m actually hearing it more and more, um, is prompt engineer, which sounds pretty cool. Um, but I don’t know what that really entails, I mean, other than being really good at prompting ChatGPT, which I am not, so maybe I need more training in that. They need an AI training for those of us that want to learn more about how to utilize it, perhaps. That’s actually interesting. Yeah, the prompt engineer. And I guess that comes back to: you’ve got to develop competence first in order to be able to prompt it, and then understand the output, to then be able to continue to drive to what you’re trying to get to. And I think that’s the challenge I try to explain to my students quite a bit, because I will confess, I use ChatGPT pretty regularly, most of the time to help me go through things. Like, if I did a survey during a strategic planning session, I’ll upload it in there and say, hey, can you put this into some themes for me? Because it does it in like five minutes, whereas it would take me literally all day. I get paid by the, you know, the project, not by the hour, so the more time it saves me, the better. But again, I feel like I have the confidence to leverage it, versus just using it to replace my own idiocy, per se. And I think that prompt engineer, I guess, is going to be one of those: you have to develop those competencies before you even see the technology, so that you know you’re not just getting garbage out by putting garbage in to prompt it in the first place. Absolutely. I didn’t even think of that, but that makes sense. Prompt engineer would be the next phase of expertise there. Um, so, uh, anybody in the audience have any questions for Julie? If you do, we have somebody coming around with the microphone. And if you are not ready for any questions, I can certainly come up. Oh, we do have one right up here in the front. Hello.
Is this on? There we go. So one of the things with AI that’s interesting is this idea that, hey, we’re going to replace a lot of jobs. But it feels like in tech, we’ve seen this story over and over and over, and what seems to be hard in general is seeing up front what we might gain as opposed to what we might lose. Right? And so it seems like we drive a lot of cost down, we increase flexibility, and we’re not surprised when demand goes way up for things. Is AI different? I don’t think so, but I’m very interested to hear what you think. Or is there something sort of fundamental about it that alters that pattern we’ve seen before? I would say, and this is my opinion, um, and just from conversations that I’ve had: to me, it seems like it’s an important next step, right? So if you look at, like, the history of the United States, we had, you know, the Industrial Revolution. We went from farming communities and whatnot to manufacturing jobs. And that was great, and it created a lot of opportunities for people. But people were concerned. And then, like I mentioned with the computer example, we’ve had consistent implementation of new technology that I think creates this fear that my job is going to be taken. Um, but I think that we’ve seen that, for the most part, we’ve come out the other side, using that technology very successfully to create better lives and jobs for many people. And so I think that this is kind of the next step in that, um, and it’s hard to imagine what the step after that might be. Right. So, you know, my opinion is that it’s a great tool for streamlining, hopefully without removing too many roles in that streamlining process, so that organizations can maybe focus on some of the, you know, outcomes that they have that might benefit many of us better and differently. Um, I’m not sure if that answers your question. Frank, do you have thoughts? I actually don’t on this one.
I think you did a great job of answering that, quite honestly. Um, we’ve got another question over here. Just like with any tool, AI has good and bad uses, and we’re seeing a lot of both in the workplace and in academia right now, both on the side of our students and on the side of our academics and professors. What do you think are the next steps in ensuring that we place safeguards in those areas so that those negative uses are decreased? Yeah, I think that’s a great question. Um, and I think to some degree, like I said before, it kind of comes back to us, but I think it starts even earlier than that. Right? So, um, I have two kids. One is in elementary school, one is in pre-K, and I’m concerned about how this impacts their trajectories. Right. So I think that, honestly, we have to support our educators at a lower level. And I don’t mean lower level, sorry. What I mean is, like, at the elementary level, um, to start helping us with protecting how we go about teaching. Right? So if AI is going to be a part of it... I don’t know if any of you have kids, but, you know, my kid has been able to swipe things since he was like eighteen months old. They’ve been on devices their entire lives, and they understand technology to some degree. And I think there’s a lot of, you know, aside from AI, this safeguarding. Now there’s this argument about smartphones and things like that and how we’re educating people. But I think that education also needs to include AI, because, to your point, there’s a lot of really great opportunity for it. But if we’re not teaching that from an early stage at this point, because we know now that it’s going to be a part of our daily lives for the foreseeable future, until whatever the next great thing that comes along is going to be.
Um, then I think that we really need to, on a more societal level, honestly, try to figure out how we’re going to fund some of those early development opportunities for kids, to make sure that we’re creating ethical standards for how to use it, how to use it as a tool rather than as a, you know, substitute for creative thinking. I think there are already challenges with that. I think we’ve seen, you know, as faculty at universities, I think we see some problems with how, um, you know, students want to, kind of like we’re teaching to a test in a lot of situations. I think a lot of us are trying to have more creative, innovative options for students now. And I think this is a great opportunity for education to really step in and say, hey, we need to come together across levels, from primary to secondary education, and say, how do we create these safeguards around AI use? And part of the issue, I think, is all of our universities probably have a different policy, or don’t have a policy, about AI. And I think we’re seeing this with journals, too. More and more journals are saying you need to make an AI statement, which is great, because now I’ve implemented that in my doctoral seminar. I’m like, well, I need an AI statement with your submission for your final paper. Thank you. You know, um, I need to know how you’ve used it. I need to know what the parameters were that you used, the prompts, that kind of thing. And if we’re able to teach our kids, the next generation, how to properly do that, then perhaps we kind of streamline, I guess, um, how we get to use that at a societal level, you know, when they start to go out and get jobs, when they’re creating this next, you know, industrial revolution. It’s not really an industrial revolution, but however these jobs are going to be created, whether it’s prompt engineer or what have you, then they will have a solid foundation for that.
But I think that before we can do that, we have to decide, more societally, what is an ethical use of AI? Is it okay for me to have it do references, or does that give me some sort of information that I didn't have before, thinking outside the box to do my reference list? Is it okay for me to say, hey, can you create this outline so I don't have this blank-page problem? I also struggle with that, so I get it. Is that okay? Like, what is that line, and how much do we blur it? And then, what are the consequences for using it? Right. So in an education setting we can very clearly state some consequences. But I know at my university, a student can't really get an F for me thinking that they used AI to write their paper. Um, and I think that's true for many of us, because it's hard to prove. So when it's something that's hard to prove, it's like ethical decision making, right? Like, where are we on that level? Right now, we're maybe in preconventional: well, if we don't get caught, maybe it's okay, you know? Um, but how do we get ourselves collectively, both from an educational and an organizational perspective, up to that highest level, where we're like, okay, we're using it to help ourselves create more efficiency so that we create opportunities for jobs and for employers, rather than, well, I can use it because it makes my life easier? But where is that ethical line? I don't know if that answered your question.

I think, to add onto that, I was having a conversation with somebody here at SMA the other day, and he put it in a nice way, you know, because we were talking curriculum and such. He's like, you've got to think about the timely versus the timeless skills, right? And I liked it because, you know, our brains like to categorize things, and the timely skills are things like data analytics and AI and using that kind of stuff.
But there are a lot of timeless skills that, as humans, you know, again, to Paul's question about, like, empathy or the ability to sort of relate to people... that's the timeless stuff. How do you communicate? How do you interact? You know, the problem-solving pieces that come in there, the ability to do collaboration and teamwork, and, uh, resiliency. Right? There's this idea of resiliency that we can instill through education, especially from that entrepreneurial mindset that's required for entrepreneurial success. And I think, you know, as we think about the future generations, we have to think about that element of what's timely versus what are the timeless skills that we need to instill. And again, as you said, we need to get it into primary education, secondary education. It has to happen way before they come to college, uh, or before they graduate from high school and go and work in a trade or what have you. I mean, we need to have that developed at a much earlier time. And, you know, it's a disrupter to education across the board, but it doesn't have to change what we do substantively, or substantially, or however we want to phrase that. So, other questions from the audience about this? Any? Ooh, this is fun stuff.

I'm going to ask this question because I know that right now, and this is a little bit of an aside, um, at UTC and some other institutions that I know of, we're having a hard time getting people to go into the HR major. It's like our enrollments are either flat or declining. I don't know what it's like at North Texas, but we're seeing kind of a flat or declining element. Do you have recommendations for the universities that might be experiencing this, as to what might help? And I'm not an HR person. I'm a strategy guy, so I don't like people. I'll just throw that out there.
Um, I jest, but, uh, the element there is that, you know, is there something that we need to be looking at that will reinvigorate people's interest in going into, uh, HR?

I actually think AI is kind of a cool place where we can do that, because, you know, to the point I made earlier, it's an opportunity. I mean, I guess it's existed before, with those examples I gave, but I think we have a much stronger understanding now of the different facets of HR, right? We have so many different areas, and one of the things that I love telling my undergrad students, um, like, the first day of class, I'm like, how many of you want to go into HR? And, you know, I was teaching junior- and senior-level classes, so most of them were like, yeah. And then I'd have people from history or whatever come in, and I'm like, great, you know, but you're not going to be in HR. But all of you have to deal with HR no matter what job you're in. You have to deal with HR. So it's even better if you're majoring in it and you have a better, more complete understanding of it. Because even if you don't major in it, you still have to go out there and deal with somebody who's your HR person, and you need to be able to speak competently about your compensation package and the rewards that you expect and the benefits that you're being offered, because those affect your actual work and personal life. Um, and I think this AI journey that we have to overcome, if you will, is an opportunity for people to really make a difference. This is also kind of post-COVID, where we all hunkered down for a couple of years, and now we're kind of back into the normal stride of things, for the most part. I think it really made people think about, like, well, I used to go into the office every day, and maybe I don't want to.
Now, how do we navigate those situations? We have to build a culture, is what I'm getting at. And there are different struggles that I think organizations are dealing with right now. And it's a great opportunity for people who really want to make a difference in an organization as a whole. Because I think HR is the people that make things run. I mean, everyone else makes things run too, but I think HR gets a bad rap for being, you know, the folks that fire people. That's a very small part of the role. You know, ultimately, if you're doing a great job, you're creating a good culture. You're finding ways that these new technological innovations, whether it's manufacturing equipment or computers or AI, can be utilized in a way that is still motivating to your workforce and can still help to create a productive, you know, organization. So, I mean, I still find it very exciting. And I'll say, at UNT, we have a very passionate faculty group. I think that Maria does a great job, and Erin Welch, who is our internship lead, also does a great job of really encouraging our students about all the different ways that HR is important. Another thing that we try to do, and I think that others can do, is, um, bringing in HR professionals to student events, having a student SHRM chapter, and, you know, kind of making that connection of the importance of HR. You know, I used to pull up the Grim Reaper on my first slide of my first class of the semester and be like, this is not what HR is. Um, you know, the ultimate goal is helping the organization and helping people. You're a liaison between those two things. And I think that offers a number of challenges, as we've talked about today, but I think the opportunities kind of outweigh them.
And so if you're a person that's really excited about, you know, helping an organization grow and flourish, then I think HR is the major for you. Sign up today.

Talk about passion there. That was really well done. And, uh, yeah, I think it's interesting, because some folks see HR as being the CIA tool for companies, right? It's like, we're here to protect the company legally. And Paul and I have had an episode in which we talk about how HR should almost be its own thing, separate from the leadership team, something that represents the people. You know, they're not the CIA tool, right? They're the voice of the employees, something to that effect. And I think there is that bad-rap element where it's like, they're the CIA legal protection. Oh, something happened: HR gets involved and they try to squash it so the company doesn't get sued, or what have you. That is definitely an important part of the role, but I think there are all these other aspects that don't get discussed as much. Um, hence the Grim Reaper, this negative perception. But when you think about the fact that you're a people manager instead of just HR... you're a person managing people and helping people to thrive in the organization that you're trying to support, right? So you're supporting your people, you're supporting your organization. And if we don't have people doing that, and if we have AI doing that on its own, I think that's where we start seeing potential for legal ramifications.

I think, um, you know, I just had my HR seminar for PhD students. We just read an article by Bud Hamilton and Crystal Davidson that kind of talked about the legal ramifications in areas like selection and retention and things like that, where, you know, there are
really great opportunities for AI to be helpful, but we also have to consider that legal area. And I mean, again, that's maybe not the most exciting thing for some people. But if you are a person who wants to take on that role, I think there's going to be plenty of opportunity in that role of HR navigation, if you will, people navigation, to help organizations navigate this next field of legal challenges that are going to be inevitable with change and with the implementation of AI.

Yeah, I think that's probably going to be a nice evolution through that process. I'm going to bring it back to something that we've covered on the podcast before: Amazon, and, you know, kind of talking ethics. And I feel like this is a real ethical issue. Um, Amazon uses AI to fire people quite a bit. What is interesting about how they do it, though, is that their AI is driving a lot of metrics, right? Their drivers have so much time out there; they have to deliver so many packages. But what I think really stands out to me is that they would give the HR bots that were doing the firing foreign names, typically, you know, from South Asia, kind of Indian or Bangladeshi, or something to that effect. And yet, to me, it's like, there's not a person there, right? It's not a person who sent that. It was an HR bot that said, oh, you've been fired because you're not meeting your metrics. I mean, that's a real ethical challenge. Instead of saying, this is an AI bot firing you, we're personifying it and pointing to a culture and creating bias or hatred or what have you. What's your take on that?

I think there are a lot of problems with that.
Um, and I think, uh, you know, not even leaning into the obvious potential ethical issues there, um, where you're trying to pin it on a specific group or groups of people, which I think is very problematic. You know, we've often seen people get fired in ways that are maybe not the most wonderful. I have very minimal pop culture references, but if you remember that movie, I think it was Up in the Air, with George Clooney coming in... So is it better that there's George Clooney coming in and firing people with blanket firings, or the Bobs from, you know, Office Space, or whatever? So we've got that picture of what firing looks like, a middle-aged white guy, versus what this is saying, which is, now we've taken that away from the typical image of what people who are firing you look like, and we're just saying it's going to be one of these other groups, so that we can create this almost hostility toward that group. That seems very problematic. So I definitely think that, um, that should be revised.

Yeah. I mean, you know, we've heard stories now where these Amazon drivers can't even stop to have a bathroom break. They're using, like, bottles. And, you know, it just seems like we're not treating people as people, right? And kind of going to the name of the People Center, right? It's got it in the name: people. And I think that's something that, um, AI is not. It's still not, and it will never be, a person. And, uh, I think we really need to remember that there are actual humans with emotions on the other side of a lot of this stuff. And yeah, the AI lacks the ability to take into consideration, um, you know, the recognition part of the decision-making process, right?
It doesn't have the ability unless you feed it all the data, but you can't necessarily feed in that personal data, right? Like, why you're getting rid of somebody, or why someone's performance isn't there. Granted, from an efficiency standpoint, maybe letting that person go is the best choice. But when you think about the fact that AI has no idea what the personal background is, that maybe something is going on in that person's personal life and they, you know, have had some absenteeism or issues or whatever... AI doesn't have that information unless you feed it to it, right? We do have a question, so hold on, I will unmute that mic.

So that, um, sparks a question that I had. So there are, like, robots that now have citizenship, and they call AI things like Alexa and, like, Claude and things like that. Do you think we'll ever get to a point where AI will be intelligent enough to be like a person? Like, and be a person?

I don't know. Again, I recently watched The Wild Robot, and that movie was really lovely, if you haven't seen it. I am not getting paid by Pixar, or whoever developed that. But it's kind of terrifying when you compare that to Terminator, which, again, I haven't seen, but I've heard about. I think that the idea that it can be a sentient being is very much there. I mean, I think that, um, that's a terrifying thought, because then you have these autonomous robots walking around making decisions with, you know, whatever they're able to, however the algorithms are able to be developed through, you know, the feedback that they're getting and whatnot from just data. So unless it's creating an actual, you know, emotional response, I don't know. I think it's something that the people who are making robots, and who are profiting off of these things, would be interested in, or probably are interested in and probably already talking about what that might look like. But I don't know. It seems like a terrifying prospect to me.
Maybe there's a benefit in it, I don't know. I mean, I think there are ways that it would, you know, make things more efficient. You know, if you can have someone, a robot, I guess, cleaning your house. I know they're working on these models already.

Yes. Um, so, yeah. Same. Um, my robotic vacuum cleaner does not do what I think it should do. Um, so if there's an upgrade to that that does a better job, then, yeah. But, you know, where does it come into that decision-making process? Right now I can type into ChatGPT: hey, I got this really challenging email from someone, can you please give it a more empathetic response than what I would do? And it can do it, which is awesome for me. But do we want it to make that decision on its own? And that's where I think there's a very clear line, where it's just like, well, how far do we want that to go as a society? Are we okay with non-humans walking around making decisions and all of that? I don't know. Initially, I'd say I'm terrified by that.

Two things. I really want a robot to cook for me. Um, I really want an AI bot for that, because that's, like, my least fun thing, and I don't want to eat out either. But, you know, I think what's fascinating is, you talked about efficiency earlier, and the one thing we don't think about frequently in this is how inefficient AI really is. It draws a substantial amount of power. You know, I think more so than what humans probably do just living their day to day. I mean, I haven't seen anything on that, but there's a tremendous amount of energy involved. And you're hearing about this where these AI data centers are going in, how power bills are going up substantially for the people who live in that area because of how intensive they are. And I see you're chomping at the bit to say something, because I have an example of this, actually, because you and I just talked about this, right?
So I said, I say please and thank you to ChatGPT, and you were like, yeah, me too. I just read an article about how apparently terrible we are as humans, you and me, for doing that, because apparently it takes some ridiculous amount of additional power for that politeness to be processed. So we're adding all these additional words that we're like, oh, this is just how I talk to a normal person, and apparently that is a drain on the power system. So maybe we need to stop doing that to be more sustainable. We're anthropomorphizing the AI already. I know, it's terrifying. Um, so, yeah, I think that's a very clear example of how it's taking these resources, right? I will also say it takes the resource of time, because unless you're a prompt engineer, and I am not... Okay, so I'm the vice president of the PTA for my kids' school, and we were doing teacher appreciation week, and I was like, I just want you to take this Word document and put it into a graphic. And I thought, it'll do it much more efficiently than I will. It was like two and a half hours, and I really should have stopped. My, you know, escalation of commitment was... it was not good. But it couldn't do it, and it would have errors, just consistent errors. And I'm like, it's reading from the actual Word document, where I've double- and triple-checked that everything is correct, but it's not translating it into this graphic. The graphic looked awesome: wrong date, wrong way to spell words that are basic, and it never gets it right. You keep going back and saying, no, don't change this, and it still changes it. You're like, stop. It even apologizes. It's like, I'm sorry that I didn't do it right, here, let me do it wrong again. Okay, thank you. Now we've wasted all of our time, and power, right, at the end of the day. Exactly. We've got one more question here that we'll take. Yeah.
So, kind of speaking to that, um, the hallucinations. Like, AI is wrong a lot. Often. And what I worry about, and I do try to teach my students this, is: do people using AI know that? Because I've had it do my references, and I'm like, that isn't real. Like, you made that up. Because I'll go and try to find an article, and it's not there. It looks like a real article. It's in a real journal, and it's real people's names, and it's not a real article at all. Um, and so I worry that there are people out there just using ChatGPT to do all sorts of things, and it's wrong. It's wrong information; it's inaccurate. And they're using it in hospitals now. Maybe not ChatGPT, but they're using AI in hospitals for diagnosis. What are your thoughts about that?

Yeah, I think you're absolutely right. So I actually did a little bit of an experiment with that at the beginning of the semester. Um, you know, I just wanted to see if I could get it to summarize my weekly readings for my doctoral students. And so I just threw in the citations, and I was like, give me a brief summary for, you know, an HR seminar on, you know, whatever. I mean, I didn't check all of them in depth, and most of them were generally okay. Um, I probably would have written a different summary, but, you know, that's why I asked it to do it. But one of them was just completely off. It was, like, this meta-analysis, and I'm like, nope, there was no meta-analysis in this paper whatsoever. But I took it in and I showed the students, because I was like, look, this is why we're not allowing computers in class this semester. Why we're having it where you guys have to read and take your own notes, at least take notes and bring them in, and, you know, have this conversation. And why I'm cautioning you.
And the diagnosis thing, that's interesting too, because I think, you know, from a radiology perspective, perhaps they're getting it right in terms of, you know, having people double-check it. And so I think if you have that inter-rater reliability... You know, I was talking to a co-author the other day, and she said she had been talking to a different team that had done a lot of qualitative, um, you know, like, theme development. They did all the work, and then they put it into AI, and it was actually very similar. So I was like, when you're doing it to kind of double-check, I think that's really helpful. And then, over time, perhaps that, you know, impacts the algorithm and the codes that are going into it and helps it arrive at a more accurate diagnosis, for example. Now, maybe for us, as people seeking diagnoses, it's better to use AI to help us than it is to use, like, Doctor Google. But I think both of those things have a lot of superfluous information that you're now getting into that formula, and there, to your point, can be a lot of information that's incorrect. So I think that's part of that idea that we have to educate people from an early stage not to just believe it. It's part of questioning, right? Part of our jobs as researchers is to question things and say, why are things done this way? Or, why is this the relationship that we've always thought? Maybe it's different. Um, and I think that's what we have to do as we continue to navigate this AI environment: say, all right, you're in elementary school, we're going to be teaching you how to use AI, you know, ethically and whatnot. And we also need to make sure that you understand that, with what you put in there, maybe you're not a great prompt engineer yet at age seven, and you just don't know what you're doing.
And it comes back with some crazy answer. But I think, for us as adults, we're putting in information and getting wrong information, and it's up to us to teach the next generation how to say, you know what, let's step back for a minute and think: maybe I should do a little bit more research separately and see if I can corroborate these responses. Um, and I think a lot of us do that with Google too, so I'm not sure the AI is super unique in that way. It's just that we have been conditioned, in the couple of years that ChatGPT has been available, to think that it is the end-all, be-all of knowledge, and that it's just really quick to bring up correct information. I have a co-author that tested it too, and it came up with papers that he hadn't even written. He was like, great, my CV looks awesome now. I'm like, yeah, that's awesome. Too bad it's not real, you know?

Well, with that, I think we are out of time. We, I guess, are the ones closing out SMA for the most part, which is very exciting, that we are the last little bit here. Uh, Julie, thank you so much to you and to the People Center at the University of North Texas for helping sponsor this event. And to the audience here, thank you for your questions. Uh, this is fascinating stuff. I'm really excited to see where things go, and also somewhat horrified and terrified. But, you know, that's the reality of it. And so thank you, everybody. Uh, have wonderful travels home. And... it's over.

Thanks so much for having me.

All right. Awesome. Is it off?
