A new study reveals that artificial intelligence (AI) is increasingly being used by managers to make critical employment decisions, including hiring, firing, and promotions, despite many lacking formal training in the technology. This widespread adoption raises significant concerns about bias, fairness, and the need for human oversight in sensitive personnel matters.
Key Takeaways
- Managers are rapidly integrating AI tools like ChatGPT into their decision-making processes for employment-related tasks.
- A substantial portion of managers using AI for personnel decisions lack formal training in these tools.
- The reliance on AI extends to critical areas such as hiring, promotions, raises, and layoffs.
- Experts warn about the inherent risks of AI, including potential biases and the difficulty in assessing qualitative aspects of performance.
- There is a strong call for continued human oversight and intervention to ensure fairness and prevent legal complications.
The landscape of workforce management is undergoing a significant transformation. A recent study by career site Resumebuilder.com indicates a growing trend among U.S. managers to leverage artificial intelligence (AI) for key personnel decisions.
This includes critical areas like determining who gets hired, promoted, or even laid off. The findings suggest that AI’s role in the workplace is expanding beyond mere automation to influence core human resource functions.
The Rise of AI in Managerial Decisions
Managers across various organizations are increasingly outsourcing personnel-related matters to a range of AI tools. This shift is happening at a rapid pace.
The survey, which polled over 1,300 managers, highlighted a notable reliance on these emerging technologies. It found that a significant majority of managers now incorporate AI into their daily decision-making.
Specifically, 65% of managers reported using AI for work-related decisions. This broad adoption underscores AI’s growing ubiquity in the corporate environment.
Even more strikingly, the study revealed that 94% of those managers turn to AI tools when tasked with crucial decisions. These include determining who should be promoted, receive a raise, or be subject to layoffs.
This extensive use of AI for such sensitive tasks marks a pivotal moment in human resources practices. It suggests a fundamental shift in how companies manage their most valuable asset: their people.
A Gap in Training Amidst Rapid Adoption
Despite the high rate of AI adoption, the study uncovered a significant concern regarding managers’ preparedness. Approximately one-third of the individuals responsible for employees’ career trajectories have no formal training in using AI tools.
This lack of foundational knowledge raises questions about the informed and ethical application of these powerful technologies. It implies that many are navigating complex AI systems without adequate understanding of their capabilities or limitations.
The absence of proper training could lead to unintended consequences. Misuse or misunderstanding of AI outputs could impact employee morale, productivity, and legal compliance.
Corporate Directives Driving AI Integration
Managers' increasing reliance on AI tools for personnel decisions encroaches on territory traditionally reserved for human resources departments. Even so, companies are actively integrating AI into their day-to-day operations.
There’s a palpable push from the top down for employees, including managers, to adopt AI. Erica Pandey, an Axios Business reporter, explained this corporate directive to CBS News.
Pandey stated, “The guidance managers are getting from their CEOs over and over again, is that this technology is coming, and you better start using it.” This directive is influencing how managers approach critical responsibilities.
Given that managerial roles inherently involve critical decisions around hiring, firing, raises, and promotions, it’s logical that they are beginning to incorporate AI into these sensitive areas. The pressure to innovate and optimize operations is a key driver.
Potential Risks and Ethical Dilemmas
The use of generative AI in determining career trajectories and job security is fraught with significant risks. These challenges are amplified when users lack a deep understanding of the technology.
“AI is only as good as the data you feed it,” Pandey cautioned. She highlighted that many users are unaware of the volume or quality of data required for effective AI performance.
Beyond data considerations, these are profoundly sensitive decisions. They directly impact an individual’s life and livelihood. Therefore, human input remains indispensable.
“These are decisions that still need human input — at least a human checking the work,” Pandey emphasized. This underscores the need for a collaborative approach, where AI assists but does not replace human judgment.
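One concrete way to enforce "a human checking the work" is to make human sign-off a hard requirement in any tooling that acts on AI recommendations. The sketch below is purely illustrative (the class, function, and field names are invented for this example, not drawn from any real HR system): an AI-generated recommendation can never be applied unless a reviewer callable explicitly approves it.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    """A hypothetical AI-generated personnel recommendation."""
    employee_id: str
    action: str        # e.g. "promote", "raise", "layoff"
    rationale: str     # model-generated explanation for the reviewer

def apply_decision(rec: Recommendation, human_approver=None) -> str:
    """Gate any AI recommendation behind explicit human review.

    human_approver: a callable that a person invokes after inspecting
    the recommendation, returning True to approve. If no reviewer is
    supplied, the decision cannot proceed at all.
    """
    if human_approver is None:
        raise PermissionError("AI recommendations require human review")
    if not human_approver(rec):
        return "rejected by reviewer"
    return f"{rec.action} approved for {rec.employee_id}"

# Usage: the AI proposes, but only a human disposes.
rec = Recommendation("E-1042", "promote", "Exceeded targets 4 quarters running")
result = apply_decision(rec, human_approver=lambda r: True)
```

The point of the design is that the default path fails closed: forgetting to wire in a reviewer raises an error instead of silently letting the model's output take effect.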
The Risk of Bias and Discrimination
A major concern associated with AI in HR is the potential for bias. Numerous reports have highlighted that AI can reflect and even amplify existing societal biases present in its training data.
“Report after report has told us that AI is biased. It’s as biased as the person using it,” Pandey noted. This inherent bias can lead to discriminatory outcomes in employment decisions.
Companies that rely heavily on biased AI without sufficient human oversight could find themselves in significant legal jeopardy. This includes exposure to discrimination lawsuits, which can be costly and damaging to reputation.
Challenges with Qualitative Assessments
AI may also struggle to make sound personnel decisions when a worker’s success is measured qualitatively rather than quantitatively. Many aspects of job performance, such as leadership, creativity, or teamwork, are subjective.
“If there aren’t hard numbers there, it’s very subjective,” Pandey explained. AI algorithms often excel at processing structured, numerical data but falter with nuanced, qualitative information.
Such subjective assessments necessitate human deliberation. In fact, they often require the input of multiple human perspectives to ensure a fair and comprehensive evaluation.
“It very much needs human deliberation. Probably the deliberation of much more than one human, also,” Pandey concluded. This highlights the limitations of AI in understanding the full spectrum of human performance.
The Imperative for Human Oversight
Problems inevitably arise when AI increasingly determines staffing decisions with minimal input from human managers. The scenario of a manager simply asking an AI tool, “Hey, who should I lay off? How many people should I lay off?” is genuinely unsettling.
Such an approach bypasses the complex ethical and practical considerations that human managers are trained to handle. It risks reducing individuals to data points rather than recognizing them as valuable contributors to an organization.
Ultimately, AI should serve as a powerful tool to enhance human decision-making, not replace it. Its strengths lie in processing vast amounts of data and identifying patterns, which can inform, but not dictate, sensitive human resource outcomes.
Ensuring transparency in AI algorithms, implementing rigorous auditing processes, and maintaining robust human oversight are crucial steps. These measures can mitigate the risks of bias, ensure fairness, and uphold ethical standards in the modern workplace.
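One long-standing auditing heuristic that such processes often start from is the "four-fifths rule" from the EEOC's Uniform Guidelines: if any group's selection rate falls below 80% of the highest group's rate, the outcome is commonly flagged for adverse-impact review. The sketch below shows the arithmetic on made-up decision data (the group names and numbers are illustrative, and a real audit would involve far more than this single ratio).

```python
def selection_rates(outcomes):
    """Selection rate (share of favorable decisions) per group.

    outcomes: dict mapping group name -> list of 0/1 decisions,
    where 1 is a favorable outcome (e.g. hired or promoted).
    """
    return {g: sum(d) / len(d) for g, d in outcomes.items()}

def adverse_impact_ratios(outcomes):
    """Each group's selection rate divided by the highest group's rate.

    Under the four-fifths rule, a ratio below 0.8 is a common
    red flag for adverse impact and warrants closer review.
    """
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

# Hypothetical AI screening decisions for two applicant groups.
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1, 1, 1],  # 8 of 10 selected
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0, 0, 0],  # 3 of 10 selected
}
ratios = adverse_impact_ratios(decisions)
flagged = [g for g, r in ratios.items() if r < 0.8]
```

Here group_b's rate is 0.3 against group_a's 0.8, a ratio of 0.375, so group_b would be flagged. Passing this check does not prove fairness; it is only a first screen that tells a human auditor where to look.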
The goal should be to harness AI’s efficiency while preserving the empathy, judgment, and ethical considerations that only human professionals can provide.
Source: https://www.cbsnews.com/amp/news/ai-hired-fired-promotion-managers/
Frequently Asked Questions
What types of AI are managers using for HR decisions?
Managers are primarily utilizing generative AI tools, such as large language models (LLMs) like ChatGPT, for a range of human resource functions. These tools can analyze resumes, generate interview questions, sift through performance data, and even suggest candidates for promotion or layoff. Their capabilities often extend to synthesizing vast amounts of information to provide recommendations or summaries for decision-makers.
What are the main risks associated with using AI for hiring and firing decisions?
The primary risks include the potential for inherent bias within the AI algorithms, leading to discriminatory outcomes. AI models learn from historical data, which may contain human biases, inadvertently perpetuating unfair practices. Other risks involve the AI’s inability to interpret qualitative aspects of human performance, a lack of transparency in its decision-making process (the “black box” problem), and the potential for legal challenges if decisions are made without sufficient human oversight or adherence to anti-discrimination laws.
Is it legal for companies to use AI for employment decisions?
Yes, generally it is legal, but companies must still comply with existing anti-discrimination laws and regulations. The challenge lies in ensuring that the AI tools themselves do not create or perpetuate discriminatory practices based on protected characteristics (e.g., race, gender, age). Regulatory bodies and governments are increasingly scrutinizing AI’s role in employment, with some jurisdictions beginning to propose or enact specific laws governing the use of AI in HR to ensure fairness and transparency. Companies are advised to perform regular audits of their AI systems to detect and mitigate bias.