AI at Work: The Hidden Cost of Productivity Boosters
Artificial Intelligence in the Workplace May Negatively Affect Your Professional Standing, Research Indicates
Embracing artificial intelligence (AI) in the workplace can be a double-edged sword, a new study from Duke University suggests. While AI tools such as ChatGPT, Claude, and Gemini can boost productivity, they may also damage an employee's professional standing by attracting social stigma.
"Our findings reveal a consistent pattern of bias against those who receive help from AI. Individuals who use AI tools face negative judgments about their competence and motivation from others, creating a paradox where productivity-enhancing AI tools can simultaneously improve performance and damage one's professional reputation," the study in the Proceedings of the National Academy of Sciences (PNAS) declared.
Drawing on more than 4,400 participants, the study, titled "Evidence of a social evaluation penalty for using AI," found that this stigmatization is not confined to particular demographic groups. Even evaluators who avoid AI themselves frequently judged their AI-using peers as lazy, which undermined perceptions that those peers were a good fit for the task.
On the bright side, the social penalty can be mitigated if AI is perceived as essential for the task at hand: candidates who use AI in such situations are seen as a better fit for the task, the study noted.
Factors contributing to the stigma around AI use include doubts about a person's skills, distrust of AI's output, ethical concerns, and entrenched cultural norms. As a result, employees who use AI may incur social penalties that affect teamwork, leadership opportunities, and informal social capital.
However, when organizations adopt a human-centric, trust-building approach, AI can support greater inclusivity, productivity, and mental health, gradually eroding the stigma. Positioning AI as a supportive tool rather than a replacement for skills can help establish a new workplace norm and soften the potential social fallout.
Sources:
- Evidence of a social evaluation penalty for using AI
- The Role of AI in Reducing Stigma in Mental Health Support
- Impact of AI on Self-Perception and Social Judgment
- Ethical Considerations in the Adoption of AI
- Human-Centric AI: The Key to Building Trust and Reducing Stigma
Key Takeaways:
- The study published in the Proceedings of the National Academy of Sciences (PNAS) suggests that employees who use AI tools such as ChatGPT, Claude, and Gemini may face a social stigma that damages their professional standing.
- Using AI can invite negative judgments about an individual's competence and motivation, according to the study "Evidence of a social evaluation penalty for using AI."
- The social penalty is reduced when AI is perceived as essential for the task at hand, the study found.
- Concerns about a person's skills, trust in AI's output, ethical issues, and cultural norms all contribute to the stigma, according to sources such as the Impact of AI on Self-Perception and Social Judgment.
- A human-centric, trust-building approach can reduce the stigma over time and foster inclusivity, productivity, and mental health support, as suggested in Human-Centric AI: The Key to Building Trust and Reducing Stigma.