Competence penalty: The hidden cost of using AI at work
The competence penalty reveals a deeper misalignment in how organisations approach AI adoption

Whenever Imran uses ChatGPT at the office, he makes sure no one is watching.
Imran does not use the tool because he is incompetent; he uses it strategically to enhance his productivity. Yet, he is always afraid of being judged.
Imran's case is not isolated. Two years after ChatGPT's launch, Pew Research Center found that only 16% of American workers use AI for work, even though 91% have access to it. Even among tech workers, typically early adopters, usage remains surprisingly low and uneven.
Why? The usual explanations, a lack of skills or awareness, do not fully hold up. There is more to the story.
Research published in Harvard Business Review (HBR) uncovered a deeper barrier: a "competence penalty."
In a pre‑registered experiment with 1,026 engineers, participants reviewed identical Python code that was either described as written by a human alone or with AI assistance. When they believed AI was used, they rated the engineer's competence 9% lower—even though code quality ratings didn't change.
This penalty was much harsher for female engineers, who faced a 13% reduction in perceived competence versus 6% for men. Male non‑adopters were the most severe critics, penalising female AI users 26% more than male counterparts.
Follow‑up surveys with 919 engineers revealed that many anticipated this penalty and avoided using AI to protect their reputations—especially women and older employees. Ironically, those who stood to gain most from AI were using it the least.
Fear of judgment also drives "shadow AI" use, where employees turn to unauthorised tools instead of sanctioned ones. This increases risks around data security, compliance, and accountability.
Most troubling, the penalty worsens existing workplace inequalities. Instead of leveling the playing field, AI usage can reinforce stereotypes. Women in male‑dominated environments or older employees in youth‑centric fields risk being seen as "less capable" if they use AI, amplifying biases through a phenomenon called social identity threat.
Transparency is a core principle of responsible AI use, but disclosure can backfire in biased environments. Requiring employees to flag AI‑assisted work exposes them to prejudice. HBR's findings suggest that mandatory disclosure of AI use in performance evaluations may do more harm than good until organisational cultures evolve.
The way forward
The competence penalty reveals a deeper misalignment in how organisations approach AI adoption.
While companies invest heavily in access, training, and technical infrastructure, they often overlook the social dynamics that ultimately determine whether employees actually embrace these tools.
This perspective helps explain several puzzling patterns in AI transformations. Why do employees secretly use ChatGPT while ignoring sanctioned tools? Why do the very groups that could benefit most from AI augmentation adopt it at the lowest rates? Why do companies see disappointing returns despite massive AI investments?
The penalty strikes hardest where competence is already under scrutiny. The HBR research shows that women using AI face nearly twice the reputational damage that men do. Older workers in youth-dominated fields encounter similar bias. The irony is striking: those who most need AI's equalising power are often the least able to use it without consequence.
But this penalty isn't inevitable. Organisations can dismantle it through three key actions: identifying where these penalties are most acute, mobilising respected leaders as visible AI champions, and redesigning evaluation systems to reward outcomes rather than methods.
Ultimately, the companies that succeed in AI transformation won't just have the best tools or the biggest budgets. They'll be the ones that create environments where every employee can use AI without fear. Until then, organisations aren't just wasting licences — they're wasting the very human potential AI was meant to unlock.