The latest buzzword is “AI literacy.” As with “social media,” “ESG,” and “CSR” before it, employers are now looking for proof of fluency on résumés, and individuals are scrambling to differentiate themselves and show they are keeping pace.
And it’s everywhere: mentions of terms like “agentic AI,” “AI workforce,” “digital labor,” and “AI agents” on earnings calls increased by nearly 800% in the last year, according to AlphaSense data. Over the last five years, workers across industries have come to be expected to master a technology that is ever-evolving and still relatively new to many, including the leaders implementing it. The trouble is that by the time a candidate hits “send” on a CV, their level of proficiency is already outdated.
But what if the real problem isn’t the pace of change, or even that people don’t understand AI, but that we have made them feel ashamed of not understanding it? That shame is a quiet, corrosive force, one that keeps people silent in the very moments when we most need their voices and stops them from raising a hand to say, “I don’t know.”
Vulnerability makes us human. Mark Cuban recently posted on X: “The greatest weakness of AI is its inability to say ‘I don’t know.’ Our ability to admit what we don’t know will always give humans an advantage.” Why, then, are we fostering workplace cultures that encourage people to “fake it until you make it” with AI? The cost of staying quiet is real. We’re at risk of shaming ourselves into obscurity.
The Shame Spiral in Action
Everyone’s talking about the AI hype cycle. But almost no one is talking about the shame spiral it’s creating. AI is reshaping not only the long-term economy but also people’s day-to-day lives. Companies are replacing roles faster than they’re training workers, and in some cases, as at Klarna, laying off staff only to hire them back when AI tools fall short. People miss out on jobs not because they’re unqualified, but because no one gave them a path forward. They walk around feeling like impostors in rooms they’ve already earned the right to be in. Inside companies, we see biased tools get approved and shortcuts turn into systems.
A recent LinkedIn report shows that 35% of professionals feel too nervous to talk about AI at work, and 33% feel embarrassed by how little they know. These aren’t just workers; they’re parents and community leaders.
This shame spiral, fueled by hype that says “everyone gets AI except you,” risks shutting down curiosity and critical questions before they even start. The pattern signals a bigger issue: at the very moment people feel too ashamed to engage, AI systems are taking over decisions, both incremental and consequential, that affect everyone. And to avoid embarrassment, people take shortcuts.
A recruiter might rely on an AI résumé screener without understanding how it works or which candidates it may be discarding. A manager might approve a tool that decides who gets extended care without asking what drives the algorithm. A parent might sign off on an AI-powered teaching tool without knowing who designed the curriculum. A 2024 Microsoft and LinkedIn survey found that only 39% of people globally who use AI at work have received AI training from their company.
We’ve seen what happens when these systems go unchecked. Amazon scrapped its AI recruiting tool after it was found to discriminate against women. Workday faces a class-action lawsuit alleging that its AI screening tools systematically exclude older workers and people with disabilities from job opportunities. Microsoft’s chatbot Tay, launched with the intention of learning from conversations, was exposed to trolls and, within 24 hours, was posting racist, misogynistic, and offensive content.
When silence replaces curiosity, people essentially remove themselves from the decision-making process until they are no longer accounted for.
Reshaping the Workplace Reality
AI is here, and it is changing the workforce. The choice is ours: bring people along and help them be part of the transformation, or leave them behind in the name of efficiency.
What moves people from anxiety to agency isn’t more lectures or tutorials; it’s permission and tools: permission to be a beginner, and the freedom and space to learn. The most confident AI users aren’t experts; they play with different tools until they find what works for them.
Digital dignity starts with that permission: permission to ask basic questions, to slow down, to admit gaps. It means leaders modeling vulnerability before demanding that employees close their own gaps.
To truly embrace and harness the potential of AI, we must focus on impact, not mechanics. You don’t need to code a neural net, but you do need to spot when AI systems are making decisions about you. Start with what affects you directly: parents can ask what tools schools are using, job seekers can learn how résumé screening works, and managers can ask what AI tools are coming into their workplace—and what training comes with them.
Practice saying “I don’t know.” The best leaders see gaps as opportunities to ask good questions. JPMorgan created low-stakes spaces for managers to experiment with AI, encouraging leaders to admit when they were stuck. That openness built trust and sped up adoption. Johnson & Johnson encouraged broad experimentation across business units, generating nearly 900 AI and generative AI use cases across research, supply chain, commercial, and internal support. The result? An internal chatbot for employees and a fresh approach to making clinical trials more representative.
This isn’t just a knowledge gap. It’s a culture of silence. And if we don’t break it, AI won’t be a tool for transformation; it’ll be a mirror for all the systems we were too ashamed to question.
The most powerful thing we can say in this moment is: “I don’t know. But I want to learn.”
Because the future is still being written, and we all deserve a seat at the table and a hand on the pen.