
When ChatGPT went viral, leadership teams rushed to understand it, but their employees had already beaten them to the punch. Workers were experimenting with AI tools behind the scenes, using them to summarize notes, automate tasks, and hit performance goals with limited resources. What started as a productivity shortcut has evolved into a new workplace norm.
According to Microsoft’s Work Trend Index, three in four employees are using AI at work—and nearly 80% of AI users at small and medium-size companies are bringing their own tools into the workplace; that number is 78% for larger organizations. These tools range from text generators, such as ChatGPT, to automation platforms and AI-powered design software.
This bottom-up phenomenon is known as Bring Your Own AI, or BYOAI. It mirrors the early days of “bring your own device” (BYOD) policies, when employees began using their personal smartphones and laptops for work tasks—often before employers had protocols in place to manage them. Those policies eventually evolved to address security, data privacy, and access control concerns.
But with BYOAI, the stakes are even higher.
Instead of physical devices, employees are introducing algorithms into workflows—algorithms that weren’t vetted by IT, compliance, or legal. And in today’s fast-moving regulatory climate, that can create serious risk: Almost half of employees using AI at work admit to using it inappropriately, whether by trusting its answers without verifying them or by feeding it sensitive information.
The BYOAI trend is not a fringe behavior or a passing tech fad. It’s a fast-growing reality in modern workplaces, driven by overworked employees, under-resourced teams, and the growing accessibility of powerful AI tools. Without policies or oversight, workers are taking matters into their own hands, often using tools their employers are unaware of. And while the intention may be productivity, this can expose companies to data leaks and other security problems.
The compliance gap is widening
Whether it’s a marketing team inputting customer data into a chatbot or an operations lead automating workflows with plug-ins, these tools can quietly open the door to privacy violations, biased decisions, and operational breakdowns.
Nearly six in ten employees (57%) say they’ve made mistakes at work because of AI errors, and 44% admit to knowingly misusing the technology.
Yet, according to a 2024 Deloitte report surveying organizations on the cutting edge of AI, only 23% of those organizations reported feeling highly prepared to manage AI-related risks. And according to KPMG, only 6% had a dedicated team focused on evaluating AI risk and implementing guardrails.
“When employees use external AI services without the knowledge of their employers . . . we tend to think about risks like data loss, intellectual property leaks, copyright violations, [and] security breaches,” says Allison Spagnolo, chief privacy officer and senior managing director at Guidepost Solutions, a company that specializes in investigations, regulatory compliance, and security consulting.
How forward-thinking companies are getting ahead
Some organizations are starting to respond—not by banning AI, but by empowering employees to use it.
According to the Deloitte report, 43% of organizations that use AI invest in internal AI audits, 37% train users to recognize and mitigate risks, and 33% keep a formal inventory of how gen AI is used, so managers can lead with clarity, not confusion.
Meanwhile, Salesforce provides employees with secure, approved AI tools, such as Slack AI and Einstein, which integrate with internal data systems, while maintaining strict boundaries on sensitive data use and offering regular training. The company also has a framework for advising other companies on how to develop their own internal AI use policies.
“The best strategy is actually to open up those lines of communication with employees,” says Reena Richtermeyer, partner at CM Law PLLC, a boutique firm that advises clients on emerging technology issues. She says employers shouldn’t say no to AI, but should instead provide employees with guardrails, parameters, and training. For example, employers might ask employees to use only public data and to “slice out data that is proprietary, trade secret, or customer-related.”
BYOAI isn’t going away
BYOAI isn’t just a tech trend. It’s a leadership challenge.
Managers now find themselves overseeing both human and machine output, often without formal training on how to manage this combination effectively. They must decide when AI is appropriate, how to evaluate its use, and how to ensure that both ethical and performance standards are maintained.
Companies are best served by shifting from reactive policies to proactive cultures. Employees need clear communication about what is safe, what is off-limits, and where to go for guidance.
“I think having a dedicated AI acceptable use policy is really helpful . . . you can tell your employees exactly what the expectations are, what the risks are if they go outside of that policy, and what the consequences are,” says Spagnolo.
The companies that will gain the most from AI are the ones that understand how to empower their employees to use it and innovate with it. That requires leaders to shift from asking employees “Are you using AI?” to “How can we help you use it well?”