Cheating has long been an unwelcome but expected risk in the hiring process. While most people are honest and well-intentioned, there are always a handful of candidates who attempt to game the system. Today, however, the problem is evolving at an unprecedented speed. Generative AI has made new, more sophisticated types of cheating possible for any position, from software development to finance to design. In my work helping hundreds of employers hire and develop talent, I’ve seen firsthand the myriad ways candidates try to gain an unfair advantage.
So, why are candidates resorting to these methods? Sometimes, candidates are attempting to secure a position they’re underqualified for, or otherwise gain a leg up in the hiring process. Other times, candidates pursue multiple full-time roles at once—a trend known as “overemployment”—which increases the likelihood that they’ll cheat. Here are the four most common approaches candidates use to cheat, and what employers across all industries can do to detect and prevent dishonesty in their hiring processes.
THE FOUR CHEATING TYPES
1. Copy-paste plagiarism
This is the most widespread and fundamental form of cheating. A candidate is given a task—it could be a coding challenge, a writing sample, or a case study—and they simply copy or heavily borrow from existing resources found online. In some cases, candidates even use answer keys for standardized assessments, which are often sold or shared in online forums.
How to detect and prevent it: The best mitigation strategy is prevention: ensure that multiple candidates don’t see the same questions. Think about the SAT, for example: thousands of versions of each question are created and dynamically rotated, all calibrated to be equally difficult, so if a question leaks, it’s unlikely another candidate will see it. Assessment platforms should also crawl the web to check whether submitted work matches known public answers, and flag candidates who spend significant time in a separate browser window.
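The matching step described above can be sketched in a few lines. This is a minimal, illustrative example (the function name, answer corpus, and 0.8 threshold are assumptions, not any platform’s actual implementation) that compares a submission against known public answers using simple string similarity:

```python
import difflib

def flag_plagiarism(submission: str, known_answers: list[str],
                    threshold: float = 0.8) -> bool:
    """Return True if the submission closely matches any known public answer."""
    for answer in known_answers:
        # SequenceMatcher.ratio() returns a similarity score between 0 and 1.
        similarity = difflib.SequenceMatcher(None, submission, answer).ratio()
        if similarity >= threshold:
            return True
    return False

# Illustrative corpus of answers found circulating online.
known = ["def fizzbuzz(n): return 'FizzBuzz' if n % 15 == 0 else str(n)"]

copied = "def fizzbuzz(n): return 'FizzBuzz' if n % 15 == 0 else str(n)"
original = "for i in range(100): print('my own solution', i)"

print(flag_plagiarism(copied, known))    # True: identical text is flagged
print(flag_plagiarism(original, known))  # False: dissimilar work passes
```

Real platforms use far more robust techniques (token-level fingerprinting, AST comparison for code), but the core idea is the same: score each submission against a corpus of leaked answers and flag close matches for human review.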
2. Hiring a “ringer”
With this method, a candidate hires a highly qualified individual (a “ringer” or “proxy interviewer”) to take a skills assessment or even a live interview on their behalf. This is a particularly sneaky form of cheating because the person taking the test is genuinely skilled, but they aren’t the person you’re considering for the job. The problem only becomes apparent later, when the person you hired can’t replicate that performance on the job.
How to detect and prevent it: The best way to combat this is with identity verification and proctoring. This can be as simple as asking candidates to show a photo ID via webcam at the start of the assessment. Organizations can also use AI-powered proctoring to monitor a candidate’s behavior, flagging suspicious activity like multiple people in the room or eye movements that suggest they’re getting help—and verify this with human review.
3. Using AI to generate answers
This is where AI has truly changed the game. Instead of searching for an answer, a candidate can use a text- or voice-based AI tool to get a complete answer in seconds. These AI models are not only fast; they generate original content that a simple plagiarism checker wouldn’t necessarily flag. While some organizations may be comfortable with candidates leveraging AI tools, especially if they’d be using AI on the job, others want to see a candidate’s skill without AI assistance.
How to detect and prevent it: One solution is to use AI detection tools that can analyze text for patterns consistent with AI generation. A more robust approach, however, is to design assessments that require human-level reasoning and creativity—and even allow candidates to leverage AI to produce their response. With this approach, employers can see how well candidates make use of all the tools they’d have at their disposal on the job—including AI—to solve realistic challenges or tasks.
4. AI deepfakes
This is perhaps the most frightening new form of cheating, and it’s making headlines. A candidate can use AI deepfake technology to create a convincing, real-time avatar of themselves that takes a live interview. This AI-generated persona can not only answer questions but also mimic facial expressions and body language, making it difficult for a human interviewer to distinguish it from the real person.
How to detect and prevent it: One way to spot deepfakes is with sophisticated AI-powered analysis. These tools can look for subtle inconsistencies like unnatural eye movements, a lack of blinking, or a disconnect between audio and video streams. Companies can also require a simple, real-time action from the candidate, such as holding up a specific object or moving their head in a certain way, that would be difficult for a deepfake to replicate perfectly.
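The challenge-response idea works because the action is chosen unpredictably at interview time, so a pre-rendered or scripted deepfake can’t anticipate it. A minimal sketch of that randomization (the action list, function name, and count of three are illustrative assumptions):

```python
import secrets

# Illustrative pool of real-time actions an interviewer might request.
ACTIONS = [
    "turn your head slowly to the left",
    "hold up three fingers",
    "cover one eye with your hand",
    "look up at the ceiling, then back at the camera",
    "read this four-digit code aloud",
]

def generate_liveness_challenges(count: int = 3) -> list[str]:
    """Pick `count` distinct actions using a cryptographically secure RNG,
    so the sequence cannot be predicted or precomputed by an attacker."""
    pool = list(ACTIONS)
    challenges = []
    for _ in range(count):
        choice = secrets.choice(pool)
        pool.remove(choice)  # avoid repeating the same action
        challenges.append(choice)
    return challenges

print(generate_liveness_challenges())
```

Using a secure random source (rather than a fixed script of actions) matters here: the defense rests entirely on the candidate not knowing in advance what they will be asked to do.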
FINAL THOUGHTS
The risks and costs of a bad hire are well-documented. An employee who lacks the skills they claimed to have will drag down a team, create poor quality work, and ultimately have a negative impact on the business. The integrity of your hiring process is a direct reflection of the quality of your future team.
While the rise of AI has introduced new risks to employers, it has also given us new tools to more accurately identify candidates with the right skills for the job. Used well, AI tools can be a powerful partner in our efforts to build a fair and predictive hiring process. By embracing these advancements, we can move beyond simply detecting cheating and build a future where AI empowers us with new ways to find and hire candidates who will take innovation to the next level.
Tigran Sloyan is CEO and cofounder of CodeSignal.