
5 Mistakes Recruiters Make When Using ATS Scoring Tools (and How to Fix Them)

Most ATS scoring tools are glorified keyword counters that miss great candidates and waste recruiter time. After comparing tools across Ashby, Workable, Lever, and Greenhouse, we identified 5 critical mistakes teams make, and how research-driven AI can turn generic rankings into intelligent candidate insights.

Most modern ATS platforms now offer some form of "AI" scoring or candidate ranking—but how useful are these tools really? In theory, they should help recruiters save time and focus on the most relevant applicants. In practice, they often confuse, mislead, or get ignored entirely.

After running side-by-side comparisons across Ashby, Workable, Lever, Greenhouse, and others, we've identified some glaring gaps. Here are the 5 most common mistakes recruiters make when using built-in ATS scoring tools—and what industry-leading teams do instead.


1. Mistaking Keyword Density for Real Experience

The Mistake: Trusting high scores without understanding how they were calculated.

Why It's a Problem: Most ATS scoring tools are glorified keyword counters. A candidate gets 95% because they mention "Python" eight times and "machine learning" six times—even if their actual ML experience is from a weekend bootcamp. Meanwhile, a senior engineer with deep expertise but concise descriptions gets a 60% because they don't repeat buzzwords.

We've seen candidates score in the top 10% simply because they copy-pasted job requirements into their resume summary. The algorithm rewards keyword stuffing over substance.
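
The failure mode is easy to reproduce. A minimal sketch of a naive frequency-based scorer (hypothetical logic for illustration, not any vendor's actual algorithm) shows why a requirements-stuffed resume outranks a concise expert's:

```python
def naive_keyword_score(resume: str, keywords: list[str]) -> float:
    """The flawed approach: score rises with raw keyword frequency."""
    text = resume.lower()
    hits = sum(text.count(kw.lower()) for kw in keywords)
    # Saturates at 100 once each keyword appears ~3 times on average
    return min(100.0, 100.0 * hits / (3 * len(keywords)))

stuffed = "Python " * 8 + "machine learning " * 6  # copy-pasted requirements
concise = "Built a recommendation engine serving 2M+ users in Python with TensorFlow."

print(naive_keyword_score(stuffed, ["Python", "machine learning"]))  # 100.0
print(naive_keyword_score(concise, ["Python", "machine learning"]))  # ~16.7
```

Fourteen buzzword repetitions max out the score; one sentence of real depth barely registers. Nothing in the arithmetic can tell the two apart.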

The Fix: Choose tools that understand context and career progression. Nova doesn't just count mentions—it analyzes the depth and quality of experience. It distinguishes between "Completed Python coursework" and "Built recommendation engine serving 2M+ users using Python and TensorFlow."

Nova evaluates:

  • Role progression: Junior → Senior → Lead trajectories
  • Responsibility scope: Team size, budget, impact metrics
  • Technical depth: Implementation details vs. surface-level mentions
  • Industry context: Understanding that "customer success" means different things at a 50-person startup vs. a Fortune 500 company

Every score comes with specific evidence from the candidate's background, so you know exactly why someone ranked high or low.


2. Using Scores as Hard Cutoffs Instead of Intelligence

The Mistake: Setting arbitrary thresholds like "only interview 85%+ candidates."

Why It's a Problem: This approach misses exceptional candidates whose backgrounds don't fit traditional molds. The best product managers often come from consulting, customer success, or even completely different industries. Career changers, international candidates, and those with non-linear paths get systematically filtered out.

One client told us they almost missed their best hire—a former teacher who became a standout UX researcher—because she scored 72% due to "non-traditional background."

The Fix: Use scores as prioritization signals, not elimination criteria. Start with the highest-scoring candidates, but always review the reasoning behind lower scores.

Nova's structured assessments help you spot when a lower score reflects formatting issues or unconventional experience rather than poor fit. You'll see assessments like:

  • Score: 73% | "Strong analytical skills and user empathy from education background, but needs to build portfolio"
  • Score: 91% | "Extensive UX experience but all in B2B SaaS—may need time to adapt to consumer products"

This context helps you make informed decisions about who deserves a conversation, regardless of their numerical score.
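
The difference between a cutoff and a prioritization signal is simple to see in code. In this sketch (the Candidate records and scores are hypothetical), a hard 85% threshold silently drops the former-teacher profile, while prioritization keeps everyone in the pipeline and carries the reasoning along with the number:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    score: int       # numeric ranking signal
    reasoning: str   # why the score came out this way

# Hypothetical pipeline of scored applicants
candidates = [
    Candidate("A", 91, "Extensive UX experience, but all in B2B SaaS"),
    Candidate("B", 73, "Strong analytical skills from education background; needs portfolio"),
    Candidate("C", 88, "Solid research toolkit, limited consumer-product exposure"),
]

# Hard cutoff at 85%: candidate B is silently eliminated
cutoff = [c for c in candidates if c.score >= 85]

# Prioritization: nobody is dropped; review order follows the score,
# and the reasoning travels with the number
prioritized = sorted(candidates, key=lambda c: c.score, reverse=True)
for c in prioritized:
    print(f"{c.score}%  {c.name}  {c.reasoning}")
```

Same scores, same candidates; the only change is whether the number ends the conversation or starts it.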


3. Relying on Generic, One-Size-Fits-All Criteria

The Mistake: Assuming your ATS understands the nuances of your specific role and company context.

Why It's a Problem: Generic scoring criteria miss what actually matters. A "marketing manager" at a 20-person B2B startup needs completely different skills than one at Nike. Most ATS tools can't distinguish between performance marketing and brand marketing, or between luxury retail and fast fashion.

Standard templates often prioritize easily identifiable qualifications (degrees, certifications) over the subtle indicators that predict success in your specific environment.

The Fix: This is where Nova's research-driven approach fundamentally changes the game. Instead of starting with generic templates, Nova researches the career paths of people who actually work in similar roles at companies like yours.

Here's how it works:

  1. Industry Research: Nova analyzes LinkedIn profiles of professionals at companies in your space
  2. Pattern Recognition: Identifies common background patterns, career progression, and skill combinations
  3. Criteria Generation: Creates role-specific criteria based on real successful career paths
  4. Company Context: Factors in your company size, industry, and growth stage

Real Example: When hiring a luxury retail manager, instead of generic "retail experience," Nova might generate criteria like:

  • "Experience with high-net-worth clientele and personalized service delivery"
  • "Background at premium brands with similar price points and customer expectations"
  • "Track record of managing boutique-style customer relationships vs. high-volume transactions"

These insights come from analyzing professionals at brands like Hermès, Cartier, and Tiffany & Co.—intelligence you'd never get from a standard template.


4. Accepting Black Box Scoring

The Mistake: Using tools that provide scores without explanation.

Why It's a Problem: When you can't understand why someone scored high or low, you can't learn from the AI's analysis or catch its mistakes. This leads to either blind trust (dangerous) or complete skepticism (wasteful). You miss opportunities to calibrate your own judgment and improve your hiring process.

The Fix: Demand complete transparency. Nova provides structured assessments that break down exactly why each candidate received their score:

Verdict: "Strong technical fit with leadership potential, but may need support scaling to enterprise clients"

Strengths:

  • Led engineering team of 8 through successful product launch
  • Architected systems handling 500K+ daily active users
  • Strong track record of mentoring junior developers

Concerns:

  • All experience at Series A/B startups—no enterprise environment exposure
  • Limited experience with compliance-heavy industries

Suggested Interview Focus:

  • Explore approach to building systems for regulated environments
  • Discuss strategies for managing larger, more distributed teams
  • Assess comfort level with enterprise sales cycles and stakeholder management

This transparency serves two purposes: it helps you understand who to interview and exactly what to focus on during those conversations.
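
A structured assessment like the one above is, in effect, a record rather than a bare number. One way to picture that shape (a hypothetical schema for illustration, not Nova's actual data model):

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    score: int                  # 0-100 prioritization signal
    verdict: str                # one-line summary judgment
    strengths: list[str]        # evidence supporting the fit
    concerns: list[str]         # gaps or risks to weigh
    interview_focus: list[str]  # what to probe in conversation

assessment = Assessment(
    score=84,  # illustrative number; on its own it would hide everything below
    verdict=("Strong technical fit with leadership potential, "
             "but may need support scaling to enterprise clients"),
    strengths=["Led engineering team of 8 through successful product launch",
               "Architected systems handling 500K+ daily active users"],
    concerns=["All experience at Series A/B startups"],
    interview_focus=["Explore approach to building systems for regulated environments"],
)
```

Every field beyond `score` is what a black-box tool throws away, and it is exactly the material an interviewer needs.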


5. Stopping at the Score Instead of Enabling Better Conversations

The Mistake: Treating scoring as the end goal rather than the beginning of better hiring decisions.

Why It's a Problem: A ranked list of candidates is only valuable if it leads to more effective interviews. Too many teams get scores, schedule interviews, then default to generic conversations: "Tell me about yourself," "Why are you interested in this role?" The AI insight gets wasted on surface-level discussions.

The Fix: Use scoring tools that set you up for interview success. Nova's assessments don't just rank candidates—they prepare you for meaningful conversations.

Before Nova: Generic first call

  • "Walk me through your background"
  • "What interests you about this role?"
  • "What are your salary expectations?"

With Nova: Targeted, insight-driven conversation

  • "I see you've successfully scaled engineering teams at three different startups. What patterns have you noticed about team dynamics as companies grow from 10 to 50 engineers?"
  • "Your background shows strong experience with real-time systems. How would you approach the reliability challenges we're facing with our notification infrastructure?"
  • "Nova flagged that your experience is primarily in consumer products. How do you think about the different constraints and opportunities in B2B software?"

The result? More productive interviews that actually assess fit instead of wasting time on resume reviews you could have done beforehand.


The Real Cost of Poor Scoring Tools

Poor ATS scoring doesn't just waste time—it actively damages your hiring process through two critical failure modes: false positives and false negatives.

False Positives: Wasting Time on Wrong Candidates

When keyword-heavy but inexperienced candidates score 90%+, you spend valuable interview slots on people who can't actually do the job. We've seen teams waste weeks interviewing "top-scoring" candidates who talked a good game but lacked real depth. One client spent a month interviewing 12 highly-scored "senior engineers" only to discover none could architect systems beyond basic CRUD operations.

False Negatives: Missing Your Best Hires

Even worse, truly exceptional candidates with non-traditional backgrounds get filtered out entirely. Career changers, international talent, and people with unconventional resume formats never make it to your interview pipeline. That former teacher who could be your best UX researcher? The consultant who'd excel at product management? They're lost in the 60% pile while you interview keyword stuffers.

The Confusion Cascade

When scoring tools consistently deliver poor results, it creates organizational confusion. Hiring managers stop trusting scores entirely and revert to resume scanning. Recruiters second-guess every AI recommendation. Teams develop workarounds that defeat the purpose of having scoring tools at all.

The Strategic Damage

Beyond individual bad hires, poor scoring tools train your team to ignore AI insights entirely. When scores don't correlate with interview performance, people learn to rely on gut instinct alone—missing out on the genuine benefits that good AI can provide: identifying subtle patterns, processing backgrounds faster than humanly possible, and maintaining consistency across different hiring managers.

The goal isn't to replace human judgment—it's to augment it with reliable, explainable insights that actually improve decision-making.


TL;DR: What Great Scoring Tools Actually Do

The best ATS scoring tools don't just rank candidates—they create a complete intelligence layer for your hiring process:

  1. Research-Driven Criteria: Generate role-specific requirements based on real career paths in your industry
  2. Context-Aware Scoring: Evaluate experience depth and progression, not just keyword matches
  3. Transparent Reasoning: Provide clear explanations with specific evidence from candidate backgrounds
  4. Interview Preparation: Set up every conversation for success with targeted focus areas
  5. Continuous Learning: Improve recommendations based on your hiring decisions and feedback

The difference between good and great hiring isn't just finding qualified candidates—it's having more insightful conversations with the right people at the right time.