ATS score is not a single number. It is the output of five weighted components, each computed differently by each platform. Workday emphasizes keyword match and strict date parsing. Greenhouse performs no algorithmic scoring whatsoever (every rejection is a human decision; Jobscan Greenhouse guide, 2026). Lever uses AI-driven semantic matching that credits synonyms and related terms. iCIMS leans on Boolean-style skills matching. Taleo runs the oldest, most exact-match-hungry algorithm of the five. This article walks through the formula components, shows the weights each platform assigns, and finishes with a step-by-step worked example: a real job description, a real resume, and the math that turns them into an 82 under Workday's weights and an 84 under Lever's.

The Five Components of an ATS Score

Every major scoring ATS evaluates the same five inputs. The differences between platforms are in the weights and the matching logic, not the components themselves. The universal formula looks like this:

Universal ATS score formula

Score = (Keyword Match × W1) + (Placement × W2) + (Sections × W3) + (Parse Success × W4) + (Relevance × W5)

Where W1 + W2 + W3 + W4 + W5 = 100%. Each platform sets its own weights.

Component Weight Range What It Measures
Keyword Match 40–50% Overlap between extracted job-description keywords and keywords in your resume
Placement Multiplier 10–15% Where those keywords appear (summary, title, bullets, skills list)
Section Completeness 15–20% Presence of contact, experience, education, skills, dates
Parse Success 10–20% Did the parser successfully extract structured data?
Relevance Boost 5–15% Tenure, title alignment, education level match
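To make the formula concrete, here is a minimal sketch in Python. The weights and component sub-scores are illustrative, not any vendor's real numbers.

```python
# Sketch of the universal formula: Score = sum(component_i * W_i).
# Weights and sub-scores below are illustrative, not vendor numbers.
def ats_score(components: dict, weights: dict) -> float:
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "W1..W5 must sum to 100%"
    return 100 * sum(components[name] * w for name, w in weights.items())

weights = {          # W1..W5 for a hypothetical platform
    "keyword_match": 0.45, "placement": 0.12, "sections": 0.18,
    "parse_success": 0.15, "relevance": 0.10,
}
components = {       # sub-scores (0.0-1.0) a scanner might assign one resume
    "keyword_match": 0.70, "placement": 0.85, "sections": 1.00,
    "parse_success": 1.00, "relevance": 0.70,
}
print(round(ats_score(components, weights), 1))  # 81.7
```

The rest of this article is about how each of those five sub-scores gets computed and how each platform picks its weights.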

Component 1: Keyword Match (40-50% of Total Score)

The biggest single component. Every scoring ATS extracts a list of keywords from the job description (hard skills, tools, certifications, industry terms) and checks how many appear in your resume. Two matching styles dominate: exact match (the keyword must appear verbatim) and semantic match (the system credits synonyms and related terms). Workday and Taleo lean heavily on exact match. Lever runs semantic match powered by modern embeddings. iCIMS uses a Boolean-weighted hybrid.

How keyword match is computed
  1. Parse the job description into a keyword list (typically 30 to 60 items)
  2. Assign weights: hard skills (2x), certifications (2x), tools (1.5x), soft skills (0.5x)
  3. Check presence of each weighted keyword in your resume
  4. Compute: match_rate = sum(matched_weights) / sum(total_weights)
  5. Multiply by the platform's keyword-match weight (40-50%)
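The five steps above can be sketched in a few lines. The keyword list below is invented for illustration (real extractors pull 30 to 60 items); the category weights follow the scheme in step 2.

```python
# Steps 2-4 of the keyword-match computation. Keywords below are invented
# for illustration; the category weights follow the 2x/2x/1.5x/0.5x scheme.
CATEGORY_WEIGHTS = {"hard_skill": 2.0, "certification": 2.0,
                    "tool": 1.5, "soft_skill": 0.5}

jd_keywords = [          # (keyword, category) pairs a JD parser might emit
    ("python", "hard_skill"), ("django", "hard_skill"),
    ("aws", "tool"), ("docker", "tool"),
    ("communication", "soft_skill"),
]

def match_rate(resume_text: str) -> float:
    text = resume_text.lower()
    total = sum(CATEGORY_WEIGHTS[cat] for _, cat in jd_keywords)
    matched = sum(CATEGORY_WEIGHTS[cat] for kw, cat in jd_keywords if kw in text)
    return matched / total   # step 4: sum(matched_weights) / sum(total_weights)

resume = "Python developer building Django apps on AWS with Docker."
print(f"{match_rate(resume):.0%}")  # 93%: all hard skills hit, one soft skill missed
```

Note the asymmetry: missing the soft skill costs only 0.5 of 7.5 weight units, while missing a hard skill would cost 2.0.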

The weighting matters more than total keyword count. A resume that mentions 20 soft skills but only 3 of the 8 hard skills in the job description will score lower than a resume with 0 soft skills and all 8 hard skills. Use the job description's exact wording for hard skills ("project management" not "managing projects") because exact-match platforms will not credit the variation. For semantic-match platforms, the variation is fine.

Component 2: Placement Multipliers (10-15% of Total Score)

A keyword in your professional summary is worth more than the same keyword buried in bullet 12 of your oldest role. Modern ATS systems weight keyword placement by section, and those multipliers compound. A single "Python" mention in your most recent job title and summary carries more weight than three "Python" mentions in old bullet points.

Location Typical Multiplier Why It Is Weighted Higher
Job title in most recent role 3.0x Signals current function; highest-confidence match
Professional summary / headline 2.5x Parsed as high-level candidate framing
Skills list (dedicated section) 2.0x Structured, easy to match against JD extracted list
Bullet points in recent role 1.5x Evidence of applied use, but lower confidence than titles
Bullet points in older roles (3+ years back) 0.7x Older experience is discounted for recency
Certifications section 2.0x High-confidence hard-skill signal
Education section 1.0x Baseline credibility but not weighted for skill match

The actionable rule: for every top-priority skill in the target job, make sure it appears in your professional summary and your most recent job bullets and your skills list. Three placements with multipliers of 2.5x, 1.5x, and 2.0x sum to a combined weight of 6.0, four times that of a single recent-bullet mention. This is the single highest-ROI optimization for your score.
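Under the table above, a keyword's effective weight is the sum of the multipliers for every section it appears in. A sketch, with hypothetical section labels:

```python
# Compounding placement multipliers from the table above.
# Section labels are hypothetical names a resume parser might assign.
MULTIPLIERS = {
    "recent_title": 3.0, "summary": 2.5, "skills": 2.0,
    "recent_bullets": 1.5, "old_bullets": 0.7,
    "certifications": 2.0, "education": 1.0,
}

def placement_weight(sections_with_keyword: list) -> float:
    return sum(MULTIPLIERS[s] for s in sections_with_keyword)

# "Python" stacked in summary + skills list + a recent bullet:
print(placement_weight(["summary", "skills", "recent_bullets"]))  # 6.0
# versus the same keyword appearing once in an old role's bullets:
print(placement_weight(["old_bullets"]))                          # 0.7
```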

Component 3: Section Completeness (15-20%)

The parser must be able to locate and extract five mandatory sections: contact information, work experience, education, skills, and employment dates. Missing any one of these drops the completeness sub-score dramatically. This is where unconventional section headings silently destroy scores.

Headers That Parse Reliably
  • Professional Summary
  • Work Experience (or Professional Experience)
  • Education
  • Skills (or Technical Skills, Core Competencies)
  • Certifications
  • Projects (for early-career/technical roles)
Headers That Break Parsing
  • "What I Bring" (Work Experience)
  • "My Journey" (also Work Experience)
  • "Knowledge" (Skills)
  • "Learning" (Education)
  • Custom creative section names of any kind
  • Icons or emoji replacing section labels

Employment dates are the most commonly missed piece. Workday in particular downgrades the entire entry if dates are absent or malformed. Use Month YYYY – Month YYYY or MM/YYYY – MM/YYYY. "Summer 2023," "Recent," or "Present" alone (without a start date) all trip parsers.
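A minimal sketch of the kind of date validation a strict parser might apply. The regex and the two accepted formats are assumptions based on the advice above, not Workday's actual rules.

```python
import re

# Validate employment date ranges against the two safe formats above.
# "Present" is accepted only as an end date paired with a start date.
MONTHS = ("January|February|March|April|May|June|July|August|"
          "September|October|November|December")
DATE = rf"(?:(?:{MONTHS}) \d{{4}}|\d{{2}}/\d{{4}})"
RANGE = re.compile(rf"^{DATE}\s*[–-]\s*(?:{DATE}|Present)$")

def dates_parse(date_range: str) -> bool:
    return RANGE.match(date_range) is not None

assert dates_parse("March 2021 – Present")
assert dates_parse("03/2021 - 06/2024")
assert not dates_parse("Summer 2023")   # season names trip parsers
assert not dates_parse("Present")       # end date with no start date
```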

Component 4: Parse Success (10-20%)

Parse success is the gatekeeper component: either the parser extracted structured data or it did not. If it did not, some systems zero out the other components (there is nothing left to score); others allocate partial credit based on how much text was recoverable.

Failures that zero out parse success
  • Image-only PDFs (scanned resumes, where text is pixels, not characters)
  • PDFs exported with outlined text (Figma, Adobe Illustrator default)
  • Protected PDFs with copy-protection flags enabled
  • Corrupted DOCX files (rare but fatal)
Failures that reduce parse success
  • Two-column layouts (parser interleaves columns line-by-line)
  • Text boxes (extracted in arbitrary order, typically after main text)
  • Tables in the skills or experience section
  • Embedded images with text (logos, headers, SmartArt graphics)
  • Non-standard fonts that confuse OCR fallback pipelines
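The zero-versus-partial distinction can be sketched as a toy scorer over the raw text a parser recovered. The threshold and scoring below are invented for illustration, not vendor behavior.

```python
# Toy parse-success scorer. If nothing was recoverable (image-only or
# protected PDF), the component zeroes out; degraded extraction (e.g. an
# interleaved two-column layout) earns partial credit. The 1500-character
# expectation is an invented illustration, not a vendor threshold.
def parse_success(extracted_text: str, expected_chars: int = 1500) -> float:
    text = extracted_text.strip()
    if not text:
        return 0.0                               # hard failure: nothing to score
    return min(len(text) / expected_chars, 1.0)  # partial credit for what survived

assert parse_success("") == 0.0              # scanned/image-only PDF
assert parse_success("x" * 3000) == 1.0      # clean single-column extraction
assert parse_success("x" * 750) == 0.5       # half the text recovered
```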

Component 5: Relevance Boost (5-15%)

The smallest component but the most platform-variable. Workday credits tenure consistency (gaps are penalized). Lever rewards title match (current/recent title closely mirrors the target title). iCIMS gives bonus points for matching education level. Most platforms roll a "did the candidate apply to a role in the same family as their previous roles?" check into this bucket.

Relevance Signal Typical Boost What Triggers It
Title match +3 to +8 points Recent title matches target title (or a close synonym)
Tenure fit +2 to +5 points Years of experience matches the JD minimum
Education level match +1 to +3 points Degree level and field aligned with target
Industry consistency +1 to +3 points Previous employers in same or adjacent industries
Clean employment timeline +1 to +2 points No unexplained gaps, no overlapping dates
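The boost table can be sketched as a lookup that sums the midpoint of each typical range. The signal names are illustrative labels, not platform fields.

```python
# Relevance boosts, using the midpoint of each typical range in the table.
# Signal names are illustrative labels.
BOOSTS = {
    "title_match": 5.5,           # +3 to +8
    "tenure_fit": 3.5,            # +2 to +5
    "education_match": 2.0,       # +1 to +3
    "industry_consistency": 2.0,  # +1 to +3
    "clean_timeline": 1.5,        # +1 to +2
}

def relevance_boost(signals: set) -> float:
    return sum(BOOSTS[s] for s in signals)

# A candidate who hits everything except an exact title match:
print(relevance_boost({"tenure_fit", "education_match",
                       "industry_consistency", "clean_timeline"}))  # 9.0
```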

Platform-Specific Math: Workday vs Greenhouse vs Lever vs iCIMS vs Taleo

Here is how the five major scoring platforms weight the components. Numbers are synthesized from published vendor documentation, third-party parser benchmarks (Hireflow 2026; Jobscan platform guides), and our own scoring engine's calibration data.

Platform Keyword Match Placement Sections Parse Relevance Match Style
Workday 50% 10% 20% 15% 5% Exact + limited semantic
Greenhouse No algorithmic scoring; human scorecards only N/A
Lever 40% 15% 15% 15% 15% Semantic (modern embeddings)
iCIMS 45% 12% 18% 15% 10% Boolean-weighted hybrid
Taleo 55% 8% 17% 15% 5% Exact match (no semantic)

The single most important thing to know: Greenhouse does not auto-score resumes. Recruiters in Greenhouse-powered hiring flows build custom scorecards and manually rate each candidate. If you are applying through a Greenhouse job, your "ATS score" in any external checker is a directional estimate only; the actual decision is human. For Workday and Taleo, the score is real and algorithmic; optimize against the weights in the table above. For Lever and iCIMS, optimize for semantic match by using the exact JD language and adding 2 or 3 synonyms for each hard skill.
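The weight table encodes directly into code. Holding one resume's five sub-scores fixed and varying only the platform weights shows how much the totals drift; the sub-scores here are illustrative.

```python
# Platform weights from the table above (Greenhouse omitted: no scoring).
PLATFORM_WEIGHTS = {
    "Workday": {"keyword": 0.50, "placement": 0.10, "sections": 0.20,
                "parse": 0.15, "relevance": 0.05},
    "Lever":   {"keyword": 0.40, "placement": 0.15, "sections": 0.15,
                "parse": 0.15, "relevance": 0.15},
    "iCIMS":   {"keyword": 0.45, "placement": 0.12, "sections": 0.18,
                "parse": 0.15, "relevance": 0.10},
    "Taleo":   {"keyword": 0.55, "placement": 0.08, "sections": 0.17,
                "parse": 0.15, "relevance": 0.05},
}

def score_on(platform: str, subs: dict) -> float:
    return 100 * sum(subs[c] * w for c, w in PLATFORM_WEIGHTS[platform].items())

# One resume's sub-scores (0-1), held constant across platforms:
subs = {"keyword": 0.70, "placement": 0.85, "sections": 1.0,
        "parse": 1.0, "relevance": 0.70}
for platform in PLATFORM_WEIGHTS:
    print(platform, round(score_on(platform, subs), 1))
```

Same resume, four different totals: the spread comes entirely from the weights.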

Worked Example: Calculating a Real Score

Let's run an actual calculation on a Senior Python Developer job description with a sample resume. We will score it on Workday and Lever to show why the same resume scores differently.

Job description extract (Senior Python Developer)

Must have skills: Python, Django, PostgreSQL, AWS, Docker, REST APIs, Git, CI/CD

Preferred: Kubernetes, Celery, Redis, microservices architecture

Required: 5+ years of Python experience, Bachelor's in CS or equivalent

Sample resume highlights

Professional Summary: Python developer with 6 years building Django applications on AWS. Experienced with PostgreSQL, Docker, REST API design, Git workflows, and CI/CD pipelines.

Skills: Python, Django, Flask, PostgreSQL, MongoDB, AWS, Docker, Git, CI/CD, Jenkins, REST

Recent title: Senior Software Engineer (not "Senior Python Developer")

Work experience (last 3 years): Heavy Django, AWS, Docker, PostgreSQL mentions. No Kubernetes or Celery. Mentions "container orchestration" but not Kubernetes directly.

Education: B.S. Computer Science

Workday calculation

  • Keyword match (weight 50%): 8 of 8 must-haves found (100%), 0 of 4 preferred (0%). With must-haves weighted 70% and preferred 30%: (100% × 0.7) + (0% × 0.3) = 70%. Component score: 70% × 50% = 35 points
  • Placement (weight 10%): "Python" in the summary (2.5x), skills list (2.0x), and recent bullets (1.5x). Strong placement. Component score: 85% × 10% = 8.5 points
  • Sections (weight 20%): All 5 mandatory present with standard headers. Component score: 100% × 20% = 20 points
  • Parse success (weight 15%): Single-column Word doc, clean structure. Component score: 100% × 15% = 15 points
  • Relevance (weight 5%): Title is "Senior Software Engineer" not "Senior Python Developer", so partial match. Tenure 6 years vs 5+ required: fit. Education matches. Component score: 70% × 5% = 3.5 points

Workday total: 35 + 8.5 + 20 + 15 + 3.5 = 82 out of 100

Lever calculation (same resume, same JD)

  • Keyword match (weight 40%): 8 of 8 must-haves (100%). Semantic match credits "container orchestration" as a partial match for "Kubernetes" (+0.5), so preferred credit is 0.5 of 4 (12.5%). (Jenkins would also credit "CI/CD," but that must-have is already counted.) Weighted: (100% × 0.7) + (12.5% × 0.3) = 73.75%. Component score: 73.75% × 40% = 29.5 points
  • Placement (weight 15%): Same placements as Workday. Component score: 85% × 15% = 12.75 points
  • Sections (weight 15%): All 5 present. Component score: 100% × 15% = 15 points
  • Parse success (weight 15%): Component score: 100% × 15% = 15 points
  • Relevance (weight 15%): Lever weights title match harder. "Senior Software Engineer" is a semantic neighbor of "Senior Python Developer" (close but not exact). Tenure fit, industry consistent. Component score: 80% × 15% = 12 points

Lever total: 29.5 + 12.75 + 15 + 15 + 12 = 84 out of 100
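The arithmetic in both calculations checks out line by line; the sub-scores are the percentages given in each bullet above.

```python
# Reproduce the worked-example arithmetic from the bullets above.
# Component sub-scores are the percentages given in each bullet.
workday = 0.70 * 50 + 0.85 * 10 + 1.00 * 20 + 1.00 * 15 + 0.70 * 5
lever = 0.7375 * 40 + 0.85 * 15 + 1.00 * 15 + 1.00 * 15 + 0.80 * 15

assert round(workday, 2) == 82.0
assert round(lever, 2) == 84.25   # reported as 84 out of 100
```

The gap between the two totals is small here because the resume is strong on keywords; a resume weak on preferred skills but strong on title match would diverge much further.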

Why the Same Resume Scores Differently on Different Platforms

Three reasons your Jobscan score can differ from your Resume Worded score by 15 points or more:

  1. Different keyword extractors. Each scoring tool parses the job description with its own NLP pipeline. One tool might pull 40 keywords from the JD, another 55. The denominator in the match-rate calculation changes, so the ratio changes.
  2. Different match logic. Exact-match tools penalize singular/plural variations; semantic-match tools credit them. "API" vs "APIs" is a match on Lever, a miss on Taleo.
  3. Different weights. One checker might weight keyword match at 60%; another at 40%. Your keyword-heavy resume scores better on the first, your structure-heavy resume scores better on the second.
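Reason 1 is easy to demonstrate: score the same resume against two hypothetical extractors that pull different keyword lists from the same JD (both lists invented for illustration).

```python
# Two hypothetical extractors parse the same JD into different keyword sets.
extractor_a = {"python", "django", "aws", "docker"}                     # pulls 4
extractor_b = {"python", "django", "aws", "docker", "celery", "redis"}  # pulls 6

resume_keywords = {"python", "django", "aws", "docker"}

rate_a = len(resume_keywords & extractor_a) / len(extractor_a)  # 4/4
rate_b = len(resume_keywords & extractor_b) / len(extractor_b)  # 4/6
print(f"{rate_a:.0%} vs {rate_b:.0%}")  # 100% vs 67% -- same resume, same JD
```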

The actionable takeaway: treat absolute scores as directional. Run your resume through two tools. If both clear 75%, you are in the interview zone on most platforms. If the two scores disagree by more than 15 points, read the lower score's feedback first; it is telling you something the higher score missed.

How to Push Your Score from 60 to 85

Most resumes sit in the 55-70 range on a first scan. Here is the order of operations to get into the 80s.

1. Fix parsing first (costs 15 points if broken)

Run the copy-paste test. If your structure scrambles, fix the columns, tables, and text boxes before anything else. No amount of keyword work matters if the parser cannot read your resume. See our 5 ATS tests ranked guide.

2. Close the hard-skill gap (costs 10-20 points)

Every must-have keyword from the JD that is missing from your resume costs you. Add each missing keyword in a natural context: bullet, summary, or skills list. Exact wording matters on Workday and Taleo.

3. Stack placements (costs 3-8 points if weak)

For each top-3 hard skill, ensure it appears in the summary + skills + a bullet in your recent role. Those three placements (2.5x + 2.0x + 1.5x) sum to a combined weight of 6.0, several times that of a single buried mention.

4. Align title (costs 3-8 points on Lever)

Use a parenthetical or slash to align your current title with the target. "Senior Software Engineer (Python Developer)" is safe and captures both exact and semantic credit.

5. Tighten section headers (costs 4-8 points)

Use standard section names: Professional Summary, Work Experience, Education, Skills, Certifications. Creative alternatives silently tank your sections score.

6. Verify, do not guess

Run the improved resume through a second scan. If your score did not move from the fix, the fix did not land. Iterate. Our free ATS resume checker shows parse-level detail on every change.

Frequently Asked Questions

How is an ATS score calculated?

It varies by platform. Workday, iCIMS, Taleo, and Lever all compute scores, but with different weights and matching logic. Greenhouse does not auto-score at all. External checkers (Jobscan, Resume Worded, Enhancv, Resume Optimizer Pro) each use their own algorithms calibrated against the major platforms. Expect 10-15 point variations between tools on the same resume.

Does Greenhouse compute an ATS score?

It does not. Greenhouse operates on custom recruiter scorecards. A human recruiter builds a scorecard with criteria like "5+ years of Python experience," "Django production experience," etc., and manually rates each candidate against those criteria. The output is a weighted rating, but the rating comes from a person, not an algorithm. This is documented in Jobscan's 2026 Greenhouse ATS guide.

Can keyword stuffing raise your score?

No, and modern systems actively penalize it. Workday's parser flags unnatural repetition. Lever's semantic matcher applies diminishing returns to repeated keywords, so the 10th mention of "Python" scores no more than the 3rd. iCIMS is the most exploitable of the five but still caps the benefit. Optimize for placement (summary + skills + 1-2 bullets), not raw density. Anything beyond that is wasted.

What counts as a good ATS score?

Industry benchmark is 75%+ for most mid-level roles and 80%+ for competitive senior roles. Entry-level positions clear at 60-70%. These numbers apply to scores computed against a specific job description, not a generic "how ATS-friendly is my resume" score. The job-description-agnostic score is less meaningful; always test against the target role.

Why do different checkers score the same resume differently?

Three reasons: different keyword extractors pull different keyword lists from the JD, different match logic (exact vs semantic) credits variations differently, and different component weights amplify different strengths. A structure-heavy resume can score higher on a tool that weights sections at 25% than on one that weights them at 15%. Treat absolute scores as directional. If two independent tools both clear 75%, trust the result.

Which matters more, keyword placement or keyword density?

Placement dominates density once a keyword appears at least once. Placement multipliers (up to 3.0x for job title, 2.5x for summary) compound, so a single keyword placed in three high-multiplier locations outscores the same keyword repeated five times in low-multiplier locations. The practical rule: make sure every top-3 JD keyword appears in your summary, skills section, and at least one recent bullet. Do not mention it 10 times.

Which ATS platforms use AI-driven matching?

Lever uses modern AI-driven parsing with embeddings for semantic matching, which credits related terms. Workday uses a mix: rule-based for structured fields (dates, education) and modern language models for keyword extraction. Taleo remains primarily rule-based and exact-match. iCIMS combines Boolean keyword logic with a machine-learned skills taxonomy. The trend is toward more semantic capability across all platforms, but the pace varies.

The Bottom Line

ATS score is the output of five weighted components: keyword match (40-50%), placement (10-15%), section completeness (15-20%), parse success (10-20%), and relevance boost (5-15%). Weights vary by platform. Greenhouse is the outlier; it does no algorithmic scoring at all. Workday and Taleo reward exact keyword match and strict structure. Lever and iCIMS credit semantic variations. The universal lever with the highest ROI is placement: make sure every top-3 skill from the target JD appears in your summary, skills list, and a recent bullet. That single change moves most resumes from the 60s into the 80s.

When you are ready to verify, run your resume through our free ATS resume checker to see component-by-component scoring against a specific job description.