Methodology

    The Deckmetric methodology

    This page documents the Deckmetric scoring methodology: the CVM framework and the weight on each dimension, the nine benchmark VC frameworks we calibrate against, the four-stage analysis pipeline, what we deliberately do not score, and the limitations of the system. Every score Deckmetric publishes is traceable back to a rubric on this page or the linked per-framework page.

    How does Deckmetric score a pitch deck?

    Every uploaded deck is scored against the proprietary Captivate-Validate-Motivate (CVM) framework: Captivate (35% weight), Validate (40% weight), and Motivate (25% weight). The same deck is then benchmarked against 9 published VC frameworks. The pipeline runs in four stages: extraction → classification → scoring → recommendations.

    • Headline framework: Captivate-Validate-Motivate (CVM), proprietary, authored by Sebastian Scheplitz.
    • Weighting: Captivate 35% · Validate 40% · Motivate 25%.
    • Benchmark frameworks: 9 published VC frameworks for cross-investor calibration.
    • Pipeline: Extraction → classification → scoring → prioritized recommendations.

    Why a structured framework instead of generic AI feedback

    Generic AI feedback on a pitch deck is unfalsifiable. It tells you what sounds good. A structured framework tells you what an investor partner is actually scoring against and lets you challenge the score. Every dimension on Deckmetric has a stated weight and, in the per-dimension rubric published below, the observable evidence and failure patterns the score is tied to. That is the only kind of feedback a founder can act on with confidence the day before a partner meeting.

    The CVM framework, dimension by dimension

    Each CVM dimension below carries an explicit weight, an observable evidence rubric, and the failure patterns the score is most often docked for. Every score Deckmetric publishes is traceable back to one of these dimensions.

    Captivate measures whether the deck earns a second look in the first 30 seconds. Investors triage decks at speed, so this dimension scores the opening hook, the framing of the problem, and the emotional resonance of the first three slides, the part of the deck that decides whether the rest gets read at all.

    Evidence we score for

    • A first slide that names the company, the category, and the one-line wedge in plain English, no jargon, no buzzwords.
    • A problem statement framed as a costly, specific status quo a target customer is paying for today, not a generic market complaint.
    • An opening hook (a stat, an anecdote, or a surprising contrarian claim) that reframes the problem in a way the reader has not seen before.

    Common failure patterns

    • Buried lede: the company is impossible to describe after the first slide.
    • Manifesto opening: three slides of vision and TAM before the reader knows what is actually being sold.

    Validate measures whether the deck's claims survive a partner-meeting cross-examination. This is the heaviest dimension because it is the one investors cannot rationalize past in diligence: every assertion about market, traction, willingness-to-pay, and unit economics has to land with evidence the reader can audit.

    Evidence we score for

    • Traction reported as concrete numbers with a time series (e.g., MRR by month for the last six months) rather than directional words like 'rapid' or 'strong'.
    • A bottoms-up market sizing tied to identifiable customer segments and a defensible average contract value, not just a top-down analyst report (a worked example follows this list).
    • Unit economics, gross margin, or willingness-to-pay evidence that an investor can sanity-check without an extra call.
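
    To make the bottoms-up criterion concrete, here is a minimal sketch of the arithmetic we look for: named customer segments, a reachable customer count per segment, and a defensible average contract value. The segment names and numbers below are hypothetical placeholders, not Deckmetric data.

    ```python
    # Hypothetical bottoms-up sizing: segments, counts, and ACVs are
    # illustrative placeholders, not Deckmetric data.
    segments = [
        # (segment, reachable customers, average contract value in USD/year)
        ("mid-market logistics", 4_000, 18_000),
        ("3PL warehouses",       1_500, 30_000),
    ]

    sam = sum(count * acv for _, count, acv in segments)
    print(f"Bottoms-up SAM: ${sam:,.0f}/year")  # Bottoms-up SAM: $117,000,000/year
    ```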

    Common failure patterns

    • Hockey-stick projections with no current-period traction to anchor them.
    • Customer logos used as social proof when the underlying revenue per logo is small or undisclosed.

    Motivate measures whether the deck closes: whether the reader finishes it knowing exactly what is being asked, why now, and what happens next. This dimension scores the ask slide, the use-of-funds, and the urgency narrative that converts an interested reader into a calendar invite.

    Evidence we score for

    • A specific ask: round size, instrument, target close date, and what the round funds, not a vague 'raising a seed.'
    • A 'why now' that ties the raise to a market, regulatory, or technology window the reader can verify exists.
    • A use-of-funds breakdown showing the next 18 months of milestones the round is meant to unlock.

    Common failure patterns

    • No ask slide, or an ask reduced to a single round-size number with no timeline or use-of-funds.
    • A close that ends on the team slide and never tells the reader what action to take.

    Why Validate carries the most weight

    Validate is 40% of the score because it is the only dimension an investor cannot rationalize past in diligence. Captivate gets the deck read. Motivate gets the meeting. Weak validation, however, kills the deal in diligence regardless of how strong the other two are. The weighting reflects that order of failure modes.
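
    As a concrete illustration of how the three stated weights combine into a headline score, here is a minimal sketch. The function name and the 0-100 per-dimension scale are assumptions for illustration; only the 35/40/25 weights come from the rubric above.

    ```python
    # Minimal sketch: combining the published CVM weights into a headline score.
    # The 0-100 per-dimension scale and the function name are illustrative
    # assumptions; only the weights come from the rubric on this page.
    CVM_WEIGHTS = {"captivate": 0.35, "validate": 0.40, "motivate": 0.25}

    def cvm_headline(scores: dict[str, float]) -> float:
        """Weighted average of the three CVM dimension scores (each 0-100)."""
        return sum(CVM_WEIGHTS[dim] * scores[dim] for dim in CVM_WEIGHTS)

    # Example: strong hook and close, weak evidence. The heavy Validate weight
    # drags the headline down despite the other two dimensions.
    print(round(cvm_headline({"captivate": 85, "validate": 55, "motivate": 80}), 2))  # 71.75
    ```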

    The 9 benchmark VC frameworks

    The CVM grade is the headline. The 9 benchmark VC frameworks are how we tell you which type of investor your current deck is actually best suited to. Each framework is a rubric assembled from the firm’s own published materials. Click through for the per-framework scoring criteria.

    The nine benchmark VC frameworks Deckmetric scores against, with a short note describing each framework’s emphasis and a link to its detail page.
    • Y Combinator: evidence-based, traction-focused
    • Sequoia Capital: complete narrative arc, market timing
    • Andreessen Horowitz (a16z): technical moat, platform potential
    • 500 Global: growth loops, scalable channels
    • First Round Capital: founder-market fit, learning velocity
    • Tiger Global: growth at scale, capital efficiency
    • Family Office: downside protection, cash conversion
    • Techstars: mentor-readiness, accelerator narrative
    • Antler: early-stage clarity and team coherence

    The four-stage scoring pipeline

    Four stages run in sequence on every uploaded deck. The full list of named sub-processors that touch your deck, our retention windows, and how to delete every artifact in one click are documented in the Trust Center.

    Step 1, Extraction

    Your uploaded deck is parsed slide by slide. Headings, body copy, charts, and supporting numbers are extracted into a structured representation so the scoring engine can reason about each slide independently rather than treating the deck as one big blob of text.
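
    As a sketch of what "structured representation" means here, each slide might be reduced to a small record the later stages can reason about independently. The field names below are illustrative assumptions, not the actual internal schema.

    ```python
    from dataclasses import dataclass, field

    # Illustrative sketch of a per-slide record produced by extraction.
    # Field names are assumptions, not Deckmetric's actual internal schema.
    @dataclass
    class ExtractedSlide:
        index: int                                             # position in the deck
        heading: str                                           # slide title as parsed
        body: str                                              # body copy, concatenated
        numbers: list[float] = field(default_factory=list)     # figures pulled from charts and tables
        chart_labels: list[str] = field(default_factory=list)  # axis and series labels, if any
    ```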

    Step 2, Classification

    Each slide is mapped to its narrative role: problem, solution, market, traction, business model, team, ask, and so on. This step is what allows the engine to detect missing slides (a deck without a market-sizing slide is scored differently from one that has it) and to score the order of the deck against the canonical VC-preferred flow.
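
    A minimal sketch of what this step enables, assuming a hypothetical canonical slide order and role labels; the real classifier and its taxonomy are not shown.

    ```python
    # Hypothetical canonical order and role labels, for illustration only.
    CANONICAL_FLOW = ["problem", "solution", "market", "traction",
                      "business_model", "team", "ask"]

    def classify_gaps(slide_roles: list[str]) -> dict:
        """Given the predicted role of each slide, report missing roles and
        how far the deck's order drifts from the canonical flow."""
        missing = [r for r in CANONICAL_FLOW if r not in slide_roles]
        # Count adjacent pairs that appear out of canonical order.
        rank = {r: i for i, r in enumerate(CANONICAL_FLOW)}
        ordered = [r for r in slide_roles if r in rank]
        inversions = sum(1 for a, b in zip(ordered, ordered[1:]) if rank[a] > rank[b])
        return {"missing": missing, "order_inversions": inversions}

    print(classify_gaps(["problem", "market", "solution", "traction", "team"]))
    # {'missing': ['business_model', 'ask'], 'order_inversions': 1}
    ```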

    Step 3, Scoring

    The classified deck is scored against the CVM framework's weighted dimensions and against each of the 9 VC benchmark frameworks. Every framework has its own dimension weights, so the same slide can score differently across different investor lenses. You receive a CVM headline score, per-dimension breakdowns, and per-framework benchmark scores, every one traceable to the rubric on this page or on the per-framework page.
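
    A minimal sketch of the per-framework step, assuming each framework is simply a different weight vector over a shared set of dimension scores. The dimensions and weights below are made up to show the effect; the real per-framework weights live on the linked framework pages.

    ```python
    # Illustration only: weights here are invented to show why the same deck
    # scores differently under different investor lenses.
    dimension_scores = {"traction": 62, "market": 78, "team": 85, "narrative": 70}

    framework_weights = {
        "traction_focused":  {"traction": 0.5, "market": 0.2, "team": 0.2, "narrative": 0.1},
        "narrative_focused": {"traction": 0.2, "market": 0.3, "team": 0.2, "narrative": 0.3},
    }

    for name, weights in framework_weights.items():
        score = sum(w * dimension_scores[dim] for dim, w in weights.items())
        print(name, round(score, 1))
    # traction_focused 70.6
    # narrative_focused 73.8
    ```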

    Step 4, Recommendations

    Finally, the engine ranks the highest-leverage rewrites, the changes that would most improve your scores, and produces an investor-fit shortlist of which framework profiles your current deck best matches. You leave with a prioritized to-do list, not a wall of feedback.
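
    A minimal sketch of the prioritization idea: estimate how much each candidate rewrite would move the weighted headline score, then sort by impact. The candidate rewrites and uplift figures are hypothetical, not engine output.

    ```python
    # Hypothetical candidates: (rewrite, dimension affected, estimated point
    # uplift on that dimension). Values are illustrative, not engine output.
    candidates = [
        ("Add 6-month MRR time series to traction slide", "validate",  20),
        ("Rewrite slide 1 one-liner in plain English",    "captivate", 10),
        ("Add round size, timeline and use-of-funds",     "motivate",  15),
    ]

    CVM_WEIGHTS = {"captivate": 0.35, "validate": 0.40, "motivate": 0.25}

    # Rank by impact on the weighted headline score.
    ranked = sorted(candidates, key=lambda c: CVM_WEIGHTS[c[1]] * c[2], reverse=True)
    for rewrite, dim, uplift in ranked:
        print(f"{CVM_WEIGHTS[dim] * uplift:+.2f} pts  {rewrite}")
    # +8.00 pts  Add 6-month MRR time series to traction slide
    # +3.75 pts  Add round size, timeline and use-of-funds
    # +3.50 pts  Rewrite slide 1 one-liner in plain English
    ```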

    What Deckmetric does not score

    We do not score the underlying business viability or predict funding outcomes. We do not score the founder personally. We do not assess whether your market is a good market, only whether your deck communicates the market clearly. We do not score legal, tax, or accounting representations on the deck. And we do not score live deck delivery: the rubric is for the artifact, not the verbal pitch around it.

    Limitations and calibration

    The CVM framework is a structured rubric, not a verdict. Two competent advisors looking at the same deck will sometimes weight the same evidence pattern differently; the explicit dimension weights on this page are how we make our trade-offs auditable. The 9 benchmark frameworks were assembled from publicly shared firm materials (portfolio post-mortems, partner blog posts, published rubrics) and represent our best read of how each firm actually scores; where firms have not published explicit rubrics, the framework reflects the most consistent themes across their public commentary.

    See the methodology applied to your deck

    Upload your pitch deck and get a CVM grade plus the 9 benchmark framework scores, every score traceable back to a rubric on this page.