This page documents the Deckmetric scoring methodology: the CVM framework and the weight on each dimension, the nine benchmark VC frameworks we calibrate against, the four-stage analysis pipeline, what we deliberately do not score, and the limitations of the system. Every score Deckmetric publishes is traceable back to a rubric on this page or the linked per-framework page.
Every uploaded deck is scored against the proprietary Captivate-Validate-Motivate (CVM) framework: Captivate (35% weight), Validate (40% weight), and Motivate (25% weight). The same deck is then benchmarked against nine published VC frameworks. The pipeline runs in four stages: extraction → classification → scoring → recommendations.
Generic AI feedback on a pitch deck is unfalsifiable: it tells you what sounds good. A structured framework tells you what an investor partner is actually scoring against and lets you challenge the score. Every dimension on Deckmetric carries a stated weight, a per-dimension rubric published below, and the observable evidence and failure patterns the score is tied to. That is the only kind of feedback a founder can act on with confidence the day before a partner meeting.
Each CVM dimension below carries an explicit weight, an observable evidence rubric, and the failure patterns the score is most often docked for. Every score Deckmetric publishes is traceable back to one of these dimensions.
Captivate measures whether the deck earns a second look in the first 30 seconds. Investors triage decks at speed, so this dimension scores the opening hook, the framing of the problem, and the emotional resonance of the first three slides: the part of the deck that decides whether the rest gets read at all.
Validate measures whether the deck's claims survive a partner-meeting cross-examination. This is the heaviest dimension because it is the one investors cannot rationalize past in diligence: every assertion about market, traction, willingness-to-pay, and unit economics has to land with evidence the reader can audit.
Motivate measures whether the deck closes: whether the reader finishes it knowing exactly what is being asked, why now, and what happens next. This dimension scores the ask slide, the use-of-funds, and the urgency narrative that converts an interested reader into a calendar invite.
Validate is 40% of the score because it is the only dimension an investor cannot rationalize past in diligence. Captivate gets the deck read. Motivate gets the meeting. Weak validation, however, kills the deal in diligence regardless of how strong the other two are. The weighting reflects that order of failure modes.
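The headline score described above is a weighted sum of the three dimension scores. A minimal sketch of that combination, assuming a 0–100 scale per dimension (the scale and the function name are illustrative; only the weights come from this page):

```python
# Hypothetical sketch of the CVM headline score: a weighted sum of the
# three dimension scores using the stated 35/40/25 weights. The 0-100
# per-dimension scale is an assumption for illustration.
CVM_WEIGHTS = {"captivate": 0.35, "validate": 0.40, "motivate": 0.25}

def cvm_headline(scores: dict[str, float]) -> float:
    """Combine per-dimension scores (0-100) into the weighted headline."""
    return round(sum(CVM_WEIGHTS[d] * scores[d] for d in CVM_WEIGHTS), 1)

# A deck that captivates (80) but validates weakly (40) is dragged down
# by the 40% Validate weight, reflecting the "order of failure modes":
print(cvm_headline({"captivate": 80, "validate": 40, "motivate": 70}))  # 61.5
```

The 40% Validate weight means a weak Validate score caps the headline no matter how strong the opening and the close are.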
The CVM grade is the headline. The nine benchmark VC frameworks are how we tell you which type of investor your current deck is actually best suited to. Each framework is a published rubric assembled from the firm's own materials. Click through for the per-framework scoring criteria.
| Framework | Emphasis | Read the rubric |
|---|---|---|
| Y Combinator | evidence-based, traction-focused | Read the rubric |
| Sequoia Capital | complete narrative arc, market timing | Read the rubric |
| Andreessen Horowitz (a16z) | technical moat, platform potential | Read the rubric |
| 500 Global | growth loops, scalable channels | Read the rubric |
| First Round Capital | founder-market fit, learning velocity | Read the rubric |
| Tiger Global | growth at scale, capital efficiency | Read the rubric |
| Family Office | downside protection, cash conversion | Read the rubric |
| Techstars | mentor-readiness, accelerator narrative | Read the rubric |
| Antler | early-stage clarity and team coherence | Read the rubric |
Four stages run in sequence on every uploaded deck. The full list of named sub-processors that touch your deck, our retention windows, and how to delete every artifact in one click are documented in the Trust Center.
Your uploaded deck is parsed slide by slide. Headings, body copy, charts, and supporting numbers are extracted into a structured representation so the scoring engine can reason about each slide independently rather than treating the deck as one big blob of text.
Each slide is mapped to its narrative role: problem, solution, market, traction, business model, team, ask, and so on. This step is what allows the engine to detect missing slides (a deck without a market-sizing slide is scored differently from one that has it) and to score the order of the deck against the canonical VC-preferred flow.
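The missing-slide and ordering checks in this step can be sketched as a comparison against a canonical role sequence. The role names and canonical order below are illustrative assumptions, not the engine's actual taxonomy:

```python
# Hypothetical sketch of the classification step's flow audit: once each
# slide is mapped to a narrative role, comparing the observed roles
# against a canonical flow surfaces gaps and ordering problems.
CANONICAL_FLOW = ["problem", "solution", "market", "traction",
                  "business_model", "team", "ask"]

def audit_flow(classified_roles: list[str]) -> dict[str, list[str]]:
    """Report which canonical roles are absent and whether the present
    roles deviate from the canonical order."""
    missing = [r for r in CANONICAL_FLOW if r not in classified_roles]
    present = [r for r in classified_roles if r in CANONICAL_FLOW]
    in_order = sorted(present, key=CANONICAL_FLOW.index)
    out_of_order = [] if present == in_order else present
    return {"missing": missing, "out_of_order": out_of_order}

# A deck with no market or business-model slide, in canonical order:
print(audit_flow(["problem", "solution", "traction", "team", "ask"]))
```

The point of the structured check is that "no market slide" becomes a concrete, scorable fact rather than a vague impression.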
The classified deck is scored against the CVM framework's weighted dimensions and against each of the nine VC benchmark frameworks. Every framework has its own dimension weights, so the same slide can score differently across different investor lenses. You receive a CVM headline score, per-dimension breakdowns, and per-framework benchmark scores, every one traceable to the rubric on this page or on the per-framework page.
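The "same deck, different lenses" effect follows directly from per-framework weights. A minimal sketch, with illustrative weights that are NOT the published rubrics (each firm's real dimension set lives on its per-framework page):

```python
# Hypothetical sketch of per-framework benchmarking: each framework
# re-weights the same per-dimension evidence. The weights below are
# invented for illustration only, not the firms' actual rubrics.
FRAMEWORK_WEIGHTS = {
    "Y Combinator":    {"captivate": 0.20, "validate": 0.55, "motivate": 0.25},
    "Sequoia Capital": {"captivate": 0.40, "validate": 0.35, "motivate": 0.25},
}

def benchmark(scores: dict[str, float]) -> dict[str, float]:
    """Score one deck under every framework's weighting."""
    return {
        name: sum(w[d] * scores[d] for d in w)
        for name, w in FRAMEWORK_WEIGHTS.items()
    }

# A narrative-strong, evidence-light deck fares better under a
# narrative-weighted lens than a traction-weighted one:
deck = {"captivate": 85, "validate": 55, "motivate": 70}
print(benchmark(deck))
```

This is why the benchmark scores double as an investor-fit signal: the frameworks where the deck scores highest indicate which investor profiles its current strengths match.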
Finally, the engine ranks the highest-leverage rewrites, the changes that would most improve your scores, and produces an investor-fit shortlist of which framework profiles your current deck best matches. You leave with a prioritized to-do list, not a wall of feedback.
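One plausible way to model "highest-leverage" is dimension weight times remaining gap: a fix matters more when the dimension is both heavily weighted and far from full marks. This is a sketch of that idea under assumed names and a 0–100 scale, not the engine's actual ranking logic:

```python
# Hypothetical leverage model for the recommendations stage: the payoff
# of improving a dimension is approximated as its weight times the gap
# to a full score. Names, scale, and formula are illustrative.
CVM_WEIGHTS = {"captivate": 0.35, "validate": 0.40, "motivate": 0.25}

def prioritize(scores: dict[str, float], max_score: float = 100) -> list[tuple[str, float]]:
    """Rank dimensions by potential headline-score gain, largest first."""
    leverage = {d: CVM_WEIGHTS[d] * (max_score - scores[d]) for d in CVM_WEIGHTS}
    return sorted(leverage.items(), key=lambda kv: kv[1], reverse=True)

# Validate ranks first: its gap is the biggest AND its weight the heaviest.
print(prioritize({"captivate": 80, "validate": 40, "motivate": 70}))
```

Ranking by weighted gap is what turns a wall of feedback into a short, ordered to-do list: the top entry is the rewrite with the largest possible effect on the headline score.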
We do not score the underlying business viability or predict funding outcomes. We do not score the founder personally. We do not assess whether your market is a good market, only whether your deck communicates the market clearly. We do not score legal, tax, or accounting representations on the deck. And we do not score live deck delivery; the rubric is for the artifact, not the verbal pitch around it.
The CVM framework is a structured rubric, not a verdict. Two competent advisors looking at the same deck will sometimes weight the same evidence pattern differently; the explicit dimension weights on this page are how we make our trade-offs auditable. The nine benchmark frameworks were assembled from publicly shared firm materials (portfolio post-mortems, partner blog posts, published rubrics) and represent our best read of how each firm actually scores; where firms have not published explicit rubrics, the framework reflects the most consistent themes across their public commentary.
Deep dives on how the CVM rubric scores each of the slides investors weigh hardest.