Cohort-Based Unit Economics: Why Monthly Snapshots Lie and How to Build a True P&L by Acquisition Cohort
Your company's monthly revenue is growing 20% year-over-year. Your unit economics are deteriorating. Both statements are true simultaneously — and you'll never see the second one in an aggregate P&L.
TL;DR: A company can show 20% aggregate revenue growth while every underlying acquisition cohort is deteriorating -- a phenomenon driven by Simpson's Paradox, where old high-quality cohorts mask the declining economics of newer ones. Building a P&L by acquisition cohort instead of monthly snapshot reveals these hidden trends 12-18 months before they surface in aggregate metrics, when fixing them costs 2-3x less.
The Number That Lies to Your Face Every Month
Your monthly P&L says revenue grew from $1.2M to $1.44M. That is 20% year-over-year. The board is happy. The investors are happy. The team gets congratulatory Slack messages.
Here is what the monthly P&L does not say: every cohort of customers acquired in the last nine months is less profitable than the one before it. Your newest customers cost 40% more to acquire, retain at 15% lower rates, and generate 22% less gross margin per dollar of revenue than customers you acquired eighteen months ago.
Both things are true. Revenue is growing. Economics are deteriorating. The growth is simply large enough — for now — to obscure the rot underneath.
This is not a hypothetical. It is the default condition of most scaling SaaS companies. And the reason it goes undetected is structural: the standard monthly P&L is designed to answer "how did the whole business do this month?" It is incapable of answering "is each successive generation of customers better or worse than the last?"
That second question is the only one that matters for long-term survival.
Monthly aggregates blend old, high-quality cohorts with new, lower-quality ones. The old cohorts prop up the averages. Growth in new customer count keeps the topline moving. And the CEO presents a slide that shows up and to the right, blissfully unaware that the foundation is cracking.
Caution
A company can show improving aggregate metrics for 12-18 months while every underlying cohort is deteriorating. By the time the aggregate numbers turn, the damage is structural and the fix requires 2-3x the investment it would have taken to correct early.
The fix is not complicated. It is a different way of slicing the same data. But it requires abandoning the comforting fiction that a single monthly number tells you the health of your business.
Simpson's Paradox Is Eating Your SaaS Metrics
In 1951, Edward Simpson formalized a statistical phenomenon that had been known informally for decades: a trend that appears in several different groups of data can disappear or reverse when those groups are combined. It is one of the most counterintuitive results in statistics, and it runs through SaaS metrics like termites through drywall.
Here is a concrete example. Suppose you have two acquisition channels: organic search and paid social.
Channel-Level Retention vs. Blended Retention — A Simpson's Paradox Example
| Period | Organic Retention | Organic % of Cohort | Paid Social Retention | Paid Social % of Cohort | Blended Retention |
|---|---|---|---|---|---|
| Q1 2025 | 92% | 70% | 68% | 30% | 84.8% |
| Q2 2025 | 90% | 55% | 65% | 45% | 78.8% |
| Q3 2025 | 88% | 40% | 62% | 60% | 72.4% |
| Q4 2025 | 86% | 30% | 60% | 70% | 67.8% |
Look at the blended retention column: 84.8% down to 67.8%. It looks like the business is falling apart. A monthly P&L reader might conclude that product quality is declining, or that the support team is failing, or that competitors are winning.
But look at each channel individually. Organic retention went from 92% to 86% — a modest decline. Paid social went from 68% to 60% — also a modest decline. Neither channel is collapsing. What changed is the mix. Paid social grew from 30% to 70% of the cohort. The lower-retention channel now dominates the average.
The aggregate number is mathematically correct and directionally misleading. Both retention rates declined slightly, but the blended rate plummeted because the composition shifted. A monthly P&L cannot distinguish between "our product is getting worse" and "we're acquiring a different mix of customers." A cohort P&L can.
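The mix-shift arithmetic is easy to verify. A minimal Python sketch, using the channel retention rates and mix shares from the table above:

```python
# Blended retention is just a mix-weighted average of channel retention rates.
# Rates and mix shares below are the Q1 and Q4 2025 rows from the table above.

def blended_retention(channels):
    """channels: list of (retention_rate, share_of_cohort) tuples."""
    assert abs(sum(share for _, share in channels) - 1.0) < 1e-9
    return sum(rate * share for rate, share in channels)

q1 = [(0.92, 0.70), (0.68, 0.30)]  # organic 70% of cohort, paid social 30%
q4 = [(0.86, 0.30), (0.60, 0.70)]  # mix has flipped toward paid social

print(round(blended_retention(q1), 3))  # 0.848 -> 84.8% blended
print(round(blended_retention(q4), 3))  # 0.678 -> 67.8% blended
# Each channel fell only 6-8 points; the blend fell 17 points.
```

Neither input series moved much; the output collapsed because the weights moved.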
Insight
Simpson's paradox is not an edge case in SaaS. It is the default condition of any company that is changing its acquisition mix — which is every company that is growing. If you are not decomposing metrics by cohort and channel, you are almost certainly misattributing causation.
This matters because misattribution leads to misallocation. If you think the product is the problem, you invest in product. If you think the support team is the problem, you invest in support. If the actual problem is that your newest acquisition channel brings in fundamentally different customers, the correct investment is in channel economics — adjusting spend, improving targeting, or accepting a different margin profile for that channel.
You cannot make the right capital allocation decision from an aggregate number. Full stop.
Cohort Accounting Fundamentals
A cohort is a group of customers who share a common acquisition characteristic, typically the month or quarter they were acquired. Cohort accounting means tracking all revenue, costs, and derived metrics for each cohort separately, from the moment of acquisition through the customer's entire lifetime.
The fundamental unit shifts from "the business in March" to "the January 2025 cohort in its fourteenth month."
This is not a new idea. David Skok wrote about cohort analysis in the early 2010s. Bill Gurley talked about it at Benchmark. But the gap between acknowledging its importance and actually building a cohort P&L into your financial reporting infrastructure remains enormous. Most companies that claim to do cohort analysis run an ad-hoc query once a quarter. That is not cohort accounting. That is a science fair project. The discipline of analytics engineering -- version-controlled, tested, documented SQL models -- is what transforms cohort analysis from an ad-hoc exercise into a reliable financial system.
True cohort accounting requires four components:
First, a customer-level acquisition record. Every customer must have a fixed acquisition date and a fixed acquisition channel. This sounds trivial. It is not. Companies with self-serve signups, sales-assisted conversions, and multi-touch attribution often cannot answer the simple question "when did this customer become a customer?" without a 45-minute debate.
Second, a revenue allocation model. Every dollar of revenue must be attributable to a specific customer and therefore to a specific cohort. For subscription businesses this is straightforward. For usage-based models it requires event-level billing data linked to customer identity.
Third, a cost allocation model. This is where most companies give up. Revenue is easy to attribute. Costs are not. CAC can be allocated by channel. COGS is harder — how do you divide infrastructure costs, support costs, and success costs across cohorts? We will cover specific allocation methods shortly.
Fourth, a temporal tracking system. You need to know not just what a cohort generated in total, but what it generated in month 1, month 2, month 3, and so on. This "cohort age" dimension is what turns a flat customer list into a living picture of economic health.
Without all four components, you have a customer database. With all four, you have a cohort P&L.
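The cohort-age dimension in the fourth component is simple arithmetic once the acquisition month is fixed. A sketch (the helper name is mine, not from any particular library):

```python
from datetime import date

def cohort_age_months(acquisition_month: date, calendar_month: date) -> int:
    """Whole months elapsed between a cohort's acquisition month and a
    given calendar month -- the 'cohort age' axis of a cohort P&L."""
    return ((calendar_month.year - acquisition_month.year) * 12
            + (calendar_month.month - acquisition_month.month))

# A January 2025 customer observed in March 2025 is at cohort age 2
print(cohort_age_months(date(2025, 1, 1), date(2025, 3, 1)))  # 2
```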
Building a Cohort P&L: From Revenue to True Margin
A cohort P&L has the same line items as a standard P&L — revenue, COGS, gross margin, operating expenses — but each is calculated per cohort and tracked across cohort age. Let us build one from the ground up.
Revenue by Cohort
Start with monthly recurring revenue (MRR) for each cohort at each month of its life. A cohort acquired in January 2025 with 200 customers paying an average of $150 per month starts at $30,000 MRR. In month 2, some customers churn. Some expand. The net is a new MRR figure for that cohort.
Monthly Revenue by Cohort Age (Per 100 Customers Acquired)
Notice the pattern. Q1 2025 starts at $11,500 per 100 customers, against $15,000 for the Q1 2024 cohort — 23% lower initial ARPU — and churns faster in the early months. If you only looked at total revenue, you might see growth (more cohorts stacking on top of each other). The per-cohort view shows a clear generation-over-generation decline.
COGS Allocation by Cohort
Cost of goods sold in SaaS typically includes hosting infrastructure, third-party API costs, payment processing fees, and customer support labor. Allocating these to cohorts requires a tiered approach:
Direct costs — payment processing fees, per-seat licensing costs, usage-based API charges — can be attributed to individual customers and therefore to specific cohorts.
Semi-direct costs — customer support labor, onboarding costs, success manager time — can be attributed to cohorts using time-tracking data or ticket volume weighted by customer cohort membership.
Shared costs — hosting infrastructure, platform maintenance — must be allocated using a reasonable driver. Revenue-weighted allocation is the simplest defensible method: if the Q1 2025 cohort generates 18% of total revenue in a given month, it absorbs 18% of shared COGS.
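Revenue-weighted allocation is one line of arithmetic. A sketch of the method just described (the cohort labels and dollar amounts are illustrative):

```python
def allocate_shared_cogs(cohort_revenue: dict, shared_cogs: float) -> dict:
    """Split one month's shared COGS across cohorts in proportion to each
    cohort's share of that month's revenue."""
    total = sum(cohort_revenue.values())
    return {cohort: shared_cogs * rev / total
            for cohort, rev in cohort_revenue.items()}

# Illustrative month: the Q1 2025 cohort generates 18% of revenue,
# so it absorbs 18% of a $50,000 shared infrastructure bill.
revenue = {"Q1 2025": 90_000, "older cohorts": 410_000}
print(allocate_shared_cogs(revenue, 50_000)["Q1 2025"])  # 9000.0
```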
CAC Amortization
Customer acquisition cost is not a monthly expense. It is a capital investment in a future revenue stream. Treating it as a current-month expense — which is what a standard P&L does — distorts profitability timing. The same temporal mismatch motivates CLV-based bid strategies that optimize for lifetime value rather than immediate conversion cost.
In a cohort P&L, CAC is amortized over the cohort's expected lifetime. If a cohort cost $200,000 to acquire and its expected lifetime is 30 months, it carries $6,667 of amortized CAC each month. This transforms CAC from a lumpy acquisition-month expense into a steady cost that can be measured against the revenue it produces at each stage of the cohort's life.
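The amortization itself is trivial; the point is what it does to monthly contribution. A sketch using the $160,000 CAC and 30-month horizon from the Q1 2025 table below:

```python
def monthly_amortized_cac(total_cac: float, amortization_months: int) -> float:
    """Straight-line amortization of a cohort's acquisition cost."""
    return total_cac / amortization_months

def cohort_contribution(gross_profit: float, total_cac: float,
                        amortization_months: int = 30) -> float:
    """One month's gross profit net of the amortized CAC charge."""
    return gross_profit - monthly_amortized_cac(total_cac, amortization_months)

# Q1 2025 cohort: $160,000 CAC over 30 months = $5,333/month.
# Month-1 gross profit of $21,900 nets out to $16,567 of contribution.
print(round(monthly_amortized_cac(160_000, 30)))    # 5333
print(round(cohort_contribution(21_900, 160_000)))  # 16567
```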
Cohort P&L — Q1 2025 Cohort (200 Customers, First 6 Months)
| Line Item | Month 1 | Month 2 | Month 3 | Month 4 | Month 5 | Month 6 |
|---|---|---|---|---|---|---|
| Revenue | $30,000 | $28,800 | $28,000 | $27,400 | $27,000 | $26,800 |
| Direct COGS | $4,500 | $4,320 | $4,200 | $4,110 | $4,050 | $4,020 |
| Allocated Shared COGS | $3,600 | $3,456 | $3,360 | $3,288 | $3,240 | $3,216 |
| Gross Profit | $21,900 | $21,024 | $20,440 | $20,002 | $19,710 | $19,564 |
| Gross Margin % | 73.0% | 73.0% | 73.0% | 73.0% | 73.0% | 73.0% |
| Amortized CAC ($160K / 30mo) | $5,333 | $5,333 | $5,333 | $5,333 | $5,333 | $5,333 |
| Cohort Contribution | $16,567 | $15,691 | $15,107 | $14,669 | $14,377 | $14,231 |
| Cumulative Contribution | $16,567 | $32,258 | $47,365 | $62,034 | $76,411 | $90,642 |
The cumulative contribution line tells you how quickly this cohort is converting into true profit. Measured against the $160,000 acquisition cost, the cohort's cumulative gross profit crosses CAC at roughly month 8, the payback point. Everything after that is profit contribution.
Now imagine running this same table for Q4 2025 — lower starting revenue, higher churn, same or higher CAC. The payback period stretches to month 16. Or month 20. Or never. That is the kind of truth a cohort P&L forces you to confront.
The Cohort Waterfall: Seeing What Aggregates Hide
The cohort waterfall is a visualization that stacks each cohort's revenue contribution over time, showing how total revenue is composed of layers from each acquisition vintage. It is the single most revealing chart in SaaS finance, and almost nobody builds it.
Revenue Waterfall by Acquisition Cohort (Monthly Stacked View)
In a healthy business, older cohorts form a stable, thick base, and newer cohorts add incremental layers on top. The base should be roughly flat or slightly growing (from expansion revenue). If you see the older cohort layers thinning rapidly, you have a retention problem. If you see the newer cohort layers starting thinner than their predecessors, you have an acquisition quality problem.
The waterfall answers a question no other chart can: "If we stopped acquiring customers today, what would revenue look like?" The answer is the sum of all existing cohort layers as they decay. If that sum drops by 40% in twelve months, your business is a treadmill — running fast to stand still.
The most dangerous pattern in a cohort waterfall is one where total revenue is growing but each new layer starts thinner and decays faster than the previous one. Total revenue rises because you keep adding layers. But each layer contributes less. You are buying growth with increasingly expensive, increasingly perishable revenue. This is the SaaS equivalent of a Ponzi scheme, and it is disturbingly common in companies that optimize for topline growth without monitoring cohort-level economics.
Identifying Toxic Cohorts Before They Destroy Margins
A toxic cohort is one whose lifetime contribution will never exceed its acquisition cost. It is a cohort you paid to acquire and will lose money on for its entire existence. Every company has them. The question is whether you know which ones they are.
The early warning signs appear within the first 90 days:
Day-30 retention below threshold. For most B2B SaaS, if a cohort's 30-day retention falls below 85%, the expected lifetime value almost certainly cannot recover the CAC at standard acquisition costs. Survival analysis can model the full time-to-churn distribution for each cohort, giving you a far richer picture than binary retention rates alone. The threshold varies by business model, but the principle is universal: early retention is the single best predictor of lifetime economics.
First-month engagement depth. Customers who do not reach a meaningful activation milestone in their first 30 days churn at 3-5x the rate of those who do. If an entire cohort has low activation rates, the cohort is toxic regardless of what retention looks like in month 1 — the churn is deferred, not avoided.
Support ticket velocity. A cohort that generates 2x the average support tickets in its first 60 days is signaling poor product-market fit for that customer segment. High support cost + high churn + standard CAC = guaranteed negative contribution.
Practical Application
Build a "Cohort Toxicity Score" using three inputs: 30-day retention rate, 30-day activation rate, and 60-day support ticket ratio. The formula takes the shape:

Toxicity Score = 100 × (w_R × R + w_A × A + w_T × (1 − T))

where R is the normalized 30-day retention rate, A is the normalized activation rate, T is the normalized 60-day support ticket ratio (inverted, since more tickets is worse), and the weights w_R, w_A, w_T sum to 1, with retention weighted heaviest (for example, 0.5 / 0.3 / 0.2). Any cohort scoring below 60 on the normalized 0-100 scale should trigger an immediate review of the acquisition channel, sales process, or targeting criteria that produced it.
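As a function, the score is a few lines. A sketch; the 0.5 / 0.3 / 0.2 weights are illustrative defaults I am assuming here, not a canonical choice:

```python
def cohort_toxicity_score(retention: float, activation: float,
                          ticket_ratio: float,
                          w_r: float = 0.5, w_a: float = 0.3,
                          w_t: float = 0.2) -> float:
    """Score a young cohort on a 0-100 scale from three normalized inputs:
    30-day retention, 30-day activation, and 60-day support ticket ratio
    (higher ticket ratio is worse, so it enters inverted)."""
    return 100 * (w_r * retention + w_a * activation + w_t * (1 - ticket_ratio))

# A cohort with 90% retention, 80% activation, and a 0.3 ticket ratio
score = cohort_toxicity_score(0.90, 0.80, 0.30)
print(round(score))  # 83 -- comfortably above the 60 review threshold
```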
The critical insight is timing. A toxic cohort identified at day 30 can be addressed — you can adjust the channel spend, change the targeting, fix the onboarding. A toxic cohort identified at month 12, when the aggregate numbers finally turn, has already burned through its CAC and dragged margins down for a year. The entire value of cohort monitoring is in the speed of detection.
Here is what a toxicity comparison looks like across cohorts:
Cohort Toxicity Score by Acquisition Quarter
The Q2 2025 cohort scores 58, below the toxicity threshold. Aggregate metrics won't reflect this for months. But the cohort P&L already knows.
LTV:CAC by Channel and Vintage
LTV:CAC is the most cited metric in SaaS. It is also the most frequently calculated incorrectly.
The standard LTV formula for a subscription business under constant churn is:

LTV = (ARPU × Gross Margin %) / Monthly Churn Rate
This produces a single number. That number is an average of averages of averages. It blends customers acquired through organic search (low CAC, high retention) with customers acquired through paid channels (high CAC, variable retention). It blends cohorts from 2023 (mature, stable, possibly from a different era of your product) with cohorts from last quarter (immature, unknown trajectory).
A blended LTV:CAC of 3.0x can mask the reality that organic is running at 6.0x and paid social is running at 1.1x. The blended number tells you the business is healthy. The segmented number tells you one channel is subsidizing another.
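The masking effect is pure weighted-average arithmetic. An illustrative sketch (the per-channel dollar figures are invented to echo the 2.7x blended ratio in the table below):

```python
# Per-customer LTV and CAC by channel -- invented illustrative figures.
channels = {
    "organic":     {"customers": 50, "ltv": 3_000, "cac": 500},    # 6.0x
    "paid_social": {"customers": 50, "ltv": 1_100, "cac": 1_000},  # 1.1x
}

total_ltv = sum(c["customers"] * c["ltv"] for c in channels.values())
total_cac = sum(c["customers"] * c["cac"] for c in channels.values())

for name, c in channels.items():
    print(name, round(c["ltv"] / c["cac"], 1))       # 6.0 and 1.1
print("blended", round(total_ltv / total_cac, 1))    # 2.7
# A "healthy" 2.7x blend, composed of one great channel and one money-loser.
```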
LTV:CAC Ratio by Acquisition Channel and Cohort Vintage
| Channel | H1 2024 Cohorts | H2 2024 Cohorts | H1 2025 Cohorts | Trend |
|---|---|---|---|---|
| Organic Search | 5.8x | 5.4x | 5.1x | Slight decline (market maturity) |
| Content / SEO | 4.9x | 4.5x | 4.2x | Steady decline (rising content costs) |
| Paid Search (Brand) | 3.8x | 3.5x | 3.2x | Moderate decline (CPC inflation) |
| Paid Search (Non-Brand) | 2.4x | 2.0x | 1.6x | Significant decline (competition) |
| Paid Social | 1.8x | 1.3x | 0.9x | Below breakeven threshold |
| Outbound Sales | 3.2x | 2.8x | 2.5x | Declining (market saturation) |
| Partner / Referral | 4.5x | 4.6x | 4.7x | Improving (network effects) |
| Blended | 3.6x | 3.1x | 2.7x | Masking channel-level collapse |
Here is a SQL query that computes LTV:CAC segmented by channel and vintage:
```sql
SELECT
    acquisition_channel,
    -- half-year vintage: most warehouses lack a 'half_year' truncation,
    -- so derive it from the year plus a six-month offset for H2
    DATE_TRUNC('year', acquisition_date)
        + CASE WHEN EXTRACT(MONTH FROM acquisition_date) > 6
               THEN INTERVAL '6 months' ELSE INTERVAL '0 months' END AS vintage,
    COUNT(DISTINCT customer_id) AS cohort_size,
    SUM(lifetime_revenue) / COUNT(DISTINCT customer_id) AS avg_ltv,
    SUM(acquisition_cost) / COUNT(DISTINCT customer_id) AS avg_cac,
    ROUND(
        SUM(lifetime_revenue * gross_margin_pct) / NULLIF(SUM(acquisition_cost), 0),
        2
    ) AS margin_adjusted_ltv_cac
FROM mart_customer_cohorts
WHERE acquisition_date >= '2024-01-01'
  AND cohort_age_months >= 6  -- only mature cohorts
GROUP BY 1, 2
ORDER BY vintage, margin_adjusted_ltv_cac DESC;
```

The blended LTV:CAC declined from 3.6x to 2.7x. A board presentation might flag this as "something to watch." But the channel-level view reveals that paid social has crossed below 1.0x — meaning every customer acquired through that channel will cost the company money over their lifetime. The $400,000 quarterly spend on paid social is not an investment. It is a donation.
Meanwhile, partner/referral is the only channel improving. A rational capital allocator would be shifting budget aggressively from paid social to partner development. But you cannot make that decision from the blended number. You need the matrix.
Insight
The purpose of segmenting LTV:CAC by channel and vintage is not precision — it is decision-making. A blended ratio tells you the weather. A segmented matrix tells you which fields to plant and which to abandon.
The Blended Metric Trap: How Growth Masks Deterioration
This is the central pathology of aggregate financial reporting in growing companies. I call it the Blended Metric Trap, and it works like this:
Imagine a company that acquires 100 customers per quarter. The Q1 2024 cohort generates $150 ARPU with 95% retention. Each subsequent cohort generates $5 less ARPU and retains 1 percentage point worse. By Q1 2025, the new cohort generates $130 ARPU with 91% retention.
Now imagine the company also grows acquisition volume by 20% per quarter. Here is what happens to the aggregate numbers:
The Blended Metric Trap: Aggregate Revenue vs. Cohort-Level ARPU
Total revenue climbs to $96,000. Up and to the right. Spectacular growth. New cohort ARPU slides to $125. Down and to the right. Consistent deterioration. Average ARPU declines slowly to $133, buffered by the large base of high-ARPU older cohorts.
The board sees the green line. The CFO might notice the orange line if she squints. Nobody sees the red line unless someone is explicitly tracking new cohort quality.
This is the trap. Growth in volume outpaces deterioration in unit economics, making the aggregate numbers look fine — or even good — while the underlying engine is losing efficiency. It is the business equivalent of a car whose fuel efficiency is dropping by 5% every month, but you don't notice because you keep driving shorter trips.
The math is unforgiving. At some point, the volume growth rate falls below the deterioration rate. When that crossover happens, aggregate metrics turn negative. But by then, the deterioration has compounded for quarters or years. The fix is not a quick adjustment. It is a painful restructuring of channels, pricing, or market focus.
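The crossover dynamic is easy to simulate. A sketch assuming 20% quarterly volume growth and a $150 starting ARPU falling $5 per quarter (the starting ARPU is my assumption for the illustration):

```python
# Blended Metric Trap: volume growth outruns per-cohort deterioration.
# Assumptions: 100 customers in Q1, +20% volume per quarter,
# $150 starting ARPU falling $5 per quarter.
quarters = 5
sizes = [round(100 * 1.2 ** q) for q in range(quarters)]  # 100 .. 207
arpus = [150 - 5 * q for q in range(quarters)]            # 150 .. 130
revenue = [s * a for s, a in zip(sizes, arpus)]

for q in range(quarters):
    print(f"Q{q + 1}: {sizes[q]} customers x ${arpus[q]} = ${revenue[q]:,}")
# New-cohort revenue rises every quarter ($15,000 -> $26,910) while
# new-cohort ARPU falls every quarter: the aggregate masks the cohort line.
```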
The companies that avoid the trap are the ones that monitor the red line — new cohort unit economics — with the same intensity they monitor total revenue. If new cohort ARPU declines for two consecutive quarters, that is a five-alarm fire, regardless of what the total revenue line says.
Payback Period Analysis by Cohort
CAC payback period is the number of months it takes for a cohort's cumulative gross profit to equal the cost of acquiring that cohort. It is the most operationally useful metric in cohort economics because it directly answers: "How long is our money trapped?"
The payback period for a cohort is the smallest n such that cumulative gross profit exceeds CAC:

GP_1 + GP_2 + ... + GP_n ≥ CAC

where GP_t is the gross profit generated by the cohort in month t.
A company with a 12-month payback period and $10M of annual acquisition spend has roughly $10M of capital permanently deployed in "working inventory" — customers who haven't yet repaid their acquisition cost. If the payback period stretches to 18 months, that working capital requirement jumps to $15M. The cash flow implications are material.
Here is what payback looks like when segmented by cohort vintage:
CAC Payback Period by Cohort Vintage
| Cohort | CAC per Customer | Month-1 Gross Profit | Monthly Decay Rate | Payback Month | Capital Efficiency Grade |
|---|---|---|---|---|---|
| Q1 2024 | $480 | $110 | 1.2% | 5 | A — Fast payback, low risk |
| Q2 2024 | $520 | $105 | 1.5% | 6 | A — Acceptable |
| Q3 2024 | $580 | $98 | 2.0% | 7 | B — Monitoring range |
| Q4 2024 | $640 | $92 | 2.4% | 9 | B — Approaching threshold |
| Q1 2025 | $710 | $86 | 3.0% | 11 | C — Capital concern |
| Q2 2025 | $790 | $78 | 3.8% | 15 | D — Unsustainable |
The Q1 2024 cohort paid back in 5 months. Capital was locked up for less than half a year before generating free cash flow. The Q2 2025 cohort takes 15 months — three times as long. This means the company needs three times the working capital to support the same acquisition volume. If acquisition volume is also growing, the cash requirement compounds.
This is how SaaS companies run out of money while growing 40% year-over-year. They are acquiring customers whose payback periods are stretching beyond their cash runway. The aggregate P&L shows growth. The bank account shows a cliff.
The payback analysis also reveals something more subtle: the risk profile of each cohort. A 5-month payback means the company is "in the money" quickly. Even if something goes wrong — a competitor launches, a recession hits, churn spikes — the cohort has already repaid itself. A 15-month payback means the company is exposed to 15 months of uncertainty before breaking even. Every month of payback is a month of risk.
Subscription vs. Usage-Based Cohort Dynamics
Cohort analysis works differently depending on your revenue model, and the distinction matters more than most operators realize.
Subscription cohorts follow a predictable pattern: high initial revenue (the contracted MRR), decay through churn, and potential recovery through expansion (upsells, seat additions). The revenue curve for a subscription cohort looks like a bathtub — it drops, flattens, and may rise. The economics are analyzable early because the contracted revenue is known upfront.
Usage-based cohorts follow a fundamentally different pattern: low initial revenue (the customer is still ramping), growth as usage increases, a plateau at mature usage, and eventual decline. The revenue curve looks like a hill. The economics are not analyzable early because the peak usage — and therefore the peak revenue — is unknown for months.
This has profound implications for cohort P&L construction:
In a subscription model, month-1 revenue is close to peak revenue. CAC payback can be estimated with confidence by month 3. A toxic cohort reveals itself quickly.
In a usage-based model, month-1 revenue might be 20% of eventual peak. CAC payback projections made at month 3 are unreliable. A cohort that looks toxic at month 3 might be perfectly healthy — it just hasn't ramped yet. Conversely, a cohort that looks healthy at month 3 (high early usage) might represent customers who are burning through a proof-of-concept and will disappear at month 6.
Caution
Usage-based cohort analysis requires a minimum of 6-9 months of data before the economics are stable enough for comparison. Making channel allocation decisions based on 90-day usage cohort data is the equivalent of judging a restaurant by its appetizer.
The solution for usage-based models is to define a "maturity threshold" — the cohort age at which usage patterns stabilize — and only compare cohorts that have reached that threshold. This means your newest cohorts are always in a provisional state. You can monitor them, but you cannot reliably compare them to mature cohorts or use them to make capital allocation decisions.
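Operationally, the maturity threshold is just a filter on cohort age. A sketch, assuming a 9-month threshold for a usage-based product (the threshold value itself is business-specific):

```python
def bucket_cohorts(cohort_ages: dict, maturity_threshold: int = 9) -> dict:
    """Split cohorts into comparable (mature) and provisional buckets.
    cohort_ages maps cohort label -> age in months."""
    return {
        "mature":      [c for c, age in cohort_ages.items()
                        if age >= maturity_threshold],
        "provisional": [c for c, age in cohort_ages.items()
                        if age < maturity_threshold],
    }

ages = {"2024-Q3": 15, "2024-Q4": 12, "2025-Q1": 9, "2025-Q2": 6, "2025-Q3": 3}
buckets = bucket_cohorts(ages)
print(buckets["mature"])       # ['2024-Q3', '2024-Q4', '2025-Q1']
print(buckets["provisional"])  # ['2025-Q2', '2025-Q3']
```

Only the mature bucket feeds capital allocation decisions; the provisional bucket is monitored, not compared.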
Hybrid models (base subscription plus usage overage) combine both dynamics and require tracking two revenue streams per cohort separately: the subscription component (analyzable early) and the usage component (analyzable late). Blending them into a single revenue number per cohort defeats the purpose of the analysis.
Implementation in SQL and dbt
Theory without implementation is decoration. Here is how to build a cohort P&L in a modern data stack using SQL and dbt.
The foundation is a cohort_spine — a table that assigns every customer to an acquisition cohort and tracks their age in months:
```sql
-- models/staging/stg_cohort_spine.sql
WITH customer_first_payment AS (
    SELECT
        customer_id,
        DATE_TRUNC('month', MIN(payment_date)) AS acquisition_month
    FROM {{ ref('stg_payments') }}
    GROUP BY customer_id
),

month_series AS (
    SELECT
        generate_series(
            (SELECT MIN(acquisition_month) FROM customer_first_payment),
            DATE_TRUNC('month', CURRENT_DATE),
            INTERVAL '1 month'
        )::date AS calendar_month
)

SELECT
    c.customer_id,
    c.acquisition_month,
    m.calendar_month,
    DATE_PART('year', AGE(m.calendar_month, c.acquisition_month)) * 12
        + DATE_PART('month', AGE(m.calendar_month, c.acquisition_month))
        AS cohort_age_months
FROM customer_first_payment c
CROSS JOIN month_series m
WHERE m.calendar_month >= c.acquisition_month
```

From this spine, build the cohort revenue model:
```sql
-- models/marts/cohort_revenue.sql
SELECT
    s.acquisition_month,
    s.cohort_age_months,
    COUNT(DISTINCT s.customer_id) AS customers_alive,
    SUM(r.revenue) AS cohort_revenue,
    SUM(r.revenue) / NULLIF(COUNT(DISTINCT s.customer_id), 0) AS arpu,
    -- retention = paying customers this month / paying customers in the
    -- cohort's first month (window function over the grouped counts)
    COUNT(DISTINCT s.customer_id)::FLOAT
        / NULLIF(FIRST_VALUE(COUNT(DISTINCT s.customer_id))
                     OVER (PARTITION BY s.acquisition_month
                           ORDER BY s.cohort_age_months
                           ROWS UNBOUNDED PRECEDING), 0) AS retention_rate
FROM {{ ref('stg_cohort_spine') }} s
JOIN {{ ref('stg_revenue') }} r  -- inner join: months with no revenue drop out
    ON s.customer_id = r.customer_id
    AND s.calendar_month = DATE_TRUNC('month', r.revenue_date)
WHERE r.revenue > 0
GROUP BY s.acquisition_month, s.cohort_age_months
```

And the cohort P&L itself:
```sql
-- models/marts/cohort_pl.sql
WITH cohort_costs AS (
    SELECT
        acquisition_month,
        SUM(acquisition_cost) AS total_cac,
        COUNT(DISTINCT customer_id) AS cohort_size
    FROM {{ ref('stg_customer_acquisition') }}
    GROUP BY acquisition_month
)

SELECT
    cr.acquisition_month,
    cr.cohort_age_months,
    cr.cohort_revenue,
    cr.cohort_revenue * 0.78 AS estimated_gross_profit,  -- adjust margin %
    cc.total_cac / 30.0 AS monthly_amortized_cac,        -- 30-month amortization
    (cr.cohort_revenue * 0.78) - (cc.total_cac / 30.0) AS cohort_contribution,
    SUM((cr.cohort_revenue * 0.78) - (cc.total_cac / 30.0))
        OVER (PARTITION BY cr.acquisition_month
              ORDER BY cr.cohort_age_months) AS cumulative_contribution,
    cr.retention_rate,
    cr.arpu
FROM {{ ref('cohort_revenue') }} cr
JOIN cohort_costs cc
    ON cr.acquisition_month = cc.acquisition_month
```

The key dbt design decisions:
Use incremental models for the cohort spine. It grows with every new month. A full refresh on a company with 100K+ customers and 36+ months of history is expensive.
Parameterize the amortization period. Use a dbt variable (var('cac_amortization_months', 30)) so the finance team can adjust without editing SQL.
Build separate models for channel-level cohorts. The queries above group by acquisition_month alone. Duplicate them with an additional acquisition_channel dimension for the channel-level analysis described in earlier sections.
Test your cohort spine for completeness. Every active customer should appear in the spine for every month they were active. Missing rows produce silent errors — understated revenue for specific cohort-age combinations that are almost impossible to catch by looking at output tables.
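The completeness test reduces to a gap check: every customer must have a contiguous run of cohort ages in the spine. A Python sketch of the invariant (in dbt you would encode the same check as a test on the spine model):

```python
def find_spine_gaps(spine_rows):
    """spine_rows: iterable of (customer_id, cohort_age_months) pairs.
    A complete spine has a contiguous 0..max age range per customer;
    returns any (customer_id, missing_age) pairs -- the silent errors."""
    ages_by_customer = {}
    for customer_id, age in spine_rows:
        ages_by_customer.setdefault(customer_id, set()).add(age)
    gaps = []
    for customer_id, ages in ages_by_customer.items():
        for age in range(max(ages) + 1):
            if age not in ages:
                gaps.append((customer_id, age))
    return gaps

rows = [("a", 0), ("a", 1), ("a", 2), ("b", 0), ("b", 2)]  # b is missing month 1
print(find_spine_gaps(rows))  # [('b', 1)]
```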
The Cohort Health Dashboard Framework
A cohort P&L is only useful if decision-makers look at it regularly and know how to interpret it. Most cohort analyses die in a Jupyter notebook that the data team ran once. The Cohort Health Dashboard is a framework for making cohort economics a persistent, visible part of operating cadence.
The dashboard has four sections, each designed to answer a specific strategic question:
Section 1: The Vintage Comparison Matrix
Question it answers: "Are newer cohorts better or worse than older ones at the same stage of life?"
Display retention, ARPU, and gross margin for each cohort at months 1, 3, 6, and 12. Arrange as a matrix with cohort vintage on the Y-axis and cohort age on the X-axis. Color-code cells: green for above the trailing four-quarter average, yellow for within one standard deviation, red for below.
This is the "vital signs" view. A finance leader should be able to glance at this matrix and immediately identify if the most recent cohorts are trending red across metrics.
Section 2: The Payback Trajectory Chart
Question it answers: "How quickly are we recovering acquisition cost, and is recovery speeding up or slowing down?"
Plot cumulative gross profit per cohort over time, with a horizontal line at the CAC level for each cohort. The point where each curve crosses its CAC line is the payback moment. If the crossover point is moving to the right (later) with each successive cohort, capital efficiency is declining.
Section 3: The Channel Economics Scorecard
Question it answers: "Which acquisition channels are producing economically healthy cohorts and which are not?"
Display LTV:CAC ratio, payback period, and 12-month retention by channel for the most recent three cohort vintages. Flag any channel where LTV:CAC has fallen below 1.5x or where the trend shows three consecutive periods of decline.
Section 4: The Toxicity Alert Panel
Question it answers: "Do any current cohorts show early signs of negative lifetime contribution?"
Display the Cohort Toxicity Score (described earlier) for all cohorts in their first 90 days. Any cohort below the threshold triggers an alert with a drill-down into the three component scores and the specific acquisition channel responsible.
Insight
The Cohort Health Dashboard should be reviewed weekly by the finance team and monthly by the executive team. It does not replace the standard P&L. It complements it by answering the questions the P&L structurally cannot ask: "Are the customers we're acquiring today as good as the ones we acquired last year?"
Operational Integration
The dashboard is not a reporting artifact. It should feed directly into three operational processes:
Budget allocation. If the Channel Economics Scorecard shows paid social LTV:CAC below 1.0x for two consecutive quarters, the marketing budget for paid social should be reduced or eliminated in the next planning cycle. This decision should not require a special analysis. The dashboard should make it obvious.
Product prioritization. If the Vintage Comparison Matrix shows declining activation rates across all channels, the problem is not acquisition — it is onboarding or product-market fit. The product roadmap should respond to this signal within one planning cycle.
Sales compensation. If outbound sales cohorts show systematically shorter lifetimes than inbound cohorts, the sales compensation model may be incentivizing low-quality closings. Adjusting commission structures to include a 6-month retention clawback aligns salesperson incentives with cohort economics.
The purpose of this framework is not to add more dashboards to an already-overloaded BI tool. It is to create a decision-making reflex: when you see a red cell in the Vintage Comparison Matrix, you know what to do next. When the Toxicity Alert fires, you know who is responsible and what options are on the table. The framework is not the dashboard. The framework is the set of actions the dashboard triggers.
The Uncomfortable Arithmetic
SaaS companies raise capital on aggregate metrics. They hire on aggregate metrics. They celebrate on aggregate metrics. And they die on aggregate metrics — because aggregate metrics are the last to turn.
The arithmetic is simple. If your newest cohort generates 80% of the lifetime value of the cohort you acquired twelve months ago, and your acquisition cost has risen 15%, the unit economics of your marginal customer have deteriorated by roughly 30% (0.80 / 1.15 ≈ 0.70). Sustain that rate of decline for a second year and you have cut your per-customer profitability in half. But total revenue, boosted by the compounding base of older, healthier cohorts, might still be up 25%.
The CEO who reports 25% revenue growth without reporting 50% unit economics deterioration is not lying. They are simply reading the wrong spreadsheet.
A cohort P&L is not a more complicated version of a monthly P&L. It is a fundamentally different instrument. The monthly P&L measures the business as it is today — a snapshot of all customers in a single frame. The cohort P&L measures the business as it is becoming — a moving picture of how each generation of customers performs relative to its predecessors.
One tells you where you are. The other tells you where you are going.
Build the cohort P&L. Staff the cohort P&L. Present the cohort P&L. Make it as mandatory as the monthly close. Because the monthly close will keep telling you everything is fine right up until the moment it tells you everything is broken. And by then, the cohorts that broke it will be twelve months old, baked into your base, and impossible to fix without the kind of restructuring that ends careers.
The data was always there. The question is whether you had the architecture to read it.
Further Reading
- Simpson's Paradox on Wikipedia — Why aggregate metrics lie
- dbt Documentation — Building cohort models with dbt
- Baremetrics Cohort Analysis — SaaS metrics platform
References
- Simpson, E. H. (1951). "The Interpretation of Interaction in Contingency Tables." Journal of the Royal Statistical Society, Series B, 13(2), 238-241.
- Skok, D. (2012). "SaaS Metrics 2.0 — A Guide to Measuring and Improving What Matters." For Entrepreneurs, Matrix Partners.
- Fader, P. S., & Hardie, B. G. S. (2010). "Customer-Base Valuation in a Contractual Setting: The Perils of Ignoring Heterogeneity." Marketing Science, 29(1), 85-93.
- McCarthy, D., & Fader, P. S. (2018). "Customer-Based Corporate Valuation for Publicly Traded Noncontractual Firms." Journal of Marketing Research, 55(5), 617-635.
- Gupta, S., Lehmann, D. R., & Stuart, J. A. (2004). "Valuing Customers." Journal of Marketing Research, 41(1), 7-18.
- Zuora. (2024). "Subscription Economy Index." Annual Report.
- OpenView Partners. (2024). "SaaS Benchmarks Report: Product-Led Growth Edition."
- Pacific Crest Securities. (2023). "Annual SaaS Survey Results."
- Bagnall, A., & Lines, J. (2015). "An Experimental Evaluation of Nearest Neighbor Time Series Classification." Data Mining and Knowledge Discovery, 29(5).
- Pfeifer, P. E., & Carraway, R. L. (2000). "Modeling Customer Relationships as Markov Chains." Journal of Interactive Marketing, 14(2), 43-55.

Founder, Product Philosophy
Murat Ova writes at the intersection of behavioral economics, marketing engineering, and data-driven strategy. He founded Product Philosophy to publish research-grade analysis for practitioners who build products and grow businesses — without the hand-waving.