Your conversion rates are (probably) wrong
Every product team starts with the same innocent question:
“What percentage of users who do A eventually do B?”
On paper, it’s straightforward: Count the A’s. Count the B’s. Divide. Done.
But the moment you try it on a real product, or dig deeper, the cracks show. Users don’t move through your product the same way. They zig-zag across devices, get interrupted mid-flow, drop out for days, come back when you least expect it, and convert through paths you never mapped.
Some start a trial and vanish. Others bounce between marketing pages, a sales demo, and your app before committing. Some abandon checkout, click a retargeting ad, and then convert on mobile.
When you try to force all of that into a simple A→B funnel, you get:
- Conflicting “conversion rates” from different teams.
- A/B tests that appear to work in one report and fail in another.
- Optimisation efforts chasing the wrong problems entirely.
The problem isn’t that funnels are bad. It’s that we need a better mental model—one that matches how people actually behave and supports the questions we need to answer to improve our product experiences.
My goal is for this model to encourage you to stop and think about what your conversion rate reporting really represents. Is it what you expect? Some of these concepts will seem obvious to seasoned analysts or growth teams, but I found myself needing to capture the full scope of them to get my own thoughts in order - hopefully you find it useful!
Think in edges, not events
Instead of treating “conversion” as one jump from start to finish, break it down into the transitions between events.
An example journey:
Signup → Verify → Profile → Team → Create
In this model, that’s not one conversion—it’s four edges:
Signup → Verify
Verify → Profile
Profile → Team
Team → Create
Whilst sophisticated teams are likely measuring this as an overall funnel, tracking conversion between each step before a final conversion rate, it is important to remind ourselves of these independent edges. The reason for this is that each edge can have its own:
- Rules for how quickly it must happen.
- Conditions that can invalidate it.
- Decisions about whether to count the first, last, or all occurrences.
Thinking in edges instead of events lets you assemble any journey you want without locking yourself into rigid funnels. Modern data tools are good at capturing these events, but often struggle to allow for flexibility in measuring and creating edge definitions.
What is an edge?
At its simplest, an edge is just: Source Event → Target Event, where Source is the starting event and Target is the destination we care about. The order is important here: the Source must occur before the Target. Example: Source: View Product, Target: Add to Cart.
If you treat each user session as a directed sequence of events, we can match sessions against this conversion goal:
✅ Valid: [Browse, View Product, Browse More, Add to Cart]
❌ Invalid: [Add to Cart, View Product]
The next question to ask is how tightly the Source and Target should be linked. When determining whether an edge matches within a user session, we need to specify the detection mode.
- Direct: Target must immediately follow Source
- Session: Target can occur anywhere after Source in the session
An example: User session [View Product, Browse, Add to Cart, View Product, Checkout]
- View Product → Add to Cart: Valid in Session mode, Invalid in Direct mode
- Add to Cart → View Product: Valid in Direct mode
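The two detection modes can be sketched like so, assuming a session is an ordered list of event names (the function name and structure are illustrative):

```python
def matches_edge(session, source, target, mode="session"):
    """Check an edge against an ordered list of event names.

    mode="direct":  Target must immediately follow Source.
    mode="session": Target can occur anywhere after Source.
    """
    for i, event in enumerate(session):
        if event != source:
            continue
        if mode == "direct":
            if i + 1 < len(session) and session[i + 1] == target:
                return True
        else:  # session mode
            if target in session[i + 1:]:
                return True
    return False

session = ["View Product", "Browse", "Add to Cart", "View Product", "Checkout"]
matches_edge(session, "View Product", "Add to Cart", mode="session")  # True
matches_edge(session, "View Product", "Add to Cart", mode="direct")   # False
matches_edge(session, "Add to Cart", "View Product", mode="direct")   # True
```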
Real journeys have multiple edges; it’s rare (but not impossible) to care only about a user performing two consecutive steps in your product. Going back to our earlier example, we might define our journey and conversion analysis as:
- Signup → Verify (Direct) - We want immediate verification
- Verify → Profile (Session) - Users might explore first
- Profile → Team (Session) - Team setup could happen later
- Team → Create (Direct) - Create should follow team setup
A session must satisfy ALL edges to match the journey. This is just the start of the fuzziness that starts to creep in when you start defining these rules.
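The "ALL edges must match" rule can be sketched as follows, assuming each edge is represented as a (source, target, mode) tuple (helper names are illustrative):

```python
def edge_ok(session, source, target, mode):
    # Target must follow Source; "direct" means immediately after
    for i, event in enumerate(session):
        if event == source:
            rest = session[i + 1:]
            if mode == "direct" and rest[:1] == [target]:
                return True
            if mode == "session" and target in rest:
                return True
    return False

def session_matches_journey(session, edges):
    # a session matches the journey only if every edge matches
    return all(edge_ok(session, s, t, m) for (s, t, m) in edges)

journey = [
    ("Signup", "Verify", "direct"),
    ("Verify", "Profile", "session"),
    ("Profile", "Team", "session"),
    ("Team", "Create", "direct"),
]
session_matches_journey(
    ["Signup", "Verify", "Browse", "Profile", "Team", "Create"], journey)  # True
session_matches_journey(
    ["Signup", "Browse", "Verify", "Profile", "Team", "Create"], journey)  # False (Verify not direct)
```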
What about time?
The next question to ask is whether time matters during these journeys. Each of your users works on their own schedule, and we need to support that in our rules. Conversions have natural time dynamics: impulse buys occur within minutes; B2B onboarding can take weeks; habit formation needs minimum gaps between actions.
Each edge needs a Time Constraint. This sets a timeframe within which the user must complete the subsequent event (either Direct or in a Session!) for the edge to be considered valid.
| Edge | Time Constraint | Why? |
|---|---|---|
| Signup → Verify | Max 1 hour | Email verification should be quick |
| Verify → Profile | Max 1 day | Give users time to explore |
| Profile → Team | Max 7 days | Team coordination takes time |
| Team → Create | Max 1 hour | Ready teams create quickly |
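A time-constrained edge check could be sketched like this, assuming events arrive as (name, timestamp) pairs in time order (helper names are illustrative):

```python
from datetime import datetime, timedelta

def edge_within_time(events, source, target, max_duration):
    """True if a Target follows a Source within max_duration.

    `events` is a time-ordered list of (name, timestamp) pairs.
    """
    for i, (name, ts) in enumerate(events):
        if name != source:
            continue
        for later_name, later_ts in events[i + 1:]:
            if later_name == target and later_ts - ts <= max_duration:
                return True
    return False

events = [
    ("Signup", datetime(2024, 1, 1, 9, 0)),
    ("Verify", datetime(2024, 1, 1, 9, 20)),
]
edge_within_time(events, "Signup", "Verify", timedelta(hours=1))    # True
edge_within_time(events, "Signup", "Verify", timedelta(minutes=5))  # False
```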
This is especially powerful when analyzing across multiple sessions. A user might:
- Session 1: Signup → Verify
- Session 2 (next day): Profile setup
- Session 3 (week later): Team → Create
Which introduces a whole other level of complexity: tracking user events and behaviours across sessions.
Filtering
Not every user or customer is the same, and it is important to be able to differentiate between them. This lets you ask: “What’s the conversion rate through our enterprise onboarding flow?” There are a few ways to achieve this; two I’d like to call out are edge-specific properties and verbose event names. Which you choose depends on your tooling and analysis.
Edge-specific properties allow you to add data to each event which the edge can use as a filter. For example:
`Signup → Verify`
- Filter: signup.plan = "premium"
`Profile → Team`
- Filter: profile.company_size > 10
- Filter: team.members_invited >= 3
`Team → Create`
- Filter: create.project_type = "production"
It can get tricky to match properties on the downstream event (e.g. the plan the user signed up with), but this does allow for deep property-level filtering. A simpler alternative is to use verbose event names, for example instead of:
View Product
- product_id: 123
- type: shoe
You would define the event as View Product: Shoe 123. This allows you to create natural hierarchies and leverage fuzzy searching or regex style matching on your event names to be as granular or specific as you want, for example:
View Product
├── View Product: Shoe 123
├── View Product: Dress 456
└── View Premium Product
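With verbose names, that hierarchy falls out of simple prefix or regex matching, for example:

```python
import re

events = [
    "View Product: Shoe 123",
    "View Product: Dress 456",
    "View Premium Product",
    "Add to Cart",
]

# Prefix matching gives the natural hierarchy from the tree above:
product_views = [e for e in events if e.startswith("View Product")]
# ['View Product: Shoe 123', 'View Product: Dress 456']

# Regex matching can be as granular as you like, e.g. only shoes:
shoe_views = [e for e in events if re.match(r"View Product: Shoe \d+", e)]
# ['View Product: Shoe 123']
```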
Sometimes this can be better for teams that don’t have a centralised data instrumentation team or layer, as it allows flexibility in naming and property conventions. Which you use is up to you, but the important thing to be aware of is that the more data you pass into these events and edges, the better you can answer questions about how your users behave. Not having these properties leaves a lot of uncertainty in any analyses you perform.
It is also worth mentioning that a user can also have properties, for example something like sign_up_date. Often these are added to an identified user after the event has occurred. Within a user session you also have properties that carry across each event, like web browser.
What about if the same event occurs multiple times?
Users often bounce backwards and forwards between the same pages. A good example is a user jumping between the pricing page and product features page before starting a sign-up flow. In e-commerce, users often jump between viewing products, adding to cart, interacting with their cart, and back to viewing more products before eventually checking out.
Edges need a selection strategy. When the same source event appears multiple times in a user session (or across sessions), we need a way to pick which occurrence we care about. The three primary selection strategies are:
- First Only: We measure from the first time the user performs the event.
- Last Only: We measure from the last time the user performs the event.
- Each: We count every occurrence (can get messy doing additions later on!)
Which you use is dependent on your use case, but it is important to pick one. For example, given the user session [View Product, Browse, Add to Cart, View Product, Checkout], if we want to measure View Product → Checkout:
- First Only would use the first View Product (A), ignoring the second occurrence. Any time or edge property filters would apply between the first View Product and the Checkout event.
- Last Only would use the second View Product (B), similarly ignoring the first occurrence.
- Each would count two conversions, one from View Product (A), another from View Product (B). This is not a common way of measuring conversion rates, but it does have its place in some specific analyses. It can get confusing as it results in essentially double counting.
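A sketch of how the three strategies diverge on that session (the `conversions` helper is my own illustration):

```python
def conversions(session, source, target, strategy="first"):
    """Count Source → Target conversions under a selection strategy.

    strategy: "first", "last", or "each". Returns the number of
    counted conversions (0 or 1 for first/last; can exceed 1 for each).
    """
    source_positions = [i for i, e in enumerate(session) if e == source]
    if strategy == "first":
        source_positions = source_positions[:1]
    elif strategy == "last":
        source_positions = source_positions[-1:]
    # a Source occurrence converts if a Target occurs anywhere after it
    return sum(1 for i in source_positions if target in session[i + 1:])

session = ["View Product", "Browse", "Add to Cart", "View Product", "Checkout"]
conversions(session, "View Product", "Checkout", "first")  # 1
conversions(session, "View Product", "Checkout", "last")   # 1
conversions(session, "View Product", "Checkout", "each")   # 2 (double counts!)
```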
Edge invalidation
What shouldn’t occur is sometimes as important as what does occur. For example we might want a rule such as Cart → Purchase should be invalidated if the user removes all items in between. A contrived example, but these negative constraints prevent you from counting “conversions” that aren’t real wins.
These are tricky to build out in journey analysis as you need a deep business understanding of how specific events could negatively impact the conversion you care about. I’ve not come across this often in discussions, but if it is important to you, make sure you find a way to apply these constraints.
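The cart example can be sketched like this: an invalidating event between Source and Target rejects that occurrence of the edge (names are illustrative):

```python
def edge_valid(session, source, target, invalidators=()):
    """Match Source → Target, but reject an occurrence if any
    invalidating event (e.g. the cart being emptied) happens between."""
    for i, event in enumerate(session):
        if event != source:
            continue
        for j in range(i + 1, len(session)):
            if session[j] in invalidators:
                break  # this Source occurrence is invalidated
            if session[j] == target:
                return True
    return False

edge_valid(["Add to Cart", "Purchase"],
           "Add to Cart", "Purchase", invalidators={"Empty Cart"})  # True
edge_valid(["Add to Cart", "Empty Cart", "Purchase"],
           "Add to Cart", "Purchase", invalidators={"Empty Cart"})  # False
# Re-adding after emptying starts a fresh, valid occurrence:
edge_valid(["Add to Cart", "Empty Cart", "Add to Cart", "Purchase"],
           "Add to Cart", "Purchase", invalidators={"Empty Cart"})  # True
```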
Time only goes forward
If a user had three events in their session: View homepage → View pricing page → View homepage, you could describe them as going “backwards” between the home and pricing pages. Here be dragons. Treat all events as going forward in time. I’ve seen tools that would show this journey as a graph, with an edge between Home and Pricing, then a return line back out the top of Pricing into Home. You now have a circle - how do you measure a conversion rate in a circle? It gets very messy from a visualisation perspective, as you aren’t sure what is and isn’t included due to these cyclical loops. At Adora, we decided early on to treat everything as a straight line: even if a user would describe it as ‘going backwards’, you need to treat time as only going forward, and as such all journeys become linear funnels. It makes it much easier to reason about.
Composing journeys
When building a funnel or journey analysis, each edge between events can have specific rules. Given that journeys are sets of edges, we can apply additional overall rules to help answer more specific questions. These journey rules take precedence over the edge-specific rules and are often the best place to add session filters, such as customer type. This allows you to have more generic edge rule definitions, then split the same funnel/journey into different cohorts.
Some examples of journey rules:
- Max total duration (e.g., the whole checkout process must happen within 24 hours)
- Minimum steps required
- Session filters (e.g., premium users only, mobile sessions only, exclude test accounts)
I’m using funnel/journey interchangeably here as it is simpler to approach journey analysis - treating everything as a linear funnel of events (see time only goes forward).
If you have branching paths and care about multiple endpoints, treat each of those as an independent funnel. Sometimes the start of these funnels might be shared, which helps keep a consistent understanding of user behaviour. A good example: you have a journey set up to track sign-up conversions. You might then want to track Sign Ups through to different feature usage in your product. If you treat the Sign Up journey consistently, you have the flexibility to define how you treat each specific feature-usage conversion downstream (some might need to happen straight after sign up; other features are fine if used within a week across multiple sessions).
Decide what you’re measuring
Here’s where most analytics confusion starts: Teams mix up what they’re counting and where they’re looking.
We split it into three decisions:
- Counting Unit – Are we counting users, sessions, or event occurrences?
- Scope – Does it need to happen within a single session or across sessions?
- Deduplication – Do we count every match, just the first, or just the last?
Example:
- “What % of users purchase in their first session?” → users + within_session
- “What % of sessions with checkout complete purchase?” → sessions + within_session
- “What % of users eventually upgrade?” → users + across_sessions
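A sketch of how these counting decisions produce different numbers from the same (hypothetical) data, where each user has a list of sessions and each session is an ordered list of event names:

```python
users = {
    "u1": [["Checkout", "Purchase"]],                # purchases in session 1
    "u2": [["Checkout"], ["Checkout", "Purchase"]],  # purchases in session 2
    "u3": [["Browse"]],                              # never purchases
}

# users + within_session: % of users who purchase in their first session
rate_users_first_session = (
    sum("Purchase" in sessions[0] for sessions in users.values()) / len(users)
)  # 1/3 — only u1 counts

# sessions + within_session: % of checkout sessions that include a purchase
checkout_sessions = [s for ss in users.values() for s in ss if "Checkout" in s]
rate_sessions = (
    sum("Purchase" in s for s in checkout_sessions) / len(checkout_sessions)
)  # 2/3 — u2's first session drags it down

# users + across_sessions: % of users who eventually purchase
rate_users_eventually = (
    sum(any("Purchase" in s for s in ss) for ss in users.values()) / len(users)
)  # 2/3 — u2 now counts
```

Three different "conversion rates" from one dataset, each answering a different question.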
It might appear that these definitions duplicate the event-selection section, but the measure and counting rules determine which sessions to include in your calculations. For example, if multiple user sessions match your edge and journey definitions, do you include all of them or only one of them?
If you don’t set these rules explicitly, you’ll end up with different teams publishing incompatible numbers. It’s important to be flexible and use all of these decision definitions, as they help us answer very different questions about user behaviour. Make sure your tool allows you to be flexible in your approach.
Bringing it all together
At its core, measuring conversion rates isn’t about finding a single “true” number—it’s about building a model that reflects how your users actually behave. Traditional A→B funnels are too blunt to capture the real complexity of user journeys, and without a more nuanced approach, you risk chasing misleading metrics and making poor product decisions.
By breaking down journeys into edges, you gain the flexibility to account for:
- Time constraints that reflect realistic user behaviour patterns.
- Filters that allow you to zero in on the right audience or event properties.
- Selection strategies that clarify which occurrences you care about.
- Invalidation rules to avoid counting false wins.
- Consistent counting decisions so your team stays aligned.
These building blocks turn your conversion rate analysis from a static snapshot into a dynamic, context-rich view of your product’s performance. Journeys can be composed, split by cohort, measured across sessions, and adapted as your product evolves—all while maintaining clarity and comparability.
If you approach measurement with this mindset, you’ll move away from asking “What’s our conversion rate?” and towards the more valuable question:
“What’s actually happening in our product, and how do we improve it?”
The result isn’t just better numbers—it’s better decisions, better prioritisation, and ultimately, better user experiences.