You can't get there from here
The "physics" of user journeys: before your PLG squads can innovate and 10X results, they have to understand their product's laws of gravity.
This is article 2 of 7 of the “Destroy Your Growth Squads” leadership series on Product-Led Growth.
Start with article 1 to get background on the series and to understand if your organization is ready for PLG. Tools will be available for download on the site in the coming weeks.
Each product journey has different gravitational constants
In the beginning it took me a while to grasp how product-led growth experimentation was different from the testing and optimization driven by marketing and sales teams. (Note: we’ll talk specifically about how PLG is different from testing by ‘normal’ product management in articles 4 and 5).
As we ran through in the last article, product-led growth is a motion to amplify the core value a user experiences in the product by proactively trying new product journeys, removing obstacles in the user's path to value, and staying laser-focused on the value-driven journeys that matter most to users. This effort is critical to how a product performs, but it also needs to be carefully calibrated so it can be tracked and measured effectively. Side note: this is why test ideas rarely travel well as-is between products; and by "rarely" I mean "never." Before a PLG team can act, it needs to understand the rules of the game for the product in question.
The rules of the game
One of the earliest mistakes I made in running a product-led growth organization was testing in the wrong areas. Yes, I had the primary KPI, and yes, my growth squads followed the values of PLG as outlined in the last article; but that wasn't enough to set us up for success. Given the complexity of product journeys, especially when broken down by different audience or user types, there is a universe of real estate on which you can experiment. PLG squads are investigative and action-oriented by design, so at the time I thought it best to let the squads determine the heat map of where to focus. What I had failed to grasp in those early days was that a little up-front work is necessary for the squads to separate signal from noise and see where the real problem areas are; the playbook from even similar products and journeys did not apply as-is.
A “product journey diagnostic” will save you time by calibrating where to aim your PLG squads. This diagnostic will also force you to acknowledge and validate the assumptions you have going into PLG about the inflection points and product-market fit. It will prevent churn, abandoned engineering sprints, and circular debates down the line.
The product journey diagnostic will not give you answers or solutions, but will help you and your team define the right questions to ask. It’s a powerful thing to do before launching your PLG team.
Defining your product’s physics & laws of gravity
Think of this effort as a way to properly estimate the escape velocity for your areas of improvement. Even the best innovative ideas need a reference point from which they can be measured. Here’s what you get from the diagnostic:
affirmation that you are targeting the correct engagement-related KPI, or recommendations for a different primary KPI if needed;
an outline of a decision-making framework regarding which part of the journey and which audiences should be the focus of PLG;
a forum for aligning stakeholders, which is crucial if you're going to actually pull off any testing and experimentation over an extended period.
It’s no coincidence that the diagnostic pulls elements from how product-market fit is defined. You’ll see this in action below.
But I just want to dive in with PLG
As with any product development effort, having effective frameworks in place will save you from a lot of churn in the long run.
In my experience, the difference in test velocity and resulting win rate between PLG teams set up after completing a full journey diagnostic and PLG teams that did not benefit from the preliminary end-to-end work is night and day. Over the past 7 years, it looks like this:
Improvement in PLG teams* when powered by regular strategic journey diagnostics:
Average # of tests per quarter: +200%
Win rate: +150%
*same overall PLG squad structure and comparable opportunity areas
When failure to launch is a win
A few years ago I spun up a growth squad for a new product. At the time, I had several other squads running across different products, and we had started to deliver meaningful impact. I could not have been more excited. My hope was to continue our expansion into additional products. Nothing was holding us back.
One of the first things that became apparent was that our toolkit of activation and engagement tests had to go out the window: this product and its audience were different enough from the other products we had been working on that we had to rethink how we approached things.
When we did the product journey diagnostic, we learned that the physics of the user journey for this product were completely different - in terms of how new users were finding the product, what their expectations were, how they navigated the first use experience (and, tellingly, how they didn’t), how they wanted to learn, and when they realized value.
In a methodical fashion, the PLG squad pulled apart the experience and leveraged user research, data science, design, and engineering in order to recommend where PLG should focus.
That’s great. That’s exactly the approach I’m outlining below and in this series overall.
But…
But for a variety of reasons, the key stakeholders on cross-functional teams could not align on our proposal. We couldn’t land on what success would be for this new PLG squad. The reality of the physics of the journey was not shared: despite our findings about the crux of the problem from an engagement and retention lens, stakeholders lobbied for ancillary or superficial areas of focus. It was like we were in parallel universes.
The danger here would have been to do the crowd-pleasing effort of chasing areas of focus based on popular request, not based on the fundamental math of the journey.
Ultimately, I closed this squad down before we ever launched a test. This was the win, saving everybody from wasted energy and inconclusive results. The product just wasn’t ready for PLG. The diagnostic can help you navigate situations like this.
Overview of a product journey diagnostic
A product journey diagnostic is not just a journey map. It’s not a screen-by-screen capture of onboarding surfaces. It’s not a marketing channel review. It’s not a product roadmap review. It’s not a GTM plan. It can include all of these things, but it’s not just any one of them.
The diagnostic is a qual- and quant-driven process to understand where user needs are not being met and where the PLG muscle should focus. Not everything raised in a product journey diagnostic will be addressed by PLG, but everything in PLG's mission should be covered by the diagnostic. It's a tool that keeps users at the center of everything and forces the identification of an actionable set of ideas based on user needs. This is the genesis of Growth Product Management.
A tool to help you understand where specifically in the overall journey to focus.
To really home in on the visual above, it’s not as if any one PLG squad will cover the entirety of the area circled in yellow; and the reality is that the area in the circle is actually multi-layered, with different audiences bringing different expectations into the product.
If the product-led growth squads have already been formed, then they would drive this diagnostic effort. If the squads have not yet been formed, then a dedicated growth product manager would drive the effort with cross-functional support; and the end of this diagnostic will entail the creation of the growth squad (more on this in article 4).
To keep this simple, I’ll outline the effort for one growth squad covering one product; but this effort can be expanded to include multiple squads and multiple different product journeys.
How to do a product journey diagnostic
At the beginning, the focus is on cross-functional stakeholders: getting their alignment on the need for a PLG team and getting their sign off on the problem PLG is trying to solve.
With our primary KPI in hand (per the last article, we’re not ready for PLG if we don’t have the primary KPI identified), we’re ready to begin the diagnostic.
80/20
For this process, the aggregate view (with a composite understanding of the user) is the enemy. At the same time, we can’t get so specific that we get lost in the infinite permutations of journeys and use cases. We’re striking a balance between specificity and universality: to get specific about what a user is experiencing in the journey, we have to put guardrails around how many different slices of the product journey we can cover.
How to begin the diagnosis: subway maps
The base layer of the diagnostic is the subway map.
Here we’re working with the mental model of:
a) getting very specific in order to truly understand the user journey; but
b) only covering a few different permutations of user journeys in order to make this manageable.
Keep it simple
The subway map work will focus on three specific cuts of the user journey, with a critical eye towards:
the context of the user in this journey
the goal of the user
where & when the user first realizes value from the product
You already have your journey metric and business outcome in hand (from article 1). So you should be able to create this view now:
If you can’t create that view in the green box above, stop reading and get that solved.
A, B & C
Next you’ll sketch out the composite view of three different users: what we’ll call audiences A, B and C. For each, there are three areas of focus.
In most companies, audiences are already defined, typically as marketing segments. We’re not talking about marketing segments here. What we are talking about is answering:
What is the user trying to do?
How are they doing it today?
Is this successful for them?
How often do they need to solve this problem?
To do list for fleshing out your A, B & C audiences
Audience definition: Product research should take the lead in drafting these audiences, and the cross-functional team should help to home in on the specifics.
Each audience will be defined by parameters specific to your product. For example, if your product is mobile-only and heavy on Android, your three audiences could include at least two Android users and one iOS user; other parameters could delineate free vs. paying users, or net-new vs. returning users.
Demographic data can help flesh this out: are your users students, knowledge workers, small business owners?
These parameters will be very specific to your product and the importance of each will be determined by your KPI.
Assume you’ll need ~3 such parameters for each of the three audiences.
Now that you have that profile sketched for A, B & C, we’ll go a level deeper:
What is each audience trying to do? For each audience, what is their definition of success?
Be specific here. The point of these journeys you are drafting isn’t what your product, marketing or sales teams want: it’s what the user is trying to do.
This may cause you to realize that you don’t know your users as well as you think you do - in that case, this exercise will help you calibrate on some of the missing pieces of your knowledge.
What user problem is being solved? This is the “Value” lens.
Think of it this way: how would the user solve this problem without your product?
User research should play a key role in defining the area in the gray box above.
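To keep the three audience drafts honest and comparable, each profile can be captured in a lightweight structure. This is just an illustrative sketch, not a prescribed tool; the field names and example values (an `Audience` dataclass, a hypothetical "audience A") are my assumptions, not anything from a specific product:

```python
from dataclasses import dataclass

@dataclass
class Audience:
    """One of the three composite audiences (A, B, or C)."""
    name: str
    parameters: dict[str, str]  # ~3 product-specific parameters (platform, plan, tenure, ...)
    goal: str                   # what the user is trying to do, in their own terms
    current_solution: str       # how they solve this problem without your product
    cadence: str                # how often they need to solve it

# Hypothetical audience A for a mobile-heavy product
audience_a = Audience(
    name="A",
    parameters={"platform": "Android", "plan": "free", "tenure": "net new"},
    goal="share a weekly report with their team",
    current_solution="emailing a spreadsheet",
    cadence="weekly",
)
```

Forcing every audience through the same fields makes gaps obvious: if you can't fill in `current_solution` or `cadence`, that's a signal you don't know the audience well enough yet.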
Once you have the above drafted for the 3 audiences of A, B, and C, you can list out the step-by-step journey as bullet points.
Note also that the criteria circled in blue below map directly to your product’s product-market fit framework.
Bullet point list of the journey
Depending on your audience and product, I’d try to limit the subway map to no more than fifteen bullet points, ideally closer to ten. If it’s longer than that, you are starting too early in the journey. Also note that every additional step creates exponentially more work later in the process.
The journeys for different audiences don’t have to start at the same point, but each does have to start at a natural beginning from the user’s point of view. Likely, the first step or two of each audience's journey will be outside of your product and marketing team’s remit: a user doing research or discovery on their own, for example.
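The fifteen-step ceiling is easy to drift past while drafting, so it can help to check each journey mechanically. A minimal sketch, with illustrative step names and a hypothetical `validate_journey` helper I'm inventing for the example:

```python
MAX_STEPS = 15    # past this, you're probably starting too early in the journey
IDEAL_STEPS = 10  # the target to aim for

def validate_journey(audience: str, steps: list[str]) -> list[str]:
    """Return warnings if a drafted subway-map journey is too long."""
    warnings = []
    if len(steps) > MAX_STEPS:
        warnings.append(f"{audience}: {len(steps)} steps; start later in the journey")
    elif len(steps) > IDEAL_STEPS:
        warnings.append(f"{audience}: {len(steps)} steps; aim closer to {IDEAL_STEPS}")
    return warnings

# Hypothetical journey for audience A; note the first step sits outside your remit
journey_a = [
    "Searches for a way to solve the problem",
    "Lands on the product site",
    "Signs up",
    "Completes first-use setup",
    "Reaches first value",
]
```

Running the check on each of the A, B, and C drafts turns the "no more than fifteen, ideally ten" guideline from a suggestion into a gate.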
After you’ve drafted the journeys for A, B & C in bullet point format
Questions to ask yourself at this point:
are you covering a good enough portion of the potential audience with the A, B, and C journeys you have outlined? Or do you have to go back and re-think one of those audiences?
are you anchoring the journeys around the primary KPI?
are you capturing the right journey steps that lead to the user reaching value?
If the 3 journeys are looking too similar
Try differentiating them this way:
make at least one of the three between A, B, and C a happy path: the ideal journey path you are envisioning.
make at least one of the three between A, B, and C a realistic path, to the best of your knowledge (i.e., the things that go wrong), and follow it through to a conclusion. For example: a product requires online access and the user loses Wi-Fi, or something forces the user to abandon the workflow and they contact support or turn to social media to try to solve it. Customer Support data can position what the main challenges would be for these audiences. Again, user research should validate this with qualitative inputs.
make one path reflect how the user problem is being solved today (i.e., via a competitive product, an older workflow, etc.). You must understand how users actually do the task today; one of the biggest problems teams run into here is making assumptions about how the job is done today without validating them.
Lessons learned from doing subway maps over the years
Limit yourself to three audiences to get started. Force this. Later in the clean-up process we’ll revisit whether an additional one is needed.
Be as specific as possible within the realm of viable utility. This is just the skeletal structure of the product journey diagnostic. You’ll be building on top of this with more detail (we’ll discuss in the next article).
You may have to iterate on the definition of each audience, using data to help justify the specifics.
It’s vital to follow the 80/20 rule here and not try to capture everything.
Second half of the diagnostic: journey forensics
In the next article we’ll finish this preliminary work. We’ll be changing from physics… to a murder-mystery metaphor. Trust me, it’ll make sense.
This first part of the process, outlined above, can be very useful as it forces the hard discussions around what journeys are the most important for the product – but remember, this in and of itself is just part of the process.
After the next article we’ll move on to how to create a growth squad in article 4.