Building a learning machine
Part 1: Introduction
In the startup world, one of the most common tropes is the idea of learning from users. Gathering insight and implementing it into your product is essential to building the right thing, and building it well. The problem? It’s often much harder to do than it sounds, particularly in early stage companies moving quickly with little process and even less resource.
Having worked in a wide range of startups worldwide, I’ve noticed a few common themes that prevent companies from really leveraging user insight…
Where moving quickly is essential, everything becomes about balancing resource and effort with quality or scope. For that reason, a large amount of strategy work in early stage companies boils down to a simple question: What is good enough?
i.e. What does a minimum viable product need to do? How polished does our brand need to be? How scalable should our engineering processes be? And so on.
Understanding users is no different, and it’s a hard balance to strike. The question becomes: When do we know enough about users to have confidence in our general direction?
To simplify, there are generally two schools of thought…
| Design Thinking | Lean Startup |
| --- | --- |
| Often advocated by design and research agencies, the research behind effective design thinking is in-depth and largely up front. | Advocates of the lean startup aim to ship products as quickly as possible and then learn from them in the wild. They live by the mantra of ‘move fast and break things’. |
| Methods: Interviews and focus groups, ethnographic studies, prototype testing, co-designing with users, task diaries, surveys. | Methods: Product analytics, metric tracking, user surveys, ad-hoc customer learnings, user testing, NPS scores. |
| Pros: In-depth understanding. Discover what to build, not just how. | Pros: Beat competitors to market. Learn from real, active users. |
| Cons: Time and effort. Expertise required. Insights arrive in bulk only after a long timeframe. | Cons: Risk of building the wrong things or for the wrong people. Reduced design effectiveness. Product debt. |
On one end of the scale, design research is in-depth and can provide more confidence in decisions, but it’s expensive, cumbersome and, to product people, looks a lot like ‘the dreaded waterfall’. On the other, a narrative built on speed has lowered the bar to a dangerous level, where companies risk building the wrong things ineffectively. What’s more, the ‘learn’ phase too often gets lost in the melee of running a live product.
All too often, in both approaches, the process stops at collecting learnings and generating insights. People sit back with a smile on their face, pleased at having developed all this insight and done what they’ve read startups should do. Then they store their learnings in a document buried on a drive somewhere and promptly forget about them. Or, even worse, the learnings live only in people’s heads.
Companies often lack a clear process for actually incorporating learnings into product development and ensuring they have a measurable impact. Without one, it’s hard to tie learnings to business goals and product metrics, or to know how to prioritise them.
There’s also a gap in the other direction: learning is too often haphazard and self-directed, rather than aimed at specific product goals and issues. This misses the fact that learning should be a cyclical process, one that aims for constant growth in understanding and refinement of the product. It should be as much about understanding how well you are solving problems or capturing opportunities as it is about gathering the learnings you need to do those things.
When it comes to generating product insights, different team members focus on different things: designers on learning from users, product managers on analytics, business leads on business goals, and developers on engineering metrics such as uptime and load time.
That’s fine; we have specialisms for a reason. The problem comes when these areas are treated too separately, with completely different work-streams and systems. That makes it difficult to see the connections between them, and even more difficult to prioritise them. They end up battling each other for space on a product’s roadmap, rather than building on each other in harmony.
To be clear, all of these approaches can generate great and equally important insights. Looked at together, they’re also often much more connected than they first appear.
As a product design & strategy consultant, the most valuable work I’ve done in companies has been about trying to solve these problems. It’s led me to develop what I call The Learning Machine: a framework that aims to build a process and culture of learning, action and measurement within a company or product team.
The learning machine is a system based around gathering learnings, distilling the patterns in them to uncover what’s important, and then finding ways to act on them and measuring the effectiveness of those actions. Over the course of my working life, it’s become an all-encompassing framework for looking at the development of products.
It combines elements of design, product strategy and business to move away from pure ‘user centricity’ and towards the wider idea of ‘insight centricity’: the difference being that useful insights can come from many different places. And it’s a step-by-step process, showing how this thinking can actually be implemented on the ground in fast-moving companies, particularly early-stage startups.
Over the next two posts in this series, I’ll cover the following things:
Part 2: Generating insights
1. Collecting & organising learning: where to look for learnings and how to manage a large amount of data from different sources.
2. Generating insights: how to move from data to useful insight.
3. Prioritising insights: how to prioritise insights and understand where to focus.

Part 3: Taking action & measuring effect
1. Developing hypotheses: how to start defining the action that an insight leads to.
2. Prioritising hypotheses: how to prioritise the multiple actions you could take.
3. Evaluation & continuation: how to know if you’re moving the needle and what to do next.
It’s important to note that I’ll spend very little time (if any) on software and tools. This is a system of thinking that can be implemented with many different tool stacks, depending on individual and company preference. Instead, this series will encourage you to think about products in a new way and walk you through the practical process of implementing that thinking.
With that in mind, sign up below to be notified when I publish Part 2: Generating Insights.