How We Build Features

Ash Maurya - Jul 25

After launching your Minimum Viable Product (MVP), it’s quite likely that customer uptake won’t be immediate. In fact, it should be expected:

Your MVP is the minimum feature set that lets you start learning about customers.

When you first launch a product, lots of things can and do go wrong. But when that happens, a typical reaction is to want to build more stuff – especially when it comes disguised as a customer feature request.

While listening to customers is key, you have to know how.

Blindly pushing features is almost never the answer. Features have costs beyond the cost of building them: ongoing maintenance, documentation, added complexity, and so on. Because unused features are really a form of waste, it’s important to keep only those features that have a positive impact on your key metrics. Otherwise, left unchecked, it’s very easy to undo all the painstaking effort you put into reducing the scope of your MVP in the first place.

Even though all this makes logical sense, managing features in practice is still quite hard. I wrote a post on a similar topic a year ago titled “3 Rules for Building Features,” which represented some early thoughts on how to do this.

In this post, I’m going to build on that foundation and outline what our current process looks like.

Visualizing the Feature Lifecycle

Features versus bug fixes

The first step is distinguishing between features and bug fixes. By feature, I really mean a Minimal Marketable Feature (MMF).

The MMF was first defined in the book “Software by Numbers” as the smallest portion of work that provides value to customers. An MVP is made up of one or more MMFs.

A good test for an MMF is to ask yourself whether you’d announce it to your customers in a blog post or newsletter. If it’s too tiny to mention, it’s not an MMF.

Features as their own iterations

Next, we build and track features independent of release or traditional iteration boundaries.

Time-boxed iterations are used in a typical Agile software development process to define release boundaries. The problem starts when features overrun this boundary, which is fairly common – especially when you additionally want to track the longer-term effects of features. Having implemented two-week release cycles for a number of years and then switched to Continuous Deployment, I find it unnecessary to take on the added overhead of tracking features this way.

Instead, we track every feature as its own iteration. Rather than focus on velocity and planning games, we track end-to-end cycle time on features. We use Continuous Deployment to incrementally build and deploy features, and a single Kanban board to visualize the feature lifecycle, which I’ll describe next.
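
As a rough illustration of what cycle-time tracking means in practice, here is a minimal Ruby sketch. The Feature struct, names, and dates are hypothetical stand-ins for what we actually read off our Kanban board:

    # Minimal sketch: end-to-end cycle time per feature, from the day we start
    # understanding the problem to the day quantitative validation ends.
    # The struct and the sample data are hypothetical, not our actual tooling.
    require "date"

    Feature = Struct.new(:name, :started_on, :validated_on) do
      def cycle_time_days
        (validated_on - started_on).to_i
      end
    end

    features = [
      Feature.new("csv-export",   Date.new(2011, 6, 1),  Date.new(2011, 6, 20)),
      Feature.new("team-invites", Date.new(2011, 6, 10), Date.new(2011, 7, 5)),
    ]

    features.each { |f| puts "#{f.name}: #{f.cycle_time_days} days" }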

Meet our Kanban board

For those unfamiliar with Kanban: it is a scheduling system designed by Taiichi Ohno, father of the Toyota Production System, as a way of visualizing the flow of work. It has more recently been adapted for software development.

A Kanban board is to feature tracking what a Conversion Dashboard is to metrics tracking. Both let you focus on the Macro.

We extend the basic Kanban board by adding a number of sub-states shown below:

LEGEND
1: We clearly state, at the top, the current macro metric we want to achieve, which helps prioritize what we work on.
2: We add an explicit state for validated learning.
3: We constrain the number of features we work on based on the number of developers. This prevents us from taking on new features without first validating that the features we just pushed were good.
4 and 5: The top row is for work currently in progress, while the bottom row is for work that is ready to move to the next stage. This will become clearer in a moment.
6: The stages marked in green are places where we solicit customer feedback.

The basic idea is that features start on the left-hand side of the board and move through stages of product and customer development before they are considered “Done”. In a Lean Startup, a feature is only done after its impact on customers has been measured.
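
To make the flow and the work-in-progress constraint concrete, here is a hedged sketch of how the board could be modeled in code. The stage names follow this post, but the Board class is purely illustrative and says nothing about how AgileZen actually works:

    # Illustrative model of the board: features move left to right, and a
    # working stage can hold no more features than we have developers
    # (legend item 3).
    class Board
      STAGES = [:backlog, :understand_problem, :define_solution,
                :validate_qualitatively, :verify_quantitatively, :done]

      def initialize(developer_count)
        @wip_limit = developer_count
        @columns   = Hash.new { |hash, stage| hash[stage] = [] }
      end

      def add(feature)
        @columns[:backlog] << feature
      end

      # A feature may only advance if the next stage is under the WIP limit;
      # otherwise we validate what we already pushed before starting new work.
      def advance(feature, from, to)
        if to != :done && @columns[to].size >= @wip_limit
          raise "WIP limit reached at #{to}: validate pushed features first"
        end
        @columns[from].delete(feature)
        @columns[to] << feature
      end
    end

    board = Board.new(2)                # two developers => WIP limit of two
    board.add("csv-export")
    board.advance("csv-export", :backlog, :understand_problem)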

Processing Feature Requests

I mentioned that we treat features differently from bug fixes. Here’s a “Getting Things Done” (GTD) style workflow for how we process new work requests that come in either internally or via customers:

Bug fixes either get fixed and deployed immediately or go on our task board. All feature requests end up on our Kanban board, where they are processed using a 4-stage iteration process that I’ll walk through next:

1. Understand Problem

The first stage begins with a weekly prioritization of backlog items waiting to be worked on, based on the macro metric we’re currently trying to improve. So, for instance, if we have serious problems with our sign-up flow, all other downstream requests take a backseat to that.

We pick the highest-priority feature in the list, and the first thing we do is set up a few customer interviews to understand the underlying problem behind the feature request. Not what the customer wants, but why they want it. Every feature starts with a “NO” and needs value justification to be deemed “worth building” before we commit to it.

After these interviews, the feature is either killed or moved to the next stage.

2. Define Solution

Once we understand the problem, we take a stab at defining the solution, starting with just the screens, which we demo to these same customers. This usually results in a few design iterations that help define the solution we need to build.

3. Validate Qualitatively

Once we know what to build, we start building the rest of the feature using our Continuous Deployment process. Continuous Deployment, combined with a feature flipper system, allows us to push these features to production but keep them hidden from customers until we are ready. When the feature is code complete, we do a partial rollout to select customers and validate the feature qualitatively with them. If we surface any major issues, we go back and address them.
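
Since rollout is the feature flipper listed in the tools below, here is roughly what a partial rollout looks like with it. Treat this as a sketch: the feature and user names are made up, and the gem’s API has evolved over time.

    # Sketch of a partial rollout with the rollout gem (needs a running
    # Redis). The feature name and user model are hypothetical.
    require "redis"
    require "rollout"

    User = Struct.new(:id)            # stand-in for the app's real user model
    current_user = User.new(42)

    $rollout = Rollout.new(Redis.new)

    # The code is already deployed dark; now expose it selectively.
    $rollout.activate_user(:new_editor, current_user)  # one hand-picked customer
    $rollout.activate_percentage(:new_editor, 10)      # or 10% of all users

    # In the app, the feature stays hidden unless the flag is on for this user.
    puts "show the new editor" if $rollout.active?(:new_editor, current_user)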

4. Verify Quantitatively

Once the feature passes qualitative validation, we then roll it out to everyone and start gathering quantitative metrics. Because quantitative metrics can take time to collect, we start work immediately on the next high priority feature from the backlog. Splitting the validated learning stage into 2 phases (first qualitative, then quantitative) allows us to achieve the proper balance between speed and learning.
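
Here is a minimal sketch of the kind of before/after cohort check this stage boils down to. The metric and all the numbers are invented for illustration; in practice this data comes from split tests (Vanity) and the conversion dashboard mentioned earlier.

    # Compare the macro metric (activation, here) for the cohort that arrived
    # after the rollout against the prior cohort. All numbers are made up.
    before = { signups: 400, activated: 120 }   # cohort before rollout
    after  = { signups: 380, activated: 140 }   # cohort after rollout

    rate = ->(cohort) { cohort[:activated].to_f / cohort[:signups] }

    puts format("activation before: %.1f%%", rate.call(before) * 100)
    puts format("activation after:  %.1f%%", rate.call(after)  * 100)

    # Keep the feature only if it moved the metric within the time window.
    puts(rate.call(after) > rate.call(before) ? "keep feature" : "kill feature")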

Only if the feature demonstrates a positive impact on the macro metric within a reasonable time window does it stay in the app. Otherwise, it is killed and removed.

Tools We Use

Here are the tools we use to implement this product development system (a rough sketch of how the deployment pieces chain together follows the list):

1. AgileZen for our Kanban board.
2. Heroku for continuous deployment.
3. GitHub for source code management.
4. Jenkins for continuous integration.
5. rollout for our feature flipper system.
6. Vanity for split-testing.
7. HipChat for tying all of the above together through persistent chat rooms and notifications.
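
As a rough picture of how those pieces chain together: Jenkins runs the tests on every GitHub push, and only a green build gets shipped to Heroku. The snippet below is hedged, generic 2011-era glue, not our actual scripts; "git push heroku master" was the standard Heroku deploy command of the day.

    # Hypothetical deploy step run after a green build: test, then ship
    # master to Heroku. Illustrative glue code only.
    def deploy!
      abort "tests failed; not deploying" unless system("rake test")
      system("git push heroku master") or abort "deploy failed"
    end

    deploy!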

Go Only As Fast As You Can Learn

Since the goal of a startup is finding a plan that works before running out of resources, we know that speed is important. But it’s not an excuse for turning into a feature pusher. You have to balance speed with learning by building a continuous feedback loop with customers – not just at the tail ends but throughout the product development cycle.

Written by Ash Maurya on Jul 25 2011


  • Michael Vax

    Great post Ash.
    There is a definite need to step outside the boundaries of the Agile iteration.

  • Trevor Owens

    Great post Ash. I love your “open box” approach of just laying everything out and telling your audience exactly how you do it. That’s awesome! One point I might improve upon is defining an MVP as the minimum amount of work you need to do to get to a pivot. This is better than saying “to start learning about customers” because it doesn’t give entrepreneurs an excuse to take things slowly. By saying the goal of an MVP is to reach a pivot, founders acknowledge there is theoretically one or several “ideal MVPs” that will maximize learning and minimize effort. It also defines the maximum amount of learning possible as that which is needed to reach a pivot.

    Would love to chat further with you on these topics and others!

  • BillSeitz

    So months could pass and your software would still end up back at MVP because nothing you built moved the needle, right?

  • BillSeitz

    I guess the precursor question/comment is: 
    * you start out with a vision of what you think is “feature-complete”
    * you pick 5% of that to qualify as MVP to validate key assumption. You build, you launch, you start to collect data.
    * You look at your data, decide which AARRR metric to focus on improving first. (It’s probably either Acquisition or Activation.)
    * You start receiving customer requests, you do some interviewing.
    * Then you have to pick from the customer requests plus that 95% of your own unbuilt features, which one to do first that you believe will have the greatest effect on the 1 metric you picked above.

    And that becomes the input into the process above. Right?

  • Wes Winham

    How do you combat the added latency that waiting on qualitative validation introduces? I’m kind of gun-shy about anything that I know is going to add latency when we’ve worked so hard to reduce it through the process. I know this is very likely my fear-of-change brain parts coming up with these objections, but maybe someone can help me.

    How do you handle scheduling of qualitative review sessions? Do you find that it’s hard/easy to get 10 minutes of people’s time to talk about features? What about features that you’ve seen coming up in the sales/cust-dev cycle that aren’t tied to any specific current customers?

    Do you usually up front say “we’ll give feature A 15 days and make a judgement then” or do you just revisit running tests periodically to see how things are going? Do you find yourself in a situation where your kanban board is dominated by items in column 6 waiting for quantitative validation?

  • BillSeitz

    And then the follow-up becomes: at what point do you conclude that you’re on a pointless-increment path and that it’s time to Pivot?

  • Snorre Gylterud

    This post made me remember a tool I participated in creating a couple of years back: MMF Planner. MMF Planner is a project planning tool for projects using the Incremental Funding Method (IFM) from Software by Numbers. It’s a more economics-focused approach, but it might come in handy in combination with the Kanban board. If interested, check out https://github.com/jodal/mmfplanner

  • Wes Winham

    Generally, I do prefer to have requests from actual customers default to a higher priority than any speculative ideas we have. As long as it’s within the scope of the current vision, any time a customer expresses a particular pain, I try to let that override my own intuition of what their pain is. That’s kind of a fuzzy subject though and I think feature prioritization will always be mostly a guessing game (and thus the focus on qualitative/quantitative validation on the back end). 

    My struggle is with including the validation as part of the overall process without ballooning overall work in progress to the point where it feels like we’re thrashing. I don’t know if I’m missing some key insight or if it’s just one of those things that you have to slowly improve as an organization.

  • Ash Maurya

    Cool thanks for sharing!

  • Ash Maurya

    I’m available for a chat anytime. 

    One thing I’ve been a little wary of lately is the overuse of “pivot”. Not as a term, but specifically as a cop-out over “perseverance”.

    Part of learning is dealing with things not working and identifying “why”. Sometimes you pivot, but sometimes you just fix stuff that doesn’t work, aka bugs.

  • Ash Maurya

    Thanks!

  • Ash Maurya

    Great points Wes… Lots of questions that could make mini-posts.

    On added latency, I’d say it’s a question of going back to defining what real progress means to you. Is it pushing bits to fulfill your vision or building something people want? If it’s the latter, then you need to seek out both qualitative and, eventually, quantitative validation before going too far down the rabbit hole.

    It’s always easy to get customers to take a call for a feature they requested but the key to getting qualitative feedback beyond them is building relationships early with your users/customers. This is where customer development shines. It lets you do this organically so that by the end of the process the early adopters who are using your system are almost as invested in your product as you are…

    I would pass “internal features” through the same process described above i.e. articulate a real customer problem (justification) for the feature, put it in front of customers and see how they react. 

    Yes, I do like to time-box my experiments, but the time period is relative, based on the number of users, feedback cycle, etc. The same goes for the quantitative validation stage. When you first launch, you typically don’t have a lot of users, so features end up staying longer in this state. Hopefully that improves over time. The key, however, to tracking the impact of features over time is using cohorts.

  • Ash Maurya

    I agree with most of the precursor comment except the part about picking 5% to qualify as the MVP. It sounds like you are compromising on the product when you should be distilling down towards delivering a unique value proposition (UVP). When you hit on the right UVP, feature parity is less important.

    Your goal with the MVP should be identifying and delivering just-enough of a product to address a customer problem. After launch, I spend 80% of my effort towards ensuring that new users realize that UVP (existing features) versus building new features. 

    Every new feature has to be justified much like your MVP through the process above.

  • Ash Maurya

    The decision to pivot is a hard one and not one to be taken lightly. As I mentioned in reply to Trevor Owens’s comment below, I find too many founders using pivot as a cop-out over “trying harder”.

    Eric Ries has a section on this in his upcoming book (The Lean Startup) that describes a process for running “Pivot or Persevere” meetings. You need to make an informed versus emotional decision. 

  • Ricardo Trindade

    Great post. In my case, my site uses a freemium model, and I struggle to find the correct balance between features for my customers (the ones who pay) and the users. Isn’t it “wrong” to release a user feature and then simply remove it when your testing shows it isn’t improving any metrics? Won’t the users feel they are being cheated, even though they aren’t spending a single penny for that feature? Thanks in advance!

  • Ash Maurya

    I’ve got a lot to say on freemium, but I’ll start by making the distinction that freemium is more a marketing tactic than a business model. A perfect freemium system should work like a free trial (i.e. get users to grow into customers) with the added psychological advantage of “free” (which is huge).

    With that in mind, I’d recommend that you only listen to customers and not users. Only once you understand how customers really use your system are you in a position to correctly architect/balance the free plan to provide just enough benefit for your users.

    As to removing features, it depends on how it’s done. I typically don’t do a marketing announcement until the feature proves itself to be useful. Partial rollouts, customer interviews, etc. all help to lower the risk along the way. 

    But in the case where a feature gets rolled out to everyone and still doesn’t move the needle, I wouldn’t hesitate to remove it, provided you publicize your reasons. Most customers appreciate less clutter and simpler software, and you’d be surprised how supportive they can be of fixing mistakes.

  • Henri Liljeroos

    Do you have a certain pool of users you use for interviews in stages 1 and 2? I find it a bit troublesome to gather a group of users for every single feature. How do you do this in practice?

  • Ash Maurya

    If you follow customer pull, i.e. if the feature was requested by one or more customers, start there. That’s the easiest way to get a motivated user on the phone, because it was their idea. Frame the conversation around learning to understand their root problem, then ask if you can show them the mockup/push the feature to them when it’s ready, to make sure it solves the problem.

    Otherwise, yes, it’s important to segment your customers. I have a list of “power users”/early adopters that I ping whenever we want to start building a new feature. I start by describing the problem to see if it resonates, then follow the rest of the process.

  • Lindsay Brechler

    What do you do if you kill the feature – either before development or during quantitative analysis? Do you notify the customer that requested (or is now using) the feature?

    Also, do you have a target ratio of bug fixes to features in development?

  • Ash Maurya

    Yes, in both cases. The first is easier. If, while we’re building the feature, we deem it “not worth building” for any of a number of possible reasons, we stop and explain our reasons why.

    We did this recently when we removed some early work on “interview tracking” in Lean Canvas. We realized that this was too big a problem to tackle for a small sliver of a startup’s lifecycle. There were also some good-enough “cobbled-together” alternative solutions we ourselves were using. So we informed our user base that we weren’t building this feature anymore, explained our reasons why, and gave them possible alternative solutions they could use instead.

    I use an 80/20 general rule where 80% is for improving existing features (which may be bug fixes but also usability, flow improvements, etc.) and 20% is for new features.

  • costume sur mesure

    Hello,
    and if we have a public website like http://www.MonCostumeSurMesure.com, how can we find key users?

  • Chris Cornutt

    I’d love to see an expanded version of the “Features vs Bugs” section. I’m a developer and it’s too easy to get caught up in the weeds of the code and miss what should be considered a “feature”.

  • Ash Maurya

    Sure thing Chris – will do a follow up post on that.

  • Anonymous

    Another great post, with product lifecycle charts.

  • Henryk Witoszewski

    For a Kanban board, try out Kanban Tool. It works for us.

  • jesse

    Would like to know more about how you tie GitHub and the Kanban board together.

  • Ilanr1

    Ash:

    I came across your post only today, and I am quite fascinated. Well done, and greatly helpful!
    Re this “how we build features” post: as you were building software products, I noticed the absence of a Scrum tool, or any indication of using a Scrum process. Given the agile and PRACTICAL nature of all your methods and processes, can you please shed some light on your agile software development experience and your perspective on excluding (or not?) Scrum methods?
