An Inception is a collaborative design process for a group of stakeholders to share the same understanding about an upcoming project. It is an activity focusing more on “why we do it” rather than “how we do it”. The methods of conducting an inception continually evolve as we face different situations with each client. A good Inception practitioner should always be thinking about improving the methodology. In this article, I will detail our Inception experience at Technovation and discuss three improvements we made.
#1 No first-day Elevator Pitch
Convention told us that on the first day of Inception we should have all participants share the same product vision. The Elevator Pitch is a good tool to get everyone on the same page, but there are some risks:
- Do we really know what users we should be targeting at the very beginning?
- Do we know our product value, given we haven't talked about real scenarios yet?
- Do we really understand what features we should provide?
Shouldn't "do not jump into solution discussions before you are clear about the problems you are trying to solve" be our guiding principle? Yet every time we go to a client Inception, the very first thing we do is work on an elevator pitch, a.k.a. the product vision.
The Elevator Pitch should instead be a retrospective at the end of the "definition" part of your Inception, a check on whether everyone is on the same page after all those workshop sessions, not a part of Day 1.
#2 Persona is overrated
The Persona should be de-prioritized. We spend so much time with our clients creating those "user pictures", imagining customers' names, occupations, gains, fears, hopes, etc. And after the Inception we find ourselves asking, "Where did those personas go?" It is a nice way to collaborate with the client and focus on the user, but the bottom line is that you cannot make everything up about the user; you should at least base it on some facts and real customer behavior. Given how often user research is de-valued, those facts and data on user behavior are rarely ready and available at Inception.
So how do we get around this? Instead of a long persona session, we run a user goal session. Here's how:
- Stakeholder map practice: This plots stakeholders by interest and influence, with the focus on the top-right (high-interest, high-influence) quadrant. It is a traditional approach to prioritizing the important users of the system.
- User relationship matrix: This determines the importance of each user role by discussing how one user type engages with another, and helps us understand how they interact.
- User goal brainstorming: A good customer journey is driven by a well-defined user goal. By identifying these user goals, we can set the right goals for the product to achieve.
- Dependency analysis: User goals have to be prioritized. Understanding the dependencies between goals helps us make more logical decisions about priority, rather than relying on intuitive voting. After this exercise, we get a list of the most important user goals for the most important stakeholders, which we then use as input for our customer journey mapping session.
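The stakeholder map exercise above can be sketched in code. This is a minimal illustration, not part of the original workshop materials: it assumes each stakeholder is given hypothetical 1-10 interest and influence scores during the session, and classifies them into the four quadrants of the map.

```python
# Hypothetical sketch of the stakeholder-map practice: score each
# stakeholder on interest and influence, then read off the quadrant.
def quadrant(interest, influence, threshold=5):
    """Place a stakeholder in one of the four map quadrants."""
    if interest > threshold and influence > threshold:
        return "focus"            # top right: the important users
    if influence > threshold:
        return "keep satisfied"   # top left: high influence only
    if interest > threshold:
        return "keep informed"    # bottom right: high interest only
    return "monitor"              # bottom left: low on both axes

# Example scores (made up for illustration).
stakeholders = {"mentor": (9, 8), "sponsor": (4, 9), "student": (8, 3)}

focus_group = [name for name, (i, inf) in stakeholders.items()
               if quadrant(i, inf) == "focus"]
print(focus_group)  # -> ['mentor']
```

The point of the sketch is only the prioritization logic: whatever scoring scale the group agrees on, the top-right quadrant is what feeds the later sessions.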
#3 Be careful about voting for priority
We like voting. However, one of the biggest challenges we hear from the design community is that "good design cannot be voted on". My response is this: before the product release, no one knows what is good or bad, so a good design at this stage is the cheapest and quickest design that helps you find out (which is exactly why we like voting: it is cheap and quick). But design thinking says "it is always good to balance analytical and intuitive thinking", so we should think about how to add more analytical methods to the prioritization process.
First, the design process should be a convergent journey. Identify the most important stakeholders, user goals, challenges, etc. Every step of this approach is a prioritization, even if we don't call it out as such.
Second, system analysis approaches help: methods like Root Cause Analysis and System Feedback Analysis use a more logical basis, rather than an unclear "business value", to define priority. For example, when doing a dependency analysis on the identified core features and using dependency as a key metric for priority, we see that the Team Dashboard is less important than Team Setup, because one first needs to have a team set up.
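The dependency-analysis idea amounts to a topological ordering over the feature graph. Here is a minimal sketch, assuming each core feature lists the features it depends on; "Team Setup" and "Team Dashboard" come from the example above, while "Project Submission" and the exact graph are hypothetical:

```python
# Hypothetical sketch of dependency-driven prioritization: features
# with no unmet dependencies come first, giving a logical priority
# order instead of an intuitive "business value" vote.
from graphlib import TopologicalSorter

# feature -> set of features it depends on (illustrative data only)
features = {
    "Team Setup": set(),
    "Team Dashboard": {"Team Setup"},
    "Project Submission": {"Team Setup"},
}

priority_order = list(TopologicalSorter(features).static_order())
print(priority_order[0])  # -> Team Setup
```

Because Team Setup has no dependencies and everything else depends on it, any topological order puts it first, which is exactly the conclusion the workshop reaches by discussion.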
Below is the deck from our Design Thinking-driven Inception with Technovation.