Monday, March 21, 2016

Convergence Techniques

There are countless ways to build a convergence process, but I suggest they all follow the general model I presented. Whether a simple single pass with basic criteria, an artful process where the steps and stages may not even be clear or definable, or a scientific process with dozens of passes and a high degree of measurement and rigor, it's important to be intentional and conscious during convergence. To better understand some of the options, it may be helpful to briefly discuss a few of the basic kinds of tests and criteria that can be used. Here are a few suggestions for options in convergence. I have attempted to roughly sort them by rigor and complexity, with the simplest and easiest, but maybe least robust, methods first.


Simple audit. This may be the most basic form of reducing ideas and probably one to use early in the process rather than later. It's a simple audit of all of the ideas and opportunities that emerged from the divergence process. This is done by a single individual, maybe the project leader or client lead. Using the initial design criteria or a simple set of requirements, a single pass or sort divides the ideas into two or three buckets. I like to create labels (something like awesome, average, and ridiculous work). This can save a lot of effort later, when more costly or rigorous tests would better be run on smaller sets of ideas. It clears away the low-hanging fruit, moves forward the obvious choices, and can be a simple gut check of the process. I like to keep the process fluid and open to moving an idea from one bucket to another later.
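To make the single-pass sort concrete, here's a minimal sketch in Python. The ideas, gut-check scores, and thresholds are all hypothetical; the point is just the one-pass, three-bucket structure, with ideas free to move later:

```python
# Hypothetical ideas with a single reviewer's gut-check score (0-10).
ideas = {
    "self-serve kiosk": 8,
    "loyalty app": 5,
    "singing mascot": 1,
}

def audit(ideas, high=7, low=4):
    """Single-pass sort into three labeled buckets; ideas can be moved later."""
    buckets = {"awesome": [], "average": [], "ridiculous": []}
    for idea, score in ideas.items():
        if score >= high:
            buckets["awesome"].append(idea)
        elif score >= low:
            buckets["average"].append(idea)
        else:
            buckets["ridiculous"].append(idea)
    return buckets

print(audit(ideas))
```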


Group consensus. In some situations, it may be better to give this task to a group, especially if the process is contentious in any way. I have used long-standing committees, decision-making bodies, or specially appointed selection groups to perform this task. The process is much the same as the simple audit, but the results must be achieved through consensus, or voting if necessary. The group process could be done in steps or passes at the list of ideas. Initially, the group needs to understand the task at hand and what success looks like. Then it must agree on the criteria and the number of buckets or sort piles. And finally, it can sort. The last step is endorsing or agreeing on what resulted. As a twist, larger groups could function in subgroups and compare the subgroups' results. Simple audit and group consensus are commonly used methods.
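The voting variant can be sketched simply. In this hypothetical tally, each member assigns each idea to a bucket; an idea lands where a supermajority agrees, and anything short of that is flagged for further discussion rather than forced:

```python
from collections import Counter

# Hypothetical ballots: each group member assigns each idea to a bucket.
votes = {
    "loyalty app": ["awesome", "awesome", "average"],
    "singing mascot": ["ridiculous", "average", "ridiculous"],
    "kiosk": ["awesome", "average", "ridiculous"],
}

def tally(votes, threshold=2 / 3):
    """Place an idea where a supermajority agrees; otherwise flag it."""
    results = {}
    for idea, ballots in votes.items():
        bucket, count = Counter(ballots).most_common(1)[0]
        results[idea] = bucket if count / len(ballots) >= threshold else "discuss"
    return results
```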


Bi-dimensional comparisons. This sounds more complicated than it is, but a good technique to make choices from a mid-sized list of options is to plot them based on two key criteria. Often I like to use effort and impact along the X and Y axes of a grid, then compare the quadrants. The axes can be rough scales from 1 to 10, or more specifically measured quantities if these measurements have been taken. It's fun to label the quadrants too. A low-effort, high-impact idea may pass through the filter most easily, high-effort, low-impact ideas may be eliminated, and the two remaining quadrants may require further thought or discussion. The method can be applied along with a simple audit or group consensus to help build a more powerful test. In reality, any two criteria can be put along the axes. Here are some combinations I have used in the past: cost-value, simplicity-impact, usability-functionality, function-form, outcomes-duration, or delight-cost.
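The quadrant logic can be captured in a few lines. This sketch uses the effort-impact pairing with hypothetical ideas and 1-10 scores, splitting the grid at the midpoint; the quadrant labels are just examples:

```python
# Hypothetical (effort, impact) scores on rough 1-10 scales.
ideas = {
    "quick-win feature": (2, 8),
    "platform rewrite": (9, 9),
    "vanity dashboard": (8, 2),
    "minor tweak": (2, 3),
}

def quadrant(effort, impact, mid=5):
    """Label an idea by its quadrant on the effort/impact grid."""
    if impact > mid:
        return "do now" if effort <= mid else "plan carefully"
    return "maybe later" if effort <= mid else "eliminate"

for idea, (effort, impact) in ideas.items():
    print(f"{idea}: {quadrant(effort, impact)}")
```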


Rating systems. A more rigorous testing method is to apply rating scales or systems. There is an entire discipline for testing and measurement and I won't go into the methods here; know that you can learn about these if you need to. Simply put, a set of measures can be established for the key testing criteria, and assessments of each idea are generated. For example, a 10-item rating scale might be developed using statements of agreement like "I think this idea has a great chance of success," and a group of individuals evaluates each idea. Basic statistics can be calculated on all of the items across all of the ideas, and comparison methods can yield information about where cut points should be drawn: which ideas move forward in convergence and which ones do not. Different kinds of raters can also rate, like customers, users, managers, the public, and so on, to add variety and dimensionality to the statistical analyses. As you can see, the rigor, precision, time, and cost of the testing is increasing.
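A bare-bones version of this, using only standard-library statistics: each idea gets a list of rater scores (hypothetical 1-5 agreement ratings here), means and standard deviations are computed, and a cut point decides which ideas advance. Real rating instruments would have many items and more careful psychometrics; this shows only the mechanics:

```python
from statistics import mean, stdev

# Hypothetical ratings (1-5 agreement with a success statement), one list per idea.
ratings = {
    "idea A": [5, 4, 5, 4, 5],
    "idea B": [3, 2, 4, 3, 3],
    "idea C": [1, 2, 2, 1, 3],
}

def converge(ratings, cut=3.5):
    """Compute per-idea mean and spread; advance ideas whose mean clears the cut."""
    stats = {idea: (mean(r), stdev(r)) for idea, r in ratings.items()}
    advanced = [idea for idea, (m, _) in stats.items() if m >= cut]
    return stats, advanced
```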


Large-scale participatory events. This may be a less common method, but it is one that I've used with considerable success in my work with organizations. When considering significant strategies or organizational changes, getting broad input and buy-in during the convergence process can make the difference between acceptance and rejection of an idea. For example, I have staged day-long or multi-day events where a large number of early ideas are considered, built out, filtered, and rebuilt successive times, with the result of reducing the number of choices in consideration while simultaneously adding detail to the ideas that move forward. In one case, working with a client over a multi-day event, we went from 25 opportunities that had some detail, down to 14 big ideas with a basic plan and cost structure for each, down to 7 key strategies with implementation plans. These stages can happen in a continuous event or be more separated in time. The key is that a large portion of (or all of) an organization or user group is there to witness and participate in the convergence.


Failure scenarios. For a convergence process where there is less empirical and historical data and evidence, we can use the scenario process to help give insight into which ideas should move forward and which ones should not. One method I like is called a failure scenario, along with its converse, the success scenario. Basically, the first step is to develop stories about situations in which ideas could fail or succeed, and then sort the expected results of implementing an idea into one of many possible future outcomes. A final step is to review the outcomes of the scenario sort to see if they make sense. Scenario planning is sometimes more art than science, as when comparing political or economic actions in large, complicated systems. But there are examples where failure scenarios can be more quantitative, such as materials testing, like choosing a plastic, ceramic, or metal for application in a vehicle.


Voice of the customer. Consumer or user input is both a divergence and a convergence technique. For divergence, we are looking for new ideas, or builds on existing ideas, from users. For convergence, we are looking for what appeals, wows, or makes sense to the people who will be most impacted by the implementation of an idea. An example here is the software development or selection process, where design choices can be selected, eliminated, or altered based on usability testing with a group of users. A second example might be consumer products like food or cosmetics, where the success of an item in a hyper-competitive marketplace depends on the subtle perceptions of a large number of consumers. The focus group is a specific example of this technique.


System constraint testing. Like a set of hurdles, technical or regulatory systems can set constraints on hopeful solutions. Sometimes, but not always, these constraints are helpful ways to narrow down potential solutions; the downside is that these very same constraints can stifle innovation. Here the criteria can be written as requirements, and tests can be run on possible solutions. If they do not meet requirements, they can be eliminated. Examples include the software development or selection process, or regulatory changes in a governmental system.
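Requirements-as-hurdles translates naturally into pass/fail predicates. In this sketch the requirements, budget figure, and candidate options are all invented for illustration; any candidate failing any hurdle is eliminated:

```python
# Hypothetical requirements expressed as pass/fail tests.
requirements = {
    "under budget": lambda idea: idea["cost"] <= 50_000,
    "meets accessibility rule": lambda idea: idea["accessible"],
}

# Hypothetical candidate solutions.
candidates = {
    "option A": {"cost": 30_000, "accessible": True},
    "option B": {"cost": 80_000, "accessible": True},
    "option C": {"cost": 20_000, "accessible": False},
}

def constraint_filter(candidates, requirements):
    """Keep only the candidates that clear every hurdle."""
    return [name for name, idea in candidates.items()
            if all(test(idea) for test in requirements.values())]
```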


Standardized tests. When we are dealing with a very large number of choices and when the decision process gets repeated over and over, we may be able to use standardized testing to help with convergence. Standardized tests assume that there is a distribution at play, a mathematical pattern that repeats and can be applied and exploited for selection. For example, we know that most human traits follow what is called the normal distribution: there is an average, and a commonly known variation around that average. Height is an example. There is an average height for adult males, maybe around 5'10", and some portion, roughly two-thirds, of adult males fall within a few inches of that, say between 5'7" and 6'1". Very few adult males are shorter than 5'1" or taller than 6'7". Tests can be constructed to determine a score for quality that follows a distribution, and cut points can be developed that serve as filters (like the SAT for university admissions I mentioned earlier). The advantage here is that we can learn from history what traits and qualities have succeeded before. Standardized testing systems are complex and expensive filters.
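The cut-point idea can be shown with the standard library's normal distribution. Assuming (purely for illustration) that quality scores are normally distributed with mean 100 and standard deviation 15, a cut point at the 90th percentile advances only the top tenth of candidates:

```python
from statistics import NormalDist

# Assumed score distribution: mean 100, standard deviation 15.
dist = NormalDist(mu=100, sigma=15)

# Cut point at the 90th percentile: only the top 10% of scores pass.
cut = dist.inv_cdf(0.90)

def passes(score, cut=cut):
    """Filter: does this score clear the cut point?"""
    return score >= cut
```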


Cost modeling. Typically a later-stage convergence method, cost modeling allows a fiscal analysis of competing ideas' costs and revenues, as they are known at the time or can be estimated. Key outcomes like net revenue and return on investment can be calculated and compared across ideas. Ideas with costs that exceed resources may be eliminated from consideration. Cost modeling requires ideas that have a lot of detail built into them. This method may be a final filter in the process; say, if three good options remain, the most revenue-positive of the three would be selected for implementation. The method requires special analytical skills and significant data about production, markets, supply chains, and other financial metrics.
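Stripped of the real analytical work, the comparison step looks like this. The revenue and cost figures and the budget are hypothetical; the model drops over-budget or unprofitable ideas and ranks the rest by return on investment:

```python
# Hypothetical cost model inputs for three competing ideas.
ideas = {
    "idea A": {"revenue": 120_000, "cost": 80_000},
    "idea B": {"revenue": 90_000, "cost": 30_000},
    "idea C": {"revenue": 50_000, "cost": 60_000},
}

def compare(ideas, budget=100_000):
    """Drop ideas over budget or unprofitable; rank the survivors by ROI."""
    viable = {}
    for name, fin in ideas.items():
        net = fin["revenue"] - fin["cost"]
        if fin["cost"] <= budget and net > 0:
            viable[name] = {"net": net, "roi": net / fin["cost"]}
    return sorted(viable.items(), key=lambda kv: kv[1]["roi"], reverse=True)
```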


Business modeling. I offered up Osterwalder's business model canvas as a divergence technique, where intentionally abstract models are built to explore and explain how value can be created through focused effort. The canvas allows you to explore both the revenue and expenditure sides of the value propositions that organizations offer to customers. I have also used canvases in the convergence process. Perhaps less data-driven than cost modeling, what the canvas may lack in precision it makes up for in strategic breadth. I've found that groups like to build business models for ideas, and the canvas is easy to work with. Early in the convergence process, many business models can be quickly generated and compared. Later in the process, as detail is added, more stringent criteria can be applied: the best value propositions are selected for survival in convergence, and lesser value propositions are put aside.


Full feasibility study. A full feasibility study is a costly and complicated endeavor, and it should be relegated to the last stages of the convergence process. Feasibility studies are also reserved for the most complicated ideas and strategies. I would hardly commission a feasibility study for my supermarket yogurt selection, but I would consider it before splitting a corporation's key strategic business unit in two. The construction of feasibility studies is worth an article, or book, of its own, so I'll leave you with little detail here. Here are a few pointers I've learned. Sometimes a feasibility study is completed on the one remaining best idea that resulted from the convergence process, as a separate stage prior to implementation. While it may be possible to do a feasibility study quickly, in my experience it takes months or longer, so keep this in mind when planning the convergence process. And finally, there are many interchangeable components in studying feasibility, so look to keep the entire process as streamlined as you can while still getting the results you need.


Taking the prototyping route. Early in the convergence process, I like to challenge myself with this question: to prototype or not to prototype? The reason is that it changes how convergence goes. Several of the design models I presented are built to favor prototyping over a convergence-implementation pairing (see Ambrose/Harris, IDEA, and Plattner). I can go either way on this, but I like to be intentional about the decision, and I do like to combine them. There is significant power in taking the final handful of choices and options from the convergence process and subjecting them to further development in the prototyping process. Here ideas are refined, developed, and adapted in further tests, but differently than in the convergence phase. Prototyping can be messy, requires relentless iteration, and may combine any number of the methods I've presented here for convergence. There are also some key differences; more on this next month.


In conclusion. There are endless testing protocols for new ideas, products, and solutions across industries. For example, developing and bringing drugs and pharmaceuticals to market requires lengthy trials and testing. Delivering a final bridge to cross a river for a city client has equally high stakes but different final criteria, based on sound design from years of trial and error and computer modeling.


Most applications in organizational strategy do not get the same treatment that you find in product development or the pharma industries. They are more expeditionary. Regardless of the application, however, convergence takes a large number of choices and narrows them down until you are left with something you can act upon, something that brings you closer to meeting your original need.


The rigor you apply along the way, the number of tests and their accuracy, the breadth and scope of testing, and the costs required depend on the stakes involved in the solution and the potential negative impacts of making bad choices. I hope you take some of my ideas and suggestions and apply them to your own convergence. Please stay in touch with any stories of success or learning.





Robert Brodnick, Ph.D.
530.798.4082
