Disruption and Overgrazing

See Part 1 here: Viability in Pools of Possibility

Why do we care so much about disruption? Disruptive innovations still represent a holy grail that companies clamor after. Despite disagreements over the definition of disruption and a myriad of other criticisms, the allure of commercializing anything that could fall under the nearly mystical umbrella of disruptive innovation is too strong to resist. After all, historically, disruptive innovations have dominated industries and markets and yielded monumental returns. But disruptive innovations are only a subset of the larger category of revolutionary innovations: the ones capable of causing paradigm shifts.

There is one commonality among these truly disruptive innovations: they open up new realms of possibility and activate future branches of development. Yet when defining, evaluating, and forecasting disruptive innovations, the focus tends to fall on what these innovations replace rather than on the possibilities they activate. I analyze disruption within the context of the innovation commons to clarify how disruptive innovations come about and the role they play.

Within this ecosystem, technologies can be grouped by species, genus, family, and so on. Because technologies interact and cross boundaries, defining the exact boundaries for each group is difficult. In the innovation commons, the distance between two technologies is measured by the number of sub-components and sub-problems they share. They compete for resources by satisfying specific consumer needs. Two species of technologies compete when they both attempt to solve a similar enough problem for the same set of consumers.
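
This distance metric can be made concrete with a small sketch. This is only one possible operationalization, in Python, and the component sets are hypothetical illustrations; the text specifies only that distance is measured by shared sub-components and sub-problems.

```python
def technology_distance(components_a: set, components_b: set) -> int:
    """One way to operationalize the distance metric: the more
    sub-components two technologies share, the closer they are.
    Here, distance = number of unshared components."""
    shared = components_a & components_b
    return len(components_a | components_b) - len(shared)

# Hypothetical component sets for two video-distribution technologies.
storefront_rental = {"physical media", "retail logistics", "licensing", "late fees"}
online_streaming = {"licensing", "video encoding", "content delivery", "recommendations"}

# They share only "licensing", so six components are unshared.
print(technology_distance(storefront_rental, online_streaming))  # 6
```

Under this sketch, two technologies of the same species would share most components and sit close together, while a new genus or family would share almost none.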

Each species of technology competes for the same resource: consumers. When a new technology enters this system, if it can cross a performance threshold set by the established technology, it can compete for the same set of consumers the established technology is satisfying. This will occur if the utility from using the new technology (U_{N}) minus the price of using the new technology (P_{N}) is at least equal to the utility (U_{E}) minus the price (P_{E}) of using the established technology.

U_{N} - P_{N} \geq U_{E} - P_{E}
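
In code, the switching condition reads as a simple surplus comparison. A minimal sketch in Python; the numbers are hypothetical, not from the text.

```python
def crosses_threshold(u_new: float, p_new: float,
                      u_est: float, p_est: float) -> bool:
    """True when the new technology's consumer surplus (utility
    minus price) at least matches the established technology's."""
    return u_new - p_new >= u_est - p_est

# Hypothetical numbers: the new technology offers slightly less
# utility but at a much lower price, so its surplus wins (5 vs 4).
print(crosses_threshold(u_new=8, p_new=3, u_est=9, p_est=5))  # True
```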

When a new technology can cross this basic performance threshold, it has a chance of being disruptive. Disruption is not binary; it is a spectrum. Disruption could mean the complete replacement of the established technology by the new technology (Netflix and Blockbuster), or the two may coexist (Uber and taxis). The new technology can either cause the extinction of the old technology or carve out its own niche as a unique species. When the two technologies compete with one another, three key factors influence the threat of disruption:

  1. Dimensions of Competition
  2. Spillover Effects
  3. Feasibility of Improvement

Dimensions of Competition

To compete for the same set of consumers, the new technology and the established technology have to satisfy a similar core need. However, the new technology can complete the same job in a radically different way. Wikipedia acted as an encyclopedia, but the way it provided that service was radically different from that of the previously dominant Encyclopedia Britannica. The same is true for Netflix and Blockbuster, Uber and taxi services, and many other instances of disruption or technological change.

The less overlap there is between the dimensions that the new technology offers and those the established technology offers, the more disruptive the new technology will be. Established technologies can exploit shared dimensions and respond more effectively to the entrance of a new technology.

However, when the new technology offers a new dimension (such as Netflix’s introduction of online video streaming), the incumbent has limited capacity to respond. The incumbent is tied into developing its current set of attributes. It evaluates success based on how consumers respond to this set and deals with the sub-problems that this set of attributes create.

Netflix negated any first mover advantage Blockbuster had with its existing set of attributes (storefront distribution) by being the first mover in a different dimension (online streaming). At what level is the new technology differentiated? Is it a new species? A new genus, or family? The higher the level of differentiation, the more disruptive the new technology will be.

Spillover Effects 

Christensen’s definition of disruptive innovations is still highly relevant. Disruptive innovations frequently originate in niche markets that highly value certain attributes that the new technology offers. The weight that niche segment consumers place on these special attributes is high enough that consumers can tolerate mediocre or poor performance along other dimensions.

However, for the technology to be disruptive, it has to eventually appeal to consumers in some mainstream segment. The spillover between developing attributes niche market consumers value and creating value in a mainstream segment influences the disruptive threat the new technology poses. Google Glass found a few niche markets, such as surgery, but developing features surgeons valued did not create value for mainstream smartphone users. Hence, Glass was not disruptive.

Commercializing new technologies in niche markets does not always lead to disruptive technologies. Just as incumbents and industry leaders get tied into the value network of their consumers, new technologies can get tied into the value network of a niche market. If niche market consumers are willing to pay enough, they can constrain the developmental future of a new technology. While a new technology may still be able to succeed in disrupting niche markets, to have an impact in mainstream segments, the attributes niche consumers value must overlap to some degree with those mainstream consumers value. Monitoring the spillover effects of new technologies also helps highlight future opportunities.

Feasibility of Improvement 

When the new technology competes along different dimensions than the established, the feasibility of advancing along those dimensions must be taken into consideration. The established technology, existing in a defined space, addressing defined problems, can more accurately project returns to research and development. This phenomenon exists because, as discussed in Part 1, collective action reduces uncertainty.

The novel dimensions the new technology competes along are less defined, and advancing along those trajectories yields variable returns.

Clayton Christensen describes some of the typical roadblocks that inventions find on the path towards becoming an innovation:

  1. The momentum barrier (customers are used to the status quo)
  2. The tech-implementation barrier (which could be overcome using existing technology)
  3. The ecosystem barrier (which would require a change in the business environment to overcome)
  4. The new-technologies barrier (the technology needed to change the competitive landscape does not yet exist)
  5. The business model barrier (the disrupter would have to adopt your cost structure)

To this list, I would add a political barrier (political rules and regulations act as a hindrance, common in the healthcare industry), and refine the definition of the momentum barrier to include a social/cultural element. While Christensen applied this framework to evaluate whole technologies, the same underlying concepts can be used to address the barriers to improvement that individual components face.

The disruptive threat will vary depending on the quantity and severity of the various barriers to development a new technology may face. By analyzing barriers along each individual dimension, we can better analyze how the developmental future of a new technology differs from that of the existing one. This breakdown reveals opportunities where the established technology is weak, and highlights areas where it is strong. In this manner, disruption can not only be better forecasted, but it can be engineered.

The Origin of Disruption – Overgrazing 

But where do disruptive innovations come from? Disruption simultaneously causes displacement, while generating new possibilities. To understand this, consider the developmental history of the existing pool of innovations and technologies. Recall that in the innovation commons, around this existing collective is a bubble of tolerable risk. The closer to the center of this bubble a new idea is, the better its chances of becoming an innovation are. Over time, clusters of innovations spawn future inventions that follow their own developmental trajectories. These trajectories trend towards a logarithmic pattern. Growth stalls as critical components run into delays.

If a certain tree of trajectories is pursued far enough without activating new directions and branches, the ratio of plateaued trajectories to growing trajectories will increase over time. The collective inclination to pursue “safe” opportunities can lead to a dearth of novel originations. Everyone is drawing from the same technology stock to keep innovating, and this stock can be “overgrazed.” In the innovation commons, overgrazing is rarely fatal. It would be very difficult to exhaust even the number of possibilities generated from a small collection of components. Overgrazing is a much more subtle phenomenon.

Overgrazing leads to “disruptive innovations,” using the original definition of the word: a discontinuity, displacement, and disequilibrium. These unanticipated innovations emerge from points at the edge of the bubble of uncertainty tolerance, and sometimes beyond. They employ a dramatically different approach and, most importantly, open entirely new avenues of development — new domains for exploration. When the majority of trajectories are tightly clustered, overgrazing in one spot, only a small portion of the area of the bubble of uncertainty will be covered. This feature is what makes disruption both hard to spot and hard for incumbents to originate.

Coming from the edge, disruptive innovations tend to activate sets of trajectories whose components are very “far” away. There is very little overlap between this new set and the old set. The new set, being novel, originates from a higher base and usually offers better performance, functionality, quality, and quantity than the old set. When disruption occurs as a response to overgrazing, the negative technological shocks will be greater: more resources will have to be repurposed to meet the needs of the new technology, more workers will need to learn new skills, and the developmental history of existing branches of technology will grind to a halt.

Areas where technological trajectories cluster will be ripe for disruption. Consider a single set of consumers with a defined set of needs. When competition for these consumers is concentrated between firms that employ technologies of the same species, differentiation at a higher level (and subsequently disruption) is more probable.

Christensen’s initial theory of disruption posits that disruptive innovations are initially low-end products. Because of this, he argues that Uber (and perhaps even Tesla) are not disruptive. I disagree about Uber and Tesla, and my construction of disruptive innovations explains the low-end aspect in the following way:

  1. Mainstream consumers value the entire set of attributes an established technology offers as a whole, demanding performance across multiple dimensions.
  2. Niche consumers highly value one or two key attributes that are the new dimensions the innovation offers. These dimensions must be able to solve the jobs the existing technology does in a new way.
  3. Integrating these new dimensions into the full set of attributes the existing technology offers is initially difficult, and so from the perspective of an existing technology (or a consumer using the existing technology), the performance of the new technology seems low end.
  4. The new technology finds a smaller market and environment in which to survive and continue evolving within.
  5. Depending on the innovation, the set of attributes it offers improves over time. Christensen’s various barriers to disruption determine how difficult development is, but eventually the new dimensions do such a good job of adding value that the new technology meets the performance threshold set by the established technology.
  6. At this point, the new tech competes with the established tech for the same consumers. The result of this competition depends on the difference in value creation, distance of dimensions, spillover effects, feasibility of improvement, and other factors that impact the disruptive threat of a new technology.
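
The six steps above can be sketched as a toy simulation: a new technology starts below the incumbent's surplus and improves along its new dimensions each period. The improvement rate and the period cap (standing in for the severity of the barriers to development) are hypothetical assumptions, not from the text.

```python
def periods_until_competitive(u_new: float, p_new: float,
                              u_est: float, p_est: float,
                              improvement_rate: float,
                              max_periods: int = 1000):
    """Count periods until the new technology's surplus matches the
    established technology's (step 6). Returns None if the barriers
    are too severe and the threshold is never crossed."""
    periods = 0
    while u_new - p_new < u_est - p_est:
        u_new += improvement_rate  # step 5: attributes improve over time
        periods += 1
        if periods > max_periods:
            return None
    return periods

# Hypothetical: the new technology starts well below the incumbent's
# surplus (1 vs 4) and gains 0.5 utility per period.
print(periods_until_competitive(3, 2, 9, 5, improvement_rate=0.5))  # 6
```

A slower improvement rate, or a lower cap, models a technology that stays confined to its niche.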

Displacement and Activation 

Fundamentally, with disruptive technologies: “old displaced technologies fade from the collective, their ancillary needs are dropped. The opportunity niches they provide disappear with them.” (Arthur, 2013) Disruption plays an essential role in the cycle of creative destruction. By labeling these technologies as “disruptive” we emphasize the destruction aspect: the disequilibria, displacements, and discontinuities they cause. Instead, we should focus on the creation aspect of disruption: how disruptive innovations activate new branches of developmental possibilities.

Such a shift in emphasis will naturally refocus attention on the key influencers of disruptive threat that I outline. Focusing on new modes of creation should lead people to question how the same problem can be addressed in different ways, along different dimensions. This attitude should encourage innovators to search for markets with spillover effects: where developing a technology along a single dimension creates value in multiple markets. Addressing the feasibility of improvement should always be important, but in this context, the emphasis is on how to develop a new technology — not how to replace an old one. By better defining the origin and outcomes of disruption, we increase our ability to forecast and engineer disruptive technologies.

Viability in Pools of Possibility

Concerns about the declining pace and quality of innovation are legitimate but misplaced. Innovation is not slowing. The size of the space of the possible has increased, but we are exploring only a portion of it. VR technology has existed in some capacity for years, and driverless cars already exist. However, in both of these cases, realistic integration of these technologies into our daily lives lags behind. The origination of ideas and the commercial integration of those ideas represent two distinct processes: invention and innovation. In recent years, the gap between these two stages has been decreasing in some areas (internet technologies, nanotech, and biotech to name a few), but has been increasing in others. To understand why, it is helpful to think about technological progress as an evolutionary process.

There are many parallels between technological and biological evolution. One parallel is the importance of selection. In biological systems, survival of the fittest and other rules ensure that the only species who survive are those that are fit for their environments. In technological systems, survival depends on finding product-market fit. The parallel continues. A species’ fitness in its environment depends on the relative fitnesses of surrounding species and how they interact. Similarly, a technology’s product market fit depends on the actions of the other technologies in the market.

Here, the analogy breaks down. Successful technologies go on to interact with, and define a surrounding civil society. This civil society is composed of the societal infrastructural, cultural, political, and economic systems. These interactions refine the selection criteria, as only inventions that fit both the technological and civil markets survive.

Innovators search over a common pool of inventive possibility for ideas with civil and market viability. How innovators explore and define this pool constitutes the rules of the innovation commons. The nature of the innovation commons and its rules simultaneously enables and constrains technological change. Inventions that lie “close” to existing innovations share clearly defined civil and technological problems and will find it easier to become innovations. Inventions that do not share common components are further from the existing collective of innovations. Inventions in this latter category are naturally riskier, oftentimes requiring both technological and socio-political shifts and reallocations of resources. These “fringe” inventions remain underdeveloped, making the innovation system as a whole seem stagnant.

 

Innovation and Collective Action 

Inventions are combinatorial. When a new idea is developed, it spawns a certain number of problems that may be technological or civil in nature. If these problems are not addressed the new invention may never become an innovation. A single innovator (either an individual or an organization) may be able to solve some of these problems independently, but many of the problems will depend on exogenous factors beyond the innovator’s direct control. The invention’s chance of succeeding depends on the actions of other innovators in the environment and the set of technological and civil problems that they are solving.

Google Glass would have had a better chance of success if a robust app ecosystem had sprung into existence early on. Nest’s value proposition increases exponentially when it is part of a connected network of devices that work together. Tesla and other companies that can marshal enormous resources and talent can attempt to bypass this collective action problem. But even these companies encounter limitations at some point. As Tesla rolls out the Model 3 and sets its sights on mass-market adoption, it is limited by the distribution of supercharging stations.

When considering civil problems, collective action becomes even more important. In addition to addressing any technological barriers, innovators must consider how an invention fits within the broader cultural and political environment. Tesla still can’t be sold in Texas or Michigan because of protectionist laws requiring cars to be sold through franchised dealerships. Uber’s struggles with regulation are well documented. To address these political roadblocks to innovation an innovator would have to organize and coordinate groups with diverse interests. Technologies that fit well within existing civil structures will have an easier time attracting attention and becoming innovations.

The Innovator’s Problem 

An innovator searches over an inventive landscape of possibility, discovering and creating solutions that change the landscape. Some of these inventions find commercial success and achieve market engagement. A smaller subset disperses widely into society. The recursive cycle of creation, selection, and survival constantly changes the landscape, creating uncertainty.

Working together reduces uncertainty. You and your fellow innovators are explorers in the unknown, working on different problems. The more overlap there is between the problems that groups of innovators are solving, the faster those problems get solved. The distance between two inventions is given by the number of sub-components and sub-problems that they share. To succeed, a critical number of problems must be solved. The limits of what a lone entrepreneur can accomplish mean that each new idea needs to attract a critical mass of entrepreneurs, innovators, and investment before it can succeed. The existing collective of entrepreneurs working on solving problems defines a discovered portion of the space of possibility. A new idea’s chances of succeeding depend on the density of sub-problems it generates and the distance between that idea and the existing collective. The “difficulty” of solving the new set of problems will depend on their distance to the existing set of problems. For example, the set of problems that Amazon solved directly facilitated the rise of the self-publishing industry. Difficulty and distance interact and shape one another as part of a miniature complex system nested within the larger invention/innovation system.

The “rules” of innovation define the manner in which innovators search through the realm of possibility, testing and refining inventions while responding to the way that dispersed innovations change the landscape.1 More importantly, these rules define the maximum allowable distance and density of problems associated with a new idea. Beyond a certain point, too many problems will go unaddressed. The critical threshold of solved problems will never be reached.

Inventions that lie beyond that maximum distance won’t reach maturity and become innovations. Occasionally, highly skilled lone inventors with massive resources and large tolerances for risk manage to carve out sustaining niches for radically novel ideas. But most ideas outside of this bubble will fail. Failure is important in selecting for success, and failed inventions still add to the technology stock in the world, creating more components that can be drawn from while contributing to advancing other technology trajectories. However, we must distinguish between when an invention fails because there is a better idea and when it fails because the necessary supporting structures have yet to be developed.

For inventions to become innovations, they must surpass both technological and civil barriers. The technological structures determine what technologies are feasible, but the civil systems determine their market viability and the rate of dispersion. These two systems must be considered together. Inventions that survive integrate into and define civil environments. However, civil structures determine what is viable and constrain how innovators develop technologies.

The complex interactions between these two systems simultaneously facilitate and hinder innovation. The struggle between diversity and specialization is central to the innovator’s problem. An individual asks: should I explore a new area, or should I continue to specialize in the area that I understand? Exploration of diverse new areas ensures that innovation and invention address a myriad of problems. Conversely, thorough specialization improves the quality of the answers and increases the complexity of problems that can be addressed.

Because of the collective action nature of innovation, innovation systems tend to trend towards specialization, oftentimes unintentionally. The closer together innovators work, the more they reduce uncertainty. Each successful innovation generates its own subset of problems that attract innovators.2 These new branches of possibility are by definition close to the existing pool of innovations. Shared complementarities, overlapping problems, cheap labor, and defined civil systems all work to reduce uncertainty. In this manner, technological trajectories cluster in noticeable ways. VR technology is suddenly relevant because costs are declining within a civil environment primed for it. The network of digital and civil communications built over the last thirty-plus years and the integration of social media into daily life built the infrastructure for a society with a distinct digital reality. Four main environments are primed for virtual reality technologies: technological, social, cultural, and political.

How VR will manifest, and which manifestations will integrate and disperse remains to be seen. But the manner in which technological trajectories clustered built the necessary foundations for VR technologies. The creation of this digital infrastructure has come at the expense of developing infrastructure for other branches of innovation simply because we live in a resource constrained world. The nature of constraints on resources helps tip the scale towards specialization over diversity. When inventions fail because they lack supporting systems, or can’t quite cross that critical threshold, entire branches of technological and civil possibilities go underdeveloped.

Lags Between Invention and Innovation

Close attention must be paid to the health of the innovation commons and the implicit, incentive-based rules that percolate through it. Monitoring technological clusters makes predicting the future easier by clarifying how things fit together. A skilled monitor would spot the major inventions that lag developmentally. These represent opportunities to monitor and facilitate, and ways to generate more possibilities.

Imagine a diver searching the ocean for treasure. At first, she starts on the surface, sending out probes to identify potential hotspots. In the beginning, she can see for miles around her, but identifying opportunity is difficult and costly. When she finds an area that seems promising, she begins to dive. If she finds the treasure she is looking for, she dives deeper, eager to discover more. At some point, she may have to build supporting structures to help mine and gather the treasure: apparatuses to help her dive deeper, machines to retrieve treasure found at the murky depths, and so on. But the deeper she dives, the narrower her field of vision gets. The more supporting infrastructure she needs, the costlier switching to explore a new area will be. In this manner, it is easy to understand how the diver could get stuck in a location that seems optimal at the moment while missing out on other potentially richer areas of possibility.

If the pace of innovation is slowing, is the nature of innovation and invention changing because of a lack of ideas? Or does the pace of innovation seem to be slowing because we are moving towards the end of certain branches of technological development? My belief is in line with the latter of these two diagnoses. In recent years, outside of the realm of internet technologies, lags between invention and innovation seem to be increasing. The existence of appropriate technological and civil systems to support internet technologies facilitated the dispersal and integration of those technologies. However, the absence of such supporting systems seems to be hindering the development and deployment of other technologies, such as self-driving cars. Lacking the necessary supporting technological and civil organizational structures, inventions that lie at the edges of the existing pool of innovations will continue to lag behind and remain underdeveloped. In this manner, the developmental history of technological and civil innovations can act as a constraint on what is possible in the future.

Innovation is inherently random. While we can understand the past, we cannot predict the future. However, in the present, the shape of the future remains ours to define. As innovations continue to cluster, some critics will naturally worry. The tighter these clusters, the more incremental innovation will appear: Uber and AirBnB pulled antiquated industries into the modern age. Now the Uber or AirBnB of x, y, and z are attempting to follow along defined paths, seeking to replicate the success of those companies. Incremental progress rarely yields truly revolutionary inventions that cause paradigm shifts. These ideas often originate far from the existing collective of innovations and are important because they open and enable entire new branches of technological and civil possibility. When building innovation systems, or designing innovation policy, the biasing influence of developmental history must be factored in. By understanding the structural reasons behind innovative failures, we can move towards developing ways to re-balance diversity and specialization.

How quickly do we want to move forward? How fast can we change? Who benefits from innovation? Who gets to innovate? Who does innovation hurt? The answers to these questions are governed by the web of relationships and interconnected processes that compose the innovation commons. By attempting to untangle this complex web, we gain degrees of control over the answers to these questions.

The following essays will explore examples of the innovation commons and its rules.

  1. Disruption and Overgrazing
  2. Moonshots and Market Engagement
  3. Working Backwards and The Slow Pace of Fast Change
  4. Startups (!)
  5. Connectivity and Distance: Theranos and the Business Backer
  6. Patents and the Innovation Fed
  7. A New Age of Autonomous Transportation

References 

  1. The idea of rules builds on work done by Tim Kastelle, Jason Potts, and Mark Dodgson in “The Evolution of Innovation Systems” at the Copenhagen Business School’s 2009 Summer Conference.
  2. Arthur, W. B. 2009. The Nature of Technology: What It Is and How It Evolves. Free Press, New York.

Deadpool: A Study in Disruptive Marketing

From the outset, Deadpool was an underdog. Competing in the world of superhero movies, Deadpool had to make the most of its $58 million budget. Employing innovative marketing techniques and some unorthodox strategies, the marketing team was able to dramatically increase ticket sales on opening weekend. Deadpool’s opening weekend estimates were conservative, placed at $70 million. Through its viral and social campaigns, the film nearly doubled its estimated take on opening weekend, shattering records for an R-rated movie and bringing in a whopping $130 million.

(Graph: Zoolander 2 premiered the same weekend as Deadpool with a comparable budget of $50 million.)

Their marketing team was able to accomplish this remarkable feat by personifying the marketable product, and inserting this persona into the contemporary social / digital landscape. This practice in brand articulation created an identifiable character, who aligned himself with (or in many cases against) certain aspects of society.

By asking What Would Deadpool Do (WWDD) when inserted into these environments, the team was able to create digital assets that were original – and Deadpool-based – instead of pulling from the film itself. The resulting ad campaign was modern, relevant, and hilariously self-aware. Take, for example, Deadpool’s Tinder page. (Tinder is a hook-up app, commonly used in large cities as a way for people to meet others in their area.) When Deadpool is inserted into the Tinder landscape, the character is able to communicate in an environment that the target audience understands intimately. The key to the success of this ad is not just in its placement, but in its ability to speak the language of Tinder.

“Semi-professional bad guy ‘un-aliver’, chimichanga connoisseur, and frequent patron of Saint Margaret’s School for Wayward Girls.” – Deadpool’s bio, with a full complement of emojis to boot.

While many campaigns are able to create modest gains in attendance, Deadpool’s ability to place ads that rang true to their audiences gave them the viral boost that all campaigns strive for.  “Ryan was a huge partner in this,” Marc Weinstock (Fox Domestic President of Marketing) said. “We came up with a bunch of crazy ideas, and he was like, ‘Great! I’ll do it.’ He put on the suit five or six times for full day of shoots on special content.”

On April Fools’ Day, Reynolds was interviewed by Mario Lopez. In the middle of the interview, Deadpool ‘kills’ Lopez, clearly defining the edgy, countercultural aspect of the brand and assuaging the fears of diehard fans who worried that the movie would water down Deadpool’s witty, dark humor. This can be seen as a practice in brand articulation “against __________”. In this case “against Mario Lopez“, in other cases “against Romantic Comedies“ or “against the Big Studio“.

Having identified the primary audience for this movie as young, smart, and largely cynical – Deadpool succeeded where other superhero movies have failed by presenting an anti-hero who channeled the scorn of internet culture, and spoke their language too.

 

Story Book

Recently we participated in the Oberlin College LaunchU accelerator program, where we were able to test out our model. We worked closely with a company called Storybook to help them clarify and refine their idea. We took them through our process, working with them to develop their concept into something whose immediate value people could see. With our help, they secured $15,000 in funding!

We met the founders of Storybook while participating in LaunchU ourselves. We were excited to use our time at LaunchU to refine our focus, but more importantly, we wanted to work with the other participants in our cohort and add value to their projects.

Storybook’s idea was simple: social media is cluttered. You are bombarded by content from people you don’t really care about. Sharing and reliving memories among the people you actually care about (instead of releasing it to the masses for “likes” or “views”) is almost impossible. Wouldn’t it be great if there was an app that let you seamlessly take content you create, share it within an intimate group of people who value that content highly, and then access that content in the future to relive your best memories?

It is easy to think that this idea seems a bit too similar to Facebook, Snapchat, or some other combination of social media apps. But we saw some real potential in what they were working with. So we gave them a simple proposition: let us try out our process on you, and see what we can come up with!

Stage 1: Define

Where does Storybook have an opportunity to generate real value? To answer this question, we applied the core principle of our framework: blow up the elements of the existing technologies to figure out exactly what they offer, and where there is room for a new product. 

Facebook, GroupMe, and WhatsApp allow you to easily create groups, but seamlessly sharing content to those groups is not easy. Snapchat allows you to seamlessly share content, but sharing amongst a specific group of people is labor intensive. Facebook allows content to be posted, but isn’t meant to allow you to download content that others post for personal use later. Dropbox allows you to share access to high-quality content, but you still have to go through the labor-intensive process of selecting photos and uploading them.


Stage 2: Design

We worked with Storybook to identify three key pillars for their app: seamless group creation, painless ability to share content to these groups, and an interactive system to curate content to be stored and relived later.

Homing in on these features, we built some prototypes that would allow Storybook to visualize a simple user interface and create an app that really differentiated itself from the competition.


When you open the app, you are taken to a native camera screen, and when you take a picture, you can choose which group (or groups) to send it to. That picture is then immediately uploaded to a centralized location where it remains for up to 48 hours. During this process, group members curate content. Low-valued content disappears, and the best memories are stored forever!

Stage 3: Develop

To really make their app work, Storybook needed to develop a fun, interactive system to curate content. We suggested Storybook capitalize on the new trend of Tinderization and helped them develop a basic algorithm to decide which photos would be stored.

A swipe up gives the photo a like, a swipe down downloads the photo to your phone and gives it a like (in case there are some photos you really want that the group doesn’t), and a swipe left dismisses the photo.

Because content is being shared amongst an intimate group, you can trust that the people you share with won’t misuse your content. The final step was to develop a simple algorithm to serve as a base for deciding how many votes a photo needed to be stored. We collaborated with Storybook to develop the following algorithm:

A photo is stored as long as: number of likes > (number of group members)*(photos uploaded per group member)/(average activity).
A photo will be stored if its number of likes exceeds the mean number of likes. Depending on how far the median is from the mean, more or less than 50% of photos will be saved.
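The swipe mechanics and storage rule above can be sketched in a few lines of Python. This is a minimal sketch: function, field, and photo names are illustrative, not Storybook’s actual implementation.

```python
# Sketch of the curation rule described above. All names are illustrative.

def likes_needed(num_members, photos_uploaded, average_activity):
    """Baseline threshold: likes must exceed
    members * (photos per member) / average activity."""
    photos_per_member = photos_uploaded / num_members
    return num_members * photos_per_member / average_activity

def photos_to_store(like_counts):
    """Keep every photo whose like count exceeds the mean like count."""
    mean_likes = sum(like_counts.values()) / len(like_counts)
    return [photo for photo, likes in like_counts.items() if likes > mean_likes]

# Swipe semantics: up = like, down = like + save a personal copy, left = dismiss.
# After the 48-hour window, only photos above the threshold survive:
counts = {"beach.jpg": 5, "brunch.jpg": 1, "concert.jpg": 4, "blurry.jpg": 0}
print(photos_to_store(counts))  # mean is 2.5 -> ['beach.jpg', 'concert.jpg']
```

Using the mean as the cutoff is what makes the rule self-calibrating: a very active group raises its own bar, while a quiet group still keeps its best moments.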

Stage 4: Deploy

Using our process, Storybook was able to sharpen their idea, develop it further, and talk about it in a way that clearly conveyed their simple, elegant solution to an annoying problem. It was thrilling to see Storybook take the foundational framework we provided and run with it – developing it further on their own.

When they won the $15,000, we were ecstatic.  Our process worked! And in this case, we were able to increase the value of their product by more than 15,000% 😉

Working with Storybook helped us crystallize our four tiered consulting process, and allowed us to take an enormous step forward in figuring out exactly what we could offer, and how we could go about offering it. Storybook’s case demonstrates exactly why we are so interested in partnering with incubators and accelerators.


Incubators and accelerators offer access to a wide variety of early stage startups with interesting ideas. However, as we so firmly believe: innovations and ideas do not succeed based on merit alone. By working with companies in the early stage of their innovation lifecycles, we can apply our methodologies for maximum impact. Our process works best when we can form a lasting partnership with a company and collaborate with them to adapt as we test and revise our assumptions. This period of adaptation is a symbiotic one: as we help our client adjust, we also evolve our own practices and processes.

We’re excited to see where Storybook goes next, and help them realize the full potential of their innovation.

Contact us to learn more!

Google Glass Methodologies

Explanation of methodologies used in our study Shattered Glass: Killing Google Glass

Claim: Our calculations tell us that to appeal to average smartphone users, Glass would have to be 210% more valuable than smartphones are.

************

There are two ways we can estimate u_{E1}:

1) Taking Q_{E}^{I} from market data

P_{E} = u_{E1} - \frac{Q_{E}^{I}}{S_{1}}

\implies .372 = u_{E1} - \frac{102,000}{150,000}

\implies u_{E1} = 1.052

2) Estimating Q_{E}^{I} with our equation Q_{E}^{I} = S_{1}\frac{n_{E}}{n_{E}+1}(u_{E1} - \bar{c_{E}})

P_{E} = u_{E1} - \frac{n_{E}}{n_{E}+1}(u_{E1} - \bar{c_{E}})

\implies u_{E1} = 0.987

This gives an upper and lower value for u_{E1} that we can use. Let us use the average of these two values as our estimate, implying that u_{E1} = 1.019. For Google Glass to appeal to the average smartphone user, it must be that:

u_{N1} - P_{N} \geq u_{E1} - P_{E}

\implies u_{N1} - 1.5 \geq 1.019 - .372

\implies u_{N1} \geq 2.148

2.148/1.019 = 2.10
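The arithmetic above can be replayed numerically. In this sketch the variable names are ours, units follow the post (prices and utilities in thousands of dollars, quantities and market sizes in thousands of people), and the final ratio matches the 210% claim up to rounding.

```python
# Replaying the two u_E1 estimates and the resulting threshold.

P_E = 0.372     # average smartphone price ($372)
Q_E = 102_000   # smartphone quantity sold, thousands (market-data estimate)
S_1 = 150_000   # mainstream market size, thousands of smartphone owners
n_E = 4         # number of high-end smartphone producers
c_E = 0.21819   # average smartphone manufacturing cost ($218.19)
P_N = 1.5       # Google Glass price ($1,500)

# Estimate 1: invert P_E = u_E1 - Q_E / S_1
u_e1_market = P_E + Q_E / S_1                      # -> 1.052

# Estimate 2: invert P_E = u_E1 - (n_E / (n_E + 1)) * (u_E1 - c_E)
k = n_E / (n_E + 1)
u_e1_model = (P_E - k * c_E) / (1 - k)             # -> 0.987

u_e1 = (u_e1_market + u_e1_model) / 2              # average of the two bounds
u_n1_min = u_e1 - P_E + P_N                        # from u_N1 - P_N >= u_E1 - P_E
print(f"Glass must be ~{u_n1_min / u_e1:.1f}x as valuable as a smartphone")
```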

************

Claim: At a price of $1,500, Google Glass would have to expect that niche market consumers valued Glass 315% more than they valued smartphones.

************

In 2013, the estimated cost of manufacturing Google Glass was $210.

P_{N} = u_{N2} - \frac{n_{N}}{n_{N}+1}(u_{N2} - \bar{c_{N}})

\implies 1.5 = u_{N2} - \frac{1}{2}(u_{N2} - .210)

\implies u_{N2} = 3.15

************

Claim: To ensure that they were moving towards mainstream market viability, when Glass develops a feature that increases its value in niche markets, that feature also needs to increase the value of Glass to mainstream consumers by about 50% of the value it added for niche consumers.

************

Let \phi = \frac{\partial{u_{N1}}}{\partial{u_{N2}}}

To be moving towards mainstream market viability, the naturally disruptive strategy must be moving to the right faster than the disruption threshold as u_{N2} increases. Let us use surgeons as our niche market to get an estimate for S_{2}.

According to the American College of Surgeons, there were 135,854 surgeons in the U.S. in 2009 (data from 2013 are unavailable). In the same units of thousands, this gives us the estimate S_{2} = 140.

\frac{n_{N}}{n_{N}+1} [S_{1}\phi + S_{2}] > S_{2} - S_{2}\phi

\implies \frac{.001}{.002}(150,000\phi + 140) > 140(1-\phi)

\implies \phi > .483

************

Shattered Glass: Killing Google Glass

In applying our model to explore why Google Glass failed, we found something surprising: Glass didn’t just die. Google killed it.

Rewind the clock to 2013. The media hype surrounding Google Glass was at a peak, and Google was well underway in its marketing campaign to generate demand before supply. From Google’s perspective, Google Glass was the next big thing: anyone (and everyone) who didn’t want to be a part of this would have to be crazy. However, at a price tag of $1,500, mainstream consumers would not have purchased Google Glass.

Glass had the potential to be highly valuable in small, niche markets, but commercialization in these markets would not have increased its value to mainstream markets.

Thus, Google chose to kill Glass and attempt to force it to evolve internally.

Glass was billed as the smartphone’s successor and hoped to disrupt that mainstream market. In 2013, 120 million smartphones were sold in the U.S.

The 4 producers of high-end smartphones accounted for 85% of this market. 149.2 million Americans owned smartphones, the average price of a smartphone was $372, and the average cost of manufacturing a smartphone was $218.19. With this information, we can offer an answer for the following question: at a price of $1,500, under what circumstances would the average user of high-end smartphones switch to Glass?

The fundamental principle of our theory of disruption is a simple inequality: consumers will consider buying the new technology when the utility they get from it, minus its price, is at least equal to the utility minus the price of the existing technology: u_{N1} - P_{N} \geq u_{E1} - P_{E}
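As a minimal sketch, this switching condition can be written directly; the function name is illustrative, and the example values are the ones used in our Glass analysis (u_{E1} ≈ 1.019, P_{E} = 0.372, P_{N} = 1.5, in thousands of dollars).

```python
# The switching condition: a consumer considers the new technology when its
# net utility (utility minus price) at least matches the established one's.

def considers_switching(u_new, p_new, u_est, p_est):
    """True when u_N - P_N >= u_E - P_E."""
    return u_new - p_new >= u_est - p_est

# Glass at $1,500 (1.5) against the average smartphone:
print(considers_switching(2.0, 1.5, 1.019, 0.372))  # net 0.500 < 0.647 -> False
print(considers_switching(2.2, 1.5, 1.019, 0.372))  # net 0.700 >= 0.647 -> True
```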

Our calculations tell us that to appeal to average smartphone users, Glass would have to be 210% more valuable than smartphones are. Was such a goal achievable?

In an essay for Wired, Glass user Matt Honan wondered where Glass would be accepted in everyday life:

“I won’t wear it out to dinner, because it seems as rude as holding a phone in my hand during a meal. I won’t wear it to a bar. I won’t wear it to a movie … Again and again, I made people very uncomfortable. That made me very uncomfortable. People get angry at Glass. They get angry at you for wearing Glass.”

For the average smartphone user, what is the marginal value that Glass offers? Almost every job that Glass can complete, a smartphone can do as well. The difference is that Glass allows for an extra element of convenience. However, the marginal utility of that convenience is small, while the psychic and social cost of wearing a piece of Glass is high. For the average user, the benefits of Glass are simply not enough to outweigh the concerns. Meeting 100% of the value that smartphones create for average users will be a difficult proposition, let alone passing 200%. Before Glass can even think about disruption, it needs to improve the marginal value that it adds over smartphones considerably. Suppose for a moment that Glass was being developed separately from Google and did not have massive cash reserves to rely on. To finance continued investment in the product, Glass would have to commercialize in a smaller niche market.

Glass had, and still has, tremendous potential in specific markets where apps designed to augment reality have a tangible impact. A study conducted by Stanford Medical School and VitalMedicals found that surgeons using Glass had markedly better performance outcomes. A UK study is using Google Glass to help patients who suffer from Parkinson’s; researchers have been working with volunteers aged 46 to 70, developing apps specifically intended to suit the needs of Parkinson’s patients. Speaking about the advantages that Google Glass offers over smartphones, Lynn Tearse, a volunteer on the study, said: “With Parkinson’s, negotiating a touch screen is really difficult.” Glass has also been able to help patients when they freeze, take calls, and perform other actions.

The utility from using Glass to satisfy specialized needs is very high. At a price of $1,500, Google Glass would have to expect that niche market consumers valued Glass 315% more than they valued smartphones. Such a goal was certainly attainable.

Google faced a challenge: commercializing a product in specialized niche markets is a delicate art. To do so successfully, Google had to develop Glass in a way that appealed to niche consumers, sustaining revenue streams and generating income for future research and development. However, developing functionality for niche consumers does not necessarily translate into increasing the value that Glass offers to average smartphone users. This dilemma caused the ultimate demise of Glass.

To ensure that they were moving towards mainstream market viability, when Glass develops a feature that increases its value in niche markets, that feature also needs to increase the value of Glass to mainstream consumers by about 50% of the value it added for niche consumers.

Would an average smartphone user find a specialized app designed to help surgeons about half as useful as surgeons found that tool? Amidst many other benefits, a Stanford Study found that surgeons using VitalStream (an app designed to allow surgeons to use Google Glass) recognized critical desaturation 8.8 seconds faster than the control group (who didn’t wear Glass). In the operating room, 8.8 seconds can be a crucial difference between life and death. The value added for surgeons is enormous. It is hard to imagine that the average user would value this same feature 50% as much as surgeons do.

Google was faced with an inconvenient truth. While Glass had the potential to create tremendous value in niche markets, commercialization of the product in those markets would not increase Glass’s value in mainstream markets. The new dimensions of competition that Glass introduced were simply not very important to mainstream consumers. At best, Glass in 2013 was an incremental innovation.

Glass had the opportunity to shift its focus from mainstream markets to a few lucrative niche markets where it could have had enormous impact. Using the revenue generated from these smaller markets, Glass could have financed further development of the product in the hope that one day, perhaps, it would discover a way to create value for mainstream consumers.

However, Google chose not to address the needs and demands of its niche market consumers. Let us continue with our example of Glass in the operating room. Privacy and security concerns are delaying adoption of Glass by medical professionals. The device automatically uploads information to the cloud when connected to the Internet, meaning that confidential patient data could accidentally be uploaded. Glass needs to be proven safe and secure for patients before the medical community will accept it. Given these deficiencies, the $1,500 price tag is too high to warrant investment in the technology by hospitals. As one Stanford surgeon stated:

“while $1,500 may not be much compared to an MRI machine, many hospitals recently invested in tablets, so Glass will need to provide significant value beyond what iPads can offer in order to justify a second round of tech investment.”

The Stanford study demonstrating the potential impact of Glass suggests that overcoming this proof-of-value hurdle is certainly feasible. Google could have commercialized a stripped-down version of Glass in niche markets such as the operating room. Why, then, did Google decide to kill Glass instead of developing specialized versions for niche markets? The answer highlights a worrisome trend among Silicon Valley giants. Entranced by a desire to disrupt large markets, these giants shy away from developing technologies with a more limited scope – despite the profound impact those technologies could have in smaller communities.

With its massive supply of cash reserves, Google doesn’t need to generate revenue to fuel its research efforts. Instead, Google can finance development of Glass through other means, in the hope that one day they will internally discover a way to disrupt the market currently dominated by smartphones and Apple.

Historically speaking, this is a bad practice. When innovations are commercialized in niche markets, they develop organically, evolving in unique and unexpected ways. These organic innovations end up changing the way people live in far more meaningful ways than those forced to evolve in a closed environment. In many ways, the fate of Glass speaks to the newfound arrogance of Silicon Valley.

Innovations that have potential in small niche markets (such as Glass) should still be developed and commercialized in the markets they can dramatically impact (such as the operating room) even if they may never be disruptive in the mass market. Entranced by a desire to disrupt, we have forgotten that innovations solve problems: no matter how big or small they are. In 2016, rumors of an enterprise version of Google Glass, for use in environments like the operating room, have begun to surface. Perhaps Google has realized its mistake. Or perhaps Google is just responding to improvement in VR technology and preparing for imminent threats. Our model demonstrates that if Google had resisted the lure of disruption, it could have released simplified versions of Glass in niche markets much earlier.

Glass head of business development Chris O’Neill said: “We are not going to launch this product until it is absolutely ready.” What this really means is that Google won’t launch Glass until they think they have a shot at disrupting the iPhone, and the smaller communities that could benefit dramatically from Glass just have to wait.

Forecasting and Influencing Innovation

New ideas and inventions succeed when a critical number of their technological, social, cultural, political, economic, or environmental problems are solved.

We dissect an idea and consider the set of problems it generates for two purposes:

1.) To forecast the chances of that idea succeeding

2.) To guide that idea towards maturity and increase its chances of success.
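This dissection can be caricatured as a checklist score: what fraction of an idea’s surrounding problems are currently solved? The sketch below is a toy illustration only; the problem categories and the judgments about Glass circa 2013 are hypothetical examples, not output of our framework.

```python
# Toy illustration of the 'critical number of problems solved' idea.

def readiness(problems):
    """Fraction of an idea's surrounding problems currently solved."""
    return sum(problems.values()) / len(problems)

glass_2013 = {
    "technological": True,   # the hardware worked
    "social": False,         # wearers made people around them uncomfortable
    "economic": False,       # $1,500 exceeded mainstream willingness to pay
    "cultural": False,       # no accepted norms for face-worn cameras
}
print(f"{readiness(glass_2013):.2f} of surveyed problems solved")  # 0.25
```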