From a January 2008 article,
Heads-Up Design:
Connecting Design Decisions Directly to the
Bottom Line
by Wayne Mackey, Principal - Product
Development Consulting, Inc.
Executive Summary
A heads-up display, originally used
in military aircraft and increasingly common in automobiles, puts a
transparent view of important data directly in a user's line of
sight. Heads-up displays in automobiles show a driver essential
information such as speed and fuel levels on the windshield without
interfering with the view. Drivers can absorb critical facts without
taking their eyes off the road.
In much the same way, Heads-Up Design
puts important bottom-line information transparently in front of
designers so they can navigate the design process without
distraction. Until now, information about the impact of everyday
design decisions on profitability either has been lacking altogether
or has required designers to stop design work to hunt for it, the
equivalent of taking their eyes off the road.
Heads-Up Design is a new approach to
synthesizing information that encapsulates price and cost
considerations within the design process itself and complements and
extends front-end product definition processes such as
voice-of-the-customer. Product designers gain early information
about the impact of everyday design decisions on profitability that
ties directly back to customer needs. Because the approach is simple
and requires no additional software or training, it's transparent to
the design process, allowing designers to focus on design and
leaders to see ahead clearly.
Foresightful
Innovation to Improve Gross Margin
Maximizing gross margin, or
profitability (the difference between the price customers pay for
the features a product delivers and the cost of building that
product), is a goal of most for-profit businesses. When competition
or a changing business environment threatens to erode margins,
companies often respond by cutting costs or raising prices. However,
these seemingly sensible short-term approaches cannot form the basis
of long-term future growth. Sooner or later, a competitor will
undercut the price or costs will be trimmed to the bone.
Many companies recognize that the
answer to improving gross margin is continued innovation. By
providing ever-increasing value to customers, you keep margins high
through prices that fairly represent the value customers receive.
Companies have implemented voice-of-the-customer efforts or rigorous
product definition to determine exactly what customers value. But
many have been frustrated by the next step: how to tie customer
value to the bottom line. How can you know ahead of time whether a
particular innovation will positively affect profitability? Will it
sell? What do you do in the meantime? Even with a relatively short
product cycle, you could wait months or even years before you have
the market feedback needed to analyze the impact of your decisions.
To understand whether a new product
innovation will contribute to improving gross margins, you need to
link everyday product design decisions directly to gross margin.
Heads-Up Design uses a new design
vocabulary of price and cost to relate
price to features and functionality and
cost to implementation (hardware, manufacturing, marketing). The
system then creates a customer value curve that puts
bottom-line decision-making power into the hands of product
designers to take the guesswork out of improving gross margin. The
customer value curve translates financial impact directly into terms
that product designers can understand and act upon. This foresight
enables everyone to see exactly how various design decisions will
contribute to -- or detract from -- the bottom line.
Making the Connection to Product
Design
While you may shy away from putting
influence over the bottom line into the hands of design staff, the
fact is that designers already make bottom-line decisions -- usually
unconsciously and without understanding the impact of those
decisions. Designers make technical tradeoffs every day and managers
constantly allocate and reallocate time, staff, and dollars to
various projects.
What if your company could make these
decisions based on how much a project will contribute to
profitability? Using Heads-Up Design, tradeoffs become explicit. You
can evaluate design decisions based on whether the resulting product
will contribute to the company achieving its targeted gross margin.
Like the driver whose windshield display shows speed creeping into
unsafe territory, designers and managers can respond to early
information (without taking their eyes off the design process) to
adjust course before committing to the wrong direction.
Establishing the Target Margin and
Looking Differently at Features
The first step in applying such a
system is to understand your strategic business needs in relation to
the product/service portfolio you are building and set a target for
the gross margin you want to achieve. You then do the front-end work
necessary to determine what customers need and value, including
voice-of-the-customer research and a customer-driven product
definition process. The output of those activities -- a product
concept, definition, and feature set -- becomes the basis for your
Heads-Up Design work.
You begin to view product features
differently than you may be accustomed to doing. Instead of
representing products in terms of features and related benefits,
Heads-Up Design looks at each feature and sub-feature of a product
as embodying a discrete piece of value to the customer. Let's look
at the application of Heads-Up Design in a hypothetical company that
is creating a data collection and management system for use by
medical staff to log in-room patient interactions. Your customer
research (in the form of in-person interviews with nurses, doctors,
hospital administrators, and patients) has revealed the top needs
that might be filled by such a system. You have brainstormed and
translated these needs into a set of features that include:
a simple but flexible interface,
security,
speed, and
built-in error-checking
Each of these features can be further
broken down into sub-features. Looking at the product this way
enables you to examine value at a very granular level. The
interface, for example, includes the sub-features of:
uncluttered data entry screens
(which enables accurate data entry),
point-and-click data entry (which
minimizes the typing required), and
configurable user profiles (which
enable different users to customize and save their preferences)
Each sub-feature gets its own
customer value curve,
which maps different levels of designed-in performance directly to
what the customer will pay for them. It embodies the idea that there
is an optimum engineering specification target for each sub-feature
after which returns will not be proportional to the amount of effort
invested. Targets are important because overdesign is as much
of a challenge on the road to achieving desired gross margin as is
underdesign. Often, companies spend a lot of money designing
things that the customer won't pay more to purchase.
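The idea of an optimum specification target can be sketched in a few lines of spreadsheet-style Python. All of the spec levels, price premiums, and cost figures below are hypothetical illustrations, not numbers from the article:

```python
# Hypothetical customer value curve for the sub-feature
# "point-and-click data entry": each entry pairs a spec level
# (clicks per record) with the price premium customers will pay.
value_curve = [
    (10, 0.00),   # baseline: no premium
    (7, 40.00),   # noticeably faster entry
    (5, 65.00),   # near-optimal workflow
    (3, 72.00),   # diminishing returns begin
    (2, 73.00),   # customers won't pay much more
]

# Hypothetical engineering cost (per unit) to reach each level.
cost_to_reach = {10: 0.0, 7: 15.0, 5: 25.0, 3: 45.0, 2: 70.0}

# The optimum target maximizes margin contribution (premium minus
# cost), not the technically "best" spec.
best = max(value_curve, key=lambda lv: lv[1] - cost_to_reach[lv[0]])
print(f"optimum spec target: {best[0]} clicks/record "
      f"(margin contribution {best[1] - cost_to_reach[best[0]]:.2f})")
```

With these numbers, the curve picks 5 clicks per record: pushing further to 2 costs far more than customers will pay for the improvement, which is exactly the overdesign trap described above.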
To drive cost -- the other side of
the gross margin equation -- down to the sub-feature level as you
have done with price, you use the same breakdown of features you
came up with initially and then look at the materials and processes
required to create and deliver each feature. What portion of the
sub-feature "point-and-click data entry"
is implemented in each hardware and
software module? That percentage, in the aggregate, determines the
cost targets for every hardware and software element of the product.
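The allocation step above can be sketched as a spreadsheet-style calculation. The module names, percentages, and cost figures here are hypothetical, chosen only to show how per-module cost targets aggregate:

```python
# Hypothetical allocation: for each sub-feature, its cost target
# (from the customer value work) and the fraction of the
# sub-feature implemented in each hardware/software module.
allocation = {
    "point-and-click data entry": {
        "cost_target": 25.0,
        "modules": {"touch hardware": 0.4, "UI software": 0.6},
    },
    "configurable user profiles": {
        "cost_target": 30.0,
        "modules": {"UI software": 0.5, "database layer": 0.5},
    },
}

# Aggregate each module's share across all sub-features to get
# its cost target for the whole product.
module_targets = {}
for feature in allocation.values():
    for module, share in feature["modules"].items():
        module_targets[module] = (module_targets.get(module, 0.0)
                                  + share * feature["cost_target"])

for module, target in sorted(module_targets.items()):
    print(f"{module}: cost target {target:.2f}")
```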
The revelation of Heads-Up Design
happens when you put together the two sides of the equation: the new
way of examining price (what the customer is willing to pay) and the
cost (what it takes to build it). By employing a design process that
accounts for both, you achieve a direct view of your gross margin on
each sub-feature of the product. As your design and testing proceeds
toward production, each potential change in sub-feature
specification value translates directly into a change in the
product's gross margin, and you can make trade-offs accordingly. You
have, essentially, embedded bottom-line effects right into the
design process.
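Combining the two sides per sub-feature is a one-line spreadsheet formula. The prices, costs, and 40% target below are hypothetical, used only to show how a below-target sub-feature surfaces immediately:

```python
# Hypothetical per-sub-feature figures: price is what the customer
# value curve says buyers will pay; cost is the aggregated
# hardware/software cost target allocated to the sub-feature.
sub_features = {
    "uncluttered data entry screens": {"price": 120.0, "cost": 70.0},
    "point-and-click data entry":     {"price": 65.0,  "cost": 25.0},
    "configurable user profiles":     {"price": 40.0,  "cost": 30.0},
}

target_margin = 0.40  # hypothetical company-wide gross margin target

# Gross margin per sub-feature: (price - cost) / price.
for name, f in sub_features.items():
    margin = (f["price"] - f["cost"]) / f["price"]
    flag = "" if margin >= target_margin else "  <- below target"
    print(f"{name}: {margin:.0%}{flag}")
```

A proposed spec change simply updates a price or cost cell, and its effect on gross margin is visible at once.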
Advanced Degree Not Required
Designers want to design, not spend
time learning things that don't enhance their ability to create.
Project managers want to maximize the return on their
voice-of-the-customer investment throughout the design and development cycle. The
beauty of Heads-Up Design is that it relies on simple tools that tie
everyday design decisions back to the customers' needs. The only
software required is a spreadsheet. No training is needed, just a
short series of facilitated workshops.
Will design for gross margin eliminate
competitors? No -- but it can give your company a big advantage by
embedding financial success in every part of your product design.
From the book, The PDMA ToolBook 3 for New Product
Development
Metrics That Matter to New Product Development
Measuring Actions, Getting Results
by Wayne Mackey, Principal - Product
Development Consulting, Inc.
Almost every company attempts to
measure its new product development (NPD) efforts in some way, yet
industry research shows that very few are satisfied they are
measuring the right things. Further, product development
practitioners often find metrics burdensome, disconnected from their
real work, or an infringement on what they view as an inherently
creative craft. This chapter is written for both product development
practitioners and leaders to help them form a common language around
measuring the difference between successful and unsuccessful product
development. We begin by offering definitions of some metrics terms
to provide a common language of discussion, then go on to define six
keys to metrics success along with the associated mis-steps
development organizations often make. We introduce the concept of a
metrics tree and explain how using it as a tool for implementing
metrics offers an array of valuable benefits. We conclude with a
brief case study. Putting this all together allows you to set
metrics that really matter to product development...
Publication date: October 2007 by John Wiley &
Sons. You can buy it at
Amazon.
From the book, Value Innovation Portfolio Management -
Achieving Double-Digit Growth Through Customer Value
by Sheila Mello, Wayne Mackey, Ron Lasser &
Richard Tait
INTRODUCTION
MANAGING THE PRODUCT PORTFOLIO FOR
CUSTOMER VALUE: TRANSFORMING BUSINESS
DRIVERS FOR NEW PRODUCT DEVELOPMENT
Every executive who has sat through
presentations by eager product managers touting hockey-stick growth
curves for proposed products knows that financial projections alone
may not provide a meaningful assessment of a product's potential
market success. Yet, given no viable alternatives, most will either
shoot from the hip or resign themselves to simply going by the
numbers, using metrics such as projected market share and growth,
net present value, and cost/benefit analysis -- even when such
economic measures involve an uncomfortable amount of guesswork.
There is a better and simpler way. The key to choosing products that
contribute to sustainable profitability lies in changing the
business focus of portfolio management from financial metrics to
customer value. Paradoxically, by putting aside financial data and
giving more weight to customer value data when making product
portfolio decisions, companies can in fact improve financial
performance by identifying products with the potential to delight
customers. Customer value, defined as the customer's perception of
how well a solution meets their needs, is the only proven course to
drive profit: The greater the value of the solution to the customer,
the more likely the customer will buy it - and pay a premium price
for it. And, unlike many financial projections (and contrary to the
beliefs of many executives), customer value is based on something
real, which you can accurately measure to yield trustworthy results.
REDEFINING YOUR PORTFOLIO ALONG THE
VALUE
DIMENSION
Just as an individual's dogged
pursuit of happiness for its own sake may in fact engender misery, a
myopic quest for profit may not actually yield long-term
profitability. So how can companies transform their approach to
portfolio management from being profit-obsessed to customer
value-driven?
The first step is to examine how the company defines
portfolio management and how well -- or poorly -- its product
portfolio meshes with its business strategy. In our work with scores
of companies, we have encountered many that divorce portfolio
management from business strategy. They consider only how much of
the R&D budget they will allocate to each project to maximize the
calculated value of the total portfolio. While R&D resources do have
dollar values and efficient resource allocation is a valid concern,
in our experience this view leads to over-reliance on financial
metrics and a narrow focus on individual products and their revenues
and costs. By contrast, those companies that approach portfolio
management holistically, considering how their allocation of
resources maps to strategy -- and that include in those decisions all
functions in the company, from sales and distribution to support and
manufacturing -- make sounder product portfolio decisions and create
products that better match customer needs.
We have observed another
serious problem. Portfolio management, product realization, and
business strategy can become disconnected from each other when
decisions about which products to build are divorced from the
company's vision and mission. Several factors contribute to this
disengagement:
* There's process: Product development often is
managed using the phase/gate review process, a methodology concerned
with schedule and resource management. Phase/gate reviews can be
isolated from the portfolio and unsuited to the dynamic nature of
products and markets.
* There's turf: Companies believe: "Product
development is an R&D thing. R&D is not part of business strategy,
which is a business thing. Product management is for lower levels of
management." Departmental functions (often referred to as silos) in
the organization take ownership of -- or set up barriers to --
successful product development efforts.
* There's personality: Business
managers think, "We're on the business side, and product development
is too technical." R&D looks at marketing and thinks, "What do they
know? Marketing is an art, not a science."
The overarching problem is that, for
many companies, portfolio management is not aligned with new product
development. Further, while many companies create cross-functional
teams at the operational level, senior level teams often don't
operate cross-functionally.
When deciding which projects to fund
in the portfolio, executives must consider not only cost, but also
the customer value delivered by the project in relationship to the
company's strategic goals. The intersection of high customer value,
high strategic value (aligned with the strategy of the business unit
or enterprise), and optimal investment intensity (the level and
profile of resources invested in a new product or venture) is the
sweet spot for new portfolio projects...
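One simple way to operationalize that sweet spot is to score each candidate project on the three dimensions and rank by their product, so weakness on any one dimension pulls a project out of contention. The project names, scores, and multiplicative scheme below are hypothetical illustrations, not a method prescribed by the book:

```python
# Hypothetical portfolio candidates scored 1-10 on customer value,
# strategic fit, and how close the required investment is to the
# optimal intensity for the business.
candidates = {
    "bedside data logger": {"customer_value": 9, "strategic_fit": 8, "investment_fit": 6},
    "legacy port":         {"customer_value": 4, "strategic_fit": 7, "investment_fit": 9},
    "analytics add-on":    {"customer_value": 8, "strategic_fit": 5, "investment_fit": 7},
}

def sweet_spot_score(s):
    # Multiplying (rather than summing) penalizes any project that
    # is weak on even one of the three dimensions.
    return s["customer_value"] * s["strategic_fit"] * s["investment_fit"]

ranked = sorted(candidates.items(),
                key=lambda kv: sweet_spot_score(kv[1]),
                reverse=True)
for name, s in ranked:
    print(f"{name}: {sweet_spot_score(s)}")
```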
Published: September 2006 by J. Ross Publications.
The book is available to order online
from Amazon.com:
http://www.amazon.com/gp/product/1932159576/ref=pd_rvi_gw_1/104-8508494-5478318?ie=UTF8
From PDC's Discoveries Newsletter 6/04....
"You Can't Stop the
Waves, But You Can Learn to Surf"
... Optimizing Complexity in Product
Design
by Wayne Mackey, Principal - Product
Development Consulting, Inc.
"Complexity is, therefore, in
part, the study of pervasive innovation in the universe."
-- Mark W. McElroy, appearing in Journal of Knowledge
Management, Volume 4, Number 3, 2000.
Complexity in nature is
fascinating, and the application of complexity theory to business is
on its way to becoming an established discipline. However, most of
us could benefit from making things simpler, not more complex. The
same goes for product development professionals, who face pressures
-- from the general tendency toward incorporating new technology
into products to market demands for more and more features -- to
create increasingly complex products. This might be acceptable if
complex products always worked better and sold better than their
simpler counterparts, but they don't.
As if that weren't enough, the
product development process itself is becoming more complex,
the result of three trends related to handoffs of one sort or
another. Concurrent development and the changing nature of teams
mean more internal handoffs. Codevelopment (more than one
company or division creating a product) means more external
handoffs. Greater reliance on outside suppliers for complete or
near-complete products means handoffs at the supply-chain level. And
what does a handoff represent if not a rich opportunity to drop the
ball, cross the wires, or royally botch things up?
So what's wrong with a little
complexity?
Complex products can
increase the possibility of failure and drive down product quality.
Complexity in product definition, design, and development decreases
efficiency and makes those processes harder to manage.
We're not suggesting you return to
the days of completely internal product development to eliminate
external handoffs. Nor should you randomly chuck features in an
attempt to slim down your offerings, since radical reductionism
carries its own risks. The solution lies in determining the right
amount of complexity for your industry and product -- and
changing your business to meet that target.
Best-in-class companies seek to
understand what drives complexity so they can mitigate its negative
effects. Evaluating product development processes in light of their
complexity can open up whole new realms for process improvements. In
each area discussed in this article, we assess what can go wrong,
then offer solutions based on our experiences with leading
companies. These companies balance complexity to their advantage,
applying assessment and benchmarking to identify industry best
practices. Your company can do it too -- read on to find out how.
Key Questions:
Is your company's product development process too complex? Are you
using the best processes to generate requirements? What are the
critical few product specifications necessary to meet stated and
unstated customer needs? How do you organize your product
development teams most efficiently? How much testing do you need to
do before going to market?
Customer requirements
Assessment:
All product development begins -- or should begin -- with a
consideration of customer requirements, ideally gathered through a
customer-centric process. The first area to examine for complexity,
therefore, is requirements generation. Is your company adept at
identifying the critical few product specifications to meet
stated and unstated customer needs? Does the complexity inherent in
your product address problems that your customer really cares about
-- and will pay to solve -- or is the complexity irrelevant to
customer needs? Will your manufacturing organization be able to
handle this level of complexity? What is the complexity level of
product specifications for other companies in your industry?
Solution:
Look at how you turn customer requirements into
specifications. Is it alchemy, or do you use a process that gives
adequate consideration to both stated and unstated customer needs,
and assigns value to specifications based on the degree to which
they address true customer concerns?
Take a look at your product
specification documents. If they dive too quickly into detail, or if
they prescribe approaches to problem solving at the design level,
they are adding complexity to your design process. PDC has found
that, for industry-leading companies creating moderately complex
products, a two-tiered requirements definition works best. The
top-level document is short -- no more than 10 pages -- and outlines
basic functionality. At the second level, the document is more
detailed. Generated with input from those involved in gathering
customer research, this document, about 30 pages in length, should
provide designers a clear picture of what the product should
do, not how to do it.
Customer insight:
"We have implemented a new way of documenting and prioritizing
market requirements and are giving all of our design engineers a
closer look at the customer environment through training and field
visits." -- Paul Adam, Manager, Product Development Productivity,
Medrad, Inc.
Functional
vs. project management
Assessment:
Debates rage. Which is
best: a management structure organized along functional lines, with
clear delineations between engineers and marketers and each group
reporting to a functional manager, or a collaboration of team
members who focus their particular expertise on a specific project?
Or is there a way to straddle the fence and get the best of both
worlds?
Solution:
The road to "the best
of both worlds" is littered with corporate bodies. Each structure
has pros and cons and each may work best for your company at a
particular point in time. To determine what makes most sense for
your organization, examine your "symptoms." Are missed deadlines and
cost overruns the biggest problem? Have you noticed infighting among
different disciplines? These indicate the need for a stronger
project orientation. Make the project king. If, on the other hand,
cost and schedules are okay, but you notice skill deficits --
engineers with design challenges they can't solve or who are not
incorporating the latest technology -- then you may need to move
back to functional orientation, giving technology staff a "home
room" where they can hone skills or get technical training. The
engineering process, rather than individual projects, needs to
dominate.
Cross-functional teams
Assessment:
In the "old days,"
engineers designed products based on specs they received from
marketing, then tossed the design over the wall to manufacturing.
Today, cross-functional teams are in -- but how cross-functional is
cross-functional enough? Are the numbers in line for your industry?
Too many functions over-involved too early can slow the design
process. Involve too few disciplines early on, and you may find
yourself redesigning the product after unearthing problems down the
line. Either way, the wrong functional ratio costs in lost time and
dollars.
Solution:
Use staff ratios to measure how cross-functional your teams
are, then compare with others in your industry and modify your
ratios to bring them in line with industry standards. A product
development team might have a manufacturing ratio of 7:1 (seven
designers to every person from manufacturing) at the mid-point of
their development. At the same point in development, a supply chain
ratio of 12:1 ensures that designers will incorporate crucial
information about suppliers and supply chain issues. Best-in-class
companies maintain optimum ratios throughout the product development
cycle. Companies succeed at reducing complexity at the back end of
the design process when they recognize the need for staff-ratio
analysis and populate their teams with members from various
disciplines -- that is, when they create truly cross-functional teams.
While it may be more challenging initially to manage a
cross-functional team, the resulting designs turn out to be more
manufacturable, simplifying the overall process.
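Checking staff ratios against a target is a one-line calculation per discipline. The headcounts below are hypothetical, chosen to illustrate the 7:1 and 12:1 figures mentioned above:

```python
# Hypothetical team composition at the mid-point of development.
designers = 21
headcount = {"manufacturing": 3, "supply_chain": 1}

# Target ratios: at most this many designers per person
# from the discipline.
targets = {"manufacturing": 7, "supply_chain": 12}

for discipline, n in headcount.items():
    ratio = designers / n
    status = ("on target" if ratio <= targets[discipline]
              else "under-represented")
    print(f"{discipline}: {ratio:.0f}:1 ({status})")
```

In this example the supply-chain ratio of 21:1 flags a discipline that is under-represented on the team relative to the 12:1 target.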
Generalists vs. specialists
Assessment:
Companies where products
are designed exclusively by specialists -- people who do one thing
very, very, well -- have more hand-offs and, consequently, more
complex and longer development cycles. Of course, some tasks are so
technically demanding that you need a specialist to do one
particular thing. The specialist/generalist question goes
hand-in-hand with the question of the optimum number of concurrent
design projects for each team member.
Solution:
Designate the "critical few" true specialists as
subject-matter experts -- knowledge resources for many teams -- and
use them as references rather than project team members. In most
cases, using a lot of generalist designers increases efficiency and
reduces complexity by minimizing the number of hand-offs. Use
relatively few pure specialists. Keep everyone else working on no
more than two projects. According to PDC's research on large
projects, two is the optimum number of projects per team member. A
single project can fail to provide the level of stimulation
necessary to fully engage someone, while juggling three or more
projects inhibits productivity as work quality drops and confusion
among projects lengthens work time.
Defining
roles
Assessment:
"Too many cooks spoil the
soup" may be a reasonable caution for the kitchen, where one or two
people can concoct culinary masterpieces. Complex new products,
however, can require large teams to create. What's needed is not
fewer cooks but a clearer definition of roles and responsibilities
and a master chef to coordinate the activities of everyone else.
That master-chef role -- management and coordination of the entire
project -- should fall to both the program management and systems
engineering (SE) disciplines. Most companies understand program
management, but many struggle with systems engineering. SE is
responsible for making sure all the components of a system,
regardless of who produced them, work together to meet product and
customer requirements.
Software companies understand this
concept because software is developed this way: systems engineers
make sure the software is integrated and works well with all the
necessary operating systems. Unfortunately, at many companies, the
SE function falls to people without formal SE training or experience
such as QA or design engineers. These folks, as skilled as they may
be in their areas, are not trained to take a holistic approach that
considers the dynamics of all aspects of the system.
Solution:
Evaluate the maturity of your SE process (you can use a tool
such as PDC's SE maturity model). Who has had formal SE training?
Who, in the absence of formal training, has experience? Using a
maturity model and skills assessment, make sure team members possess
the right skills. For example, check to see if the SE staff knows
how to use the tools at their disposal. Do they make a formal effort
to modularize design? Are specs well-written so they're
understandable both inside and outside the company? Then, make sure
you implement the SE process with all development projects. Because
systems engineers are involved up front as part of the group
that architects the system as well as later on, they can help avoid
potential problems that may cause costly delays close to launch
time.
Key Questions:
Are your teams cross-functional enough? Are they organized
efficiently? Do you have specialists working on the right parts of
the project? Are your engineers overwhelmed with too many projects?
Is your systems engineering organization mature?
Decision
making processes
Assessment:
One of the most critical
development areas to assess for complexity often appears the least
quantifiable and the least amenable to evaluation: decision making.
How an organization makes decisions can often have as
profound an impact on the effectiveness of product development
processes as what decisions are made.
What decisions do you make without
a costly and complex trail of documentation? Which decisions must be
made explicitly and documented? Who has authority for what types of
decisions -- and do they know it?
Today, the consensus method of decision-making is the most common.
On the plus side, consensus decisions usually have buy-in from
everyone, and often are easier to implement. On the negative side,
consensus is the slowest decision-making method, and a single
naysayer can stall a whole project.
Solution:
First, draw lines of decision-making responsibility clearly
and make sure everyone understands them. Identify the boundary
between formal and informal decision-making. At companies executing
best practices, designers understand their bounds of control.
Knowing where the line is also means designers know whom to appeal
to if they don't agree with a decision. It turns out that, within
reason, where you draw the line is not nearly as important as simply
drawing the line. Without a line, people don't know what they have
control over and what they don't. They end up making decisions they
shouldn't make, or not making decisions that ought to be theirs.
Next, determine how quickly your
organization is making development decisions. Are they good
decisions? Companies that are fast on their feet and apply best
practices favor leader-centered, consultative decision making -- the
fastest route to good decisions -- over consensus decision-making.
The leader's job is to gather input, make sure everyone with
something to say on the decision has been heard, and then make a
decision. If team members don't like the decision -- too bad. But
most people accept the results of this process as long as they
understand it up front. The test of a good decision for the whole
team moves from "Do I agree?" to "Was my viewpoint heard and
understood before the decision was made?"
Customer insight:
"Reorganization of product development to a project centric
structure has changed the way we make decisions. We are able to make
decisions much more quickly without degrading the quality of those
decisions." -- Paul Adam, Manager, Product Development Productivity,
Medrad, Inc.
Key Questions:
Do you -- and your design team
members -- clearly understand which decisions each person can make on their
own and which require input and approval? Is your decision-making
process structured for optimal efficiency? Are you sacrificing speed
up front for complete buy-in on decisions, or are you rushing the
process up front and creating resistance later?
Testing
and documentation
Assessment:
All testing is
non-value-added. Period. Customers don't care how much you test.
They just want to know that the product works. If you can deliver a
quality product that works without testing -- do it. TV sets are an
example. Through refinements in design and manufacturing,
electronics companies create TV sets that work straight out of the
box; individual sets are not tested. In reducing product design
complexity, the goal is to do the minimum amount of testing to
assure delivery of a high-quality product. So, what is the minimum
amount of testing you need to conduct?
Solution:
First, determine what
absolutely requires testing, and what can be taken care of through
process control and simulation (see sidebar, below). Simulation is a
common method of testing and can satisfy testing needs even at
companies in heavily regulated industries such as medical devices,
where strict regulatory requirements govern processes and outcomes.
Simulation may be enough to convince the regulatory body that the
company understands and can manage the risks associated with the
product. For example, when Boeing introduced the 777 in 1994, it
sold and flew the first plane built -- a first in an industry that
is known for extensive testing. Boeing relied on sophisticated
simulations to tell designers what they needed to know about how the
plane would function, which reduced the complexity of the design
process for the Boeing 777.
To reduce testing to the minimum
required to assure delivery of a high-quality product, it helps to
know about best practices for your industry, such as CAD/CAM
simulation. You may also want to consider factors such as the design
capabilities of the factory that will manufacture the product.
(Design-for-manufacturability often ties in with a company's efforts
to implement design for Six Sigma. If your design relies on
processes that are hard to execute, you'll never achieve Six Sigma
quality.)
Customer Insight:
"We identified test effort above and beyond what the benchmark
companies were doing. We have developed 30 standard templates for
requirements verification testing that are already reducing our test
effort considerably." -- Paul Adam, Manager, Product Development
Productivity, Medrad, Inc.
Documentation and the trail of complexity
Assessment:
There is no question that
documentation requirements can add to the complexity of the design
process. Many formal processes require documentation, and
documentation requires your design team to spend time reviewing,
approving, storing, revising, and managing documentation. If there's
one thing your engineers hate more than meetings, it's
documentation.
Solution:
Define what requires
formal documentation and what can be released informally, allowing
for quick changes at a low level. Having the right balance of formal
and informal documentation keeps the development process efficient
by allowing low-level changes to happen quickly while only decisions
with far-reaching impacts are formally reviewed.
Key Questions:
How much do you use
computer simulation as opposed to building models and testing? How
many different aspects do you test? Who does the testing -- the
design team, or a separate test team? How do you document testing?
Do you have the right mix of formal and informal processes? How many
times do you retest? Are you overtesting?
Conclusion
Don't underestimate the
pervasiveness of complexity as a problem in product design and
development. Look for root causes of the symptoms that plague you
most, whether it's cost and budget overruns or out-of-control
requirements. Assess current practices. Benchmark against industry
leaders and competitors. Never stop asking questions -- and
searching for answers -- and you'll be well along the road to
product excellence.
From PDC's Discoveries Newsletter 3/04....
Making Innovation Count
A Framework for Measuring the
Creative Contribution to Product Development
by Wayne Mackey, Principal - Product
Development Consulting, Inc.
"Innovation is the specific
instrument of entrepreneurship...the act that endows resources with a
new capacity to create wealth."
-- Peter Drucker, Innovation and
Entrepreneurship, quoted in Harvard Business Review May/June 1986
A quick search on Google for
"history of innovation" yields nearly 7,000 results, many of which
come from corporations using the phrase to describe their own
activities. Given this, you'd think that most companies understand
the process of innovation and can use it effectively to provide
their businesses "with a new capacity to create wealth," as Peter
Drucker says.
You'd be wrong. Many companies
view innovation as a necessary evil arising spontaneously from an
ill-understood confluence of circumstances, and allow it to run
along its own track with little management oversight. Even
acknowledged leaders in innovation, such as 3M and IBM, struggle
with how to best harness innovation in the context of a
profit-making enterprise.
As with any business process, the
first step in exploiting innovation to serve your company's goals is
to measure it -- an inherently difficult undertaking. Innovation is
a creative process, which, by its very nature, involves doing things
that have never been done before. Yet, if you don't measure it, you
can't manage it, and if you can't manage it, you can't control it,
and if you can't control it, it can sneak up and sabotage your
business before you even know there's a problem.
The message of this article is
that you can measure and optimize innovation. Read on
to find out how.
Innovation in context -- where does it fit?
In general, the innovation
process precedes the more structured product development process.
Innovation, as we define it, might begin with an employee's "aha!"
moment in the shower before coming to work. The employee might be
inspired with an idea for a totally new technology, a completely new
way of providing a service, or even a new way of selling, as in the
case of Dell Computer, which pioneered the idea of customers
specifying build-to-order systems online.
The inspiration then enters two distinct phases. The first, which we
refer to as ideation, is the process of filtering the idea.
The company examines the shower inspiration to see if it makes sense
and to determine whether it's a viable strategic and market fit that
the company wants to back with an investment. During the second
phase -- incubation, in our parlance -- the company begins to
spend money to determine whether the idea is feasible. Both phases
occur prior to the idea's incorporation into a specific product.
After incubation, the idea moves into whatever process the company
uses for product development, and the innovation phase ends.
What's wrong with current ways of
measuring innovation?
The problem is not that
companies don't measure innovation today. Every company does, in one
way or another. Most innovation metrics, however, are immature.
Either they are takeoffs on regular product development metrics or
are based on a single company's past experience. Since neither
approach uses objective facts about what succeeds and what doesn't,
you might as well close your eyes and throw a dart at the wall to
determine how to encourage innovation that leads to successful
products.
A robust innovation maturity model
needs to address both the ideation and incubation phases, as well as
a critical third piece: the management system that enables
innovation to be successful. Without the right controls, funding,
and processes to bring an idea from the employee's shower to the
customer, a company may fail even if the rest of the innovation
system is strong.
As with any metric, it's important
to understand that the act of measuring doesn't actually solve
problems. It does, however, provide information so leaders can make
more informed decisions, giving everyone not only an accurate view
of where they stand, but what they have to do to improve.
Who needs an innovation framework?
Companies whose bottom
lines are driven by innovation are good targets for applying an
innovation framework such as the one developed by IBM and PDC (more
on this to follow). For example, in the heyday of deregulation,
power companies were looking for innovative ways to differentiate
themselves and increase profitability. Now that deregulation has
receded as a business force in the power industry, power companies
have returned to core businesses, and probably would not be good
candidates for applying an innovation framework. In contrast, many
of the technology and service companies that weathered the economic
downturn by cutting R&D budgets now have far fewer products in the
pipeline than is healthy. They need to pump up the volume, but how?
Using an innovation framework would allow them to get more products
into the pipeline not simply by throwing money at the problem but by
spending that money in ways that make sense and minimize risk.
IBM decided to tackle the problem
of measuring innovation directly. Through work with its Emerging
Business Opportunities (EBO) initiative, IBM realized it couldn't treat the
development of completely new products and technologies the same way
it treated ongoing product development. The EBO initiative
introduced a different set of rules. For example, it's impossible to
do a clear business case or an accurate market projection before the
breadth and depth of any technology's potential use is known. So
best practices in the EBO phase supplement traditional business
analysis and market projections with efforts like technology
diffusion studies. One of the goals of IBM's EBO phase is to grow
revenue by finding the potential breadth and depth of a new
technology's application. That information, combined with the
current market size of the potential applications, gives insights
into the relative value of a new EBO.
Still, even an innovation leader
like IBM, which realized the right way for EBOs to go, had no
objective way to measure the many processes involved. That's when
the IBM corporate EBO process architects contacted PDC. With 14
years of client work and a database of innovation best practices, we
were able to provide the hard data to help IBM sort out what works
and what doesn't. What are the key elements of innovation?
What differentiates success from failure?
Although many people believe that
real breakthroughs come from the lone entrepreneur working in a
garage, PDC's research shows that larger companies have a greater
need to control the innovation process. In IBM's case, for example,
the company had no trouble generating ideas. The challenge was
getting the right ideas into the system and then rapidly turning
them into profitable products -- finding the elusive control we
mentioned earlier.
One of the ways companies are
successful in innovation, of course, is to purchase smaller
companies, and the model addresses this as well.
The nuts and bolts of the model
PDC developed the
innovation maturity model in conjunction with IBM during 2003, using
both proprietary and public research. The model is divided into five
parts and covers 30 aspects of innovation. Why 30? We wanted enough
to provide meaningful information, but not so many points as to make
the model cumbersome. According to Dave Coughlin, Executive
Consultant for IBM's integrated product development team, "We had
lots of discussion about what were the key elements that the model
needed to cover. Originally, we started with 80, then cut it to 60.
We still felt that was too many."
People simply won't use any model
that requires special training or setup, so we kept the model
simple. "We spent time taking surveys from other organizations,"
says Tom Luin, Business Transformation Architect and another key
team member. "They took an hour or two to complete. We knew most
executives wouldn't put in that much time. Ours takes about 15 or 20
minutes, and that has proven very useful. Almost everyone agrees to
take that amount of time to answer 30 questions with multiple choice
answers."
The model is supported with an
Excel spreadsheet including dropdown answers. It begins with an
introductory section that summarizes the company or business unit
being studied and asks about its relative success in innovation.
This information is used to sort the data and statistically
correlate individual elements to success. Each of the next three
sections asks ten questions, to which there are five possible
answers. These sections cover management system, ideation, and
incubation. The final section is an automatically generated report
showing, in chart form, where the business unit stands in relation to
the rest of its company and to other companies.
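As a rough illustration of the arithmetic behind the reporting section, the sketch below scores a filled-in questionnaire and compares it to a benchmark average. The real tool is an Excel spreadsheet; the section names, the equal weighting of questions, and the gap calculation here are our assumptions.

```python
# Hypothetical sketch of the model's scoring: ten answers (maturity 1-5)
# per section, averaged per section, then compared to a benchmark database.
from statistics import mean

SECTIONS = ("management_system", "ideation", "incubation")

def score(answers: dict[str, list[int]]) -> dict[str, float]:
    """answers maps each section to its ten answer levels (1-5).
    Returns the average maturity per section."""
    for section in SECTIONS:
        levels = answers[section]
        if len(levels) != 10 or not all(1 <= a <= 5 for a in levels):
            raise ValueError(f"{section}: expected ten answers in 1..5")
    return {s: mean(answers[s]) for s in SECTIONS}

def gap_report(company: dict[str, float], benchmark: dict[str, float]) -> dict[str, float]:
    """Per-section gap against the benchmark average (positive = behind)."""
    return {s: round(benchmark[s] - company[s], 2) for s in SECTIONS}
```

A chart of the gap per section is essentially what the tool's final report draws.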
The five potential answers to each
question in the management system, ideation, and incubation sections
describe five specific items that characterize a level of maturity.
It's important to note that all the answers are based on real
and demonstrated practices at companies. This means that
although you might look at the highest level and say, "We could do
better than that," you'd be attempting something that no company has
achieved in the real world. Likewise, the lowest level may not be as
bad as it could get, but it represents the worst that any company is
doing in the real world. The key to establishing a relevant baseline
is answering the questions objectively.
As with any other benchmarking
scenario, no single company will achieve the highest level in every
area. That would be an unrealistic goal, like taking the gold in
every Olympic event. The genius of benchmarking is that it points
out where your weaknesses lie, so you can take steps to address
them. Using the reporting section at the end of the tool to map your
company against the database for all other companies can be
particularly revealing.
Using the tool
There are a couple of
steps to using the tool. First, someone with a fairly high-level
view of an organization or project fills out the questionnaire,
which usually takes 15 to 20 minutes. Then the company goes through
a process called calibration. This involves a small group of
people -- usually between four and 15 -- reviewing the answers to
achieve consensus. Calibration is a significant element of the
process, because it ensures that everyone who answers the questions
understands them in the same way and ensures that everyone comparing
himself or herself to the database is looking at a single version of
the truth.
Interesting -- and potentially
distorting -- things can happen if a company skips the calibration
process. In PDC's work with clients, we have discovered that
approximately 20 percent of the answers change in some way as a
result of the calibration process (which sheds a
less-than-flattering light on traditional benchmarking
questionnaires without calibration built in). Tom Luin of IBM
agrees. "Using individual answers, we can come up with something
that's fairly close to accurate. The calibration step
allows everyone to come to a consensus about what the answers are
for their business unit. You eliminate the extreme viewpoints. It's
a team-building exercise as well. Everybody gets to say what's
important to him or her. It's a very useful organizational
construct." We also have found that simply knowing others will
review their answers causes people to answer questions more
thoughtfully, although of course there is no way to measure the
effect of this.
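The calibration step can be approximated in code: collect each respondent's answers, flag the questions where people disagree by more than one maturity level, and seed the discussion with the median. This is a sketch of the idea only; the divergence threshold and the question names are illustrative assumptions.

```python
# Flag questions for calibration when respondents disagree beyond one level.
from statistics import median

def flag_for_calibration(responses: dict[str, list[int]], spread: int = 1) -> dict[str, int]:
    """responses maps a question id to each respondent's answer (1-5).
    Returns, for each flagged question, a median answer as a starting
    point for the consensus discussion."""
    flagged = {}
    for question, answers in responses.items():
        if max(answers) - min(answers) > spread:  # disagreement beyond one level
            flagged[question] = int(median(answers))
    return flagged
```

In PDC's experience roughly 20 percent of answers shift during this discussion, which is why the flagging step matters.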
Let's look specifically at one
area of the ideation section, portfolio management, to see how the
process works. The five answers to the question about how you handle
innovation portfolio management, like those in every other section,
are based on real company experience. Here they are, from worst to
best:
1. Funding is ad hoc or "creative." There is no central portfolio
management.
2. One or two people do portfolio management. It is centralized, but
proposals must be justified and rejustified as funding often moves
from one project to another based on short-term goals.
3. Portfolio management follows a process, usually annual, but data
about potential projects is either incomplete or not credible.
4. Portfolio management is process- and data-driven, but adjustments
are made based on short-term events.
5. Portfolio management is process- and data-driven and regularly
adjusted, with clear tie-ins to business and technology strategy.
In general, most companies will
probably find themselves starting at level 3. They have some sort of
a process in place, but individual projects offer wild projections
about potential success, showing off the proverbial hockey stick in
management presentations. This makes executives highly skeptical of
the data, which makes it hard for them to justify decisions. The
benchmarking exercise has revealed that the company needs better
data to advance to the next level.
It's usually best to target
improvements one level at a time. However, the process can also
reveal that you might want to jump ahead faster than that -- if,
say, your company were at a level 2 and others in your industry were
at level 4. Again, measurement doesn't solve the problem for you,
but it tells you the size of the problem and how much to worry about
it, allowing you to put resources where they can do you the most
good in your marketplace.
The process also can be used
within a company. Interestingly, one company we worked with allowed
different internal research organizations to compete for dollars. If
a particular group were mature in an area, it would get more money.
This made for a healthy internal competition. Once you can measure
how well a research organization does research, then you can give
people incentives to improve.
A work in progress
At IBM, Luin and
Coughlin's group intends to use the framework internally to evaluate
implementation of an innovation management system, and externally to
compare IBM's activities with other companies.
"We've done some of both already," Luin says. They have taken it to
six or seven companies externally, and they have asked a number of
internal EBO groups to use the framework. "They found it
interesting, but we haven't determined how to turn interest into
action." Part of that is a question of priorities, and part is
related to the challenges of trying to change management culture.
"The idea of talking about and managing EBOs is a culture-shift from
IBM's traditional management style and executive experience."
While IBM already has gained from
using this model, there is more work to be done. According to Luin,
the philosophical feedback received from executives in EBO areas was
very positive. "Almost all of them said, 'This is good stuff, it
should work; it's a different way of thinking about innovation but
it shows promise. We need to pilot it and demonstrate its
usefulness.' We are still very much in the middle of that process."
One thing that would advance the
process is a bigger database of participating companies, which would
help refine the results and ensure they are statistically
significant. We at PDC would like to correlate individual answers
with levels of success to determine which areas are most critical.
That said, this is still one of the only research- and data-driven
methods for managing innovation and a good place for any company to
begin. "We planted the seed and it's starting to grow," Luin says.
Coughlin adds, "We're incubating the idea."
What's in it for you?
To figure out whether your
company could benefit from innovation measurement, you first need to
determine the strategic importance of innovation to your company. Is
innovation a central competitive factor in your industry, or is the
market driven by other factors? If innovation is significant,
are you satisfied with the efficiency of your R&D efforts, or are
you wasting R&D dollars chasing the wrong innovations? Finally, take
a hard look at your current innovation process and how well it
measures the critical elements. If your process has room for
improvement, you may benefit from a formal assessment of your
company's innovation maturity. This isn't an easy road, but it can
transform your business from one in which innovation is just a
marketing buzzword to one in which innovation drives revenue growth.
From Sopheon's
InKNOWvations newsletter 7/02....
Metrics Make a Difference in Product Development
by Wayne Mackey, Principal - Product
Development Consulting, Inc.
Introduction: Don't use a hammer to saw wood
Almost every company attempts to measure its
product development efforts in some way, yet industry research shows
that very few are satisfied they are measuring the right things.
Further, product development practitioners often find metrics
burdensome, disconnected from their "real" work or an infringement
on what they view as an inherently creative craft. These issues are
most often rooted in a basic misunderstanding of what metrics can
and can't do effectively.
Metrics don't fix problems. Ever. Instead, the
power of metrics is in accurately highlighting situations and issues
that can, if handled properly, make a difference. Allowing a capable
product development or leadership team to make early, informed
decisions should be the goal of any metrics approach.
Metrics also make poor policemen - numbers alone
do not change behavior. All constructive action comes from the
people involved, not from the numbers. Benefits are derived from
fewer missteps along the product development timeline and an earlier
resolution of problems.
Lessons learned from industry's best and brightest
So what's wrong with all those product development
metrics that companies are dissatisfied with? First, there are too
many of them. The most important lesson from leaders in the field is
to aggressively seek the "critical few". Metrics systems are often
clogged with measures of meaningless processes or are choked with
"good news" items that will never really require the attention of
the people involved.
Second, they are disconnected from the goals of
the company or the project. These "metrics in search of a goal"
offer no insights to consistently move a project or balance sheet
forward. Finally, they lack teeth. A metric that is not monitored
regularly and acted upon immediately is a waste of time. Period.
Successful product development metrics systems have consistent,
local governance systems that never ignore any metric or its
message. Everyone in the organization knows what is and is not paid
attention to and they invariably react accordingly.
Research indicates that leading companies set
product development metrics based on the experience of the
organization. Ask any product development professional if he or she
can discriminate between good and bad product development on his or
her project and you will almost always get a "yes". Not only that -
they can almost always "smell" product development trouble early.
These are the "critical few" indicators that the team needs to
measure and act upon during development. Successful product
developers have stopped counting drawing changes and adding up hours
of training in favor of having the people closest to the work
determine the key measures in achieving the end goal. An important
side benefit identified in using the experience of the organization
to set metrics is that buy-in to their validity and importance - a
major force in the longevity and health of product development
metrics systems - comes along for free.
A less obvious, but equally powerful metrics
capability is their ability to quickly deploy a strategy and drive
change through an organization. If the boss measures something new,
it becomes important throughout the organization almost immediately.
Goal setting and flow-down below the executive level are important to
project cycle time. It is difficult to be successful without a
direction to head in and a tangible goal. Executive teams tend to do
this well, but the necessary translation to meaningful goals at a
working level is often missing. That translation is the
responsibility of middle management, and it is generally embraced at
successful companies. A
structured approach to goal flow-down and formal communication paths
through the organization are often cited as contributors to metrics
success.
Three simple steps to metrics success - it doesn't have to be
difficult to be right
Companies that have achieved demonstrable success
in product development metrics have certain process steps in common.
They are often called different things or are broken down into
various sub-categories, but three essential elements are easily
identifiable.
1. Goal setting precedes metrics selection.
Every metric has a clear tie-in to a goal. Everyone bound by a
metric knows why and how it supports a larger goal of the
organization or project. A "metrics tree" detailing the flow-down of
goals from the top of the company is one method of communication.
Broad involvement in a formal flow-down at each level of the
organization is a common thread at successful companies.
2. Metrics are set with a "causal action" relationship to the goal.
Instead of measuring the goal, successful companies measure the
things that cause the goal to happen. A useful analogy is a common
diet. If your goal is to lose 15 pounds, measuring your weight is
easy, but doesn't really help you achieve anything but informed
fluctuation. If you instead measure your daily caloric intake of
food and the amount of exercise you do, achieving your goal of
weight reduction is under your control. Many companies mistakenly
attempt to measure product development by looking at
beginning-to-end results, with informed fluctuation the predictable
result.
3. Formal governance is a necessary evil.
Successful governance consists of targets and reviews. A target is
an objective measure of where the effort should be throughout the
entire implementation period. Progress must be monitored regularly
along the way. Without a formal governance system, too much depends
on hope and heroes. Frequent, periodic reviews are best conducted at
a working or peer level to minimize bureaucracy, but they must be
done in a formal manner. On a less frequent basis, a higher-level
review of progress toward the ultimate result must also occur to
ensure success.
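The diet analogy translates directly into a causal metric: govern the inputs you control each day rather than the lagging outcome. A minimal sketch, with illustrative targets:

```python
# Causal metrics for the diet analogy: measure calories in and exercise
# (the causes you act on daily), not just weight (the lagging outcome).
# The targets below are illustrative assumptions, not recommendations.
CAL_TARGET = 2000   # max daily calories
EX_TARGET = 30      # min daily exercise minutes

def on_track(calories_in: float, exercise_minutes: float) -> bool:
    """True when both controllable inputs hit their daily targets."""
    return calories_in <= CAL_TARGET and exercise_minutes >= EX_TARGET
```

The product-development equivalent is to review, say, requirements stability or staffing against plan each week instead of waiting for the end-of-program schedule variance.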
Action implications - Stepping up to the plate
It is easy and understandable to throw up your
hands and concede defeat in product development metrics, but the
success of industry leaders demonstrates that when done well, these
metrics are a critical component in achieving consistent new product
development results. A good management team can look like a great
management team if they are armed with the right information at the
right times to manage their development efforts. Successful product
launches are a predictable effect of such managed development.
Lessons-learned from industry leaders ranging from medical devices
to consumer goods to defense electronics demonstrate the power of
properly measured product development. It can't be done without an
approach that leverages the experience of the organization in a
structured manner. It can't begin without someone taking the first
step.
Mackey Group, Inc. © 2002 -
2010