
If you treat innovation like a spreadsheet problem, you will miss half of what makes it valuable.

That's blunt, but true. Organisations keep trying to shoehorn creativity into quarterly reports and then wonder why the next "big idea" withers on the vine. Innovation is messy, human and often non-linear, and that is exactly why measuring it well matters. It's not about putting a ruler around ingenuity; it's about understanding the levers that make it repeatable and meaningful.

Why we measure innovation (and what we're usually getting wrong)

Most leadership teams want a single number they can report to the board. Revenue uplift. Patent counts. R&D spend. These are comfortable, neat, and, crucially, easy to compare year on year. Trouble is, they're only half the story. Relying solely on traditional metrics risks pushing organisations toward safe, incremental change rather than genuine differentiation.

I'll offer an unpopular view: people who claim "metrics kill creativity" are often the ones who haven't tried a good measurement system. Measurement can liberate innovation if it's designed to widen the aperture, not shrink it. The trick is in mixing inputs, processes and outputs, then trusting the conversation those metrics spark.

A useful starting point: inputs, processes, outputs

Think of innovation as a journey. You wouldn't judge a road trip only by the destination. You'd also look at the vehicle, the route, the stops along the way.

  • Inputs: your enablers, such as budget, people, partnerships, time allocated for experimentation, diversity of teams, and tools. Inputs tell you whether you've built the conditions for ideas to surface.
  • Processes: how ideas are turned into prototypes and products, covering ideation cadence, gate reviews, prototyping velocity, cross-functional collaboration, and customer validation cycles.
  • Outputs: the visible results, including market adoption, revenue from new products, patents (if relevant), customer satisfaction, and strategic shifts.

A balanced measurement framework captures all three. Ignore inputs and processes and you will mistake luck for capability. Ignore outputs and you will never be accountable.
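If it helps to make that concrete, here's a minimal sketch in Python of what a balanced metric set might look like as a data structure. Every metric name and value below is an illustrative assumption, not a standard; substitute whatever your organisation actually tracks.

```python
from dataclasses import dataclass, field

@dataclass
class InnovationMetrics:
    """A balanced metric set spanning inputs, processes, and outputs.

    All metric names and values here are illustrative placeholders.
    """
    inputs: dict = field(default_factory=lambda: {
        "experimentation_hours_per_quarter": 1200,  # time carved out for exploration
        "cross_functional_team_ratio": 0.6,         # share of projects with mixed disciplines
    })
    processes: dict = field(default_factory=lambda: {
        "prototypes_per_quarter": 9,
        "customer_validation_cycles": 14,           # usability sessions, co-creation workshops
    })
    outputs: dict = field(default_factory=lambda: {
        "new_product_revenue_share": 0.18,          # revenue from recently launched products
        "feature_adoption_rate": 0.42,
    })

    def is_balanced(self) -> bool:
        """Crude sanity check: every category carries at least one metric."""
        return all([self.inputs, self.processes, self.outputs])

metrics = InnovationMetrics()
print(metrics.is_balanced())  # True: inputs, processes and outputs are all represented
```

The point of the structure isn't the numbers; it's that a review of any one category alone fails the sanity check.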

A hard fact, and why it matters

According to the Global Innovation Index 2023, Australia's innovation performance sits in the middle ranks globally. That baseline matters: it reminds us that we're capable, but not invincible. We need to measure not just what we produce, but how we produce it, and how the market responds.

Qualitative measurement: the soft stuff that's actually hard

Creative capacity, risk appetite, psychological safety: these are the things spreadsheets don't capture easily. Yet they're the backbone of repeatable innovation. I've seen teams in Sydney and Melbourne with heaps of R&D cash but little to show for it, simply because daily behaviours didn't support experimentation. Conversely, small teams with tight measurement of customer learning and rapid iteration punch above their weight.

How do you measure the intangible? Use proxies and mixed methods:

  • Employee engagement scores tied to innovation activities.
  • Narrative assessments from post-mortems: what hypotheses were tested, what was learnt, who was involved.
  • Frequency and quality of customer interactions during development (not just surveys but usability sessions, co-creation workshops).
  • Diversity metrics across background, discipline, and thought, which correlate strongly with breakthrough ideas.

Yes, these are softer. They're subjective. They're also decisive.

Why traditional metrics still matter, but not alone

Patent counts, R&D spend, and ROI are not useless. They're part of the toolkit. The problem comes when they form the only toolkit.

  • Patents: useful for protecting certain IP heavy ventures, but a patent count is not proof of market value. A patent without users is just paperwork.
  • R&D spend: shows commitment but not efficiency. Dollars spent tell you nothing about whether those dollars translated into value.
  • ROI: essential, but short-termist. Radical innovation often needs a runway and shows erratic returns for years.

A more mature organisation uses traditional metrics as outcome checks while leaning into process and input measures to steer behaviour.

Newer frameworks that actually help

Emerging practice in measurement is moving toward holistic, adaptive frameworks. Here are a few I use with clients:

  • Innovation Dashboards: not a single KPI but a balanced set across inputs, processes, and outputs. Dashboards emphasise trends, ratios, and signals (for example, proportion of projects in discovery vs scaling).
  • Innovation Audits: periodic qualitative reviews that probe culture, leadership alignment, and internal bottlenecks. An audit asks: where do ideas get stuck? Where are decisions made?
  • Learning Metrics: number of validated learnings per experiment, cost per learning, time to customer feedback. These shift focus from "did it launch?" to "what did we learn?" (see the sketch after this list).
  • Customer Impact Measures: Net Promoter Score (NPS) changes attributable to innovations, customer retention uplift, or adoption rates of new features.
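To show how lightweight the learning metrics can be, here's a rough Python sketch. The experiment records are invented for illustration; in practice they'd come from your project tracker.

```python
from datetime import date

# Hypothetical experiment log. Fields: cost (AUD), validated learnings,
# start date, and the date of first customer feedback.
experiments = [
    {"cost": 8000,  "validated_learnings": 3, "start": date(2024, 2, 1),  "first_feedback": date(2024, 2, 12)},
    {"cost": 15000, "validated_learnings": 1, "start": date(2024, 3, 4),  "first_feedback": date(2024, 4, 2)},
    {"cost": 5000,  "validated_learnings": 2, "start": date(2024, 3, 18), "first_feedback": date(2024, 3, 25)},
]

total_cost = sum(e["cost"] for e in experiments)
total_learnings = sum(e["validated_learnings"] for e in experiments)

# Cost per validated learning: cheaper learning means a healthier process.
cost_per_learning = total_cost / total_learnings

# Time to customer feedback, averaged across experiments (in days).
avg_days_to_feedback = sum(
    (e["first_feedback"] - e["start"]).days for e in experiments
) / len(experiments)

print(f"Learnings per experiment: {total_learnings / len(experiments):.1f}")
print(f"Cost per learning: ${cost_per_learning:,.0f}")
print(f"Average days to customer feedback: {avg_days_to_feedback:.0f}")
```

Three numbers, computed in seconds, and suddenly the conversation is about how fast and cheaply you learn rather than how many things you shipped.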

Balanced Scorecard, useful if you use it properly

The Balanced Scorecard isn't new, but applied to innovation it's valuable. Don't use it as a checkbox exercise. Use it to align innovation activity to strategic priorities (a short sketch follows the list below):

  • Financial: revenue from new products, cost savings from process changes.
  • Customer: adoption, satisfaction, retention.
  • Internal processes: cycle time reduction, defect rates, time in prototype.
  • Learning and growth: skills built, experimentation rates, leadership engagement.
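As a rough sketch, the four lenses map onto a small structure you can review each quarter. All measures and targets below are placeholder assumptions, not recommendations.

```python
# A minimal innovation scorecard as a plain mapping: lens -> {measure: (actual, target)}.
# Every measure and target here is a placeholder assumption.
scorecard = {
    "financial": {"new_product_revenue_share": (0.18, 0.20)},
    "customer": {"feature_adoption_rate": (0.42, 0.35)},
    "internal_processes": {"prototype_cycle_days": (21, 30)},  # lower is better
    "learning_and_growth": {"experiments_per_quarter": (12, 10)},
}

LOWER_IS_BETTER = {"prototype_cycle_days"}

for lens, measures in scorecard.items():
    for name, (actual, target) in measures.items():
        on_track = actual <= target if name in LOWER_IS_BETTER else actual >= target
        status = "on track" if on_track else "needs attention"
        print(f"{lens:>20} | {name}: {actual} vs target {target} -> {status}")
```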

When these lenses work together, you get a more strategic conversation about innovation, not just tactical output reporting.

Common pitfalls and how to avoid them

  • Measuring the wrong thing: Avoid counting activity for the sake of it. Not every idea deserves a KPI.
  • Overloading teams with metrics: Keep dashboards light and meaningful; three to eight KPIs are plenty.
  • Punishing failure: If experiments that don't succeed are penalised, people stop experimenting. Celebrate clever failures that produce learning.
  • Ignoring pace: Speed of learning is as important as direction. Measure time to insight, not just time to launch.

Case snapshots, what works in practice

Company A (customer-centric): This Australian retailer built an innovation compass centred on customer feedback. They measured not only sales of new products but the speed and frequency of customer co-design sessions. The result: incremental product changes delivered 40% faster and adoption rates that beat predictions. They treated customers as collaborators, not judges.

Company B (process innovation): A manufacturing firm in Victoria focused metrics on internal cycle time and defect rates. By tracking small process experiments and the learning per experiment, they cut lead times substantially and decreased rework. The financials followed.

Company C (disruption attempts): A tech firm separated core business KPIs from its moonshot projects. They measured moonshots on learning velocity and strategic fit rather than immediate revenue. Some projects failed. Some succeeded spectacularly. The organisation learnt to allocate runway intelligently.

Two controversial opinions you will hear from me

  1. If you're obsessed with ROI in the first 12 months, you aren't serious about disruptive innovation. Short-term ROI is a poor guide for projects that reshape markets.

  2. Patents often carry more ego than economic value. In many sectors, speed to market and customer adoption beat legal protection every time.

Both statements irritate CFOs and legal teams. I don't apologise for that.

Data analytics and AI, useful but not omnipotent

Analytics and AI are powerful. They can reveal patterns from customer behaviour, optimise resource allocation, and forecast likely adoption trajectories. But they're not a substitute for human judgement. Algorithms can point to where to probe; they can't tell you why a human chose a product or how an organisational culture will respond.

Use analytics to sharpen your hypotheses and shorten the learning loop. But combine it with qualitative customer insight and leadership alignment.

Practical steps to implement better measurement tomorrow

You don't need a radical overhaul. Start small, iterate fast.

  1. Define the strategic purpose of innovation for your business. What problem are you solving? Growth, resilience, cost, or disruption?
  2. Select a balanced set of 5 to 7 measures across inputs, processes and outputs. Keep them visible and discussed.
  3. Create a rhythm: weekly team check-ins for processes, monthly reviews for outputs, and quarterly innovation audits.
  4. Train leaders to interpret metrics. Data without conversation is noise.
  5. Reward learning, not just success. Share stories of experiments: the good, the bad and the awkward.
  6. Use customer feedback as your north star: high NPS from a new feature beats a patent filed in isolation.

How we approach this in practice

At Wisdomize, we run workshops and board-level sessions that help teams build practical dashboards and run innovation audits. We usually start in Sydney or Melbourne, working with cross-functional teams to map their current measures, then reframe them into a balanced innovation scorecard. It's hands-on, not theoretical. And it works because it connects measurement to daily decision making.

One last, slightly contrarian point

Measurement frameworks evolve. Just as you wouldn't expect last year's product to dominate a market today, don't expect measurement systems to be permanent. They should be living, frequently challenged, and updated as your strategy and the market evolve.

Measure to learn. Measure to adapt. Measure to make better bets, not to create bureaucracy.

The challenge is less technical than cultural. Build metrics that nurture the right behaviour. Make them simple enough to be actionable and rich enough to be directional. And remember: innovation will always be a little unpredictable. That's the whole point.

Keep measuring. Keep asking awkward questions. Keep learning.