
Building An Agile Market Research Tool

For the past five years we have been building our app Knowledge Leaps, an agile market research tool. We use it to power our own business serving some of the most demanding clients on the planet.

To build an innovative market research tool I had to leave the industry. I spent 17 years working in market research and experienced an industry that struggled to innovate. There are many reasons innovation failed to flourish, one of which lies in the fact that it is a service industry. Service businesses are successful when they focus their human effort on revenue generation (as they should). Since people are the largest cost base in research, there is little economic incentive to invest for the long term, especially as the industry has come under economic pressure in recent years. The same could be said of many service businesses that have been disrupted by technology; taxi drivers are a good example of this effect.

This wouldn’t be the first time market research innovations have come from firms outside the traditional market research category. For example, SurveyMonkey was founded by a web developer with no prior market research experience, while Qualtrics was founded by a business school professor and his son, again with no prior market research industry experience.

Stepping outside of the industry and learning how other types of businesses manage data, use it, and extract information from it has been enlightening. It has also helped us build an abstracted solution: while we focus on market research use cases, we have built a platform that fosters analytics collaboration and an open-data philosophy, so finding new uses for it is a frequent occurrence.

In tech-speak, what we have done is productize a service. We have taken the parts of the market research process that happen frequently and are expensive and turned them into a product: a product that delivers the story in data without bias. It does it really quickly too. Visit the site or email us at support@knowledgeleaps.com to find out more.

Science Fiction and the No-Free-Lunch Theory

In a lot of science fiction films one, or more, of the following are true:

  1. Technology exists that allows you to travel through the universe at the “speed of light.”
  2. Technology exists that allows autonomous vehicles to navigate complicated 2-D and 3-D worlds.
  3. Technology exists that allows robots to communicate with humans in real time, detecting nuances in language.
  4. Handheld weapons have been developed that fire bursts of lethal high energy that require little to no charging.

Yet, despite these amazing technological advances, the kill ratio is very low. While it is fiction, I find it puzzling that this innovation inconsistency persists across so many films and stories.

This is the no-free-lunch theory in action: machines developed to be good at a specific task are not good at other tasks. This will have ramifications in many areas, especially those that require solving multiple challenges. Autonomous vehicles, for example, need to be good at four things:

  1. Navigating from point A to point B.
  2. Complying with road rules and regulations.
  3. Negotiating position and priority with other vehicles on the road.
  4. Not killing, or harming, humans and animals.

Of this list, 1) and 2) are low-level problems. 3) is challenging to solve because it requires some programmed personality. Imagine two cars running the same autonomous software meeting at a junction at the very same time: one of them needs to give way to the other. This requires some degree of assertiveness to be built in, and I am not sure it is trivial to solve.

Finally, 4) is probably the hardest to solve, since it requires a 99.99999% success rate in incidents that occur only once every million miles or so. There may never be enough training data.
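A back-of-envelope calculation shows why the training data may never be enough. The rates below are hypothetical assumptions for illustration, not measured figures:

```python
# Hypothetical back-of-envelope: how many miles of driving data are needed
# to observe enough rare incidents to demonstrate a 99.99999% success rate?
# (Both rates are assumptions for illustration.)

miles_per_incident = 1_000_000  # assume one safety-critical incident per million miles
target_failure_rate = 1e-7      # 99.99999% success = at most 1 failure in 10 million incidents

# To bound a failure probability near 1e-7 with any statistical confidence,
# you need on the order of 1 / 1e-7 = 10 million observed incidents.
incidents_needed = round(1 / target_failure_rate)
miles_needed = incidents_needed * miles_per_incident

print(f"{incidents_needed:,} incidents ≈ {miles_needed:,} miles of driving")
```

Ten trillion miles of driving, under these assumptions, just to witness enough rare events to validate the system.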

AI Developer, A Job For Life.

Last year we wrote about the No Free Lunch Theory (NFLT) and how it relates to AI (among other things). In this recent Wired article, that prediction seems to be coming true: Deep Learning, the technology that helped AI make significant leaps in performance, has limitations, and as the article reports, these limitations cannot necessarily be overcome with more compute power.

As NFLT states (paraphrased): if an algorithm is good at doing X, it cannot also be good at doing Not-X. Deep Learning models that succeed in one area are not guaranteed to succeed in other areas; in fact, the opposite tends to be true. This is the NFLT in action, and in many ways specialized instances of AI-based systems were an inevitable consequence.
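A toy sketch makes the trade-off concrete (this is purely illustrative, not a proof of the theorem): a simple classifier tuned to perform well on one task, applied unchanged to the inverted task, performs at chance or worse.

```python
# Toy illustration of no-free-lunch: a model specialized for task X
# degrades on task Not-X, where the label rule is inverted.

def train_threshold(data):
    """Pick the threshold that maximizes accuracy on the training data."""
    best_t, best_acc = None, -1.0
    for t in sorted(x for x, _ in data):
        acc = sum((x >= t) == label for x, label in data) / len(data)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

task_x = [(x, x >= 5) for x in range(10)]     # task X: label is "x >= 5"
task_not_x = [(x, x < 5) for x in range(10)]  # task Not-X: the inverted rule

t = train_threshold(task_x)
acc_x = sum((x >= t) == label for x, label in task_x) / len(task_x)
acc_not_x = sum((x >= t) == label for x, label in task_not_x) / len(task_not_x)
print(acc_x, acc_not_x)  # → 1.0 0.0 — perfect on X, wrong every time on Not-X
```

The specialization that makes the model perfect on X is exactly what makes it fail on Not-X.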

This has implications for the broader adoption of AI. For example, there can be no out-of-the-box AI “system”. Implementing an AI solution based on the current state of the art is much like building a railway system: it needs to adapt to the local terrain. A firm can’t take a system from another firm or AI-solutions provider and hope it will be a turn-key operation. I guess it’s in the name, “Deep Learning”: the “Deep” refers to deep domain, i.e. a specific use case, and not necessarily deep thinking.

This is great news if you are an AI developer or have experience in building AI-systems. You are the house builder of the modern age and your talents will always be in demand – unless someone automates AI-system implementation.

UPDATE: A16Z wrote this piece – which supports my thesis.

Building Persistent State Knowledge

The tools available to produce charts and visualize data are sadly lacking in a critical area. While much focus has been placed on producing interesting visualizations, one problem has yet to be solved: it is all too easy to separate the Data layer from the Presentation layer in a chart. It is easy for the context of a chart to be lost when it becomes separated from its source. When that happens we lose meaning and we potentially introduce bias and ambiguity.

In plain English, when you produce a chart in Excel or Google Sheets, the source data is in the same document. When you embed that chart in a PowerPoint or Google Slides deck, you lose some of the source information. When you convert that presentation into a PDF and email it to someone, you risk losing all connections to the source. Step by step, it becomes all too easy to remove context from a chart.

Yes, you can label the chart and cite your source, but neither is a foolproof method. These are like luggage tags: they work while attached, but they are all too easy to remove.

In analytics, reproducibility and transparency are critical to building a credible story. Where did the data come from? Could someone else remake the chart from these instructions (source, series information, filters applied, etc.)? Do the results stand up to objective scrutiny?

At Knowledge Leaps, we are building a system that ensures reproducibility and transparency by binding the context of the data and its “recipe” to the chart itself. This is built into the latest release of our application.

When charts are created, we bind them to their source data (the easy part) and we bind the “recipe”. We then make them easily searchable and discoverable, unhindered by any information silo, i.e. slide, presentation, folder, etc.
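The idea can be sketched as a data structure. This is a minimal illustration with hypothetical names, not Knowledge Leaps’ actual schema:

```python
# Minimal sketch of binding a chart to its source data and "recipe" so that
# context travels with the chart and stays searchable. All names here are
# hypothetical, for illustration only.

from dataclasses import dataclass, field

@dataclass
class Chart:
    title: str
    source_uri: str   # where the underlying data lives
    recipe: dict      # series, filters, and transforms used to build the chart
    tags: list = field(default_factory=list)

    def matches(self, query: str) -> bool:
        """Search across title, source, and recipe — not just the chart's label."""
        haystack = f"{self.title} {self.source_uri} {self.recipe} {self.tags}".lower()
        return query.lower() in haystack

charts = [
    Chart("Sales by region", "s3://surveys/2024/wave1.csv",
          {"series": "sales", "filter": "region != null"}, ["sales"]),
]

# A chart is discoverable by its data source, not only by where it was filed.
found = [c for c in charts if c.matches("wave1")]
```

Because the source and recipe are fields of the chart itself, they cannot be stripped away the way a caption or footnote can.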

The end benefit: data and charts can be shared without loss of the underlying source information, and people not actively involved in creating a chart can interpret and understand its content without any ambiguity.

Turning Analysis On Its Head.

Today we rolled out our new charting feature. This release marks an important milestone in the development of Knowledge Leaps (KL).

Our vision for the platform has always been to build a data analysis application that lets a firm harness the power of distributed computing and a distributed workforce.

Charts and data get siloed in organisations because they are buried in containers. Most charts are contained on a slide in a PowerPoint presentation that sits in a folder on a server somewhere in your company’s data center.

We have turned this on its head in our latest release. Charts produced in KL remain open and accessible to all users. We have also built in a collaborative interpretation feature, so a group of people spread across locations can interpret data as a team rather than alone. This shares the burden of work and builds more resilient insights, since people with different perspectives can build a best-in-class narrative.

Awareness Doesn’t Diminish Bias Effect

In an interview with Shane Parrish, Daniel Kahneman, the co-creator of behavioral economics, was asked whether he was better at making decisions after studying decision-making for the past 40 years. His answer was a flat no. He then elaborated, saying that biases are hard for an individual to overcome.

This dynamic is most evident in the investment community, especially among start-up investors. WeWork is a good case study in people ignoring their biases. An article in yesterday’s Wall Street Journal (paywall) describes WeWork’s external board and investors looking on as the firm missed projections year after year. In the run-up to the IPO, people were swayed by their biases, and despite data to the contrary, more gasoline was poured on the fire. It took public scrutiny for the real narrative to come out and for people to see their own biases at play. To be fair to those involved, the IPO process was used to deliver some unvarnished truths to WeWork’s C-suite. As Kahneman said, even professional analysts of decision-making get it wrong from time to time.

What hope do the rest of us have? With the right data, it is easier to at least be reminded of your biases, even if you choose to accept them. With our data and analytics platform, we have built two core components that give you and your team a greater chance of not falling into a bias trap.

Narrative Builder

This component uses an algorithm that outputs human-readable insight into the relationships in your data. Using correction techniques and cross-validation to avoid computational bias, you can identify the cold facts about the relationships (the building blocks of the narrative) in your data.

Collaborative Insight Generation

The second component we have built to help diminish bias is a collaboration feature. As you analyze data and produce charts, other members of your team can provide input and hypotheses for each chart. Allowing a second, third, or even fourth pair of eyes to interpret data helps build a resilient narrative.

Surfacing a bias-free narrative is only part of the journey, we still need to convince other humans, with their own biases, of the story discovered in the data. As we have learnt in recent years, straight facts aren’t sufficient conditions of belief. At least with a collaborative approach we can help overcome bias traps.

One Chart Leads To Another, Guaranteed.

We have just released the charting feature in Knowledge Leaps. The ethos behind the design is this: in our experience, if you are going to make one chart using a data set you are probably going to make many charts using the data.

Specifying lots of charts one by one is painful, especially as a data set will typically have many variables that you want to plot against one specific variable – date, for example. Our UI has been built with this in mind: specify multiple charts quickly and simply, then spend the time you save putting your brain to work figuring out what the data narrative is.

Charts tend to get buried deep in a silo – either as part of a workbook or a presentation. Finding them requires contextual knowledge: you need to know where the chart is to know what story it tells. This is suboptimal, so we fixed that too. The Knowledge Leaps platform keeps all your charts searchable and shareable, and that goes for your co-workers’ charts as well. This feature allows insight to be easily discovered and shared with a wider team – helping build persistent-state organizational intelligence, faster.

Market Research 3.0

In recent years, there has been lots of talk about incorporating Machine Learning and AI into market research. Back in 2015, I met someone at a firm who claimed to be able to scale up market research survey results from a sample of 1,000 to samples as large as 100,000 using ML and AI.

Unfortunately that firm, Philometrics, was founded by Aleksandr Kogan – the person who wrote the app for Cambridge Analytica that scraped Facebook data using quizzes. Since then, the MR world has moved pretty slowly. I have a few theories but I will save those for later posts.

Back on topic: Knowledge Leaps got a head start on this six years ago when we filed our patent for technology that automatically analyzes survey data to draw out the story. We don’t eliminate human input; we just make sure computers and humans are put to their best respective uses.

We have incorporated that technology into a web-based platform: www.knowledgeleaps.com. We still think we are a little early to market but there might be enough early adopters out there now around which we can build a business. 

As well as reinventing market research, we will also reinvent the market research business model. Rather than charge a service fee for analysis, we only charge a subscription for using the platform.

Obviously you still have to pay for interviews to gather the data, but you get the idea. Our new tech-enabled service will dramatically reduce the time-to-insight and the cost-of-insight in market research. If you want to be a part of this revolution, then please get in touch: Doug@knowledgeleaps.com.