Human Analysts Guarantee Bias

During an interview between Shane Parrish and Daniel Kahneman, one of the many interesting comments concerned how to make better decisions. Kahneman said that despite studying decision-making for many years, he was still prone to his own biases. Knowing about your biases doesn’t make them any easier to overcome.

His recommendation for avoiding bias in your decision making is to devolve as many decisions as you can to an algorithm. Translated to analytical and statistical work, his point is that no matter how hard we try, we approach analysis with biases that are hard to overcome. Sometimes our personal biases are amplified by external incentives. Whether you are evaluating your boss’s latest pet idea or writing a research report for a paying client, delivering the wrong message can be costly, even when it is the right thing to do.

Knowledge Leaps has an answer. We have built two useful tools to overcome human bias in analysis. The first is a narrative builder that can be applied to any dataset to identify the objective narrative captured in the data. Our toolkit can surface a narrative without introducing human biases.

The second tool we built removes bias by using many pairs of eyes to analyze data and average out any individual analytical bias. Instead of a single, bias-prone analyst looking at a data set, our tool lets many people examine it simultaneously and share their individual interpretations of the data. Across many analysts, bias is averaged out through collaboration and transparency.

Get in touch to learn more.

Awareness Doesn’t Diminish Bias Effect

In an interview with Shane Parrish, Daniel Kahneman, one of the founders of behavioral economics, was asked whether he had become better at making decisions after studying decision-making for the past 40 years. His answer was a flat no. He then elaborated, saying that biases are hard for an individual to overcome. This dynamic is most evident in the investment community, especially among start-up investors, and WeWork is a good case study in people ignoring their biases. An article in yesterday’s Wall Street Journal (paywall) describes WeWork’s external board and investors looking on as the firm missed projections year after year. In the run-up to the IPO, people were swayed by their biases, and despite data to the contrary, more gasoline was poured on the fire. It took public scrutiny for the real narrative to come out and for people to see their own biases at play. To be fair to those involved, the IPO process was used to deliver some unvarnished truths to WeWork’s C-suite. As Kahneman said, even professional students of decision-making get it wrong from time to time.

What hope do the rest of us have? With the right data it is easier at least to be reminded of your biases, even if you choose to accept them. Into our data and analytics platform we have built two core components that give you and your team a better chance of avoiding the bias trap.

Narrative Builder

This component uses an algorithm that outputs human-readable insight into the relationships in your data. Using correction techniques and cross-validation to avoid computational bias, it identifies the cold facts about the relationships (the building blocks of the narrative) in your data.
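The cross-validation idea can be sketched in a few lines. This is an illustrative sketch, not the platform’s actual code: a model is scored only on data it was not fitted to, so a spurious relationship that fails to generalize scores poorly.

```python
import random

def k_fold_splits(n, k=5, seed=0):
    """Shuffle indices 0..n-1 and deal them into k disjoint test folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_val_score(xs, ys, fit, score, k=5):
    """Average out-of-fold score of a model fitted k times."""
    folds = k_fold_splits(len(xs), k)
    scores = []
    for fold in folds:
        test = set(fold)
        train_x = [x for j, x in enumerate(xs) if j not in test]
        train_y = [y for j, y in enumerate(ys) if j not in test]
        model = fit(train_x, train_y)
        scores.append(score(model, [xs[j] for j in fold], [ys[j] for j in fold]))
    return sum(scores) / k

# Toy "model": predict the training mean; score by mean absolute error.
fit_mean = lambda xs, ys: sum(ys) / len(ys)
mae = lambda m, xs, ys: sum(abs(y - m) for y in ys) / len(ys)
```

A pattern that appears in only one fold’s training data will not survive this scoring, which is the sense in which cross-validation guards against computational bias.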

Collaborative Insight Generation

The second component we have built to help diminish bias is a collaboration feature. As you analyze data and produce charts, other members of your team can provide input and hypotheses for each chart. Allowing a second, third, or even fourth pair of eyes to interpret the data helps build a resilient narrative.

Surfacing a bias-free narrative is only part of the journey; we still need to convince other humans, with their own biases, of the story discovered in the data. As we have learned in recent years, straight facts are not sufficient conditions for belief. At least with a collaborative approach we can help each other overcome bias traps.

Market Research 3.0

In recent years, there has been lots of talk about incorporating Machine Learning and AI into market research. Back in 2015, I met someone at a firm who claimed to be able to scale up market research survey results from a sample of 1,000 to samples as large as 100,000 using ML and AI.

Unfortunately that firm, Philometrics, was founded by Aleksandr Kogan – the person who wrote the app for Cambridge Analytica that scraped Facebook data using quizzes. Since then, the MR world has moved pretty slowly. I have a few theories but I will save those for later posts.

Back on topic, Knowledge Leaps got a head start on this six years ago when we filed our patent for technology that automatically analyzes survey data to draw out the story. We don’t eliminate human input, we just make sure computers and humans are put to their best respective uses.

We have incorporated that technology into a web-based platform. We still think we are a little early to market, but there may now be enough early adopters out there around which we can build a business.

As well as reinventing market research, we will also reinvent the market research business model. Rather than charge a service fee for analysis, we only charge a subscription for using the platform.

Obviously you still have to pay for interviews to gather the data, but you get the idea. Our new tech-enabled service will dramatically reduce the time-to-insight and the cost-of-insight in market research. If you want to be a part of this revolution, then please get in touch.

Patented Technology

The patent that has just been awarded to Knowledge Leaps is for our continuous learning technology. Whether it is survey data, purchase data, or website traffic and usage data, the technology we have developed will automatically search these complex data spaces. These spaces include the price-demand space for packaged goods, the attitudinal space of market research surveys, and other data sets with complex interactions. In each case, as more data is gathered (more people shopping, more people completing a survey, more people using an app or website) the application updates its predictions and builds a better understanding of the space.

In the price-demand use case for packaged goods, the updated predictions alter the recommendations that are made about price changes. This feedback loop allows the application to update its beliefs about how shoppers are reacting to prices and to make improved recommendations based on that knowledge.
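As a toy illustration of such a feedback loop (the class name, the exponentially weighted update rule, and the revenue objective below are assumptions for the sketch, not the patented method):

```python
class DemandModel:
    """Toy online estimate of demand at a set of candidate price points."""

    def __init__(self, prices, alpha=0.2):
        self.alpha = alpha                      # weight given to new observations
        self.demand = {p: 0.0 for p in prices}  # estimated units sold at each price
        self.seen = {p: False for p in prices}

    def observe(self, price, units_sold):
        """Update the demand estimate at this price with a new observation."""
        if not self.seen[price]:
            self.demand[price] = float(units_sold)
            self.seen[price] = True
        else:
            # exponentially weighted moving average: old beliefs decay as data arrives
            self.demand[price] += self.alpha * (units_sold - self.demand[price])

    def recommend(self):
        """Recommend the price with the highest estimated revenue."""
        return max(self.demand, key=lambda p: p * self.demand[p])
```

Each new batch of sales data nudges the demand estimates, which in turn can change the recommended price: the feedback loop described above, in miniature.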

In the survey data use case, the technology creates an alert when the data set becomes self-predicting. At that point, capturing further data adds expense without adding understanding of the data set.
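One simple way to detect that point (a hypothetical criterion for illustration, not the patented test) is to track held-out predictive accuracy as the sample grows and raise the alert when the learning curve plateaus:

```python
def is_self_predicting(holdout_scores, window=3, tol=0.005):
    """Alert once the last `window` held-out accuracy scores vary by
    less than `tol`: additional interviews are no longer improving the model."""
    if len(holdout_scores) < window:
        return False
    recent = holdout_scores[-window:]
    return max(recent) - min(recent) < tol
```

Each entry would be the model’s accuracy on held-out respondents after another batch of interviews; once the scores stop moving, fieldwork can stop.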

The majority of statistical tools enable analysts to identify the relationships in data. In human hands, this is a brute-force approach, prone to human biases and time constraints. The Knowledge Leaps technology allows for a more systematic, parallelized approach, avoiding human bias and reducing human effort.

The Power of Non-Linear Analysis

A lot of what an analyst does is linear analysis. These analyses are guaranteed to produce human-readable stories, even if they aren’t insightful.

The world in which we live is not linear. In the book 17 Equations That Changed the World, the author, Ian Stewart, selects only three equations that are linear; the rest are non-linear.

This shows the limitations of linear analysis in explaining the world around us. Much of what we experience in life is non-linear, from the flight of a ball through the air (parabolic) to the growth of your savings (exponential).

What’s true of the physical world is true of the human brain too. One example is the way our brains use non-linear relationships to evaluate choices. One of the foundational tenets of behavioral economics is Kahneman and Tversky’s Prospect Theory and its notion of loss aversion.

Loss aversion describes the non-linear relationship between the value we attach to gaining an item and the value we attach to losing that same item. We would rather avoid losing something than find the same thing.

Whether we are conscious of it or not, our brains use this every day when we evaluate choices: protecting what we own drives behavior more strongly than gaining new things. It is one reason the insurance market is so large.
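Kahneman and Tversky’s value function makes this asymmetry concrete. A minimal sketch, using the median parameter estimates reported in Tversky and Kahneman (1992):

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function: concave for gains,
    convex and steeper for losses (lam > 1 encodes loss aversion)."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta
```

With these parameters, a $100 loss feels about 2.25 times as intense as a $100 gain: `abs(prospect_value(-100))` far exceeds `prospect_value(100)`.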

A good analyst will overlay this non-linear understanding of the world when interpreting findings. It would be even more useful if analytics software could produce human-readable non-linear analytics; non-linearity is what makes Support Vector Machines so powerful, yet so indecipherable.