Guest Blog: Teneo Insight Trigger Analysis

We’re delighted to publish this guest blog authored by Thomas Bacon, Virtual Adviser General Editor at The Co-operative Bank, home of Mia.


The idea behind a Virtual Adviser is very different from that of a search engine. A search engine gives you a range of results, while a Virtual Adviser should be intelligent enough to understand a user input and take the user straight to exactly the right result. At The Co-operative Bank, we offer a Virtual Adviser called Mia to support customer-facing staff, and our goal is for every input to go to the right place. To give you an idea of the scale of this challenge, Mia is used by Advisers over 5,700 times a day. Of course, the reality is that there will always be unusual user inputs, ones that we simply haven't thought of and so haven't programmed Mia to understand. Let me give you a few real examples:

  • One Adviser started using the term ‘e-Banking’ to refer to Internet Banking, something nobody in The Co-operative Bank has ever done before
  • Another Adviser started searching by codes rather than by words
  • Senior function leaders found that they weren't getting the results they wanted out of Mia.  Of course, the reason was that a business leader's terminology differs from that of an Adviser, and so they simply weren't our target audience.

In practice, of the 160,000+ user inputs in February 2014, Mia was unable to recognise approximately 2%. But how can we be sure Advisers are always getting to the right information? To continually improve the quality of the search, we've worked closely with Artificial Solutions to design what we call an Optimisation Cycle. Here's how it works:

  • All of the information in our Virtual Adviser is already sorted into categories, and each category is reviewed regularly. You can read about how we decide the review frequency in my article from last year, Knowledge is Power.
  • As part of each review, we use Teneo Insight to generate a list of the user inputs (known as 'Triggers') that have taken users into that category.
  • We then look through the Triggers, marking whether or not each one went to the right place. If it didn't, we adjust the search.
  • Of course, we also review the 2% where Mia had been unable to help.
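The review steps above can be sketched in code. Here's a minimal Python illustration of preparing one month's review; the data shape and field names are assumptions for the sketch, not Teneo Insight's actual export format:

```python
# Hypothetical sketch: from a log of user inputs, collect the distinct
# Triggers per category, keeping Mia's unrecognised inputs in their own
# bucket so the 2% can be reviewed too.
from collections import defaultdict

def triggers_by_category(log):
    """log: iterable of (user_input, category) pairs; category is None
    when the Virtual Adviser could not recognise the input."""
    buckets = defaultdict(set)
    for user_input, category in log:
        # Normalise case so 'e-Banking' and 'E-BANKING' count as one Trigger.
        buckets[category or "unrecognised"].add(user_input.lower())
    return buckets
```

Each bucket then contains the distinct Triggers to check during the category's review.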

Let's say a category has been accessed 6,446 times over the last month; when you look into it, though, you might find that users only used 362 different terms when chatting with the Virtual Adviser. So you check those 362 distinct Triggers, making sure each one went to the right place. This technique is known as Trigger Analysis. The great thing about Trigger Analysis is that you can then use simple calculations to measure how efficient your Virtual Adviser is. For example, here are the results from February 2014:
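Those 'simple calculations' boil down to the share of distinct Triggers that went to the right place. A minimal sketch, using illustrative numbers rather than the article's real data:

```python
# Hypothetical sketch of a Trigger Analysis quality score: the percentage
# of distinct Triggers that took the user to the right place.

def quality_score(triggers):
    """triggers: list of dicts like {"text": ..., "correct": bool}."""
    if not triggers:
        return 100.0
    correct = sum(1 for t in triggers if t["correct"])
    return round(100 * correct / len(triggers), 2)

# Illustrative data only: 100 distinct Triggers, 4 marked as wrong.
sample = [{"text": f"trigger {i}", "correct": i % 25 != 0} for i in range(100)]
print(quality_score(sample))  # 96.0
```

A category's score rises as the Editors correct the Triggers that went astray, which is exactly what the monthly figures below measure.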

[Image: Trigger Analysis results by category, February 2014]

You can see straight away that the results for Mobile Banking were lower than the others: 9.62% of Triggers had gone to the wrong place. In this case, just one of Mia's responses was incorrect in terms of the NLI. It still wasn't too bad: the Adviser was only ever one mouse-click away from the information they needed to see. But it just wasn't good enough, and so we changed it. Next time we review Mobile Banking, we'll expect the quality score to be higher, and the Editorial Team can even be set targets for continually improving the search.

Still, you can see why Mia's Editors are happy with the quality of the search: across February's categories, we measured Mia's performance at 98.64%, and the Editors were able to improve it to 99.97%.

In case you're wondering why we call this an 'Optimisation Cycle', it's because of what we build in Teneo Studio. Teneo Studio lets you store pre-determined questions, and the system then tests the quality of the Virtual Adviser itself. This is known as a 'Solution Test', and it's good to see Solution Tests above 95% (we target 99-100%). At the end of our Trigger Analysis, we paste all of those distinct Triggers into Teneo Studio, meaning that the Solution Test includes an ever-growing range of actual user inputs. It's a cycle of efficiency that means our users have seen, and commented on, a real increase in quality.
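A Solution Test can be thought of as a regression suite of stored questions and expected answers. Here's a hypothetical sketch; the real mechanism lives inside Teneo Studio, and ask() below is only a toy stand-in for the Virtual Adviser:

```python
# Hypothetical sketch of a Solution Test: replay stored questions and
# check each one still reaches its expected answer.

def solution_test(cases, ask):
    """cases: list of (question, expected_answer_id) pairs.
    Returns the pass rate as a percentage."""
    passed = sum(1 for question, expected in cases if ask(question) == expected)
    return round(100 * passed / len(cases), 2)

# Toy stand-in for the Virtual Adviser: route by keyword.
def ask(question):
    q = question.lower()
    if "mobile" in q:
        return "mobile-banking"
    if "internet" in q or "e-banking" in q:
        return "internet-banking"
    return None

cases = [
    ("How do I reset Mobile Banking?", "mobile-banking"),
    ("customer wants e-banking", "internet-banking"),
    ("register for internet banking", "internet-banking"),
]
print(solution_test(cases, ask))  # 100.0
```

Pasting each month's reviewed Triggers into the case list is what makes the test grow with real usage, which is the 'cycle' part of the Optimisation Cycle.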

The clear advantage of this kind of Optimisation Cycle is that your Virtual Adviser's performance is always improving. What's more, it gives the business potential goals and measures for the team responsible for maintaining your Virtual Adviser.
