Sentiment in Teneo Query Language
Conversations are dynamic and unpredictable. They often require careful analysis after the fact to determine exactly what happened. In a previous article we showed you how to use Teneo’s various indicators to monitor sentiment and other aspects of user inputs during a conversation. In this article we focus on how to use Teneo Query Language (TQL) to identify situations that took place in a conversation after it has completed. For example, if a user becomes upset, we want to look back at earlier interactions to discover why. We may also want to look forward in the conversation to see whether the user’s sentiment improved.
Sentiment evolves and shifts in many ways during a conversation. That is why we need a tool like TQL to make sense of how users react to the bot’s responses. TQL allows you to query every single event that occurred in a conversation, for example:
- which flows were triggered
- which variables were set
- which listeners became active
- which transitions were traversed
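To give a flavour of what such queries look like, here is a minimal sketch that lists user inputs per session. The `la` (list all) command and the session/transaction/event property paths (`s.id`, `t.e.userInput`, `t.e.type`) follow TQL’s data model, but the exact paths should be verified against the TQL documentation for your Teneo version:

```
la s.id, t.e.userInput : t.e.type == "request"
```

This returns one row per user input event, paired with the id of the session it occurred in.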
But more than that, TQL allows you to apply existing lexical resources to annotate sessions with additional information, such as the type of sentiment. Available libraries include:
- Teneo Lexical Resources
- Teneo Sentiment & Intensity Library
- Teneo Offensive Language Detector
as well as any additional project-specific entities and language objects that you have built. Language resources improve over time, so a clear advantage of this approach is that past conversations can be re-annotated with the latest resources.
The first step in the Teneo Query Language process is to annotate or adorn the sessions and transactions with information relevant to the queries we want to make. To do this we:
- publish a solution containing the lexical resources needed for the queries.
- associate the published solution with the log data.
The next step is to adorn the session logs using the resources we’ve set up. We do this via a web interface that allows us to define and run adorners against log data.
Our adorner tags transactions in which the user input contains negative sentiment. Later, when we run a query, we only need to look for `t.a.b:negative==true`.
The language object `SENTIMENT_NEGATIVE.INDICATOR` contains all the negative sentiment rules that we have defined. As those definitions are refined over time, we can rerun the adorners to ensure up-to-date results.
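Conceptually, the adorner pairs a boolean adornment name with a matching condition on the transaction. The notation below is an illustrative sketch of that pairing, not the exact adorner definition syntax, which you should take from the Teneo adorner documentation:

```
adornment: negative    (boolean, set per transaction)
condition: the user input matches SENTIMENT_NEGATIVE.INDICATOR
```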
Now that our adorners are in place, the real fun can begin with Teneo Query Language. Here are just a few ideas of what we can look for, now that the sentiment annotations have been created:
- find sessions with negative sentiment
- examine counts of negative/positive sentiment by session
- list trending words in negative-sentiment inputs
- locate sessions ending with a negative sentiment vs. positive sentiment
- identify which URLs are associated with negative sentiment
- identify which flows preceded a negative sentiment
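As a concrete starting point, the first idea, finding sessions with negative sentiment, can be sketched using the `t.a.b:negative` notation shown earlier. The commands `lu` (list unique) and `ca` (count all) are standard TQL, but confirm the exact property paths against the TQL documentation:

```
lu s.id : t.a.b:negative == true
ca t.id : t.a.b:negative == true
```

The first query lists the ids of sessions containing at least one transaction flagged as negative; the second counts how many transactions were flagged in total.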
Let’s look more closely at the last of these ideas: the flows preceding an incident of negative sentiment. We can use the occurrence of a negative sentiment to step back and see what might have caused it.
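Relating a flagged transaction to the transaction before it requires constraints that span multiple transactions within the same session. The exact TQL syntax for such cross-transaction queries is beyond this sketch, so the logic is shown here as pseudocode:

```
for each session:
    for each transaction T where t.a.b:negative == true:
        report the flow triggered in the transaction preceding T
```

Aggregating those reported flows across all sessions gives a ranked list of the most common predecessors of negative sentiment.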
A snapshot of the common predecessors of negative sentiment in our data set suggests that users enter the conversation either to fix a problem with one of their devices or to obtain information about it. Somewhere along that path, something may have gone wrong: perhaps the information wasn’t helpful, or a link was broken. The next step is to zero in on a particular flow combination and a sample of sessions to see exactly what users were unsatisfied about.
Based on the results of our first query, let’s say we want to zoom in on the pair “Get Device Model” -> “Device Troubleshooting”. We run a further query to identify some of the sessions in which the conversation took this path. This in turn locates a conversation in which the input flagged as negative was “My question is not covered in the trouble shooting guide”.
Using the session ID, we can then call up that particular session to see what the user was asking.
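Retrieving the conversation can again be sketched as a TQL query. The session id below is a placeholder, and `t.e.userInput` is assumed from TQL’s event model:

```
la t.e.userInput : s.id == "<session id>"
```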
Here we discover that the product documentation has specific gaps. We can take action to correct this, either by improving the documentation or by extending the bot’s repertoire to better handle the question. Each improvement we make solves the problem for all subsequent users, potentially many thousands, who will then not need to call the service hotline or submit an email query. In other words, less than an hour’s work can save hundreds of hours of human support time.