Quality Assurance Testing of your Conversational AI Solution in Teneo

Testing your conversational AI solution is an important part of quality assurance and of making sure your bot works as intended. In this tutorial, you will learn how to assure the quality of the solution you have built in Teneo Studio.

Auto-testing in Teneo allows you to perform quality assurance tasks during the development and maintenance of a Teneo solution.

Auto-test checks the example questions associated with flow triggers and transitions: it verifies that positive examples fire the trigger or transition and that negative examples do not. For Auto-test to work, you must have added positive examples to the triggers and transitions you want to test. That is one reason it is good practice to add example inputs to all triggers and transitions.

Running Auto-test

Auto-test can check triggers, transitions, and URLs to make sure they work as intended. All three are included by default. Disabling one of these options will speed up the Auto-test process, as fewer items are tested.

You can run an Auto-test at three different levels: Flow level, Folder level, and Solution level.

There are two different ways of running an Auto-test, and both apply at the solution, folder, and flow level:

Global scope
When selecting Run Test, the triggers or transitions in that scope (flow, folder, or solution) are tested in two ways:

  • Do the examples match (for positive ones) or not match (for negative ones) the condition of the trigger they belong to?
  • Does the trigger fire for the examples that match the condition, or is the example’s input “stolen” by another trigger with a higher ranking in the intent trigger ordering?

Local scope
Run Test Using Flow Scope only tests that the trigger or transition's condition matches its positive examples and does not match its negative examples. This test ignores the existence of any other triggers and transitions.
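The difference between the two scopes can be sketched in Python. This is a conceptual illustration only, not Teneo's implementation: conditions are modeled as plain predicates, and all names (triggers, helper functions) are invented for the example.

```python
# Conceptual sketch of Auto-test's two scopes (not Teneo's implementation).
# Trigger conditions are modeled as Python predicates; names are illustrative.

def local_scope_test(trigger, example, positive=True):
    """Local scope: only check the example against the trigger's own condition."""
    return trigger["condition"](example) == positive

def global_scope_test(ordered_triggers, trigger, example):
    """Global scope: the example must match the trigger's condition AND no
    higher-ordered trigger may 'steal' the input first."""
    if not trigger["condition"](example):
        return "condition problem"
    for other in ordered_triggers:
        if other is trigger:
            return "passed"
        if other["condition"](example):
            return "stolen by " + other["name"]
    return "passed"

store = {"name": "Store in City", "condition": lambda t: "store" in t}
safetynet = {"name": "Safetynet", "condition": lambda t: True}

# Safetynet is (wrongly) ordered above the store trigger, so it steals the input.
ordering = [safetynet, store]
print(local_scope_test(store, "do you have a store in london"))            # True
print(global_scope_test(ordering, store, "do you have a store in london"))  # stolen by Safetynet
```

Note how the same example passes in local scope but fails in global scope: the condition itself is fine, but a higher-ordered trigger intercepts the input.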

Auto-test does not test any of the following items:

  • NLU scripts
  • Global scripted contexts
  • Complete dialogs
  • Values of variables

You can exclude specific triggers and transitions from a test by:

  1. Open the flow in edit mode.
  2. Make sure the Condition panel is open.
  3. At the bottom of the Condition panel, you will find the 'Included in tests' button. Click it, and that trigger or transition will be excluded from tests.


To perform an Auto-test on a single flow, open the flow you want to test. Then, in the upper left corner, click the Flow button and select Auto-test in the panel to the left (see the image in Running Auto-test).

After you have set up the flow as you want, it is good practice to run an Auto-test at flow level to check whether all example inputs match the coded conditions and fire the correct trigger and transition.


To test 'chunks' of your solution, you can run a test on a specific flow folder. Right-click the folder in the Solution Explorer and select Test. If the selected folder has sub-folders, they are included in the test.


To test all the triggers, transitions, and URLs that have been set up in the solution, go to the Solution tab and click the Auto-test tab. As with flow- and folder-level tests, you can choose what to test (triggers, transitions, URLs, or all); all are included by default.

Solution testing is mostly used for regression testing after major updates or right before publishing the solution to quality assurance or production environments. 

Interpreting the results

The test results panel shows the results for the items you selected (triggers, transitions, URLs) at the level you ran the test on (solution, folder, or flow). If you selected the "Run Test" option (instead of "Run Test Using Flow Scope"), you will also see whether any tested positive inputs were stolen by higher-ordered triggers, and if so, by which ones. Ordering refers to the order of triggers with similar or overlapping conditions that may conflict with each other.

By clicking the 'Get Report' button, you can view the test results in XLS format. You can also view older results by clicking the 'History' button, selecting the test result you want to view, and clicking Open. Older test results can be exported as well.

Results window

In the results window, you will find the results of the selected Auto-test run; the most recent test is selected automatically. Here, you can see which flow and trigger failed the test, and which folder it is in. You can also filter the test results by:

  • Passed test results
  • Passed (with warning) test results
  • Failed test results
  • Non-testable items

Besides filtering on items, you can also filter by text on flow name, example input, or message.

Action panel

The action panel displays more information about the selected test result. For example, if an input was stolen or blocked by a higher-ordered trigger, the action panel shows which trigger actually fired and which one should have fired. The action panel also provides suggestions for solving the selected test result. Each failed test and each test passed with a warning has its own suggestion on how to solve the problem, which you can view by clicking the 'More Information' button.

The action panel adapts the information it displays depending on what you have selected.

Failed tests

When a failed test (or one with a warning) is selected in the test result window, the Action panel on the right displays further information. The most common reasons for failures are:

  • Class problem (The example’s input does not fire the class trigger)
  • Ordering problem (The example’s input is stolen by a trigger with a higher order)
  • Condition problem (The trigger or transition condition does not match the example)

Class problem

When the example input does not match the class trigger, the test result will say "The example was not matched". This can happen when other classes contain training data that is too similar, or when the class trigger uses a context restriction.

Ordering problem

When an example is stolen by another, higher-ordered trigger, the test result shows a failure and mentions the trigger that fired.

In the image above, the input 'Do you have a store in London?' triggered the flow 'Safetynet', but the example belongs to the trigger 'User want to know if we have a store in City'. To solve this, you would move the flow 'Safetynet' to a lower order group.

Condition problem

If an example is not covered by a syntax trigger condition, the test result will say “The example did not match this trigger”. 

In the image above, a positive example was added to the trigger 'Partial understanding: coffee'. Since the condition is not designed to match 'doppio', we can open the 'Partial understanding: coffee' flow and expand the condition to cover the positive example.
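Conceptually, expanding a condition to cover a new positive example looks like this. In the sketch below, a Python regular expression stands in for the trigger's syntax condition; the word list and trigger name are illustrative, not taken from a real solution.

```python
import re

# A regex standing in for the syntax condition of the hypothetical
# 'Partial understanding: coffee' trigger. The original condition does
# not cover the new positive example 'doppio', so the test fails.
old_condition = re.compile(r"\b(espresso|latte|cappuccino)\b", re.IGNORECASE)
print(bool(old_condition.search("doppio")))  # False: condition problem

# Expanding the condition to include the new positive example makes it pass.
new_condition = re.compile(r"\b(espresso|latte|cappuccino|doppio)\b", re.IGNORECASE)
print(bool(new_condition.search("doppio")))  # True
```

The fix is the same either way: widen the condition so that every positive example attached to the trigger is matched by it.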

Another way of finding similar problems, such as forgetting to update a positive example in a trigger, is to use the Suggestion panel, which can be found in the Optimization tab in the backstage of your solution.

Test passed with a warning

When a test passes with a warning, it means that although the example matched the syntax trigger, it also matched a different syntax trigger in the same order group. It is recommended to create an order relation between any syntax triggers that conflict.
