Top Challenges in Computational Linguistics

The field of computational linguistics has a long list of unsolved problems that have challenged linguists around the world for decades. Disciplines like Natural Language Processing (NLP) and Machine Translation (MT), as well as speech technologies like Automatic Speech Recognition (ASR) and Text-To-Speech (TTS), despite notable successes in recent years, still lack the naturalness that is inherent to human communication.

Among the many current challenges still to be solved by computational linguists, a few are worth singling out, since it is widely acknowledged that their resolution will play a major role in future advances within the field:

  • Input level:
    Difficulties in speech recognition systems are caused mainly by variation in accents; spontaneous speech; differences in articulation, volume, and speed; acoustic conditions; and many other factors.
  • Understanding level:
    Morphological and syntactic phenomena such as ellipsis and anaphora resolution present challenges at the NLU level and remain active areas of research.
    Word-Sense Disambiguation (WSD), which aims to select a single meaning for an ambiguous word, is one of the most popular open issues in linguistics, since it has a huge impact on the accuracy of search engines.
  • Dialog level:
    Context disambiguation, social intelligence, and the interpretation of spontaneous gestures are some of the current gaps, to varying extents, at the discourse level.
  • Output level:
    Beyond the achievements of speech synthesis, challenges remain, such as conferring human-like capabilities on embodied agents (in terms of appearance, non-verbal communication, etc.).
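To make the WSD challenge above concrete, here is a toy sketch of the classic Lesk overlap heuristic: pick the sense whose dictionary gloss shares the most words with the surrounding context. The two-sense inventory for "bank" and its glosses are invented for illustration; real systems use full lexical resources such as WordNet.

```python
# Toy Lesk-style Word-Sense Disambiguation.
# The sense inventory below is a hypothetical example, not a real lexicon.
SENSES = {
    "bank/finance": "an institution that accepts deposits and lends money",
    "bank/river": "the sloping land alongside a river or stream",
}

STOPWORDS = {"a", "an", "the", "of", "or", "and", "that", "to"}


def lesk(context: str) -> str:
    """Return the sense whose gloss overlaps most with the context words."""
    ctx = {w for w in context.lower().split() if w not in STOPWORDS}

    def overlap(sense: str) -> int:
        gloss = {w for w in SENSES[sense].split() if w not in STOPWORDS}
        return len(ctx & gloss)

    return max(SENSES, key=overlap)


print(lesk("she sat on the bank of the river watching the stream"))
# → bank/river
print(lesk("the bank approved the loan and the deposits grew"))
# → bank/finance
```

Even this crude overlap count picks the right sense in both sentences, which hints at why context is so central to disambiguation, and why sparse or misleading context makes WSD genuinely hard.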

Virtual Assistants (VAs) bring together many different sub-fields of computational linguistics, which is why they still fall short of completeness. Weak points in the disciplines above are most visible in a VA, which can be regarded as one of the most complex front-ends of applied computational linguistics.

Purely linguistic issues holding back the success of all kinds of NLP-based applications range from pronunciation and accent variation in speech recognition to context disambiguation and anaphora resolution at the discourse level.

When brought together, the different components of a VA (and/or dialog system) form such an interdependent Natural Language Interaction (NLI) system that a minor error in one component can result in dreadful behavior at the natural language understanding and dialog management ends. Imagine your reaction to statements as disparate as the intended “that’s speech recognition” and the recognized “that’s peach wreck in kitchen”. What will an NLU module be able to get out of the latter? And what will a dialog manager do with the parsed result?
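The “peach wreck in kitchen” example can be made measurable. The sketch below, using only Python’s standard library, shows why such errors are so treacherous: at the character level the two transcriptions are quite similar (they sound alike), yet at the word level the content words no longer overlap at all, and the word level is what a downstream NLU module actually sees.

```python
from difflib import SequenceMatcher

intended = "that's speech recognition"
recognized = "that's peach wreck in kitchen"

# Character-level similarity: the strings look (and sound) fairly alike.
ratio = SequenceMatcher(None, intended, recognized).ratio()
print(f"surface similarity: {ratio:.2f}")

# Word-level overlap: only the function word "that's" survives, so the
# semantic content passed on to NLU is essentially destroyed.
shared = set(intended.split()) & set(recognized.split())
print(f"shared words: {shared}")
```

A small acoustic slip thus produces an input that is superficially close but semantically unrelated, which is exactly how one component’s minor error cascades through the rest of the NLI pipeline.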

Teneo Interaction Engine benefits from extensive experience both in the field of Research & Development and in the world of professional services. It is a powerful symbiosis of analysis and pragmatism. With tools like Teneo Studio at our service, Knowledge Engineers (as well as customers themselves) can now build robust, cross-platform applications and address some of these difficulties with ease. A profound familiarity with these widely recognized linguistic problems, from both developers’ and linguists’ perspectives, makes Teneo’s array of linguistic tools and resources highly promising and functional.

I want to believe that we are ready to take up the challenge. What’s more, customers are now ready to take up the challenge with us too… Do YOU want to take up the challenge?


