Do chatbots teach themselves?

The last few years have seen a boom in chatbots and natural language automation tools. Many large organizations have adopted chatbots to improve customer satisfaction and employee productivity. As these tools gain attention, confusion arises about how they work and what they can actually do. One of the most common questions we are asked is, “Can your bot teach itself?”

Before we answer the question and learn more about how chatbots learn, it’s important to first understand the terminology behind chatbot “intelligence”.

AI, Machine Learning, and Natural Language Processing

Artificial Intelligence (AI) is a blanket term for any software or program that does something “smart”. Machine learning (ML) is a subset of AI that uses data and repeated experience to improve at a task over time. Natural Language Processing (NLP) is machine learning applied to human language: it uses models to reveal the structure and meaning of text.

These three topics are like nesting dolls — all machine learning can be considered a form of AI, but not all AI counts as machine learning. Similarly, all natural language processing can be considered machine learning, but not all machine learning is natural language processing.

Listen intelligently

Most chatbots work by following a set of “conversation trees”. A bot is typically trained to select the most appropriate response to a user’s query, evaluate the user’s reply, and use that information to choose the next step in the tree.

Q: What time does your store open?
A: 8am to 5pm
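
To make this more concrete, here is a minimal Python sketch of how a single-turn lookup like this might work. The trigger phrases and answers are invented for illustration; a production bot would use a much richer matching layer.

# A minimal single-turn lookup bot. The triggers and answers below are
# hypothetical examples, not data from any real product.
RESPONSES = {
    "store open": "We're open 8am to 5pm.",
    "store close": "We close at 5pm.",
}

def reply(user_message: str) -> str:
    text = user_message.lower()
    for trigger, answer in RESPONSES.items():
        if trigger in text:
            return answer
    return "Sorry, I didn't understand that."

print(reply("What time does your store open?"))  # -> We're open 8am to 5pm.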

These types of responses are easy to integrate with third party services, as well.

Q: Where is my order?
A: Your order has shipped and should arrive on Thursday.

In more realistic cases, these conversation trees grow and branch to accommodate new, more complex use cases. The example above could more realistically look like:

User: Where is my order?
Bot: Please enter your order number so I can look that up.
User: 123456
Bot: Your order shipped on Wednesday and should arrive on Thursday.
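
One way to picture this flow is as a small state machine, sketched below in Python. The lookup_order function here is a hypothetical stand-in for a call to a real order or shipping service, which is also how a third-party integration would slot in.

# A sketch of the branching order-status flow as a tiny state machine.
# lookup_order is a hypothetical placeholder for a third-party API call.
def lookup_order(order_number: str) -> str:
    return "Your order shipped on Wednesday and should arrive on Thursday."

def handle(state: str, message: str):
    """Return (reply, next_state) for one turn of the conversation."""
    if state == "start":
        if "order" in message.lower():
            return "Please enter your order number so I can look that up.", "awaiting_number"
        return "Sorry, I didn't understand that.", "start"
    if state == "awaiting_number":
        return lookup_order(message.strip()), "start"

state = "start"
for msg in ["Where is my order?", "123456"]:
    answer, state = handle(state, msg)
    print("Bot:", answer)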

In these typical examples, the bots are trained to understand a number of trigger phrases. In addition to “Where is my order” above, the order locator service might also be activated by phrases like “What’s my order status” or “Has my order shipped”. At first, these phrases (known as seed phrases) are typically written by hand by developers; over time, the bot can leverage NLP to accommodate slight differences in phrasing, misspellings, and so on.
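
Here is a rough sketch of that kind of matching, using the standard library’s SequenceMatcher as a crude stand-in for the NLP models a production bot would use. The intent name, seed phrases, and the 0.75 threshold are all assumptions made for illustration.

# Seed phrases written by developers; fuzzy matching tolerates typos and
# slight rephrasings. SequenceMatcher is a simple stand-in for real NLP.
from difflib import SequenceMatcher

SEED_PHRASES = {
    "order_status": [
        "where is my order",
        "what's my order status",
        "has my order shipped",
    ],
}

def match_intent(message: str, threshold: float = 0.75):
    best_intent, best_score = None, 0.0
    for intent, phrases in SEED_PHRASES.items():
        for phrase in phrases:
            score = SequenceMatcher(None, message.lower(), phrase).ratio()
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent if best_score >= threshold else None

print(match_intent("Wher is my order"))  # typo still matches -> order_status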

The bots are not teaching themselves to respond in new ways (which can be dangerous!); rather, they are leveraging human-provided examples to better understand the range of ways users phrase questions about a topic.

Respond simply

These simpler “goal-oriented” bots allow developers to control the conversation and provide a carefully planned experience for users. We can decide the tone of the bot and design responses with our precise goals in mind.

To see why it is important to have humans curating bot responses for these applications, imagine a bot that was teaching itself based on observing user behavior. A conversation like the order status one above may run into a problem that looks like this:

User: Where is my order?
Bot: You should be more patient. It will get there when it gets there.

This is a silly example, but you can see the potential for problems when bots are allowed to respond based on their machine learning models without any human-driven checks in place. We don’t want bots to respond in ways that could be offensive, inappropriate, or off-topic. In a fully automatic model, that’s a very real risk.
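
One simple human-driven check is to only ever send responses that a person has approved. The sketch below illustrates the idea; the approved responses and fallback message are hypothetical.

# A human-driven check: the bot may only send curated responses, and
# anything outside that set falls back to a safe default.
APPROVED_RESPONSES = {
    "Please enter your order number so I can look that up.",
    "Your order shipped on Wednesday and should arrive on Thursday.",
}

SAFE_FALLBACK = "Let me connect you with a human agent."

def send(candidate: str) -> str:
    return candidate if candidate in APPROVED_RESPONSES else SAFE_FALLBACK

print(send("You should be more patient."))  # -> Let me connect you with a human agent.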

In a real-world example, malicious users trained Microsoft’s Tay bot to respond with hate speech and extremist content. True self-learning can be very risky.

Blending human and machine

In most cases, the goal-oriented model described above works well. In some more advanced situations, a hybrid model that allows the bot some freedom of expression can be useful.

In these blended models, an initial training set is provided that allows the bot to begin basic conversations with users. This type of language is akin to small talk, and the bot may initially be able to answer questions like “What is your name?” or “Who built you?”. With some cloud-based services, you can leverage existing curated models for these types of conversations, built from millions of historical interactions.

While an AI-driven model will always be able to provide a response, it may not be helpful or appropriate.

In a blended model, the bot helps identify areas where more conversation pathways may be appropriate. It can also flag queries where it has low confidence that the response it gave was correct. This data can then be evaluated by developers and used to train and enhance the bot.
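
That feedback loop might look something like the sketch below, where replies with a hypothetical confidence score under some threshold are queued for developer review. The 0.6 cutoff and the logged fields are assumptions for illustration.

# Low-confidence replies are logged so developers can review them and add
# curated conversation pathways. The threshold is an illustrative guess.
CONFIDENCE_THRESHOLD = 0.6
review_queue = []

def respond(message: str, candidate: str, confidence: float) -> str:
    if confidence < CONFIDENCE_THRESHOLD:
        review_queue.append(
            {"user": message, "bot": candidate, "confidence": confidence}
        )
        return "I'm not sure I understood. Could you rephrase that?"
    return candidate

print(respond("Where's my stuff at?", "Your order is on the way.", 0.42))
print(review_queue)  # exchanges awaiting human review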

So, going back to our question “Can your bot teach itself?”, the answer is yes — but you probably don’t want it to. In most cases, a truly self-learning bot is not appropriate.
