The Ultimate FAQ About FAQ Chatbots


In January 2011, IBM unveiled what they claimed to be the ultimate FAQ chatbot, Watson, on the TV quiz show Jeopardy!. Unfortunately, Watson proved to be “all hat and no cattle” and never translated its game show success into practical Enterprise AI success. Meanwhile, the world has changed a lot, and AI has made many advances since those early days.

As is often the case with any new technology, the things that appear to be amazing in the early stages of innovation quickly become basic features as the technology matures and real business world problems are tackled and solved.

Today, answering basic questions is considered the bare minimum for any chatbot expected to take on work that actual humans currently perform.

We now use the term “Digital Assistant” or “Enterprise Assistant” to describe a chatbot with many more skills than simply answering questions. Even so, the first time many organizations try a chatbot solution, it’s often by piloting what they believe is the easy option: an FAQ chatbot.

However, not all FAQ chatbot skills are created equal. In the AI world of FAQ capabilities there is huge variance among vendor solutions.

Think of it this way. Most people can sing, but most people aren’t great singers. In the same way, most chatbots have basic FAQ skills, but very few chatbots have great FAQ skills.

Freddie Mercury vs. someone
Figure 1: Both of them can sing, but one is a lot better than the other.

So, to cast much needed light upon this subject, we’ve created an FAQ about FAQ chatbots that should help explain the difference. 

Q: Can I add as many questions as I want to an FAQ chatbot, and it’ll be able to answer all of them accurately once I’ve conducted supervised training?

A: For most FAQ chatbots, the answer is no! Many of them start to suffer the dreaded “intent mismatch” issue at around 100 questions. Only properly architected chatbots can handle thousands of questions accurately.

Q: What’s an “intent mismatch” issue?

A: This is when you ask a chatbot a question and it matches your question to the wrong intent, and therefore gives you the wrong answer. This is the worst thing that can happen in the chatbot world, and it will destroy your organization’s confidence in the chatbot.

Q: What causes intent mismatching?

A: Oftentimes poor training is the culprit, and that can be easily fixed. But there are also scalability issues that tend to kick in at around 100 questions (and sometimes at far fewer), whereby the chatbot becomes more and more confused about what the human is asking.

Q: Why is there more likelihood of intent mismatch issues once I get close to 100 questions?

A: As the number of intents (questions) a chatbot supports increases, the chance of some intents looking similar to others also increases. This is a scalability issue. An FAQ chatbot that is not properly architected will struggle badly at scale, unable to distinguish between questions that sound (in the mind of the chatbot) very similar.

Q: What do “good” FAQ chatbots do that allows them to solve the intent mismatch issue?

A: The good ones have multiple ways of understanding what the human is asking. They don’t rely solely on simple NLP (Natural Language Processing) training; they also factor in things like subject recognition, entity existence, and knowledge of your organization’s vocabulary. This is a far superior means of intent matching because it’s how actual humans think. We don’t use a single indicator to understand what someone is saying; we deduce meaning from multiple elements and inferences in a sentence. That’s how a really smart FAQ chatbot does it too, and it’s how such a chatbot can handle thousands of questions and match them accurately.
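To make the idea concrete, here is a toy sketch of multi-signal intent matching. The signals, weights, and scoring functions are illustrative assumptions, not any vendor’s actual algorithm; the point is simply that evidence from several independent signals (keyword overlap, subject, entities) is combined rather than relying on one similarity score.

```python
# Toy sketch: score each candidate intent on several independent
# signals and combine the evidence, instead of trusting one NLP score.
from dataclasses import dataclass, field

@dataclass
class Intent:
    name: str
    keywords: set                                 # words from training utterances
    subject: str                                  # e.g. "leave", "payroll"
    entities: set = field(default_factory=set)    # entity types the intent expects

def score_intent(tokens, subject, entities, intent):
    """Combine independent signals into one weighted score."""
    keyword_overlap = len(tokens & intent.keywords) / max(len(intent.keywords), 1)
    subject_match = 1.0 if subject == intent.subject else 0.0
    entity_match = (len(entities & intent.entities) / len(intent.entities)
                    if intent.entities else 0.0)
    # Weights are illustrative, not tuned.
    return 0.5 * keyword_overlap + 0.3 * subject_match + 0.2 * entity_match

def match(utterance, subject, entities, intents):
    """Return (best_score, intent_name) for the highest-scoring intent."""
    tokens = set(utterance.lower().split())
    return max((score_intent(tokens, subject, entities, i), i.name) for i in intents)
```

A real system would use trained models for each signal rather than set arithmetic, but the combining step is the part that matters here.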

Q: What happens when the question is ambiguous because the human wasn’t completely clear on what they wanted?

A: That depends on the chatbot. Some chatbots just cross their fingers, make a guess, and hope for the best. Some recognize ambiguity through confidence-level analysis (which isn’t always accurate either). The very best have smart algorithms for dealing with ambiguity and will ask clarifying questions to make sure they understand the intent of the question.
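One common way to decide between answering, clarifying, and giving up is to look at the gap between the top two intent scores. The sketch below is a minimal illustration of that idea; the threshold values are made-up assumptions, and real products tune (or learn) them.

```python
# Toy sketch: if the two best intents score almost the same, the bot
# should ask a clarifying question rather than guess.
def resolve(scored_intents, min_confidence=0.4, min_gap=0.15):
    """scored_intents: list of (score, intent_name) pairs, any order."""
    ranked = sorted(scored_intents, reverse=True)
    best_score, best_name = ranked[0]
    if best_score < min_confidence:
        return ("fallback", None)            # nothing is a credible match
    if len(ranked) > 1 and best_score - ranked[1][0] < min_gap:
        # Two intents look nearly equally likely: clarify, don't guess.
        return ("clarify", [name for _, name in ranked[:2]])
    return ("answer", best_name)
```

The “clarify” branch is where the chatbot would present the candidate questions back to the user (“Did you mean A or B?”).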

Q: Does this mean that a good FAQ chatbot is harder to manage than a bad one, given how much more it is capable of doing?

A: No, quite the opposite. Because it’s massively more capable, it’s much easier to manage. Think of it this way: training something that already has lots of skills is much easier than training something that has only basic skills.

Q: Can FAQ chatbots handle the fact that, though the question may be the same, the answer can vary based on the location, job, or department of the person asking? For example, the question may be, “What is the sick leave policy?” Depending on who is asking, the answer is often very different.

A: As with the intent mismatch question, the answer depends on good chatbots vs. bad chatbots. The bad ones support only basic one-to-one mappings: one question always equals one answer. In the Enterprise world this doesn’t work at all. The good chatbots understand demographic information about the person asking the question and can tailor the answer accordingly.
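A minimal sketch of that tailoring: one intent maps to several answer variants keyed by a profile attribute, with a default fallback. The attribute names and policy texts below are invented for illustration only.

```python
# Toy sketch: one question, many answers, selected by the asker's
# demographic attributes, with a default when no specific rule applies.
ANSWERS = {
    "sick_leave_policy": {
        ("country", "US"): "US employees accrue sick leave per state law.",
        ("country", "DE"): "German employees follow the statutory six-week rule.",
        ("department", "Field Ops"): "Field Ops: report absences to dispatch first.",
        "default": "See the global sick leave policy on the intranet.",
    }
}

def answer_for(intent_name, user_profile):
    """Pick the first answer variant matching the user's profile."""
    variants = ANSWERS[intent_name]
    for key, text in variants.items():
        if key == "default":
            continue
        attr, value = key
        if user_profile.get(attr) == value:
            return text
    return variants["default"]
```

In production the profile would come from the directory or HR system rather than being passed in by hand, and rules would likely have priorities rather than simple first-match order.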

Q: My chatbot vendor said I need to load all my “answers” into their chatbot in the Cloud. Is this a good idea? 

A: No, this is a terrible idea. Loading all your content into someone else’s environment is not only technically unnecessary, it also forces you into dual maintenance of two sources of truth. A good chatbot needs to be able to plug into your many sources of content to provide the answer.

Q: But what if the answer is too long to show in a conversation? My chatbot vendor is telling me that I need to manually create abbreviated versions of all my unstructured content. 

A: Best-practice UX (user experience) is for the chatbot to provide summarized responses (with options to see the full answer) so the conversation is easy for the human to follow. However, good chatbots can use AI to auto-summarize the text, and that is the recommended approach.
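The response pattern itself is simple to illustrate. Real products would use an abstractive summarization model; the sketch below assumes a crude extractive cut (first N sentences) just to show the “summary plus offer to expand” shape of the reply.

```python
# Toy sketch of the "summarize, with option to see the full answer" UX.
import re

def summarized_response(full_text, max_sentences=2):
    """Return a short summary and a flag offering the full answer."""
    sentences = re.split(r"(?<=[.!?])\s+", full_text.strip())
    summary = " ".join(sentences[:max_sentences])
    return {
        "summary": summary,
        "offer_full_answer": len(sentences) > max_sentences,  # show "See more"?
    }
```

When `offer_full_answer` is true, the chat UI would render a “See full answer” button linking back to the source document, so the source system remains the single source of truth.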

Q: Can FAQ chatbots only answer a question with static information (e.g., text, HTML, or web links), or can they also include live data?

A: Basic FAQ chatbots can respond only with static content, but the good ones can also include data from other systems. And the great ones can bring back that data from both on-premise and multiple Cloud systems.
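The pattern here is an answer template filled from a pluggable connector. In this sketch, `fetch_balance` is a hypothetical stand-in for a call to an HR system (cloud or on-premise); the function name and returned fields are assumptions, not a real API.

```python
# Toy sketch: a static answer template combined with live data from a
# back-end system via a pluggable connector function.
def fetch_balance(user_id):
    # Hypothetical connector: a real deployment would call the HR
    # system's API (on-premise or cloud) here.
    return {"sick_days_remaining": 7}

def dynamic_answer(user_id):
    """Merge live data into the answer text."""
    data = fetch_balance(user_id)
    return f"You have {data['sick_days_remaining']} sick days remaining this year."
```

Swapping in a different connector (another cloud system, another on-premise database) changes only `fetch_balance`, not the answer logic.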

Q: It sounds like there’s a massive difference between FAQ chatbots, and it’s important to look “under the hood” before I make a decision?

A: Yes. If you’d take the time to test-drive a $20,000 car, then you should definitely test-drive any chatbot before making a decision.

If you’d like to see a great chatbot in action, please contact us for a live demonstration. 

Contact Us