There’s a very old joke in the software industry:
What’s the difference between a car salesman and a software salesman?
A car salesman knows when he is lying.
Unfortunately, there’s a huge amount of truth to this joke, and the explanation is simple. Software is complicated, and cars are pretty straightforward. With a car you can generally read up on everything you need to know in a matter of hours (enough to sell it, anyway), while software can take months to really understand. Then factor in the myriad ways it can be used, and the business requirements people may ask of it, and even the best salespeople can be stumped by a question.
Oftentimes they really do believe they know the answer. And that’s the source of the joke.
And this leads us to the new era of software: Artificial Intelligence (AI). And, of course, this means a whole bunch more woefully inadequate answers to very reasonable questions.
How does the chatbot know what to do when we ask it a question?
It learns using AI.
It just does. It’s called deep learning.
But what if it makes mistakes?
It learns from its mistakes.
It uses deep learning.
Obviously, this isn’t how any of this works at all. But given the mystery that shrouds all things AI, it’s not a surprise that these types of conversations take place.
So, to add transparency to what will be a challenging subject for many organizations to evaluate, we’ve created a list of five facts that are critical to understanding and implementing a chatbot solution in the Enterprise.
Fact 1: AI is like a garden: it needs seeding & cultivation
Figure 1: Automation of nature and nurture
Out of the box, chatbot engines come with a general understanding of language and grammatical constructs. They also have a limited understanding of entities. For example, I can ask a chatbot to do something “next Tuesday” and it will know what that date is, because it has knowledge of an entity that defines what a date can be. It understands “today” and “tomorrow” too. It may also understand people’s names and the cities in a country: “Is the Chicago office open tomorrow?”
What chatbots generally don’t know out of the box are the things particular to your domain. They don’t understand HR jargon, or campus terminology. They don’t know which departments you have, or job titles. Terms like “leave of absence”, “expense reimbursement” and “travel auth” aren’t considered entities that have specific meanings, in the way that “next Friday” or “tomorrow” do.
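To make the distinction concrete, here’s a toy sketch of the gap between built-in and custom entities. This isn’t any vendor’s actual API; the entity names and matching logic are invented for illustration (real engines use trained recognizers, not regexes and dictionaries):

```python
import re

# Built-in entity: chatbot platforms typically ship with date
# recognition out of the box (vastly simplified here as a regex).
BUILT_IN_DATE = re.compile(r"\b(today|tomorrow|next \w+day)\b", re.IGNORECASE)

# Hypothetical custom entities: HR terms the engine knows nothing
# about until you seed them, each mapped to a canonical meaning.
HR_ENTITY = {
    "leave of absence": "LEAVE_REQUEST",
    "expense reimbursement": "EXPENSE_CLAIM",
    "travel auth": "TRAVEL_AUTHORIZATION",
}

def extract_entities(utterance: str) -> dict:
    """Return the built-in and custom entities found in an utterance."""
    found = {"dates": BUILT_IN_DATE.findall(utterance), "domain": []}
    lowered = utterance.lower()
    for phrase, canonical in HR_ENTITY.items():
        if phrase in lowered:
            found["domain"].append(canonical)
    return found
```

Without the `HR_ENTITY` seed, a question like “Can I file a travel auth for next Tuesday?” would resolve the date but miss the thing actually being asked about.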
So, it’s important to “seed” the AI on day one of your implementation. In many ways it’s just like how a farmer grows a crop. The farmer doesn’t just hope that nature will turn the field into a spectacular crop of wheat. Nature can only do so much; the farmer needs to do their bit too. The soil must be prepared, the seed planted, and each day the field must be inspected and tended to ensure growth is going according to plan.
For AI, it’s critical to plant the seed of domain knowledge on day one, and then monitor usage to identify where the chatbot needs to be expanded and where it needs additional training and seeding. If the chatbot is HR-focused, then it needs an entire vocabulary injected and trained in preparation for usage by actual humans.
If your chatbot doesn’t understand the difference between an adoption reimbursement program and an adoption leave program, it is destined to disappoint.
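Here’s a hedged illustration of why that seeding matters. The intent names and example utterances below are invented, and a real engine trains a statistical classifier from its seed data; a naive word-overlap scorer stands in for that here:

```python
# Day-one seed data: each hypothetical intent gets example utterances.
SEED_UTTERANCES = {
    "adoption_reimbursement": [
        "how do i get reimbursed for adoption expenses",
        "claim adoption costs",
    ],
    "adoption_leave": [
        "how much leave do i get for an adoption",
        "time off after adopting a child",
    ],
}

def classify(utterance: str) -> str:
    """Pick the seeded intent whose examples share the most words."""
    words = set(utterance.lower().split())
    def score(intent: str) -> int:
        return max(len(words & set(ex.split()))
                   for ex in SEED_UTTERANCES[intent])
    return max(SEED_UTTERANCES, key=score)
```

With no seed utterances distinguishing the two adoption programs, no amount of “deep learning” would let the bot tell them apart.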
Fact 2: It’s not deep learning and big data that will be the key to success; it’s smart algorithms and neural networks
Last year we wrote a blog on AlphaGo Zero, and talked about how it wasn’t deep learning that made it so smart. The same thing is true of Enterprise chatbot implementations. Deep learning is a very powerful tool, but it isn’t the answer to everything. Neural networks and smart algorithms are the real engine behind a successful chatbot implementation.
Figure 2: Monte Carlo Tree Search in AlphaGo Zero, guided by neural networks
The lesson AlphaGo Zero taught the world was that AI is at its most powerful when it can map out its own neural network, while also readjusting decision points based on actual outcomes. This is why creating an incredible Chess or Go master is much easier than creating AI that cures cancer.
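For the curious, the selection rule AlphaGo Zero used (known as PUCT) captures this balance in a single line: the search exploits the value actually observed for a move, while the neural network’s prior steers exploration toward moves it considers promising but that have few visits. A minimal sketch:

```python
import math

def puct_score(q: float, prior: float, parent_visits: int,
               child_visits: int, c_puct: float = 1.0) -> float:
    """PUCT selection score (AlphaGo Zero-style tree search).

    q             -- average observed value of this move so far
    prior         -- neural network's prior probability for the move
    parent_visits -- total visits to the parent position
    child_visits  -- visits to this move; as it grows, the
                     exploration term shrinks and observed outcomes
                     dominate (the "readjusting" described above)
    """
    exploration = c_puct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q + exploration
```

At selection time the search simply picks the child with the highest score, so a well-regarded but unexplored move gets tried, then its score is pulled back toward what actually happened.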
In the Enterprise chatbot world, sophisticated decision networks don’t just create themselves, and deep learning doesn’t build them. They need to exist on day one, and they need to have been pre-built with domain knowledge and stacked with business rules that determine flows.
In the same way AlphaGo Zero needed to be aware of the rules of Go, your Enterprise chatbot needs to be aware of the best practice rules of the Enterprise. Only then can it be trusted by your employees, managers, students and faculty members.
In the Enterprise chatbot world, this equates to massively complex and sophisticated dialog flows that come pre-built and configurable for your business requirements. And that have over a decade of domain knowledge built into them.
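As a simplified illustration of what “pre-built with business rules” means, here is a tiny fragment of a rule-guarded dialog flow. The states, rule, and threshold are invented for the sketch; a real flow would have thousands of states and configurable rules:

```python
# Hypothetical pre-built flow for a promotion transaction: which state
# comes next is decided by a configurable business rule, not learned.
FLOW = {
    "start": {"next": "check_eligibility"},
    "check_eligibility": {
        "rule": lambda ctx: ctx["tenure_months"] >= 12,  # example policy
        "pass": "collect_details",
        "fail": "explain_policy",
    },
    "collect_details": {"next": None},   # terminal states (sketch)
    "explain_policy": {"next": None},
}

def step(state: str, ctx: dict) -> str:
    """Advance the flow one state, applying any business rule guard."""
    node = FLOW[state]
    if "rule" in node:
        return node["pass"] if node["rule"](ctx) else node["fail"]
    return node["next"]
```

The point is that the branch points exist before the first user ever talks to the bot; usage data then tunes them rather than inventing them.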
Fact 3: AI is not a data warehouse
Figure 3: AI is not a data warehouse
Those who remember IBM’s Watson winning Jeopardy! may be disappointed to learn that what they were really watching was a massive data warehouse stored in memory, with a search feature that had been built manually to meet the needs of one game.
There are a lot of reasons why putting sensitive data into a proprietary AI engine in the Cloud isn’t a good idea: security, data privacy, dual maintenance, and conversion effort are just some of them. Your data belongs where it is right now.
It’s the obligation of the AI vendor to be able to plug into your data, not the other way round. As always in life, the tail should not be wagging the dog.
Of course, there are reasons why many vendors require this: laziness and lack of knowledge top the list.
Creating sophisticated data adapters that are a broker between your data and the AI isn’t easy and takes lots of domain knowledge. We know this because we’ve spent ten years doing it.
But with many vendors looking to jump into a market they have no knowledge of, shortcuts have been taken. That doesn’t change the fact that your data needs to be protected, and chatbot implementations shouldn’t be turned into massive integration projects.
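To sketch the adapter-as-broker idea (the class and method names here are hypothetical, not a real product API): the chatbot asks a thin adapter for an answer at question time, and the adapter queries the live system of record, so nothing is copied into the AI vendor’s cloud.

```python
from abc import ABC, abstractmethod

class DataAdapter(ABC):
    """Broker between the chatbot and a system of record (sketch)."""
    @abstractmethod
    def fetch(self, intent: str, user_id: str) -> dict:
        """Query the live backend for the data this intent needs."""

class HRSystemAdapter(DataAdapter):
    def __init__(self, backend):
        # backend is the live HR system's client; data stays put there
        self.backend = backend

    def fetch(self, intent, user_id):
        if intent == "vacation_balance":
            return {"balance_days": self.backend.vacation_days(user_id)}
        raise KeyError(f"no adapter mapping for intent {intent!r}")
```

The chatbot never holds a copy of the HR data; it only holds the answer long enough to phrase a reply, which is what keeps the tail from wagging the dog.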
Fact 4: A concierge chatbot is a requirement, not a nice to have
Figure 4: Avoid chatbot confusion
Does anyone remember what a link farm is? Yes, they were awful. Quite possibly the worst manifestation of web-based technologies. And the problem was obvious: all those link farms did was sow confusion and frustration among the poor users who had to deal with them.
It’s 2019 now, and we face a similar conundrum: one chatbot that knows everything, or hundreds of chatbots that each know bits of information, with no way for the user to know which one knows what. Imagine being handed 100 help desk numbers and being asked to guess which one is right based on your area of need.
Fortunately, the problem has a solution. A concierge chatbot can be used as the focal point for all questions. All the concierge is responsible for is knowing which chatbot knows the answer to which question, and then seamlessly managing the handoff in the conversation, such that to the human it feels like one conversation with one chatbot.
This way the humans only ever need to start a conversation with the concierge. The ultimate one-stop bot.
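A concierge’s routing job can be sketched in a few lines. The bot names and topic keywords below are made up, and a production concierge would route on trained intent models rather than keyword overlap, but the shape of the responsibility is the same: know who owns what, and hand off invisibly.

```python
# Registry of which skill bot owns which topics (illustrative only).
SKILL_REGISTRY = {
    "hr_bot": {"leave", "payroll", "benefits"},
    "it_bot": {"password", "laptop", "vpn"},
    "facilities_bot": {"parking", "badge", "office"},
}

def route(utterance: str) -> str:
    """Return the skill bot whose topics best match the utterance."""
    words = set(utterance.lower().split())
    best, overlap = "fallback_bot", 0
    for bot, topics in SKILL_REGISTRY.items():
        hits = len(words & topics)
        if hits > overlap:
            best, overlap = bot, hits
    return best
```

Because the handoff happens inside the concierge, the user asks one bot everything and never sees the registry at all.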
Having worked extensively with Oracle’s chatbot framework, we not only recommend it highly, but can also attest to its concierge capabilities. So, while Oracle is rolling out lots of small, function-focused bots (they call them skills), all of these bots/skills can be managed automatically by one concierge chatbot. That means you can have one concierge that includes Oracle-delivered bots, IntraSee-delivered bots, and custom bots created by you.
Oracle’s term for this is “Oracle Digital Assistant”: concierge chatbot capability under one technology stack.
Fact 5: AI that requires massive human intervention, and coding development, isn’t AI
Figure 5: We like Joe, but Joe shouldn’t be creating Enterprise chatbots
AI that requires 95% of its functionality to be created by the human hand isn’t really AI. It’s cool, but it’s not AI. It’s also not supportable or maintainable. The weakest link will be the human hand that pieced it all together, and if that hand has no domain knowledge, the result won’t just be buggy, it will be stupid.
The real key to AI is not just automation of the task a chatbot can complete. It’s automation of the creation of the chatbot itself.
This blog has been about the facts as we see them at IntraSee, so let’s look at the facts of what a sample chatbot pilot generated. The background: 200 FAQs, 16 view-data intents, 6 transactions (promotions, transfers, etc.), and 10 reports (yes, a chatbot can run reports). Here are the technical numbers:
- 24 Custom Entities
- 690 Custom Component Invocations
- 2,696 System Component Invocations
- 3,386 States
- 101,609 Transitions between States
We firmly believe that automation of creation is the key to AI success. Manually coding over 100,000 state transitions creates inherent instability and leads to what we would call a Frankenbot.
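To illustrate what “automation of creation” means, here is a toy generator that emits states and transitions from a declarative description instead of hand-coding them. The spec shape, step names, and topology are invented for the sketch and much simpler than a real flow:

```python
def generate_flow(faqs: int, transactions: int) -> dict:
    """Generate chatbot states and transitions from a simple spec."""
    states, transitions = [], []
    # One answer state per FAQ, each reachable from a router state.
    for i in range(faqs):
        states.append(f"faq_{i}")
        transitions.append(("router", f"faq_{i}"))
    # Each transaction gets a collect -> confirm -> commit chain.
    for t in range(transactions):
        for step_name in ("collect", "confirm", "commit"):
            states.append(f"txn_{t}_{step_name}")
        transitions += [
            ("router", f"txn_{t}_collect"),
            (f"txn_{t}_collect", f"txn_{t}_confirm"),
            (f"txn_{t}_confirm", f"txn_{t}_commit"),
        ]
    return {"states": states, "transitions": transitions}
```

Even this trivial spec fans out into hundreds of generated artifacts; the real numbers above show why generating them, rather than typing them, is the only path that scales.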
At IntraSee we have automated the creation of a chatbot, such that with one button push we can generate hundreds of thousands, even millions, of chatbot states, transitions, invocations, and entities. We do this for multiple reasons:
- We remove human error from the equation.
- We simplify the management and maintenance, such that a business user can easily deploy any changes.
- We massively shorten implementation times, down to just a few weeks.
- We can deliver more in four weeks than would normally be possible in over one year.
- We can deploy mass changes, risk free, to a chatbot in a matter of minutes.
Please contact us if you’d like to learn more…