In January 2011, IBM unveiled Watson on the TV show Jeopardy!, claiming it to be the ultimate FAQ chatbot. Unfortunately, Watson proved to be “all hat and no cattle” and was never able to translate game-show success into practical Enterprise AI success. Meanwhile, the world has changed a lot, and AI has made many advances since those early days.

As is often the case with any new technology, the things that appear to be amazing in the early stages of innovation quickly become basic features as the technology matures and real business world problems are tackled and solved.

Today, answering basic questions is considered the bare minimum of what a chatbot must be capable of if it is to perform the jobs of actual humans.

We now use the term “Digital Assistant” or “Enterprise Assistant” to describe a chatbot that has many more skills than just being able to answer simple questions. Though often, the first time many organizations try out a chatbot solution, it’s by piloting what they believe is the easy option: an FAQ chatbot. 

However, not all FAQ chatbot skills are created equal. In the AI world of FAQ capabilities there is a huge variance between different vendor solutions. 

Think of it this way. Most people can sing, but most people aren’t great singers. In the same way, most chatbots have basic FAQ skills, but very few chatbots have great FAQ skills.

Figure 1: Freddie Mercury vs. the average singer. Both of them can sing, but one is a lot better than the other.

So, to cast much needed light upon this subject, we’ve created an FAQ about FAQ chatbots that should help explain the difference. 

Q: Can I add as many questions as I want to an FAQ chatbot, and it’ll be able to answer all of them accurately once I’ve conducted supervised training?

A: For most FAQ chatbots the answer is no! Many of them start to suffer the dreaded “intent mismatch” issue at around 100 questions. Only properly architected chatbots can handle thousands of questions accurately.

Q: What’s an “intent mismatch” issue?

A: This is when you ask a chatbot a question and it matches it to the wrong question, and therefore gives you the wrong answer. This is the worst thing that can happen in the chatbot world, and it will destroy confidence in the chatbot across your organization.

Q: What causes intent mismatching?

A: Oftentimes poor training is the culprit, and that can be easily fixed. But there are scalability issues that tend to kick in around 100 questions (though they can appear with far fewer), whereby the chatbot becomes more and more confused about what it thinks the human is asking.

Q: Why is there more likelihood of intent mismatch issues once I get close to 100 questions?

A: As the number of intents for a chatbot increases, the chance of some intents (questions) looking similar to other intents also increases. This is a scalability issue. If the FAQ chatbot is not architected properly it will suffer hugely from scalability issues, and will be unable to handle lots of questions that sound (in the mind of the chatbot) very similar. 

Q: What do “good” FAQ chatbots do that allows them to solve the intent mismatch issue?

A: The good ones have multiple ways of understanding what the human is asking. They don’t just rely on simple NLP (Natural Language Processing) training; they also factor in things like subject recognition, entity existence, and knowledge of your organization’s vocabulary. The reason this is a far superior means of intent matching is that this is how actual humans think. We don’t just use one indicator to understand what someone is saying; we deduce understanding from multiple elements and inferences in a sentence. And that’s how a really smart FAQ chatbot does it too, and how it’s able to handle thousands of questions and match them correctly.
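
To make that concrete, here’s a minimal sketch (in Python, purely illustrative rather than any vendor’s actual algorithm) of what blending multiple signals might look like. Every name, weight, and data structure below is an assumption made for the sake of the example:

```python
# Illustrative only: blend the NLP model's score with domain-specific evidence.
def score_intent(intent, utterance, nlp_score, detected_entities, org_vocabulary):
    text = utterance.lower()
    score = nlp_score  # base confidence from the underlying NLP engine

    # Reward intents whose subject area is mentioned explicitly
    if intent["subject"] and intent["subject"] in text:
        score += 0.20

    # Reward intents whose expected entities were actually found in the utterance
    score += 0.10 * len(set(intent["expected_entities"]) & set(detected_entities))

    # Reward matches against the organization's own vocabulary (jargon, acronyms)
    if any(term in text for term in org_vocabulary.get(intent["name"], [])):
        score += 0.15

    return score


def match_intent(utterance, candidates, detected_entities, org_vocabulary):
    # candidates: list of (intent, nlp_score) pairs returned by the NLP engine
    best_intent, _ = max(
        candidates,
        key=lambda c: score_intent(c[0], utterance, c[1], detected_entities, org_vocabulary),
    )
    return best_intent
```

The point is simply that the NLP score is one input among several, not the whole decision.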

Q: What happens when the question is ambiguous because the human wasn’t completely clear on what they wanted?

A: It all depends on the chatbot. Some chatbots just cross their fingers, make a guess, and hope for the best. Some recognize ambiguity based on confidence-level analysis (which isn’t always accurate either). The very best have smart algorithms for dealing with ambiguity and will ask clarifying questions to make sure they understand the “intent” of the question.
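
As a rough illustration (not any specific product’s logic), one common pattern is to compare the top candidate scores and ask a clarifying question when they’re too close to call. The thresholds and structures below are assumptions:

```python
# Illustrative thresholds: ask for clarification when the top intents are too close.
CONFIDENCE_FLOOR = 0.45   # below this, admit the bot doesn't understand
AMBIGUITY_MARGIN = 0.10   # if the top two scores are this close, ask the user

def resolve(candidates):
    """candidates: list of (intent_name, score) pairs from intent matching."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    top_name, top_score = ranked[0]

    if top_score < CONFIDENCE_FLOOR:
        return {"action": "fallback",
                "reply": "Sorry, I didn't quite get that. Could you rephrase?"}

    if len(ranked) > 1 and (top_score - ranked[1][1]) < AMBIGUITY_MARGIN:
        return {"action": "clarify",
                "reply": "Did you mean one of these?",
                "options": [name for name, _ in ranked[:3]]}

    return {"action": "answer", "intent": top_name}
```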

Q: Given how much more it’s capable of doing, does this mean that a good FAQ chatbot is more complicated to manage than a bad one?

A: No, quite the opposite. Because it’s massively more capable, it’s much easier to manage. Think of it this way: training something that already has lots of skills is much easier than training something that has only very basic skills.

Q: Can FAQ chatbots handle the fact that, though the question may be the same, the answer can vary due to location/job/department differences of the person asking? For example, the question may be, “What is the sick leave policy?” And depending on who is asking, the answer is often very different.

A: As with the mismatch question, the answer varies between good chatbots and bad chatbots. The bad ones only support basic 1-to-1 mappings: one question always equals one answer. In the Enterprise world this doesn’t work at all. So, the good chatbots are capable of understanding demographic information about the person asking the question and can tailor the answer based on that.
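
Here’s a hedged sketch of what that kind of personalization can look like: the same question maps to several answer variants, and the asker’s profile (location, department, job type) picks the one that applies. The rule format and the placeholder policy text are purely illustrative:

```python
# Placeholder answer variants: the first variant whose conditions all match wins.
ANSWER_VARIANTS = {
    "sick_leave_policy": [
        {"match": {"country": "US", "employee_type": "hourly"},
         "answer": "Placeholder: the US hourly sick leave policy text."},
        {"match": {"country": "US"},
         "answer": "Placeholder: the US salaried sick leave policy text."},
        {"match": {},  # default when nothing more specific applies
         "answer": "Placeholder: the global sick leave policy text."},
    ],
}

def personalize(intent_name, user_profile):
    """Return the first answer variant whose conditions all match the user."""
    for variant in ANSWER_VARIANTS.get(intent_name, []):
        if all(user_profile.get(k) == v for k, v in variant["match"].items()):
            return variant["answer"]
    return None

# The same question yields different answers for different people
print(personalize("sick_leave_policy", {"country": "US", "employee_type": "hourly"}))
print(personalize("sick_leave_policy", {"country": "US", "employee_type": "salaried"}))
```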

Q: My chatbot vendor said I need to load all my “answers” into their chatbot in the Cloud. Is this a good idea? 

A: No, this is a terrible idea. Loading all your content into someone else’s environment is not only technically unnecessary, it also forces you to maintain two sources of truth. A good chatbot needs to be able to plug into your many sources of content to provide the answer.

Q: But what if the answer is too long to show in a conversation? My chatbot vendor is telling me that I need to manually create abbreviated versions of all my unstructured content. 

A: Best-practice UX (user experience) is for the chatbot to provide summarized responses (with options to see the full answer), to make the conversation easy for the human to follow. However, good chatbots can use AI to auto-summarize the text, and that is the recommended approach.
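
As one illustration of auto-summarization, the sketch below uses the Hugging Face transformers summarization pipeline as a stand-in for whatever summarization service a given chatbot actually uses; the reply structure is an assumption:

```python
# Example only: auto-summarize a long answer and offer the full version on request.
from transformers import pipeline

summarizer = pipeline("summarization")  # loads a default summarization model

def summarized_reply(full_answer_text, source_url):
    summary = summarizer(full_answer_text, max_length=60, min_length=20)[0]["summary_text"]
    return {
        "text": summary,
        "actions": [{"label": "See full answer", "url": source_url}],  # hypothetical reply format
    }
```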

Q: Can FAQ chatbots only answer a question with static information (e.g. text, HTML, or web links), or can they also include data?

A: Basic FAQ chatbots can only respond with static content, but the good ones can also include data from other systems. And the great ones can bring back that data from both on-premise and multiple Cloud systems.
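
For example, instead of a canned answer, a good chatbot can fetch live data and drop it into the reply. The sketch below is illustrative only; the endpoint and field names are hypothetical placeholders, not a real API:

```python
# Hypothetical endpoint and field names, for illustration only.
import requests

def vacation_balance_answer(employee_id, hcm_base_url, token):
    resp = requests.get(
        f"{hcm_base_url}/api/employees/{employee_id}/absence-balances",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    balance = resp.json()["vacation_hours_remaining"]
    return f"You currently have {balance} vacation hours remaining."
```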

Q: It sounds like there’s a massive difference between FAQ chatbots and it’s important to look “under the hood” before I make a decision?

A: Yes, if you can take the time to test-drive a $20,000 car, then you should definitely test-drive any chatbot before making a decision. 

If you’d like to see a great chatbot in action, please contact us for a live demonstration. 

Contact Us

It’s almost three years since we wrote the original Workday blog, and since then the world has changed a huge amount. But, surprisingly, Workday has not. Back in 2016 they were a cool back-office HCM Cloud SaaS provider, and today, well, they’re still the same. Only what was shiny and glittery back then now looks a tad jaded and long in the tooth in 2019.

So, while doing a decent job of becoming PeopleSoft 2.0 was definitely an accomplishment of sorts, the world has changed enough over the past few years that you have to wonder what the attraction is now.

So, with that said, here are 10 reasons why Workday may not be a great option in 2019.

1. It lacks UX innovation tools or solutions (while its competitors are racing by it).

It’s almost two years now since Workday announced details of what their “intent” to open up their PaaS platform would actually mean. So, what’s the status now?

After the passage of two years, we were hoping to see much more progress in this area. Unfortunately, as of today, Workday appears to have made almost no progress. On their developer.workday.com web site, they still advertise a limited-availability program to be one of the first to use their PaaS platform.

Limited Availability: This exclusive program gives you the opportunity to be one of the first organizations using the Workday Cloud Platform. Create business-impacting applications leveraging Workday’s technology. Help influence our roadmap.

– developer.workday.com

2. It’s turned its back on the new UI revolution: chatbots.

Also, and maybe more concerning, Workday’s participation in the AI revolution and the new era of disruption appears to be stalled in the chatbot realm. If you go to www.workday.com and search for chatbot, you get zero results. Whereas if you go to www.oracle.com and perform the same search, you get over 1,100 results. That’s a massive difference.

Meanwhile, the advice of Gartner is that conversational UI (aka chatbots/digital assistants) is a critical feature of our lives that we all need to be focused on for the next decade.

“Conversational AI-first will supersede ‘cloud-first, mobile-first’ as the most important high-level imperative for the next 10 years.”

– Gartner Sept 2016

Certainly, we are just a few years away from wondering how Enterprise systems were ever usable without some kind of digital assistant, in the same way we now wonder how we ever managed to use the web without Google search.

And if you need more persuading, here are 10 reasons why the Enterprise needs a chatbot solution.

3. It’s a closed community

Innovation requires activity by multiple parties to spark new ideas, and new ways of doing things. Workday believes that only they can innovate on their platform, and that any attempt to do so by their clients or other vendors needs oversight, control, and permission. They have a euphemism for this: “curation”. In the Workday world, curation is the means of stifling innovation. Or, to quote their CEO, Aneel Bhusri:

“Right now what we’re seeing is what I’d call small pieces of additional functionality rather than applications that have a larger purpose. So the potential impact is limited. You can bring whatever code you want but, we curate and certify everything that goes into that platform and will continue to do so. We have to because we have a responsibility to ensure that customers remain compliant”.

“We are approaching verticalization and extensions differently to others. We are curating everything and will discuss our plans with partners so that there is a clear line between the areas we will enter and those where our partners will have a free run.”

– Aneel Bhusri

The comments about curation and certification are the ones we feel are most pertinent. In the new age of digital disruption, agility and innovation are the key requirements of any organization. Without them, you cannot adapt. Having an Enterprise system that requires curation and certification will be an impediment to clients’ and partners’ ability to provide the UX that their organizations want. And in this new world of digital disruption and transformation, this will be a major inhibitor to progress. Certainly, with the rise of chatbots as the new UI, organizations need the ability to adapt to these changes, and should not risk being forced to go through a curation and certification process. Or, even worse, be told, “no, you can’t do that”.

4. It’s not fully mature

How long has PeopleSoft been around? Forever, right? Well, technically since 1987. But in the software world, that’s pretty much forever. And guess what? There are still new features and functionality being added to it each year. Oracle has done a great job keeping on top of things and expanding functionality to meet demand. So that’s over 30 years of development. Building a mature Enterprise system takes decades. It’s a colossal undertaking. It will take Workday many years, if ever, to match PeopleSoft feature for feature.

And now, to make matters worse for Workday, Oracle has not only passed them by with their HCM Cloud SaaS offering, but is also the undisputed leader with its Cloud ERP (aka Financials).

Meanwhile Workday still has a very long way to go in the Campus world, and the Financials world, to really be able to describe themselves as a mature Enterprise software vendor.

5. It’s like selecting a client-server solution, when everyone else is selecting web-based systems.

As any NFL quarterback will tell you, you don’t throw the ball to where the receiver is, you throw it to where the receiver will be.

Likewise, you don’t select an Enterprise software vendor for what they are doing today, you select one based on where they will be in one year and beyond.

And right now, there’s a revolution taking place in the field of UX. Web-based systems focused on back office use will become dinosaurs, while systems built with conversational UI’s (chatbots/digital assistants) that everyone in the organization can use, will be the new standard for all Enterprise systems.

Selecting a vendor in 2019 that has no chatbot solution would be akin to selecting a client-server solution in 1997, or buying a VHS player while everyone else is buying DVD players.

Workday is a tough buy in 2019. There’s a sense that they are being passed by, and are drifting into a state of irrelevance, mostly due to an inability, or reluctance, to change with the times.

6. There’s more to an Enterprise system than just having a pretty face

There’s no doubt that Workday has attractive features. And at first glance it does catch the eye (though even that has waned over the past two years).

But from a UX perspective it’s way below par. Having a pleasant UI and poor UX (there’s a difference) is something that can be glossed over in the sales cycle, but not when real people start to use it.

7. It’s not focused on the complete user experience

As Owen Wilson wistfully said in Wedding Crashers, “I think we only use 10% of our hearts”. Workday falls into this trap. Out of the box it doesn’t satisfy the complete user experience, just a small fraction of it. How it’s implemented, and the tools provided, are the key to unlocking the real potential of an Enterprise system. In fact, the whole concept of an “Enterprise system” is typically something less tangible than people would like to admit. For most organizations it’s really an eco-system of multiple systems that the user is somehow expected to navigate and comprehend as one system (like the universe).

Unfortunately, the human brain is not wired to process complex and poorly connected applications (unless you’re Stephen Hawking). Which leads to massive under-utilization of the true potential that Enterprise systems could provide (which thus creates an under-realization of ROI). Owen Wilson was right. 10% is a pretty accurate number.

8. It lacks a “portal”

In 2016 we noted that Workday has no Portal. In 2019, they still have no portal. And there’s a fascinating historical reason for this.

Basically, the team that left PeopleSoft to form Workday never had a clue how to fully utilize the PeopleSoft portal (now called the PeopleSoft Interaction Hub) while they were at PeopleSoft (pre-2005). It was a source of frustration for many people. So, when they formed Workday, they just assumed that if they couldn’t find a reason to use it then, then there was no reason to build one now. Obviously, this was one bad decision, layered on a bunch more bad decisions.

All Enterprise systems need some kind of one-stop shop that integrates everything nicely for the user. That’s not even debatable in 2019 (and wasn’t really in the year 2000, or 2005 either). Though the irony is that the truest realization of a one-stop shop is a digital assistant (aka chatbot). It’s the ultimate navigation-less UI. And, unfortunately, as mentioned earlier, Workday doesn’t have that either.

So today, Workday has no portal. Meanwhile PeopleSoft has the Interaction Hub, and Oracle has the Content & Experience Cloud (a terrific portal that actually works with PeopleSoft, as well as other Cloud applications).

9. It won’t integrate with your corporate systems

All organizations have their own eco-system of internal systems that gradually morphed and developed over the years. We can’t just pretend they don’t exist. And if people are using these systems (which they must be) then that makes them part of the usability experience, and they need to be brought into the fold like everything else. Let’s call this Exhibit B in the case for why everyone needs a good portal. And until Workday has one, then they are missing the boat.

10. It’s 2019, and a “brain” for the Enterprise isn’t a nice-to-have, it’s a requirement.

It’s undoubtedly a brave new world that we are about to enter. If 1997 and the advent of web-based systems was a seismic shock to the Enterprise world, then the new era we are about to enter will be 10 times what occurred back then. Enterprise vendors that don’t deliver a “brain” will leave their customers in a technological wilderness. Some people will try to “roll their own”, which will be shown to be a big mistake. Creating a brain for the Enterprise may sound easy, but it isn’t. Some people will try to use tools like Microsoft LUIS, which is not a good fit, or IBM Watson, which is equally flawed. Others will use vertical solutions, like Salesforce with Einstein, that can only do small chunks of what they need to do.

The correct way to enter this new era is to adopt something already built, that can be configured for your Enterprise needs. Oracle and IntraSee have partnered to provide such a solution and can demonstrate exactly what it looks like.

Please contact us for a live demonstration of what the future looks like, and how you can implement it today.

Contact Us

A few years ago we took lessons learned from Jurassic Park and applied them to the world of usability testing. Since then the world has changed, but the sage advice found in the movie has not.

We now live in the era of artificial intelligence (AI), and many organizations are grappling with the world of build vs. buy.

The question being asked is, “do we build an Enterprise chatbot (aka Digital Assistant) from scratch using something like Microsoft LUIS, or do we buy something already built”?

As we all discovered in the movie, good intentions can quickly turn into a series of unexpected outcomes. And the main lesson (if indeed it needed to be learned) is that as smart as we think we are, there’s no substitute for having a coherent and informed plan, and for being able to learn from the failures of the past.

So, using Jurassic Park as an example of a build that totally went wrong, we thought we’d look at the lessons learned from the terrific movie and apply them to the world of chatbot implementations. You’ll be surprised at how applicable it is. 

1. Dr. Ian Malcolm: Oh, yeah. Oooh, ahhh, that’s how it always starts. Then later there’s running and um, screaming.

All projects begin with general excitement and great anticipation. And none more so than chatbot projects. Kickoff and initial design meetings tend to be stress-free and filled with the hope that something great will happen. The prospect of replicating a human brain that can understand your Enterprise sounds like lots of fun. What could possibly go wrong? And, of course, the pizza-ordering example that comes with the LUIS tutorial makes it all look like a walk in the park. But then, six months later, it becomes apparent that ordering pizza isn’t a great use case. And that the more you build, the more complicated it all becomes. And that’s when the screaming starts.

And this is at the core of Microsoft’s problem. They are not an Enterprise software applications provider, in the way that someone like Oracle is. What they are good at is building tools that let you build applications. They just don’t build those applications for you. Again, unlike Oracle. 

With Microsoft LUIS, everything is a build, and in the Enterprise software world, the smart people are buying.  

2.  Dr. Ian Malcolm: Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.

Yes, it is possible to build an Enterprise chatbot using a tool like Microsoft LUIS. But only in the same way it’s possible that an army of monkeys, given enough time, could recreate the works of Shakespeare. The real question isn’t could you roll your sleeves up and try your luck with LUIS, it’s should you? And the answer is a resounding no.

Gartner has already spoken on this subject and advised that IT needs to stop trying to reinvent the wheel and instead purchase things in the Cloud that someone else has already built.

3. Dr. Ian Malcolm: Gee, the lack of humility before nature that’s being displayed here, uh… staggers me. 

Microsoft’s belief is that they just need to hand LUIS to their clients, and they’ll build massively complex neural systems that understand their Enterprise systems. This shows a complete lack of understanding, or appreciation for, the task at hand. 

And this naivete is fundamentally grounded in the fact that they, like IBM with Watson, don’t sell the software that organizations wish their chatbot to integrate with. 

Unlike, for example, Oracle, who not only sell chatbot technology (Oracle Digital Assistant), but are also using that technology to build their own Enterprise skills. Unless you drink your own champagne, it’s impossible to know if it’s any good or not.

4. Nick Van Owen: You seem like you have a shred of common sense, what the hell are you doing here?

At some point in every project, someone needs to apply common sense and say, “enough is enough”. If after a few months you’re not seeing something worth sharing with your organization, then that may be the time to try something “off the shelf”. Typically, that point is 3-5 months. By that time, you’ve probably figured out that brain surgery isn’t what your IT group is cut out for, and that you need to look for something already built. So, keep that in mind.

If somebody keeps telling you that you’ll see the value in year two, then that’s a sign you should move on.

5. Dr. Ian Malcolm: Taking dinosaurs off this island is the worst idea in the long, sad history of bad ideas. And I’m gonna be there when you learn that.

As Gartner once said, “building a bad chatbot is easy, building a good one is hard”. If you spent over a year building a bad one, and then roll that out to your organization, then that’s going to be a tough lesson to learn – for the team that built it, and the organization that has to live with it. 

Trying to build an Enterprise chatbot from scratch using Microsoft LUIS, or IBM Watson, will go down in IT history as a bad idea on many levels. 

Let us count the ways:

  • Massive waste of resources and time
  • Poor first impression for the organization
  • Opportunity cost of lost time that could have been better used implementing a “proven” solution

6. John Hammond: Don’t worry, I’m not making the same mistakes again.
Dr. Ian Malcolm: No, you’re making all new ones.

The history of IT is littered with projects that went over budget and under-delivered. And that’s just looking at web-based implementations over the past 20 years. Given such a poor track record with a very simple-to-implement technology like HTML, imagine how badly awry projects using a conversational UI could go. Yes, you likely won’t be making the same mistakes. But there are plenty of new ones to make if you’ve never done this before.

And using Microsoft LUIS, or IBM Watson, will allow you to make those mistakes. Ultimately, they are just coding tools, and don’t have the smarts already built into them to ensure you don’t go down the wrong path. 

What you want is pre-delivered skills, not a Swiss Army knife. Skills that already understand your Enterprise needs and rules. Skills that are plug-configure-play.

7. Dr. Ian Malcolm: God help us, we’re in the hands of engineers.

The last thing you need to be in, while implementing a chatbot solution, is the hands of engineers. Microsoft LUIS forces you into the path of focusing on how to engineer code, and away from the path of “how should this work for the user”. 

In this new world of AI, the focus now isn’t on people learning how to interact with machines, it’s all about machines learning how to interact with people. And to make that happen you need all your engineering issues resolved on day one. Not day 1000. 

8. Dr. Ellie Sattler: [after finding Malcolm with a broken leg] Should we chance moving him?
Dr. Ian Malcolm: [the Tyrannosaur roars nearby] Please, chance it.

An illusory comfort zone is not the place to be in. Whether it’s comfort with your IT group, or comfort with a vendor you’ve happily worked with for years. When you hear the T.Rex roar, it’s time to move to a better place. A safer place. Don’t wed yourself to a toolset or a preferred vendor, take a chance and reach out to the world. Speak to people you’ve never spoken to before. 

Ask for demos from many vendors in the market. Ask difficult questions. Enter into an interview process to find someone who has already done what you want to do. It may seem like taking a chance, but it’s better than certain failure. 

9. Dr. Alan Grant: The world has just changed so radically, and we’re all running to catch up. I don’t want to jump to any conclusions, but look… Dinosaurs and man, two species separated by 65 million years of evolution have just been suddenly thrown back into the mix together. How can we possibly have the slightest idea what to expect? 

The world has just suddenly changed radically. Many people are running to catch up, while others, like Workday, are just ignoring the change and hoping it won’t affect them. But machines interacting with humans, as if they were human, is not a change that is going away. 

At the same time there is a complexity to all this. And ordering pizza doesn’t really describe that complexity. 

Meanwhile, both Oracle and IntraSee have been building these skills over the past two years, on the same technology platform (Oracle Digital Assistant, or ODA), and have learned valuable lessons during that period. Having already done this, we now know what to expect when adding conversational skills to complex Enterprise systems (both Cloud and on-premise).

10. Dr. Ian Malcolm: I’ll be right back. I give you my word.
Kelly Malcolm: [pounds her fists on the railing] But you *never* keep your word!

The worst thing about implementing a failed chatbot solution is that you will train people to never trust you. No matter how many times you tell them that the next version will solve all their issues, they’ll never believe it. And they’d be right not to. It’s very rare in life that we stumble across the perfect way to do anything. And the odds of that happening with an Enterprise chatbot (aka Digital Assistant) created from scratch using a tool like Microsoft LUIS or IBM Watson, are slim to none.  

No matter how many assurances you get from the vendor, if it hasn’t already been built, you have no reason to believe it will be built properly.

So, ask to see a demonstration of a fully formed chatbot with advanced Enterprise skills. Ask as many people as you can. But definitely also ask us. We’ll be happy to oblige. 

To learn more, just contact us below.

Contact Us

Higher Education is going through a major shift as institutions attempt to align to changing environments and student demands. Today’s student is looking for options outside of, or coupled with, a traditional four-year degree. The desire for life-long education and demand for lower-cost options has many traditional schools turning to online offerings. It has been a shift met with resistance by administrations, but minds are starting to change. Education, which was once believed to only be effective in a classroom, is now available in an online medium. Not to mention 24 hours a day, seven days a week, all over the world.

“This year 73% of schools made a decision to offer online programs based on growth potential for overall enrollment”

– 2018 Online Education Trends Report

Online education can open many doors for students who otherwise would never be able to attend a certain school. For example, Stanford this year rolled out 150 courses online. Students who never dreamt of attending Stanford, due to cost or distance, can now do so. The accessibility to high quality education is changing before our eyes. 

This wave was never more apparent than when Purdue University purchased for-profit Kaplan University in 2017 and turned it into Purdue Global with the aim of serving post-secondary education. A traditional, nonprofit land-grant university is shifting to meet the demands of present-day students. Being mentioned in the same press release as a for-profit education company was unfathomable, until it wasn’t.

We aren’t talking about only adult students here either. The trends point to traditional-age students (18-24 years old) turning toward online education as well.

“Students aged 18-24 saw the greatest year-over-year increase in online education enrollment at 115%”

– 2018 Online Education Trends Report

Unintended Consequences

However, no good deed comes without unintended consequences. Learning from a remote location can be a challenge. There is no building to walk into. There is no teaching assistant (TA) to sit down with. There is no residence hall advisor to check in on you. And rarely any other students to remind you of deadlines and schedules. 

Being online means you need online self-service mechanisms for support and help. A student’s success depends on it.

Many institutions have attempted to use their traditional telephone-based student support services. However, phone support is slow, it produces inconsistent answers, it isn’t available 24×7, and is mostly an experience students dread.

Also, help desks are not cheap to operate. Each call can generate a cost of at least $5 and oftentimes much more. 

Time has always been a student’s scarcest commodity. Whether it is balancing a full course load, or juggling work and family, students simply can’t waste time on hold waiting for someone to answer a basic question like: when am I allowed to register?

If perceptions weren’t bad enough about phone support, imagine how your worldwide student body feels about that support only speaking English. Students learning English as a second language require a special approach.

A 2018 study showed how many international students struggle in their relationships, with their finances, feelings of isolation and belonging, all of which affect their educational experience. For example, regarding isolation, only 35% of respondents reported feeling a part of the university.

– The International Journal of Higher Education Research

Digital Assistant to the Rescue!

Able to help thousands of students in a single bound!

Digital Assistants are a type of enterprise chatbot that not only can answer questions and provide support, but they can assist you in completing tasks such as changing your email address on file with the registrar, or even signing you up for a class. They know who you are and can personalize their service to you. Today’s digital assistant is not your parent’s MovieFone.

Your digital assistant can be that self-service help, 24 hours a day, 7 days a week. Whether it is questions for the registrar, financial aid office or student services, the digital assistant provides consistent answers at speed. 

Average response time from the digital assistant is usually sub-second. With student demand for instant answers and their dislike for waiting on hold, chatbots are not only an essential tool, but oftentimes the preferred communication method of students.

Supporting students in their native language can bring a comfort to someone reaching out for help. Digital assistants can provide that multi-lingual help.

Speak all of your students’ languages

Our chatbot can speak over 100 languages automatically. That is a level of service that would otherwise be very expensive, and almost impossible, to provide with traditional support centers.

No conversation about student systems can be had without considering the impact on student success. Digital assistants open up all sorts of ways to help the student along their academic journey. While we have touched upon support functions already, sometimes students need a more proactive nudge.

The digital assistant knows, for example, when a student has an assignment coming up or an advising appointment so it can make sure the student is reminded. If the student has a hold placed on their account which can interfere with graduation, the digital assistant can pop up and help them resolve the issue. 

You may be wondering… what about complicated problems with nuanced solutions or those that really need the personal touch? Chatbots don’t replace all personal interactions. The chatbot can sense when the student is stuck and transfer them to a live person or have someone such as their advisor follow up with a phone call or personal visit. 

An even better consequence of deploying a digital assistant is that it frees up time from key roles like academic advisors who no longer need to answer common, mundane questions. They can refocus on activities that help students be successful. Plus, it also negates the need to increase help desk staff to support an increasingly online student body.

And, of course, digital assistants don’t go to sleep, and they never call in sick.

Because digital assistants are extremely cheap to run, they are the key to keeping operational costs down, while student enrollment rises. 

AI is driving, and supporting, a new era of technological disruption

When you see the big names such as Purdue University, Stanford, MIT and Harvard getting into online education, you know the winds of change are blowing. And with those winds, we can’t lose sight of supporting our students in these new models and ensuring they are successful. Digital assistants can address many of the real problems presented by changing models. Consider that in the next decade, the incoming class of students will have never known life without Alexa or Siri or Google Assistant. This group will expect AI to be in place to support their needs.

This wave of change and the promise of cost savings, expanded enrollment, and better student success are compelling enough that CIOs in higher education surveyed by Gartner designated artificial intelligence as the top game changer for 2019.

Getting started with a digital assistant for your institution is as easy as our 12-week pilot program which has no long-term commitments. Contact us to learn more and see a demo for yourself. 

Contact Us

There’s a very old joke in the software industry:

Question:
What’s the difference between a car salesman and a software salesman?
Answer:
A car salesman knows when he is lying.

Unfortunately, there’s a huge amount of truth to this joke, and the explanation for why is simple. Software is pretty complicated, and cars are pretty straightforward. With a car you can generally read up on everything you need to know in a matter of hours (enough to sell it anyway), while software can sometimes take months to really understand. Then factor in the myriad ways it can be used, and what business requirements people may be asking of it, and even the best salespeople can be stumped at how to answer a question.

Oftentimes they really do believe they know the answer. And that’s the source of the joke.

And this leads us to the new era of software: Artificial Intelligence (AI). And, of course, this means a whole bunch more woefully inadequate answers to very reasonable questions.

Customer:
How does the chatbot know what to do when we ask it a question?
Sales Person:
It learns using AI.
Customer:
But how?
Sales Person:
It just does. It’s called deep learning.
Customer:
But what if it makes mistakes?
Sales Person:
It learns from its mistakes.
Customer:
But how?
Sales Person:
It uses deep learning.

Obviously, this isn’t how any of this works at all. But given the mystery that shrouds all things AI, it’s not a surprise that these types of conversations take place.

So, to add transparency to what will be a very challenging subject to evaluate for many organizations, we’ve created a list of five facts that are critical to aid the understanding and implementation of a chatbot solution in the Enterprise.

Fact 1: AI is like a garden, it needs seeding & cultivation

Figure 1: Automation of nature and nurture

Out of the box, all chatbot engines (.ai) come with a general understanding of language and grammatical constructs. They also have a limited understanding of entities. For example, I can ask a chatbot to do something “next Tuesday” and it will know what that date is, because it has knowledge of an entity that defines what a date can be. It also understands “today” and “tomorrow”. It may also understand people’s names and the cities in a country: “Is the Chicago office open tomorrow?”

What chatbots generally don’t know out of the box are the things particular to your domain. They don’t understand HR jargon, or campus terminology. They don’t know which departments you have, or job titles.  Terms like “leave of absence”, “expense reimbursement” and “travel auth” aren’t considered entities that have specific meanings, in the way that “next Friday” or “tomorrow” do.

So, it’s important to “seed” the AI on day one of your implementation. In many ways it’s just like how a farmer grows a field. The farmer doesn’t just hope that nature will turn the field into a spectacular crop of wheat. Nature can only do so much; the farmer needs to do his or her bit also. The soil must be prepared, the seed planted, and each day it needs to be inspected and tended to ensure growth is according to plan.

For AI, it’s critical to plant the seed of domain knowledge on day one, and then monitor usage to identify where the chatbot needs to be expanded, and where it needs additional training and seeding. If the chatbot is HR-focused, then it needs an entire vocabulary injected and trained, in preparation for usage by actual humans.

If your chatbot doesn’t understand the difference between an adoption reimbursement program, and an adoption leave program, it will be destined to disappoint.
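
What “seeding” can look like in practice is sketched below: custom entities and synonyms declared up front so that HR jargon is recognized the way built-in dates are. The format is an illustrative assumption, not any specific vendor’s schema:

```python
# Illustrative "seed" of domain entities and synonyms for an HR-focused chatbot.
DOMAIN_ENTITIES = {
    "LEAVE_TYPE": {
        "leave of absence": ["loa", "unpaid leave"],
        "adoption leave": ["adoption time off"],
        "sick leave": ["sick time", "sick days"],
    },
    "EXPENSE_TERM": {
        "expense reimbursement": ["expense claim", "reimbursements"],
        "travel authorization": ["travel auth", "travel approval"],
    },
}

def detect_domain_entities(utterance):
    """Return the canonical domain terms mentioned in an utterance."""
    text = utterance.lower()
    found = []
    for entity_type, values in DOMAIN_ENTITIES.items():
        for canonical, synonyms in values.items():
            if canonical in text or any(s in text for s in synonyms):
                found.append((entity_type, canonical))
    return found

print(detect_domain_entities("How do I submit a travel auth for next Tuesday?"))
# -> [('EXPENSE_TERM', 'travel authorization')]
```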

Fact 2: It’s not deep learning and big data that will be the key to success, it’s smart algorithms and neural networks

Last year we wrote a blog on AlphaGo Zero, and talked about how it wasn’t deep learning that made it so smart. The same thing is true of Enterprise chatbot implementations. Deep learning is a very powerful tool, but it isn’t the answer to everything. Neural networks and smart algorithms are the real engine behind a successful chatbot implementation.

Figure 2: Monte Carlo Tree Search in AlphaGo Zero, guided by neural networks

The lesson AlphaGo Zero taught the world was that AI is at its most powerful when it can map out its own neural network, while also readjusting decision points based on actual outcomes. This is why creating an incredible Chess or Go master is much easier than creating AI that cures cancer. 

In the Enterprise chatbot world, sophisticated decision networks don’t just create themselves, and deep learning doesn’t build them. They need to exist on day one, and they need to have been pre-built with domain knowledge and stacked with business rules that determine flows.

In the same way AlphaGo Zero needed to be aware of the rules of Go, your Enterprise chatbot needs to be aware of the best practice rules of the Enterprise. Only then can it be trusted by your employees, managers, students and faculty members.

In the Enterprise chatbot world, this equates to massively complex and sophisticated dialog flows that come pre-built and configurable for your business requirements. And that have over a decade of domain knowledge built into them.
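
To illustrate the idea (and only the idea; this isn’t any vendor’s actual format), a pre-built, configurable dialog flow can be thought of as a small state machine whose transitions encode business rules, defined as data so it can be configured rather than re-coded. The flow and the eligibility rule below are made-up placeholders:

```python
# Made-up promotion flow: states, prompts, and a business rule, all defined as data.
PROMOTION_FLOW = {
    "start": {"ask": "Which employee would you like to promote?",
              "next": "check_eligibility"},
    "check_eligibility": {"rule": lambda ctx: ctx["months_in_role"] >= 12,  # placeholder rule
                          "pass": "collect_new_grade", "fail": "not_eligible"},
    "collect_new_grade": {"ask": "What is the new job grade?", "next": "confirm"},
    "confirm": {"ask": "Submit this promotion for approval?", "next": "done"},
    "not_eligible": {"say": "This promotion needs additional approval."},
    "done": {"say": "Promotion submitted."},
}

def next_state(flow, state, context):
    """Apply the business rule (if any) to decide where the conversation goes next."""
    node = flow[state]
    if "rule" in node:
        return node["pass"] if node["rule"](context) else node["fail"]
    return node.get("next")

print(next_state(PROMOTION_FLOW, "check_eligibility", {"months_in_role": 6}))
# -> 'not_eligible'
```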

Fact 3: AI is not a data warehouse

Figure 3: AI is not a data warehouse

Those who remember IBM’s Watson winning Jeopardy may be disappointed to know that what they were really watching was a massive data warehouse stored in memory, with a search feature that had been built manually to meet the needs of one game.

There are a lot of reasons why putting sensitive data into a proprietary AI engine in the Cloud isn’t a good idea. Security, data privacy, dual maintenance, and conversion effort are just some of them. Your data belongs where it is right now.

It’s the obligation of the AI vendor to be able to plug into your data, not the other way round. As always in life, the tail should not be wagging the dog.

Of course, there are reasons why many vendors require this: laziness and lack of knowledge top the list.

Creating sophisticated data adapters that are a broker between your data and the AI isn’t easy and takes lots of domain knowledge. We know this because we’ve spent ten years doing it.

But with many vendors looking to jump into a market they have no knowledge of, shortcuts have been taken. That doesn’t change the fact that your data needs to be protected, and chatbot implementations shouldn’t be turned into massive integration projects.

Fact 4: A concierge chatbot is a requirement, not a nice to have

Figure 4: Avoid chatbot confusion

Does anyone remember what a link farm is? Yes, they were awful. Quite possibly the worst manifestation of web-based technologies. And the problem was obvious, all those link farms did was sow confusion and frustration with the poor users who had to deal with them.

It’s 2019 now, and we face a similar conundrum. One chatbot that knows everything. Or hundreds of chatbots that know bits of information, but no way for the user to know which ones know what. Imagine being handed 100 help desk numbers and being asked to guess which one was the right one based on your area of need.

Fortunately, the problem has a solution. A concierge chatbot can be used as the focal point for all questions. All the concierge is responsible for is knowing which chatbot knows the answer to which question, and then seamlessly managing the handoff in the conversation, such that to the human it feels like one conversation with one chatbot.

This way the humans only ever need to start a conversation with the concierge. The ultimate one-stop bot.
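
A toy sketch of the concierge pattern is shown below: one front-door bot that routes each utterance to whichever specialist bot scores it highest, so the user only ever talks to one assistant. The bot names and the crude keyword scoring are assumptions for illustration:

```python
# Illustrative concierge: route each utterance to the best-scoring specialist bot.
class SpecialistBot:
    def __init__(self, name, keywords):
        self.name = name
        self.keywords = keywords

    def can_handle(self, utterance):
        """Crude routing score: fraction of this bot's keywords present."""
        text = utterance.lower()
        hits = sum(1 for k in self.keywords if k in text)
        return hits / max(len(self.keywords), 1)

    def respond(self, utterance):
        return f"[{self.name}] handling: {utterance}"


class Concierge:
    def __init__(self, bots):
        self.bots = bots

    def handle(self, utterance):
        best = max(self.bots, key=lambda b: b.can_handle(utterance))
        if best.can_handle(utterance) == 0:
            return "I'm not sure which team handles that. Could you tell me more?"
        return best.respond(utterance)


concierge = Concierge([
    SpecialistBot("HR", ["vacation", "benefits", "payroll", "sick leave"]),
    SpecialistBot("IT", ["password", "laptop", "vpn", "email"]),
    SpecialistBot("Registrar", ["enroll", "transcript", "register", "class"]),
])
print(concierge.handle("How do I reset my vpn password?"))  # routed to the IT bot
```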

Having worked extensively with Oracle’s chatbot framework, we not only can recommend it very highly, but we can also attest to its concierge capabilities. So, while Oracle is rolling out lots of small, function-focused bots (they call them skills), all of these bots/skills can be managed by one concierge chatbot automatically. Meaning you can have one concierge that includes Oracle-delivered bots, IntraSee-delivered bots, and also custom bots created by you.

Oracle uses the term “Oracle Digital Assistant” for this: concierge chatbot capability under one technology stack.

Fact 5: AI that requires massive human intervention, and coding development, isn’t AI

Figure 5: We like Joe, but Joe shouldn’t be creating Enterprise chatbots

AI that requires 95% of all its functionality to be created by the human hand isn’t really AI. It’s cool, but it’s not AI. It’s also not supportable or maintainable. The weakest link will be the human hand that pieced it all together. And if that hand has no domain knowledge, it won’t just be buggy, it will be stupid.

The real key to AI is not just automation of the task a chatbot can complete. It’s automation of the creation of the chatbot itself.

This blog has been about the facts as we see them at IntraSee. So let’s look at the facts of what a sample chatbot pilot generated. The background: 200 FAQs, 16 view-data intents, 6 transactions (promotions, transfers, etc.), and 10 reports (yes, a chatbot can run reports). Here are the technical numbers:

  • 24 Custom Entities
  • 690 Custom Component Invocations
  • 2,696 System Component Invocations
  • 3,386 States
  • 101,609 Transitions between States

We firmly believe that automation of creation is the key to AI success. Manually coding over 100,000 state transitions creates inherent instability, and leads to what we would call a Frankenbot.

At IntraSee we have automated the creation of a chatbot, such that with one push button we can generate hundreds of thousands, even millions, of chatbot states, transitions, invocations, and entities. We do this for multiple reasons:

  • We remove human error from the equation.
  • We simplify the management and maintenance, such that a business user can easily deploy any changes.
  • We massively shorten implementation times down to just a few weeks.
  • We can deliver more in four weeks than would normally be possible in over one year.
  • We can deploy mass changes, risk free, to a chatbot in a matter of minutes.

Please contact us if you’d like to learn more…

Contact Us