A few years ago we took lessons learned from Jurassic Park and applied them to the world of usability testing. Since then the world has changed, but the sage advice found in the movie has not.

We now live in the era of artificial intelligence (AI), and many organizations are grappling with the world of build vs. buy.

The question being asked is: “Do we build an Enterprise chatbot (aka Digital Assistant) from scratch using something like Microsoft LUIS, or do we buy something already built?”

As we all discovered in the movie, good intentions can quickly turn into a series of unexpected outcomes. And the main lesson learned (if indeed it needed to be learned) is that as smart as we think we are, there’s no substitute for having a coherent and informed plan. And for being able to learn from the failures of the past.

So, using Jurassic Park as an example of a build that totally went wrong, we thought we’d look at the lessons learned from the terrific movie and apply them to the world of chatbot implementations. You’ll be surprised at how applicable it is. 

1. Dr. Ian Malcolm: Oh, yeah. Oooh, ahhh, that’s how it always starts. Then later there’s running and um, screaming.

All projects begin with general excitement and great anticipation. And none more so than chatbot projects. Kickoff and initial design meetings tend to be stress-free and filled with the hope that something great will happen. The prospect of replicating a human brain that can understand your Enterprise sounds like lots of fun. What could possibly go wrong? And, of course, the pizza-ordering example that comes with the LUIS tutorial makes it all look like a walk in the park. But then, six months later, it becomes apparent that ordering pizza isn’t a great use case. And that the more you build, the more complicated it all becomes. And that’s when the screaming starts.

And this is at the core of Microsoft’s problem. They are not an Enterprise application software provider in the way that someone like Oracle is. What they are good at is building tools that let you build applications. They just don’t build those applications for you. Again, unlike Oracle.

With Microsoft LUIS, everything is a build, and in the Enterprise software world, the smart people are buying.  

2.  Dr. Ian Malcolm: Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.

Yes, it is possible to build an Enterprise chatbot using a tool like Microsoft LUIS. But only in the same way it’s possible that an army of monkeys, given enough time, could recreate the works of Shakespeare. The real question isn’t whether you could roll up your sleeves and try your luck with LUIS; it’s whether you should. And the answer is a resounding no.

Gartner has already spoken on this subject and advised that IT needs to stop trying to reinvent the wheel and instead purchase things in the Cloud that someone else has already built.

3. Dr. Ian Malcolm: Gee, the lack of humility before nature that’s being displayed here, uh… staggers me. 

Microsoft’s belief is that they just need to hand LUIS to their clients, and they’ll build massively complex neural systems that understand their Enterprise systems. This shows a complete lack of understanding of, or appreciation for, the task at hand.

And this naiveté is fundamentally grounded in the fact that they, like IBM with Watson, don’t sell the software that organizations want their chatbots to integrate with.

Unlike, for example, Oracle, who not only sell chatbot technology (Oracle Digital Assistant), but are also using that technology to build their own Enterprise skills. Unless you drink your own champagne, it’s impossible to know if it’s any good or not.

4. Nick Van Owen: You seem like you have a shred of common sense, what the hell are you doing here?

At some point in every project, someone needs to apply common sense and say, “enough is enough”. If after a few months you’re not seeing something worth sharing with your organization, then that may be the time to try something “off the shelf”. Typically, that point is 3-5 months. By that time, you’ve probably figured out that brain surgery isn’t what your IT group is cut out for, and that you need to look for something already built. So, keep that in mind.

If somebody keeps telling you that you’ll see the value in year two, then that’s a sign you should move on.

5. Dr. Ian Malcolm: Taking dinosaurs off this island is the worst idea in the long, sad history of bad ideas. And I’m gonna be there when you learn that.

As Gartner once said, “building a bad chatbot is easy, building a good one is hard”. If you spend over a year building a bad one, and then roll it out to your organization, that’s going to be a tough lesson to learn – for the team that built it, and the organization that has to live with it.

Trying to build an Enterprise chatbot from scratch using Microsoft LUIS, or IBM Watson, will go down in IT history as a bad idea on many levels. 

Let us count the ways:

  • Massive waste of resources and time
  • Poor first impression for the organization
  • Opportunity cost of lost time that could have been better used implementing a “proven” solution

6. John Hammond: Don’t worry, I’m not making the same mistakes again.
Dr. Ian Malcolm: No, you’re making all new ones.

The history of IT is littered with projects that went over budget and under-delivered. And that’s just looking at web-based implementations over the past 20 years. Given such a poor track record with a very simple-to-implement technology like HTML, imagine how badly projects using a conversational UI could go awry. Yes, you likely won’t be making the same mistakes. But there are plenty of new ones to make if you’ve never done this before.

And using Microsoft LUIS, or IBM Watson, will allow you to make those mistakes. Ultimately, they are just coding tools, and don’t have the smarts already built into them to ensure you don’t go down the wrong path. 

What you want is pre-delivered skills, not a Swiss Army knife. Skills that already understand your Enterprise needs and rules. Skills that are plug-configure-play.

7. Dr. Ian Malcolm: God help us, we’re in the hands of engineers.

The last place you want to be, while implementing a chatbot solution, is in the hands of engineers. Microsoft LUIS forces you down the path of focusing on how to engineer code, and away from the path of “how should this work for the user”.

In this new world of AI, the focus now isn’t on people learning how to interact with machines, it’s all about machines learning how to interact with people. And to make that happen you need all your engineering issues resolved on day one. Not day 1000. 

8. Dr. Ellie Sattler: [after finding Malcolm with a broken leg] Should we chance moving him?
Dr. Ian Malcolm: [the Tyrannosaur roars nearby] Please, chance it.

An illusory comfort zone is not the place to be. Whether it’s comfort with your IT group, or comfort with a vendor you’ve happily worked with for years. When you hear the T. rex roar, it’s time to move to a better place. A safer place. Don’t wed yourself to a toolset or a preferred vendor; take a chance and reach out to the world. Speak to people you’ve never spoken to before.

Ask for demos from many vendors in the market. Ask difficult questions. Enter into an interview process to find someone who has already done what you want to do. It may seem like taking a chance, but it’s better than certain failure. 

9. Dr. Alan Grant: The world has just changed so radically, and we’re all running to catch up. I don’t want to jump to any conclusions, but look… Dinosaurs and man, two species separated by 65 million years of evolution have just been suddenly thrown back into the mix together. How can we possibly have the slightest idea what to expect? 

The world has suddenly changed radically. Many people are running to catch up, while others, like Workday, are simply ignoring the change and hoping it won’t affect them. But machines interacting with humans, as if they were human, is not a change that is going away.

At the same time there is a complexity to all this. And ordering pizza doesn’t really describe that complexity. 

Meanwhile both Oracle and IntraSee have been building these skills over the past two years, on the same technology platform (ODA), and have learned valuable lessons during that period. Having already done this, we now know what to expect when adding conversational skills to complex Enterprise systems (both Cloud and on-premise). 

10. Dr. Ian Malcolm: I’ll be right back. I give you my word.
Kelly Malcolm: [pounds her fists on the railing] But you *never* keep your word!

The worst thing about implementing a failed chatbot solution is that you will train people to never trust you. No matter how many times you tell them that the next version will solve all their issues, they’ll never believe it. And they’d be right not to. It’s very rare in life that we stumble across the perfect way to do anything. And the odds of that happening with an Enterprise chatbot (aka Digital Assistant) created from scratch using a tool like Microsoft LUIS or IBM Watson, are slim to none.  

No matter how many assurances you get from the vendor, if it hasn’t already been built, you have no reason to believe it will be built properly.

So, ask to see a demonstration of a fully formed chatbot with advanced Enterprise skills. Ask as many people as you can. But definitely also ask us. We’ll be happy to oblige. 

To learn more, just contact us below.

Contact Us

Higher Education is going through a major shift as institutions attempt to align to changing environments and student demands. Today’s student is looking for options outside of, or coupled with, a traditional four-year degree. The desire for lifelong education and the demand for lower-cost options have many traditional schools turning to online offerings. It has been a shift met with resistance by administrations, but minds are starting to change. Education, which was once believed to be effective only in a classroom, is now available in an online medium. Not to mention 24 hours a day, seven days a week, all over the world.

“This year 73% of schools made a decision to offer online programs based on growth potential for overall enrollment”

– 2018 Online Education Trends Report

Online education can open many doors for students who otherwise would never be able to attend a certain school. For example, Stanford this year rolled out 150 courses online. Students who never dreamt of attending Stanford, due to cost or distance, can now do so. Access to high-quality education is changing before our eyes.

This wave was never more apparent than when Purdue University purchased for-profit Kaplan University in 2017 and turned it into Purdue Global with the aim of serving post-secondary education. A traditional, nonprofit land-grant university is shifting to meet the demands of present-day students. Being mentioned in the same press release as a for-profit education company was unfathomable, until it wasn’t.

We aren’t talking about only adult students here either. The trends point to traditional-aged students (18-24 years old) turning toward online education as well.

“Students aged 18-24 saw the greatest year-over-year increase in online education enrollment at 115%”

– 2018 Online Education Trends Report

Unintended Consequences

However, no good deed comes without unintended consequences. Learning from a remote location can be a challenge. There is no building to walk into. There is no teaching assistant (TA) to sit down with. There is no residence hall advisor to check in on you. And rarely any other students to remind you of deadlines and schedules. 

Being online means you need online self-service mechanisms for support and help. A student’s success depends on it.

Many institutions have attempted to use their traditional telephone-based student support services. However, phone support is slow, it produces inconsistent answers, it isn’t available 24×7, and is mostly an experience students dread.

Also, help desks are not cheap to operate. Each call can generate a cost of at least $5 and oftentimes much more. 

Time has always been a student’s scarcest commodity. Whether it is balancing a full course load, or juggling work and family, students simply can’t waste time on hold waiting for someone to answer a basic question like: when am I allowed to register?

If perceptions about phone support weren’t bad enough, imagine how your worldwide student body feels about that support only speaking English. English-as-a-second-language learners require a special approach.

A 2018 study showed how many international students struggle with their relationships, their finances, and feelings of isolation and belonging, all of which affect their educational experience. For example, regarding isolation, only 35% of respondents reported feeling a part of the university.

– The International Journal of Higher Education Research

Digital Assistant to the Rescue!

Figure: Superman chatbot with a graduation cap. Able to help thousands of students in a single bound!

Digital Assistants are a type of enterprise chatbot that not only can answer questions and provide support, but can also assist you in completing tasks such as changing your email address on file with the registrar, or even signing you up for a class. They know who you are and can personalize their service to you. Today’s digital assistant is not your parent’s MovieFone.

Your digital assistant can be that self-service help, 24 hours a day, 7 days a week. Whether it is questions for the registrar, financial aid office or student services, the digital assistant provides consistent answers at speed. 

Average response time from the digital assistant is usually sub-second. With student demand for instant answers and their dislike for waiting on hold, chatbots are not only an essential tool, but oftentimes the preferred communication method of students.

Supporting students in their native language can bring a comfort to someone reaching out for help. Digital assistants can provide that multi-lingual help.

Figure: Speak all of your students’ languages

Our chatbot can speak over 100 languages automatically. That is a level of service that would otherwise be very expensive, and almost impossible, to provide with traditional support centers.

No conversation about student systems can be had without considering the impact on student success. Digital assistants open up all sorts of ways to help the student along their academic journey. While we have touched upon support functions already, sometimes students need a more proactive nudge.

The digital assistant knows, for example, when a student has an assignment coming up or an advising appointment so it can make sure the student is reminded. If the student has a hold placed on their account which can interfere with graduation, the digital assistant can pop up and help them resolve the issue. 
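
As a rough illustration only (the record fields and the two-day reminder window below are invented, not how any particular student system models this), a proactive nudge is essentially a small rule run against the student’s record:

```python
# Hypothetical sketch of proactive nudges: simple rules scanned against a student record.
# Field names and the two-day reminder window are invented for illustration.

from datetime import date, timedelta

def nudges_for(student, today):
    """Return the messages the assistant should proactively send this student today."""
    messages = []
    for assignment in student.get("assignments", []):
        if today <= assignment["due"] <= today + timedelta(days=2):
            messages.append(f"Reminder: '{assignment['name']}' is due {assignment['due']}.")
    if student.get("registration_hold"):
        messages.append("There is a hold on your account that could delay graduation. Want help resolving it?")
    return messages

student = {
    "assignments": [{"name": "ECON 101 essay", "due": date(2019, 4, 12)}],
    "registration_hold": True,
}
print(nudges_for(student, date(2019, 4, 11)))  # both nudges fire for this example record
```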

You may be wondering… what about complicated problems with nuanced solutions or those that really need the personal touch? Chatbots don’t replace all personal interactions. The chatbot can sense when the student is stuck and transfer them to a live person or have someone such as their advisor follow up with a phone call or personal visit. 

An even better consequence of deploying a digital assistant is that it frees up time from key roles like academic advisors who no longer need to answer common, mundane questions. They can refocus on activities that help students be successful. Plus, it also negates the need to increase help desk staff to support an increasingly online student body.

And, of course, digital assistants don’t go to sleep, and never call in sick.

Because digital assistants are extremely cheap to run, they are the key to keeping operational costs down, while student enrollment rises. 

AI is driving, and supporting, a new era of technological disruption

When you see the big names such as Purdue University, Stanford, MIT and Harvard getting into online education, you know the winds of change are blowing. And with those winds, we can’t lose sight of supporting our students in these new models and ensuring they are successful. Digital assistants can address many of the real problems presented by changing models. Consider that in the next decade, the incoming class of students will have never known life without Alexa or Siri or Google Assistant. This group will expect AI to be in place to support their needs.

This wave of change and the promise of cost savings, expanded enrollment, and better student success are compelling enough that CIOs in higher education surveyed by Gartner designated artificial intelligence as the top game changer for 2019.

Getting started with a digital assistant for your institution is as easy as our 12-week pilot program which has no long-term commitments. Contact us to learn more and see a demo for yourself. 

Contact Us

There’s a very old joke in the software industry:

Question:
What’s the difference between a car salesman and a software salesman?
Answer:
A car salesman knows when he is lying.

Unfortunately, there’s a huge amount of truth to this joke, and the explanation is simple. Software is pretty complicated, and cars are pretty straightforward. With a car you can generally read up on everything you need to know in a matter of hours (enough to sell it, anyway). Software, on the other hand, can take months to really understand. Then factor in the myriad ways it can be used, and what business requirements people may be asking of it, and even the best salespeople can be stumped at how to answer a question.

Oftentimes they really do believe they know the answer. And that’s the source of the joke.

And this leads us to the new era of software: Artificial Intelligence (AI). And, of course, this means a whole bunch more woefully inadequate answers to very reasonable questions.

Customer:
How does the chatbot know what to do when we ask it a question?
Sales Person:
It learns using AI.
Customer:
But how?
Sales Person:
It just does. It’s called deep learning.
Customer:
But what if it makes mistakes?
Sales Person:
It learns from its mistakes.
Customer:
But how?
Sales Person:
It uses deep learning.

Obviously, this isn’t how any of this works at all. But given the mystery that shrouds all things AI, it’s not a surprise that these types of conversations take place.

So, to add transparency to what will be a very challenging subject for many organizations to evaluate, we’ve created a list of five facts that are critical to understanding and implementing a chatbot solution in the Enterprise.

Fact 1: AI is like a garden; it needs seeding & cultivation


Figure 1: Automation of nature and nurture

Out of the box, all chatbot engines (.ai) come with a general understanding of language and grammatical constructs. They also have a limited understanding of entities. For example, I can ask a chatbot to do something “next Tuesday” and it will know what that date is, because it has knowledge of an entity that defines what a date can be. It also understands “today” and “tomorrow”. It may also understand people’s names and cities in a country. “Is the Chicago office open tomorrow?”

What chatbots generally don’t know out of the box are the things particular to your domain. They don’t understand HR jargon, or campus terminology. They don’t know which departments you have, or job titles.  Terms like “leave of absence”, “expense reimbursement” and “travel auth” aren’t considered entities that have specific meanings, in the way that “next Friday” or “tomorrow” do.
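
To make that concrete, here is a minimal, vendor-neutral sketch of what “seeding” domain entities might look like. The entity names, synonyms, and the deliberately naive matcher are invented for illustration; they are not the format of LUIS, ODA, or any specific product.

```python
# Hypothetical sketch: built-in system entities vs. the custom domain entities you must seed.
# Names and synonyms are invented; real engines have their own entity formats and training APIs.

SYSTEM_ENTITIES = {"DATE", "TIME", "NUMBER"}  # typically understood out of the box ("next Tuesday")

CUSTOM_ENTITIES = {
    "LEAVE_TYPE": {
        "leave of absence": ["unpaid leave", "personal leave"],
        "adoption leave": ["adoption time off"],
    },
    "EXPENSE_TERM": {
        "expense reimbursement": ["expense report", "reimburse my expenses"],
        "travel auth": ["travel authorization", "travel approval"],
    },
}

def build_synonym_index(custom_entities):
    """Map every seeded synonym back to its canonical domain term and entity type."""
    index = {}
    for entity, values in custom_entities.items():
        for canonical, synonyms in values.items():
            for phrase in [canonical] + synonyms:
                index[phrase.lower()] = (entity, canonical)
    return index

def tag_utterance(utterance, index):
    """Deliberately naive matcher: find seeded domain terms inside a user utterance."""
    return [match for phrase, match in index.items() if phrase in utterance.lower()]

index = build_synonym_index(CUSTOM_ENTITIES)
print(tag_utterance("How do I submit a travel auth for next Tuesday?", index))
# -> [('EXPENSE_TERM', 'travel auth')]; "next Tuesday" would resolve via the built-in DATE entity
```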

So, it’s important to “seed” the AI on day one of your implementation. In many ways it’s just like how a farmer grows a field. The farmer doesn’t just hope that nature will turn the field into a spectacular crop of wheat. Nature can only do so much; the farmer needs to do his/her bit also. The soil must be prepared, the seed planted, and each day the field needs to be inspected and tended to ensure growth is according to plan.

For AI, it’s critical to plant the seed of domain knowledge on day one. And then monitor usage to identify the areas where it needs to be expanded, and the specific areas where it needs additional training and seeding. If the chatbot is HR-focused, then it needs an entire vocabulary injected and trained, in preparation for usage by actual humans.

If your chatbot doesn’t understand the difference between an adoption reimbursement program, and an adoption leave program, it will be destined to disappoint.

Fact 2: It’s not deep learning and big data that will be the key to success, it’s smart algorithms and neural networks

Last year we wrote a blog on AlphaGo Zero, and talked about how it wasn’t deep learning that made it so smart. The same thing is true of Enterprise chatbot implementations. Deep learning is a very powerful tool, but it isn’t the answer to everything. Neural networks and smart algorithms are the real engine behind a successful chatbot implementation.

Figure 2: Monte Carlo Tree Search in AlphaGo Zero, guided by neural networks

The lesson AlphaGo Zero taught the world was that AI is at its most powerful when it can map out its own neural network, while also readjusting decision points based on actual outcomes. This is why creating an incredible Chess or Go master is much easier than creating AI that cures cancer. 

In the Enterprise chatbot world, sophisticated decision networks don’t just create themselves, and deep learning doesn’t build them. They need to exist on day one, and they need to have been pre-built with domain knowledge and stacked with business rules that determine flows.

In the same way AlphaGo Zero needed to be aware of the rules of Go, your Enterprise chatbot needs to be aware of the best practice rules of the Enterprise. Only then can it be trusted by your employees, managers, students and faculty members.

In the Enterprise chatbot world, this equates to massively complex and sophisticated dialog flows that come pre-built and configurable for your business requirements. And that have over a decade of domain knowledge built into them.
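
Purely as a hedged illustration: a pre-built flow is, at its simplest, a set of named states, transitions, and business rules that exist before the first user ever types a word. The sketch below is a made-up, minimal representation of that idea, not the flow syntax of Oracle Digital Assistant or any other platform.

```python
# Hypothetical, minimal representation of a pre-built dialog flow: named states,
# transitions, and a business rule baked in on day one. Not any vendor's flow syntax.

FLOW = {
    "route_intent": {"on": {"request_leave": "check_eligibility",
                            "view_balance": "show_balance"}},
    "check_eligibility": {"rule": lambda ctx: ctx["tenure_months"] >= 12,  # business rule
                          "pass": "collect_dates",
                          "fail": "explain_policy"},
    "collect_dates": {"prompt": "What dates would you like to request?"},
    "explain_policy": {"prompt": "Leave requires 12 months of service. Here is the policy..."},
    "show_balance": {"prompt": "Here is your current leave balance."},
}

def next_state(current, intent=None, context=None):
    """Walk the flow: route on the resolved intent, or evaluate the state's business rule."""
    state = FLOW[current]
    if "on" in state:
        return state["on"].get(intent, "route_intent")
    if "rule" in state:
        return state["pass"] if state["rule"](context or {}) else state["fail"]
    return current  # terminal prompt states simply render their prompt

print(next_state("route_intent", intent="request_leave"))             # -> check_eligibility
print(next_state("check_eligibility", context={"tenure_months": 6}))  # -> explain_policy
```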

Fact 3: AI is not a data warehouse


Figure 3: AI is not a data warehouse

Those who remember IBM’s Watson winning Jeopardy may be disappointed to know that what they were really watching was a massive data warehouse stored in memory, with a search feature that had been built manually in order to meet the needs of one game.

There are a lot of reasons why putting sensitive data into a proprietary AI engine in the Cloud isn’t a good idea. Security, data privacy, dual maintenance, and conversion effort are just some of them. Your data belongs where it is right now.

It’s the obligation of the AI vendor to be able to plug into your data, not the other way round. As always in life, the tail should not be wagging the dog.

Of course, there are reasons why many vendors require this: laziness and lack of knowledge top the list.

Creating sophisticated data adapters that are a broker between your data and the AI isn’t easy and takes lots of domain knowledge. We know this because we’ve spent ten years doing it.

But with many vendors looking to jump into a market they have no knowledge of, shortcuts have been taken. That doesn’t change the fact that your data needs to be protected, and that chatbot implementations shouldn’t be turned into massive integration projects.
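
To illustrate the adapter idea (and only as a sketch; the class names and endpoints below are invented, not a real product’s interface): the conversation layer asks an adapter, the adapter asks the system of record live, and nothing is copied into the AI vendor’s cloud.

```python
# Hypothetical sketch of a data adapter brokering between the chatbot and an existing
# Enterprise system. Class names and endpoints are invented for illustration.

from abc import ABC, abstractmethod

class DataAdapter(ABC):
    """Broker between the conversation layer and a system of record."""

    @abstractmethod
    def fetch(self, user_id: str, topic: str) -> dict:
        """Retrieve live data at question time; nothing is persisted on the chatbot side."""

class HRSystemAdapter(DataAdapter):
    def __init__(self, client):
        self.client = client  # an authenticated connection to the existing HR system

    def fetch(self, user_id, topic):
        if topic == "leave_balance":
            return self.client.get(f"/employees/{user_id}/leave-balance")  # live call, no copy
        raise ValueError(f"No adapter mapping for topic: {topic}")

def answer_leave_balance(adapter: DataAdapter, user_id: str) -> str:
    data = adapter.fetch(user_id, "leave_balance")
    return f"You have {data['days_remaining']} days of leave remaining."
```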

Fact 4: A concierge chatbot is a requirement, not a nice-to-have


Figure 4: Avoid chatbot confusion

Does anyone remember what a link farm is? Yes, they were awful. Quite possibly the worst manifestation of web-based technologies. And the problem was obvious: all those link farms did was sow confusion and frustration among the poor users who had to deal with them.

It’s 2019 now, and we face a similar conundrum: one chatbot that knows everything, or hundreds of chatbots that each know bits of information, with no way for the user to know which one knows what. Imagine being handed 100 help desk numbers and being asked to guess which one was the right one based on your area of need.

Fortunately, the problem has a solution. A concierge chatbot can be used as the focal point for all questions. All the concierge is responsible for is knowing which chatbot knows the answer to which question, and then seamlessly managing the handoff in the conversation, such that to the human it feels like one conversation with one chatbot.

This way the humans only ever need to start a conversation with the concierge. The ultimate one-stop bot.
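
Purely as a hedged sketch (the skill names and keyword scoring are invented; a real concierge would route on the platform’s trained intent models rather than keywords), the concierge’s routing job looks something like this:

```python
# Hypothetical concierge router: one front door, many skill bots behind it.
# Skill names and keyword scoring are invented; real platforms route on trained intent models.

import re

SKILLS = {
    "hr_bot":      {"keywords": {"vacation", "payroll", "benefits", "leave"}},
    "it_bot":      {"keywords": {"password", "laptop", "vpn", "wifi"}},
    "finance_bot": {"keywords": {"expense", "invoice", "reimbursement", "budget"}},
}

def route(utterance: str) -> str:
    """Pick the skill whose vocabulary best matches the question; fall back if nothing matches."""
    words = set(re.findall(r"[a-z]+", utterance.lower()))
    scores = {name: len(words & cfg["keywords"]) for name, cfg in SKILLS.items()}
    best, score = max(scores.items(), key=lambda kv: kv[1])
    return best if score > 0 else "fallback_bot"

print(route("How do I reset my VPN password?"))   # -> it_bot
print(route("Where do I submit an expense?"))     # -> finance_bot
```

From the user’s point of view there is only one conversation; the handoff to whichever skill answers is invisible.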

Having worked extensively with Oracle’s chatbot framework, we not only can recommend it very highly, but we can also attest to its concierge capabilities. So, while Oracle is rolling out lots of small function-focused bots (they call them skills), all of these bots/skills can be managed with one concierge chatbot automatically. Meaning you can have one concierge that includes Oracle-delivered bots, IntraSee-delivered bots, and also custom bots created by you.

Oracle uses the term “Oracle Digital Assistant”. What it is, in essence, is concierge chatbot capability under one technology stack.

Fact 5: AI that requires massive human intervention, and coding development, isn’t AI


Figure 5: We like Joe, but Joe shouldn’t be creating Enterprise chatbots

AI that requires 95% of all its functionality to be created by the human hand isn’t really AI. It’s cool, but it’s not AI. It’s also not supportable or maintainable. The weakest link will be the human hand that pieced it all together. And if that hand has no domain knowledge, it won’t just be buggy, it will be stupid.

The real key to AI is not just automation of the task a chatbot can complete. It’s automation of the creation of the chatbot itself.

This blog has been about the facts as we see them at IntraSee. So let’s look at the facts of what a sample chatbot pilot generated. The background being: 200 FAQs, 16 view-data intents, 6 transactions (promotions, transfers, etc.), and 10 reports (yes, a chatbot can run reports). Here are the technical numbers:

  • 24 Custom Entities
  • 690 Custom Component Invocations
  • 2,696 System Component Invocations
  • 3,386 States
  • 101,609 Transitions between States

We firmly believe that automation of creation is the key to AI success. Manually coding over 100,000 state transitions creates inherent instability, and leads to what we would call a Frankenbot.
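
To show what we mean by generating, rather than hand-coding, those states and transitions, here is a deliberately tiny, hypothetical sketch. The catalog, naming scheme, and numbers are invented; the point is only that the flow is emitted from a declarative definition, so regenerating it is cheap and repeatable.

```python
# Hypothetical sketch of automated chatbot creation: states and transitions are emitted
# from a small declarative catalog instead of being hand-coded one by one.

CATALOG = {
    "faq": ["pto_policy", "holiday_schedule"],          # imagine 200 of these
    "view": ["view_paycheck", "view_leave_balance"],    # ...and 16 of these
    "transaction": ["promote_employee", "transfer_employee"],
}

def generate_flow(catalog):
    """Emit dialog states and transitions programmatically; regeneration is push-button."""
    states, transitions = {}, []
    for kind, items in catalog.items():
        for item in items:
            intro, resolve = f"{item}_intro", f"{item}_resolve"
            states[intro] = {"type": kind, "prompt": f"Let's look at {item.replace('_', ' ')}."}
            states[resolve] = {"type": kind, "action": item}
            transitions.append((intro, resolve))
            transitions.append((resolve, "concierge"))   # always hand control back to the front door
    return states, transitions

states, transitions = generate_flow(CATALOG)
print(f"{len(states)} states, {len(transitions)} transitions generated "
      f"from {sum(map(len, CATALOG.values()))} catalog entries")
```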

At IntraSee we have automated the creation of a chatbot, such that with the push of a button we can generate hundreds of thousands, even millions, of chatbot states, transitions, invocations, and entities. We do this for multiple reasons:

  • We remove human error from the equation.
  • We simplify the management and maintenance, such that a business user can easily deploy any changes.
  • We massively shorten implementation times down to just a few weeks.
  • We can deliver more in four weeks than would normally be possible in over one year.
  • We can deploy mass changes, risk free, to a chatbot in a matter of minutes.

Please contact us if you’d like to learn more…

Contact Us

It was a pleasure to attend another fantastic Gartner conference in Las Vegas (November 26-29, 2018). And while Amazon had their own mega conference (re:Invent 2018) down the road at the Venetian, the smart set were taking a broader look at the future with the team from Gartner.

So, what we’d like to do is break down the key messaging that we got from the conference, based on the tracks that we followed.

So here we go: Gartner’s key messaging from November 2018.

“IT organizations need to stop thinking in terms of projects, and start thinking in terms of products.”

– Gartner

This was the keynote theme for 2018. Last year’s theme was that IT needed to discover the word “yes” in their vocabulary. This year Gartner focused their messaging on redefining how IT needs to partner with the business community. Primarily, the advice was that IT needed to stop seeing the world in terms of projects, and instead embrace the concept of products as a means of implementing solutions. And, of course, it wouldn’t be a technology conference without the introduction of a catchy new phrase: PRODUCTology.

The concept is pretty straightforward and seeks to explain a lot of bad history when it comes to how IT has attempted to implement the dreams of the business community over many decades. It can be summed up as: projects are bad, products are good.

And the explanation makes a lot of sense. Projects are, by their nature, things of a finite duration. Risk needs to be managed. Scope needs to be controlled. Expectations need to be set. And then, when the project is complete, everyone moves on to the next project. Leaving in their wake a sterile, half-baked solution that, in a matter of months, begins to age and crumble. The technology equivalent of a potted plant that never gets watered.

Meanwhile products are forever (well, at least until their replacement comes along). By their nature they are born of innovation and designed to be an entity that continues to grow and morph over the years as demands change and new ideas come to mind. Products have owners that care about them and lovingly tend to their good health, while also making sure they are meeting the requirements of the people using them. Products have roadmaps, they have interested parties, and they have a purpose.

In the consumer space, products are what make the world go round. While in the IT world it’s projects. And that is what needs to change according to Gartner (and we agree). If the business world wants to see their Enterprise systems become more consumer-like, then PRODUCTology is where it all starts.

“IT needs to come to the business community with ideas, as a trusted partner, and not be seen as an order taker.”

– Gartner

Gartner also spent a good deal of time urging the IT community to be more proactive with how they engage with the business community. Instead of waiting to be told what they needed to deliver, IT should be coming to the business community with innovative ideas on how to meet the demands of the era of disruption that we are now entering. Once the business community sees the IT group as an engaged and enthusiastic partner, then the nature of the relationship will completely change. And in ways that will benefit the entire organization. Gartner’s observation was that when IT and the business community collaborate well together, good things happen.

“IT can shape demand and become a thought leader.”

– Gartner

For IT to become a thought leader, Gartner recommended the use of external sources for inspiration. From Google, to Github, to Gartner themselves. There’s a plethora of information that IT can make use of, plus lots of vendors only too willing to demonstrate what they can bring to the table (which includes IntraSee by the way). IT should be looking to bring these resources and vendors to the attention of the business community as a means of creating a dialog about the art of the possible.

“If IT focuses on successful delivery, without trying to create everything themselves, then the business community will fund their initiatives.”

– Gartner

Gartner also believes that IT needs to stop trying to recreate wheels that have already been built. It’s not the job of IT to build anything. But it is the job of IT to ensure that “things” (ideally products) are built, and implemented, correctly. That may mean collaborating with a vendor that has a solution, but one which needs configuration and extension that IT needs to be involved in. It does not necessarily mean that IT needs to be building the code (which then has to be maintained). Once IT gets out of the code-maintenance world, and into the innovation and enablement world, great things will happen for the organizations they support. What’s important isn’t how things get done, it’s what gets done that counts.

“Don’t try and build chatbots yourself. Building bad chatbots is easy. Building great chatbots is very hard. Find a vendor that understands your domain and can demonstrate excellence.”

– Gartner

And if there’s one thing that Gartner strongly recommended IT should not be trying to build, it’s chatbots. Instead, IT should be evaluating chatbot vendors and, by a process of evaluation and demonstration, figuring out which ones truly match the hype, have the domain knowledge, and work securely with your existing Enterprise systems. Trying to build a “brain” from scratch may lead to a “Frankenbot” that consumes your organization’s resources for many years.

The more research that IT does in this area, the less the chance that expensive, embarrassing, and time-consuming mistakes will be made.

“Don’t create ‘Technical debt’”

– Gartner

This isn’t a new concept. “Technical debt” refers to any code added now that will take more work to fix at a later time, typically in pursuit of rapid gains. Shortcuts, hacks, and poor design choices will all lead to huge costs later on. Costs that aren’t just financial, but reputational too. IT often creates technical debt for itself because of a desire to build things it doesn’t need to build. It then gets sucked into a maintenance and rewrite cycle that stymies its ability to take on new requirements from the business community.

Gartner very strongly believes that taking on unnecessary technical debt causes IT many issues that it needs to avoid.

“95% of bots in the market are s***”

– Gartner quoting Chatbot Summit 2018


Figure 1: Don’t implement your own chatbot version of “Clippy”

At IntraSee we would concur with this statement by Gartner 100%. The chatbot market right now is flooded with vendors who have massively subpar solutions. Many of them don’t have any experience in the Enterprise space, and have no domain expertise at all. Even an industry stalwart like IBM, with its Watson product, has failed to take a good idea and turn it into a viable Enterprise chatbot.

At IntraSee we firmly believe that a chatbot that is built by automated means, that can plug into your existing Enterprise systems, and comes delivered from day one with domain expertise, is the only way to deliver a chatbot solution.

And, we would say that this is something you need to see to believe. So, while you are looking at other chatbots in the market (which we encourage you to do), we would strongly advise you look at what we do too. You’ll see the difference immediately.

So please contact us to arrange an online demonstration of an Enterprise chatbot in action. And welcome to the world of PRODUCTology!

Contact Us

 

The week of October 22nd was a fun time to be in San Francisco at Oracle OpenWorld. As usual there was an overriding theme that dominated the conference, and this year it was robots. Robots that manage entire Cloud architectures, and robots (aka chatbots) that engage in complex conversations with humans.

2019 appears to be set as the year autonomous robots take hold of the Enterprise, making it more secure than ever, and cheaper to operate than ever.

As is often the case, Larry Ellison led the charge by calling out all the features that differentiate a gen 1 Cloud vs. a gen 2 Cloud.

“Today I want to talk about the second generation of our cloud, featuring Star Wars cyber defenses to protect our Generation 2 platform. We’ve had to re-architect it from the ground up. We’ve introduced Star Wars defenses, impenetrable barriers, and autonomous robots. The combination of those things protect your data and protect our Generation 2 Cloud.”

– Larry Ellison

Having worked with Oracle’s Cloud architecture for a number of years now, we can say that we’ve seen a massive change from Oracle’s gen 1 (aka classic) Cloud architecture to today’s automated gen 2 architecture. As Larry went on to say:

“I’m not talking about a few software changes here and a few software changes there. I’m talking about a completely new hardware configuration for the cloud. It starts with the foundations of the hardware. We had to add a new network of dedicated independent computers to basically surround the perimeter of our cloud. These are computers you don’t find in other clouds. They form this impenetrable barrier. It not only protects the perimeter of the cloud, these barriers also surround each individual customer zone in our cloud. Threats cannot then spread from one customer to another.”

– Larry Ellison

And of course, the key to all this is AI and autonomous bots.

“Then we use the latest AI machine learning technology to build autonomous robots that go out, search and destroy threats. We’ve added lots and lots of more robots to protect every aspect of the cloud. It’s got to be a case of it being completely automated, completely autonomous.”

– Larry Ellison

Naturally, it wouldn’t be an Oracle conference if Larry didn’t call out Amazon for all their failings (price, performance, reliability, and security).

“They [AWS] don’t have self-tuning, they have no autonomous features, it’s not available. They don’t have active data guard. They have no disaster recovery. They have no server failure recovery. They have no software failure recovery. They’ve got no automatic patching. They’ve got none of that. We automatically patch and the system keeps running. In that case, we are infinitely faster and infinitely cheaper.”

– Larry Ellison

This is, most definitely, important stuff. As organizations are recognizing now, infrastructure matters. With Oracle owning the SaaS, PaaS, and IaaS layers, it can ensure security and reliability at every level.

What is also very significant is Oracle’s commitment to innovation and the empowerment of its client base. It now has a massively advanced PaaS layer that customers can take advantage of to flourish in an era of change. This is in complete contrast to Workday’s approach, which locks their clients into a technological alley that stifles any attempt at UX innovation via automation. Workday’s euphemism for this is to describe it as “curation”. But in an era of change, curation is the enemy of progress.

And this brings us to the other hero of Oracle OpenWorld: Chatbots! In this new era of automation Oracle has now released its gen 2 chatbot technology. Now wrapped up in a package called Oracle Digital Assistant. This is a lot more than just a rebrand of what was called Oracle Intelligent Bots (OIB). It’s now a technology platform that enables true chatbot concierge capabilities.

This means that one Oracle chatbot can now seamlessly act as a broker (concierge) for many Oracle chatbots. The human user need only converse with one chatbot for any question they may have, regardless of how many chatbots and systems there are “behind the scenes”.

At IntraSee we specifically chose the Oracle chatbot framework for this and many other reasons (including being able to run on a secure infrastructure). Because we can automate the actual creation of an Oracle chatbot, we can also automate the creation of a concierge chatbot, while also being a service chatbot to another Oracle concierge chatbot.

In summary, we couldn’t be happier with Oracle’s direction for its infrastructure (IaaS), and also its chatbot technology framework (PaaS). 2019 will undoubtedly be the year for automation in the Enterprise. And for that you need automation at all layers of the Enterprise, and Oracle now has that (IaaS, PaaS, and SaaS). So, we would say that this was a terrific conference that sets the stage for an absolutely fascinating 2019.

Our prediction is that by the end of 2019 chatbots will be considered the standard UI for the Enterprise for almost all self-service and help desk features, and that web-based applications will start to be seen as the province of the “back office”.

Also, on a personal note, we did get to speak at the conference jointly with Oracle on the subject of chatbots. It was a fun time and if you’d like to get hold of a copy of the presentation, you can now request it.

And, of course, if you’d like to see a demo of what the future (2019) looks like, please let us know and we’d be happy to oblige.

Contact Us