The week of October 22nd was a fun time to be in San Francisco at Oracle OpenWorld. As usual there was an overriding theme that dominated the conference, and this year it was robots. Robots that manage entire Cloud architectures, and robots (aka chatbots) that engage in complex conversations with humans.

2019 looks set to be the year autonomous robots take hold of the Enterprise, making it more secure and cheaper to operate than ever.

As is often the case, Larry Ellison led the charge by calling out all the features that differentiate a gen 1 Cloud from a gen 2 Cloud.

“Today I want to talk about the second generation of our cloud, featuring Star Wars cyber defenses to protect our Generation 2 platform. We’ve had to re-architect it from the ground up. We’ve introduced Star Wars defenses, impenetrable barriers, and autonomous robots. The combination of those things protect your data and protect our Generation 2 Cloud.”

– Larry Ellison

Having worked with Oracle’s Cloud architecture for a number of years now, we can say that we’ve seen a massive change from Oracle’s gen 1 (aka classic) Cloud architecture to today’s automated gen 2 architecture. As Larry went on to say:

“I’m not talking about a few software changes here and a few software changes there. I’m talking about a completely new hardware configuration for the cloud. It starts with the foundations of the hardware. We had to add a new network of dedicated independent computers to basically surround the perimeter of our cloud. These are computers you don’t find in other clouds. They form this impenetrable barrier. It not only protects the perimeter of the cloud, these barriers also surround each individual customer zone in our cloud. Threats cannot then spread from one customer to another.”

– Larry Ellison

And of course, the key to all this is AI and autonomous bots.

“Then we use the latest AI machine learning technology to build autonomous robots that go out, search and destroy threats. We’ve added lots and lots of more robots to protect every aspect of the cloud. It’s got to be a case of it being completely automated, completely autonomous.”

– Larry Ellison

Naturally, it wouldn’t be an Oracle conference if Larry didn’t call out Amazon for all their failings (price, performance, reliability, and security).

“They [AWS] don’t have self-tuning, they have no autonomous features, it’s not available. They don’t have active data guard. They have no disaster recovery. They have no server failure recovery. They have no software failure recovery. They’ve got no automatic patching. They’ve got none of that. We automatically patch and the system keeps running. In that case, we are infinitely faster and infinitely cheaper.”

– Larry Ellison

This is, most definitely, important stuff. As organizations are now recognizing, infrastructure matters. With Oracle owning the SaaS, PaaS, and IaaS layers, it can ensure security and reliability at every level.

What is also very significant is Oracle’s commitment to innovation and to empowering its client base. It now has a massively advanced PaaS layer that customers can take advantage of to flourish in an era of change. This is in complete contrast to Workday’s approach, which is to lock clients into a technological alley that stifles any attempt at UX innovation via automation. Workday’s euphemism for this is “curation”. But in an era of change, curation is the enemy of progress.

And this brings us to the other hero of Oracle OpenWorld: chatbots! In this new era of automation, Oracle has released its gen 2 chatbot technology, now wrapped up in a package called Oracle Digital Assistant. This is a lot more than just a rebrand of what was called Oracle Intelligent Bots (OIB). It’s now a technology platform that enables true chatbot concierge capabilities.

This means that one Oracle chatbot can now seamlessly act as a broker (concierge) for many Oracle chatbots, such that the human user need only converse with one chatbot for any question they may have, regardless of how many chatbots and systems there are “behind the scenes”.
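To make the concierge idea concrete, here is a minimal sketch of broker-style routing in Python. It is purely illustrative: the bot names, keyword scoring, and routing logic are our own stand-ins, not Oracle Digital Assistant’s actual API.

```python
# Hypothetical sketch of concierge-style routing (not Oracle Digital
# Assistant's actual API): a front-end "concierge" bot scores each user
# utterance against the skills of many back-end bots and forwards the
# conversation to the best match.

import re


class SkillBot:
    def __init__(self, name: str, keywords: set):
        self.name = name
        self.keywords = keywords

    def score(self, utterance: str) -> int:
        # Toy scoring: count keyword hits. A real broker would use an
        # intent-classification model rather than keyword matching.
        words = set(re.findall(r"[a-z]+", utterance.lower()))
        return len(words & self.keywords)


class ConciergeBot:
    def __init__(self, bots):
        self.bots = bots

    def route(self, utterance: str) -> str:
        best = max(self.bots, key=lambda b: b.score(utterance))
        if best.score(utterance) == 0:
            return "concierge: I need a little more detail to route that."
        return f"routed to {best.name}"


concierge = ConciergeBot([
    SkillBot("hr_bot", {"vacation", "payslip", "benefits"}),
    SkillBot("it_bot", {"password", "laptop", "vpn"}),
])
print(concierge.route("How do I reset my password?"))  # -> routed to it_bot
```

The human sees one conversation; the concierge handles the hand-offs behind the scenes.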

At IntraSee we specifically chose the Oracle chatbot framework for this and many other reasons (including being able to run on a secure infrastructure). Because we can automate the actual creation of an Oracle chatbot, we can also automate the creation of a concierge chatbot, one that can in turn act as a service chatbot to another Oracle concierge chatbot.

In summary, we couldn’t be happier with Oracle’s direction for its infrastructure (IaaS) and its chatbot technology framework (PaaS). 2019 will undoubtedly be the year for automation in the Enterprise. And for that you need automation at all layers of the Enterprise, which Oracle now has (IaaS, PaaS, and SaaS). So, we would say that this was a terrific conference that sets the stage for an absolutely fascinating 2019.

Our prediction is that by the end of 2019, chatbots will be considered the standard UI for the Enterprise for almost all self-service and help desk features, and web-based applications will start to be seen as the province of the “back office”.

Also, on a personal note, we did get to speak at the conference jointly with Oracle on the subject of chatbots. It was a fun time and if you’d like to get hold of a copy of the presentation, you can now request it.

And, of course, if you’d like to see a demo of what the future (2019) looks like, please let us know and we’d be happy to oblige.

Contact Us

The “build vs. buy” debate has a long and storied history in the Enterprise world. The big question being, “do we buy something that is close to what we want, but requires us to change how we do things? Or do we build something from scratch that does something exactly how we think we want it to work?” Then the Cloud came along, and suddenly the question was switched to “subscribe” vs. “build”, and the drivers to “build” became even more difficult to justify.

The advent of the Cloud was like “subscribe” slam-dunking on “build”.

The consensus in the Enterprise world is now to always buy/subscribe unless there’s a huge and compelling reason to build. These days, “build” has to justify itself, as “subscribe” is the default decision.

Most organizations would prefer to focus on their main business objectives, and not get side-tracked into becoming Enterprise software development companies.

Building your own payroll system may sound exciting, but someone else has already done it, and you’re better off using what they built (and which the vendor will be committed to maintaining and upgrading forever).

OK, building a payroll system actually doesn’t sound exciting at all. But what does sound exciting is the idea of building a brain. But, builder beware…

Humankind’s fascination with the concept of AI goes back a very long way. Also, as anyone who has used Enterprise software will tell you, a “brain” for the Enterprise is a much-needed addition. In fact, the call center industry is built entirely on the fact that Enterprise software is too difficult for the average person to use – primarily because it seems to lack any kind of intuitive brain.

Today we are seeing some organizations embark on a journey to “build a brain” that understands and can interact with their Enterprise systems and the people in their organization. Not to rain on anyone’s parade, but we at IntraSee can tell you that this is not a simple task. And we know because we’ve done it. The brain is a highly complex organ, and it’s a multi-year undertaking to replicate even parts of what it does.

While it might be tempting to open up LUIS (Microsoft), Watson (IBM), or Dialogflow (Google) and start to build your own Enterprise brain, we would suggest that you look at subscribing to a brain that is already built, understands how the Enterprise operates, and can be configured to understand your particular Enterprise implementation.

R2-D2, Frankenstein and Mr. Potato Head

Figure 1: What kind of brain do you want?

So, before we get into the mechanics of chatbot creation and the complexities it deals with, let’s discuss the types of chatbots that you could end up either creating or buying. Note: Don’t assume that what a vendor is selling is anywhere close to actually meeting your needs.

In doing so, we’ve grouped chatbot implementations into three types of outcomes:

  • Basic: aka Mr. Potato Head
  • Frighteningly complex, unstable & unpredictable: aka Frankenstein
  • Intuitive, reliable, knowledgeable: aka R2-D2

To paraphrase an old proverb, “The road to chatbot hell is paved with good intentions”.

  1. Mr. Potato Head
    This is the most prevalent chatbot today. A chatbot that can do a handful of things with a moderate level of precision. But with absolutely no hope of ever becoming anything sophisticated enough to operate at the Enterprise level. Initially there will be some excitement at the novelty factor that Mr. Potato Head brings to the table, but ultimately he’ll be perceived as nothing more than a toy. And most definitely not the advent of a new era of AI.

    Mr. Potato Head is most likely just a very basic version of what’s called an FAQ chatbot, and not even capable of integrating with your own knowledge-base(s).

  2. Frankenstein
    This is, literally, the most dangerous of all chatbots. Typically, the Frankenstein chatbot takes many years to build given its epic scope, costs lots of money (bye-bye ROI), and consumes masses of resources. The problem with the “FrankenBot” is that from a distance it looks like it might be able to do the job. And for a while it may actually function decently. But, like all ill-conceived monsters, eventually it will fall apart and begin to wreak havoc across your organization. Wrong answers, corrupt data, security breaches, instability, and an angry mob of employees are the future of the “FrankenBot”. And the temptation will always exist (given how much money was pumped into it) to somehow make one more fix that finally resolves everything. But the problem is that the more you add to it, the more unstable and unmaintainable it will become.

    Ultimately the “FrankenBot” will take your organization down a rat hole that will set you back years, and will also sow organizational distrust that the next chatbot implementation team will have to deal with.

    Simpsons Riot Scene

    Figure 2: The inevitable end of the “FrankenBot” project

  3. R2-D2
    What’s not to like about R2-D2? A great example of how AI should be done. Let us count the ways:

    • Understands your Enterprise systems out of the box
      (ex: it even knew how to fix the shield generator on the Naboo ship)
    • Multi-lingual
      (actually, we’d need to add C-3PO’s linguistic skills to check this box)
    • A means of extending its knowledge-base by hooking into your knowledge base
      (ex: R2-D2 was able to plug directly into the Death Star to answer the question, “where is the Tractor Beam power source located?”)
    • Reliable
      (ex: always clutch in any situation)
    • Does more than just answer questions
      (ex: was able to assist Luke Skywalker by fixing the stabilizer and adding power to the X-Wing)
    • Built in a modular fashion
      (ex: was able to use some of C-3PO’s circuits after taking some damage in the battle of Yavin)
    • Highly secure
      (ex: even when captured, R2-D2 would never reveal anything to the wrong people, though it could have used better encryption of its databanks)
    R2-D2 tapping into computer system

    Figure 3: A good chatbot needs to know how to plug into your systems and access your data in your formats

To understand why building a chatbot from scratch is a daunting process (better suited to a “buy” decision), let’s walk through what a chatbot actually needs to do. This breaks down into three core competencies, with a short code sketch following the list:

  • Communication
    • Understand the command of a human, and also understand how to relay a message back to the human, in potentially over 100 languages.
    • Chatbot terminology: This means Intents, entity definitions, utterance training, and language translation.
  • Thinking
    • Contemplate the “command” and consider what the appropriate course of action should be, taking into account what is “allowed” to be done, what “can” be done, and what “should” be done.
    • Chatbot terminology: Dialog flows are used to determine which logic paths the chatbot should consider, and what decisions it has to make. Intelligent/intuitive branching logic is used to figure out both context and applicability in order that the optimal branch is taken.
  • Instructing
    • Ensure all the appropriate functions of the command are handled and any difficulties are dealt with in an elegant fashion.
    • Chatbot terminology: API awareness, coordination, and execution. Note: The chatbot can’t actually do anything itself (in the same way a brain is useless without a physical body to respond to commands). But it does need to know how to interact with an entire suite of Enterprise APIs (REST, SOAP, RPA, custom, etc.) – which for some commands can be very complex.
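As a rough illustration of how these three competencies fit together, here is a minimal Python sketch. Everything in it (the intent name, the PTO endpoint, the keyword matching) is a hypothetical stand-in for what a real chatbot platform provides; a production bot would use a trained NLU model and a proper dialog-flow engine.

```python
# A hedged, hypothetical sketch of Communication -> Thinking -> Instructing.
# Endpoint and intent names are illustrative, not any vendor's actual API.

import requests

PTO_API_URL = "https://hcm.example.com/api/pto-balance"  # hypothetical endpoint


def classify_intent(utterance: str) -> str:
    """Communication: map a human utterance to a trained intent.
    A keyword check stands in for a real NLU model here."""
    if "vacation" in utterance.lower() or "pto" in utterance.lower():
        return "check_pto_balance"
    return "unresolved"


def decide(intent: str, user: dict):
    """Thinking: dialog-flow logic works out what is allowed, what can
    be done, and what should be done before anything executes."""
    if intent == "check_pto_balance" and user.get("is_employee"):
        return "call_pto_api"
    return None


def execute(action: str, user: dict) -> str:
    """Instructing: the chatbot does nothing itself; it orchestrates
    Enterprise APIs and relays the result back to the human."""
    if action == "call_pto_api":
        resp = requests.get(PTO_API_URL, params={"emp_id": user["id"]})
        resp.raise_for_status()
        return f"You have {resp.json()['balance']} days of vacation left."
    return "Sorry, I can't help with that yet."


user = {"id": "1234", "is_employee": True}
action = decide(classify_intent("How much vacation do I have?"), user)
print(execute(action, user) if action else "No permitted action.")
```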

Chatbot development tools come delivered with a lot of capability in the “communication” realm of the chatbot. And, of course, some do this better than others.

Where many fall short is with the “thinking” and “instructing”. For various reasons, chatbot vendors think it’s a good idea for everyone to have to create their own “thinking” and “instructing” components. At IntraSee we think that’s a cop-out, which is why we built these capabilities for you and deliver them out of the box.

Also, because chatbots rely so much on APIs, they require a rich catalog of APIs for every system the chatbot needs to communicate with. Oftentimes this catalog does not exist, or is woefully inadequate, which leaves someone (you) to build it. That is why we spent the past ten years building Enterprise Adapters and APIs for the major Cloud and on-premise Enterprise systems, just so you don’t need to.

Remember the golden rule: without all the appropriate Enterprise Adapters and APIs, the chatbot is just a “brain in a jar”.

Brain in a jar

Figure 4: It’s just a brain in a jar until it’s hooked up with your Enterprise systems

IntraSee has spent over ten years building an extensive catalog of Enterprise Adapters that are API-enabled, even for systems that are “API-challenged”, like PeopleSoft.

The question, before any such project begins, isn’t “can you do this?” (though very few can). It’s “should you do this?” Is your IT organization ready to become an AI development shop and spend the next ten years trying to create the perfect chatbot that understands your entire Enterprise suite?

Frankenstein on a table being assembled

Figure 5: How NOT to build and maintain a chatbot

If someone has already done this, wouldn’t that be, as Gartner advises, an easier and better solution?

The unfortunate truth is that the vast majority of chatbot solutions were built by hand from the ground up. Every dialog flow is hand-coded, entities are manually defined, and integrations with data and content are created by a coder with no knowledge of how your systems actually work. Even business rules, and often core branching logic, are hard-coded, forcing dual maintenance in a system you can barely understand. The chance of this business logic being wrong is massive. And there’s no simple way for you to ever know without constant regression testing – every single day.

What has been created isn’t just spaghetti code, it’s multi-dimensional spaghetti code.

Neural Network

Figure 6: Is this something you want to manually build and maintain?

AI that is driven by manually coded IF statements is not AI

At IntraSee we don’t hand-build any chatbot solution. Everything is automated and generated from an engine that is already Enterprise-aware, and a pluggable architecture that operates like a neural network, communicating across multiple lobes of the “brain”.

In a similar way to how neurons communicate across the synapse by switching protocols (electrical to chemical), we have adapters that manage the protocol differences in the many systems the chatbot needs to communicate with.
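As a loose illustration of that adapter idea, here is a hedged sketch in Python. The class names and endpoints are our own stand-ins, not our actual product code; the point is simply that the engine talks to one interface while each adapter hides a system’s protocol behind it.

```python
# Illustrative adapter-layer sketch: one contract for the "brain", with
# per-system adapters hiding protocol differences (REST, SOAP, RPA, ...).
# All names and endpoints here are hypothetical.

from abc import ABC, abstractmethod
import requests


class EnterpriseAdapter(ABC):
    """Common contract the chatbot engine talks to."""

    @abstractmethod
    def fetch_answer(self, query: str) -> str: ...


class RestAdapter(EnterpriseAdapter):
    def __init__(self, base_url: str):
        self.base_url = base_url

    def fetch_answer(self, query: str) -> str:
        resp = requests.get(f"{self.base_url}/search", params={"q": query})
        resp.raise_for_status()
        return resp.json()["answer"]


class SoapAdapter(EnterpriseAdapter):
    def __init__(self, endpoint: str):
        self.endpoint = endpoint

    def fetch_answer(self, query: str) -> str:
        envelope = (
            "<soapenv:Envelope xmlns:soapenv="
            '"http://schemas.xmlsoap.org/soap/envelope/">'
            f"<soapenv:Body><search>{query}</search></soapenv:Body>"
            "</soapenv:Envelope>"
        )
        resp = requests.post(self.endpoint, data=envelope,
                             headers={"Content-Type": "text/xml"})
        resp.raise_for_status()
        return resp.text  # a real adapter would parse the XML response


# The engine only ever sees EnterpriseAdapter, so supporting a new system
# means adding a new adapter, not rewiring the brain.
```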

If you are considering implementing a chatbot solution for your organization and don’t want Mr. Potato Head, or a “FrankenBot”, then please contact us to learn more…

Contact Us

In January 2011, IBM amazed the entire world by hosting the television show Jeopardy and pitting its AI computer, named Watson, against two of the show’s best champions, Ken Jennings and Brad Rutter. Watson’s victory was both shocking and exciting all at the same time.

Ken Jennings (who almost beat Watson) magnanimously stated, “I, for one, welcome our new computer overlords”. It was official: machines had finally usurped humans. It appeared that IBM had achieved the impossible dream that science-fiction novelists had been predicting for decades.

One of Watson’s lead developers, Dr. David Ferrucci, added to the quickly escalating hype with:

“People ask me if this is HAL in ‘2001: A Space Odyssey.’ HAL’s not the focus; the focus is on the computer on ‘Star Trek,’ where you have this intelligent information seek dialogue, where you can ask follow-up questions and the computer can look at all the evidence and tries to ask follow-up questions. That’s very cool.”

Seven years later, the hype train appears to be still stuck at the station. All the promise of 2011 has yet to materialize. How could something so utterly amazing back in 2011 still be struggling to find its place in the business world of today?

To answer that question, we need to go back in time to 2011 and look at exactly how Watson beat two extremely brilliant humans. What was its secret? And exactly how much AI was really involved?

Once you look at how Watson won, it’s not really that surprising that it was never able to capitalize on that victory in future years.

Those watching the show in February 2011 (when it aired) were told that Watson would not be allowed to connect to the internet. That would be cheating, right?

But what was not explained was that the team that built Watson had spent the previous five years downloading the internet onto Watson. Or, to be more precise, the parts of the internet that they knew Jeopardy took its questions from.

This was why the show was hosted at IBM offices. “Watson” was actually a massive room full of IBM hardware. Specifically, 90 IBM Power 750 servers, each of which contains 32 POWER7 processor cores running at 3.55 GHz, according to the company. This architecture allows massively parallel processing on an embarrassingly large scale: as David Ferrucci told Daily Finance, Watson can process 500 gigabytes per second, the equivalent of the content in about a million books.

Moreover, each of those servers was equipped with 256 GB of RAM so that Watson could retain about 200 million pages’ worth of data about the world. (During a game, Watson didn’t rely on data stored on hard drives, because they would have been too slow to access.)

So, how did the Watson team know which parts of the internet to download and index? Well, that was the easy part. As anyone who has studied the game can tell you, Jeopardy answers can mostly be found in either Wikipedia or specific encyclopedias. “All” they had to do was download what they knew would be the source of the questions (called “answers” on Jeopardy) and then turn unstructured content into semi-structured content by plucking out what they felt would be applicable for any question (names, titles, places, dates, movies, books, songs, etc.). However, doing that was no simple feat, and it’s one of the reasons why it took them five years to make Watson a formidable opponent.

It was a massively labor-intensive operation that required a large IBM staff of PhDs many years to accomplish. This was the first red flag that the Watson team should have been aware of.

Unfortunately, their goal at the time was to build a computer that could win a TV quiz show. What they should have been building was something that could be implemented in any environment without the need for an army of highly paid professors. Instead they were told, “Build something that can win on Jeopardy”. And that’s exactly what they did.

To this day, Watson is still notoriously picky about the kind of data it can analyze, and it still needs a huge amount of manual intervention. In the world of AI, automation is a key requirement of any solution that hopes to be accepted in the business world. Labor-intensive solutions are massively expensive and require huge amounts of constant maintenance.

IBM made this work for Jeopardy because there was no realistic budget or timeline (it ended up taking them five years), and all it had to do was win one game. In the real world of business, budgets matter. And high maintenance costs will destroy any ROI.

IBM Watson Server Farm

Figure 1: Wikipedia-in-a-box

So, as you can see, from the get-go Watson had every advantage possible. While it wasn’t allowed to connect to the internet (remember, that would be cheating), it didn’t need to. IBM had already spent five years indexing the parts of the internet they knew they needed.

And then there were the rules of Jeopardy. These played directly into the plexiglass “hand” of Watson. Anyone who has seen the show knows how it works. The “answer” is displayed on the screen, and the host of the show reads it out loud. When he has finished, a light appears on a big screen and the competitors can press a button. The first person to press the button gets to state what the “question” is.

But here’s why a machine has a massive built-in advantage. On average it takes six seconds for the host to read the question out loud. That means both Watson and the humans get six seconds to figure out whether they know the answer or not. Six seconds is a huge amount of time for a room full of computers. Try typing a question into Google on your phone and see how fast it is. Yes, less than one second.

What this means is that by the time the light comes up on the board, Watson already knows the answer (almost all of the time). But, oftentimes, so do really smart humans. So, theoretically, when the light flashes, it’s possible that everyone knows the answer (and that night the questions weren’t particularly hard).

Who gets to answer the question is determined by who can press the buzzer fastest.

What the TV audience didn’t get to see was that, to make this seem kind of human, the Watson team had rigged the machinery with a plexiglass hand that would automatically press the buzzer the exact moment the light appeared on the board. Due to various laws of science, this took exactly 10 milliseconds. The best any human can expect is around 100 milliseconds (try running this test yourself and see how fast you can do it). This meant it was technically impossible for any human to click the buzzer faster than Watson.

The only real way for the humans to stand a chance was to anticipate the light going on (though if they clicked too early they invoked a quarter of a second delay before they could click again). But to help out the humans, Watson was programmed not to anticipate the light.

If you watch the show, you can see that Ken Jennings clicked his buzzer on almost every question that Watson won on. He just wasn’t fast enough. No human could be, unless they got really lucky with their anticipation of the light going on.
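To make the arithmetic concrete, here is a toy simulation of the buzzer race in Python. The reaction-time distributions are assumptions for illustration (about 10 ms for the plexiglass hand, roughly 100 ms for a fast human), not measured data from the show.

```python
# Toy buzzer-race simulation using the timings described above.
# The jitter values are assumptions, purely for illustration.

import random


def buzzer_race(trials: int = 100_000) -> float:
    watson_wins = 0
    for _ in range(trials):
        watson = random.gauss(0.010, 0.001)  # ~10 ms, tiny mechanical jitter
        human = random.gauss(0.100, 0.020)   # ~100 ms human reaction time
        if watson < human:
            watson_wins += 1
    return watson_wins / trials


print(f"Watson wins the buzzer ~{buzzer_race():.1%} of the time")
# Prints ~100%: without anticipating the light, no human can win the race.
```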

Also, what wasn’t apparent to the viewers was that Watson wasn’t “listening” to any questions. An ASCII text file was transmitted to Watson the moment the question appeared on the board. Watson then parsed that question (which actually was a very impressive feat by IBM) to figure out the true intent, and IBM did use a voice feature to read out the answer.

In 2011 this was a genuine achievement by IBM, and we do salute the team that worked on it. What they did was not easy and did advance our understanding of AI. But not really in the way the world understood it. Watson was one small (massively over-hyped) step forward, not the huge leap it appeared to be.

What IBM did was build a fantastic Jeopardy machine. It used elements of AI, but it wasn’t quite the miracle it seemed to be. Yes, it was part AI, but it was also part “Wizard of Oz”. And because it was pretty much a one-trick pony in 2011, IBM has subsequently struggled to make it work in the Enterprise. Though they have tried.

What has become very apparent since 2011 is that what “worked” for winning at Jeopardy doesn’t work today in the real world. AI has come a long way since those pioneering days, and approaches to creating the ultimate Q&A machine have altered dramatically.

While we commend Watson, and the awareness it created, we believe there are better ways to implement an AI solution that do not require an army of PhD graduates. AI is something that should be, and can be, implemented in a matter of weeks. And which can also be easily maintained – delivering ROI on day 1.

Please contact us to learn more…

Contact Us

In the ’80s and ’90s, Sony ruled the world of portable music. Everyone remembers the game-changing Walkman series of products that revolutionized the portable music landscape and enriched our lives. Then digital music came along in the form of mp3s, and suddenly all the music online was in this format. The only problem was that Sony supported only its proprietary ATRAC format on all its meticulously manufactured hardware. This was not a recipe for success for Sony, and it culminated in the dead-on-arrival Sony Network Walkman. In 1999 it was the smallest digital music player on the market. Beautifully crafted, like all Sony products were back then (though it kind of looks like a bicycle lock now), it defiantly played only ATRAC files.

Network Walkman

Figure 1: Beautifully crafted obsolescence

The death knell tolled for the Sony digital music division in October 2001 when Apple announced, in all its mock-turtlenecked glory, the iPod combined with iTunes. It was the ultimate mp3 player. It took Sony a further four years to fully realize what had happened to them (ironically right before the iPhone was announced), but the rest of the world got it immediately.

Today we are seeing a parallel: chatbot vendors delivering solutions that require you to load all your data into their proprietary AI systems, in their proprietary formats.

These are AI systems that you have no control over and that are, often, highly insecure. Why would any vendor require that when we have such a thing as APIs? Shouldn’t AI technology be smart enough to use APIs to look at your data in real time and figure stuff out? The answer is simple: yes, it should.

In 2018, APIs are to AI what mp3 was to digital music in 2001.

Let’s go with the most basic of examples to illustrate this. All chatbot vendors provide what’s called an FAQ feature (though the better solutions do a LOT more), meaning that you ask the chatbot a question and it gives you an answer. But the “bad” vendors require you to load all the answers into their proprietary AI solution, in their proprietary format. Why is this bad? Let us count the ways:

  1. Those answers already exist in some knowledge base in your current systems. Now you have to maintain the answers not just in your current systems, but also in the chatbot AI. This is called dual maintenance, and it’s bad. The chances of your web systems’ content being out of sync with your chatbot content just went through the roof.
  2. The tight control you have in your current content management systems (CMS), and all the sophisticated features they come with, almost certainly don’t exist within the chatbot vendor’s solution (which was never built to be a true CMS). So you now have to deal with a knowledge base that is very loosely controlled, and one that would never pass muster if you were actually in the market to purchase a new CMS. Which was never your intention in the first place (nor should it be).
  3. From having your content tightly controlled on your systems (cloud or on-premise), you’ll now have your content stored on someone else’s server in an environment you have no control over.
  4. If you did decide to store all your Q&As within the chatbot vendor’s solution, you now have what’s called vendor lock-in. The vendor will like that, but it doesn’t help you in any way at all.

The lesson to be learned is that you should not be so dazzled by the thought of AI that you thrust yourself into a world of disorder. As always in life, be aware of the Cynefin framework:

Cynefin illustrative sketch

Figure 2: Avoid selecting a solution formulated in the wrong domain (especially the fifth one – Disorder)

In simplest terms, the Cynefin framework exists to help us realize that all situations are not created equal and to help us understand that different situations require different responses to successfully navigate them.

Now that we know how things shouldn’t work, let’s look at how they should (a code sketch follows the list below).

    1. Any true AI chatbot FAQ feature needs to be able to consume answers already stored in your potentially many knowledge-base repositories. These include:
      • Oracle Content & Experience Cloud
      • Salesforce
      • PeopleSoft Interaction Hub
      • Oracle Service Cloud
      • PeopleSoft HR Help desk
      • ServiceNow
      • Drupal
      • SharePoint
      • WordPress
      • Etc.
    2. Also, the answer for each question could be contained in any of those systems, and your chatbot should know exactly which one.
    3. It needs to be role/group aware, such that it knows key demographic data about who is asking the question and is therefore able to deliver the correct answer for that person.
      Ex: If a French employee wants to know what the time off policy is, don’t reply with the USA policy.
    4. It needs to be able to understand questions asked in potentially more than 100 languages.
    5. It needs to be able to utilize AI to summarize long answers down to a digestible length.
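Here is a hedged sketch, in Python, of what the first three points of that list can look like in code. The connector class, fields, and sample policies are hypothetical; a real implementation would call each system’s search API in real time rather than holding entries in memory.

```python
# Federated, role-aware FAQ lookup sketch: query each knowledge base
# through its own connector in real time and filter by the asker's
# profile, instead of bulk-loading content into a proprietary store.
# All names and sample data are hypothetical.

from dataclasses import dataclass


@dataclass
class Answer:
    text: str
    source: str    # e.g. "ServiceNow", "SharePoint"
    country: str   # which audience the policy applies to
    language: str


class KnowledgeConnector:
    def __init__(self, source: str, entries: list):
        self.source = source
        self.entries = entries

    def search(self, query: str) -> list:
        # Stand-in for a real-time API search against the source system.
        return [a for a in self.entries if query.lower() in a.text.lower()]


def answer_question(query: str, user: dict, connectors: list) -> str:
    for connector in connectors:
        for ans in connector.search(query):
            # Role/group awareness: match the policy to the person asking.
            if ans.country == user["country"] and ans.language == user["language"]:
                return f"[{ans.source}] {ans.text}"
    return "No policy found for your profile."


sn = KnowledgeConnector("ServiceNow", [
    Answer("Time off policy: 25 days...", "ServiceNow", "FR", "fr"),
    Answer("Time off policy: 15 days...", "ServiceNow", "US", "en"),
])
user = {"country": "FR", "language": "fr"}
print(answer_question("time off", user, [sn]))  # returns the French policy
```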

Doesn’t this sound better? It does to us too. And that’s why we built our chatbot this way. A chatbot that adapts to your content and your data on day one.

To summarize: an AI solution that requires a massive data and content dump of all your Enterprise systems is not a valid proposition that anyone should be considering. Aside from the obvious data privacy issues, it’s also not a technically tenable solution. Yet in the rush to market, hundreds of vendors peddling chatbots are hoping that all these giant red flags will be ignored.

Our advice is that organizations should be diligent, and only accept solutions that accommodate the scalability and security requirements of large Enterprise systems.

And as we have said in previous blogs, in the world of technology, all that glitters is not necessarily gold.

We hope that this blog has laid out a better way for your organization to take advantage of the next industrial revolution: Automation via AI.

To find out more, please contact us below…

Contact Us