The “build vs. buy” debate has a long and storied history in the Enterprise world. The big question: “do we buy something that is close to what we want, but requires us to change how we do things? Or do we build something from scratch that works exactly how we think we want it to?” Then the Cloud came along, the question shifted to “subscribe vs. build”, and the drivers for “build” became even harder to justify.

The advent of the Cloud was like “subscribe” slam-dunking on “build”.

The common consensus in the Enterprise world is to now always buy/subscribe, unless there’s a huge and compelling reason to build. These days, “build” has to justify itself, as “subscribe” is now the default decision.

Most organizations would prefer to focus on their main business objectives, and not get side-tracked into becoming Enterprise software development companies.

Building your own payroll system may sound exciting, but someone else has already done it, and you’re better off using what they built (and which the vendor will be committed to maintaining and upgrading forever).

OK, building a payroll system actually doesn’t sound exciting at all. But what does sound exciting is the idea of building a brain. But, builder beware…

Humankind’s fascination with the concept of AI goes back a very long way. Also, as anyone who has used Enterprise software will tell you, a “brain” for the Enterprise is a much-needed addition. In fact, the call center industry is built entirely on the fact that Enterprise software is too difficult for the average person to use – primarily because it seems to lack any kind of intuitive brain.

Today we are seeing some organizations embark on a journey to “build a brain” that understands and can interact with their Enterprise systems and the people in their organization. Not to rain on anyone’s parade, but we at IntraSee can tell you that this is not a simple task. And we know because we’ve done it. The brain is a highly complex organism and it’s a multi-year undertaking to replicate even parts of what it does.

While it might be tempting to open up LUIS (Microsoft), Watson (IBM), or Dialogflow (Google) and start building your own Enterprise brain, we would suggest subscribing to a brain that is already built, that understands how the Enterprise operates, and that can be configured to understand your particular Enterprise implementation.

R2-D2, Frankenstein and Mr. Potato Head

Figure 1: What kind of brain do you want?

So, before we get into the mechanics of chatbot creation and the complexities it deals with, let’s discuss the types of chatbots that you could end up either creating or buying. Note: Don’t assume that what a vendor is selling is anywhere close to actually meeting your needs.

In doing so we’ve characterized chatbot implementations into three types of outcomes:

  • Basic: aka Mr. Potato Head
  • Frighteningly complex, unstable & unpredictable: aka Frankenstein
  • Intuitive, reliable, knowledgeable: aka R2-D2

To paraphrase an old proverb, “The road to chatbot hell is paved with good intentions”.

  1. Mr. Potato Head
    This is the most prevalent chatbot today. A chatbot that can do a handful of things with a moderate level of precision. But with absolutely no hope of ever becoming anything sophisticated enough to operate at the Enterprise level. Initially there will be some excitement at the novelty factor that Mr. Potato Head brings to the table, but ultimately he’ll be perceived as nothing more than a toy. And most definitely not the advent of a new era of AI.

    Mr. Potato Head is most likely just a very basic version of what’s called an FAQ chatbot, and not even capable of integrating with your own knowledge-base(s).

  2. Frankenstein
    This is the most dangerous of all chatbots. Typically, the Frankenstein chatbot takes many years to build, given its epic scope. It also costs lots of money (bye-bye ROI) and consumes masses of resources. The problem with the “FrankenBot” is that from a distance it looks like it might be able to do the job. And for a while it may actually function decently. But, like all ill-conceived monsters, eventually it will fall apart and begin to wreak havoc across your organization. Wrong answers, corrupt data, security breaches, instability, and an angry mob of employees are the future of the “FrankenBot”. And the temptation will always exist (given how much money was pumped into it) to somehow make one more fix that finally resolves everything. But the more you add to it, the more unstable and unmaintainable it becomes.

    Ultimately the “FrankenBot” will take your organization down a rat hole that will set you back years, and will also sow organizational distrust that the next chatbot implementation team will have to deal with.

    Simpsons Riot Scene

    Figure 2: The inevitable end of the “FrankenBot” project

  3. R2-D2
    What’s not to like about R2-D2? A great example of how AI should be done. Let us count the ways:

    • Understands your Enterprise systems out of the box
      (ex: it even knew how to fix the shield generator on the Naboo ship)
    • Multi-lingual
      (actually, we’d need to add C-3PO’s linguistic skills to check this box)
    • A means of extending its knowledge-base by hooking into your knowledge base
      (ex: R2-D2 was able to plug directly into the Death Star to answer the question, “where is the Tractor Beam power source located?”)
    • Reliable
      (ex: always clutch in any situation)
    • Does more than just answer questions
      (ex: was able to assist Luke Skywalker by fixing the stabilizer and adding power to the X-Wing)
    • Built in a modular fashion
      (ex: was able to use some of C-3PO’s circuits after taking some damage in the Battle of Yavin)
    • Highly secure
      (ex: even when captured, R2-D2 would never reveal anything to the wrong people, though it could have used better encryption of its databanks)
    R2-D2 tapping into computer system

    Figure 3: A good chatbot needs to know how to plug into your systems and access your data in your formats

To understand why building a chatbot from scratch is such a daunting process (and better suited to a “buy” decision), let’s walk through what a chatbot actually needs to do. Its job can be broken down into three core competencies:

  • Communication
    • Understand a human’s command, and know how to relay a message back to that human – potentially in over 100 languages.
    • Chatbot terminology: This means intents, entity definitions, utterance training, and language translation.
  • Thinking
    • Contemplate the “command” and consider what the appropriate course of action should be. Taking into account what is “allowed” to be done, what “can” be done, and what “should” be done.
    • Chatbot terminology: Dialog flows are used to determine which logic paths the chatbot should consider, and what decisions it has to make. Intelligent/intuitive branching logic is used to figure out both context and applicability in order that the optimal branch is taken.
  • Instructing
    • Ensure all the appropriate functions of the command are handled and any difficulties are dealt with in an elegant fashion.
    • Chatbot terminology: API awareness, coordination and execution. Note: The chatbot can’t actually do anything itself (in the same way a brain is useless without a physical body to respond to its commands). But it does need to know how to interact with an entire suite of Enterprise APIs (REST, SOAP, RPA, custom, etc.) – which for some commands can be very complex.
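The three competencies above can be sketched in a few lines of Python. This is a deliberately naive illustration, not how any real chatbot engine works: the intent examples, keyword-overlap scoring, and API endpoints are all invented for this sketch (real tools train language models and call live Enterprise APIs).

```python
# Communication: map an utterance to an intent (illustrative data only).
INTENT_EXAMPLES = {
    "request_time_off": ["i want to take vacation", "book time off", "request pto"],
    "get_payslip": ["show my payslip", "view my last paycheck"],
}

def classify_intent(utterance: str) -> str:
    """Naive keyword overlap; real tools use trained language models."""
    words = set(utterance.lower().split())
    best, best_score = "unknown", 0
    for intent, examples in INTENT_EXAMPLES.items():
        score = max(len(words & set(ex.split())) for ex in examples)
        if score > best_score:
            best, best_score = intent, score
    return best

# Thinking: a dialog flow weighs what is allowed, can, and should be done.
def decide(intent: str, user: dict) -> str:
    if intent == "request_time_off":
        if user.get("vacation_balance", 0) <= 0:
            return "deny_no_balance"   # allowed to ask, but can't be granted
        return "call_hr_api"           # proceed to instruction
    if intent == "get_payslip":
        return "call_payroll_api"
    return "ask_clarification"

# Instructing: the chatbot itself does nothing; it orchestrates Enterprise APIs.
def execute(action: str) -> str:
    handlers = {
        "call_hr_api": "POST /hr/absences",              # hypothetical endpoints
        "call_payroll_api": "GET /payroll/payslips/latest",
    }
    return handlers.get(action, "respond directly")

intent = classify_intent("I want to take vacation next week")
action = decide(intent, {"vacation_balance": 5})
print(intent, action, execute(action))
```

Even this toy version shows why the “thinking” and “instructing” layers dwarf the “communication” layer: the decision logic and API orchestration are where the Enterprise complexity lives.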

Chatbot development tools come delivered with a lot of capability in the “communication” realm of the chatbot. And, of course, some better than others.

Where many fall short is with the “thinking” and “instructing”. For various reasons, chatbot vendors think it’s a good idea for everyone to have to create their own “thinking” and “instructing” components. At IntraSee we think that’s a cop-out, which is why we built these capabilities for you and deliver them out of the box.

Also, because chatbots rely so heavily on APIs, they require a rich catalog of APIs for every system the chatbot needs to communicate with. Oftentimes this catalog does not exist, or is woefully inadequate, which leaves someone (you) to build it. That is why we spent the past ten years building Enterprise Adapters and APIs for the major Cloud and on-premise Enterprise systems. Just so you don’t need to.

Remember the golden rule: without all the appropriate Enterprise Adapters and APIs, the chatbot is just a “brain in a jar”.

Brain in a jar

Figure 4: It’s just a brain in a jar until it’s hooked up with your Enterprise systems

IntraSee has spent over ten years building an extensive catalog of Enterprise Adapters that are API-enabled. Even for systems that are “API-challenged”, like PeopleSoft.

The question, before any such project begins, isn’t “can you do this?” (though very few organizations can). It’s “should you do this?” Is your IT organization ready to become an AI development shop, and to spend the next 10 years trying to create the perfect chatbot that understands your entire Enterprise suite?

Frankenstein on a table being assembled

Figure 5: How NOT to build and maintain a chatbot

If someone has already done this, wouldn’t that be, as Gartner advises, an easier and better solution?

The unfortunate truth is that the vast majority of chatbot solutions were built by hand from the ground up. Every dialog flow is hand-coded, entities are manually defined, and integrations with data and content are created by a coder with no knowledge of how your systems actually work. Even business rules, and often core branching logic, are hard-coded, forcing dual maintenance in a system you can barely understand. The chance of this business logic being wrong is massive. And there’s no simple way for you to ever know without constant regression testing – every single day.

What has been created isn’t just spaghetti code, it’s multi-dimensional spaghetti code.

Neural Network

Figure 6: Is this something you want to manually build and maintain?

AI that is driven from manually coded IF statements is not AI

At IntraSee we don’t hand-build any chatbot solution. Everything is automated and generated from an engine that is already Enterprise-aware, and a pluggable architecture that operates like a neural network, communicating across multiple lobes of the “brain”.

In a similar way to how neurons communicate across the synapse by switching protocols (electric to chemical), we have adapters that manage protocol differences in the many systems the chatbot needs to communicate with.
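The adapter idea can be sketched with a small example. The class and method names below are illustrative, not IntraSee’s actual API: the point is simply that the chatbot “brain” talks to one common interface, while each adapter hides a different protocol behind it.

```python
from abc import ABC, abstractmethod

class EnterpriseAdapter(ABC):
    """Every backend system looks the same to the chatbot 'brain'."""
    @abstractmethod
    def fetch(self, resource: str) -> dict: ...

class RestAdapter(EnterpriseAdapter):
    def fetch(self, resource: str) -> dict:
        # Real code would issue an HTTP GET and parse JSON here.
        return {"source": "rest", "resource": resource}

class SoapAdapter(EnterpriseAdapter):
    def fetch(self, resource: str) -> dict:
        # Real code would build a SOAP envelope and parse the XML response.
        return {"source": "soap", "resource": resource}

def answer(adapter: EnterpriseAdapter, resource: str) -> dict:
    # The brain never cares which protocol sits behind the adapter.
    return adapter.fetch(resource)

print(answer(RestAdapter(), "vacation_balance"))
print(answer(SoapAdapter(), "vacation_balance"))
```

Swapping REST for SOAP (or RPA, or a custom protocol) changes nothing upstream – which is exactly the synapse-like protocol switch described above.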

If you are considering implementing a chatbot solution for your organization and don’t want Mr. Potato Head, or a “FrankenBot”, then please contact us to learn more…

Contact Us

In early 2011, IBM amazed the entire world by hosting a special edition of the television show Jeopardy! and pitting its AI computer, named Watson, against two of the show’s greatest champions, Ken Jennings and Brad Rutter. Watson’s victory was both shocking and exciting all at the same time.

Ken Jennings (who almost beat Watson) magnanimously stated, “I, for one, welcome our new computer overlords”. It was official: machines had finally usurped humans. It appeared that IBM had achieved the impossible dream that science-fiction novelists had been predicting for decades.

One of Watson’s lead developers, Dr. David Ferrucci, added to the quickly escalating hype with:

“People ask me if this is HAL in ‘2001: A Space Odyssey.’ HAL’s not the focus; the focus is on the computer on ‘Star Trek,’ where you have this intelligent information seek dialogue, where you can ask follow-up questions and the computer can look at all the evidence and tries to ask follow-up questions. That’s very cool.”

Seven years later, the hype train appears to be still stuck at the station. All the promise of 2011 has yet to materialize. How could something so utterly amazing back in 2011 still be struggling to find its place in the business world of today?

To answer that question, we need to go back in time to 2011 and look at exactly how Watson beat two extremely brilliant humans. What was its secret? And exactly how much AI was really involved?

Once you look at how Watson won, it’s not really that surprising that it was never able to capitalize on that victory in future years.

Those watching the show in February 2011 (when it aired) were told that Watson would not be allowed to connect to the internet. That would be cheating, right?

But what was not explained was that the team that built Watson had spent the previous five years downloading the internet onto Watson. Or, to be more precise, the parts of the internet that they knew Jeopardy took its questions from.

This was why the show was hosted at IBM offices. “Watson” was actually a massive room full of IBM hardware. Specifically, 90 IBM Power 750 servers, each of which contains 32 POWER7 processor cores running at 3.55 GHz, according to the company. This architecture allows massively parallel processing on an embarrassingly large scale: as David Ferrucci told Daily Finance, Watson can process 500 gigabytes per second, the equivalent of the content in about a million books.

Moreover, each of those servers was equipped with 256 GB of RAM, so that Watson could retain about 200 million pages’ worth of data about the world. (During a game, Watson didn’t rely on data stored on hard drives, because they would have been too slow to access.)

So, how did the Watson team know which parts of the internet to download and index? Well, that was the easy part. As anyone who has studied the game can tell you, Jeopardy answers can mostly be found from either Wikipedia or specific encyclopedias. “All” they had to do was download what they knew would be the source of the questions (called “answers” on Jeopardy) and then turn unstructured content into semi-structured content by plucking out what they felt would be applicable for any question (names, titles, places, dates, movies, books, songs, etc.). However, doing that was no simple feat, and one of the reasons why it took them five years to make Watson a formidable opponent.

It was a massively labor-intensive operation that required a large staff of IBM PhDs many years to accomplish. This was the first red flag that the Watson team should have been aware of.

Unfortunately, their goal at the time was to build a computer that could win a TV quiz show. What they should have been building was something that could be implemented in any environment without the need for an army of highly paid professors. Instead they were told, “Build something that can win on Jeopardy”. And that’s exactly what they did.

To this day, Watson is still notoriously picky about the kind of data it can analyze, and still needs a huge amount of manual intervention. In the world of AI, automation is a key requirement of any solution that hopes to be accepted in the business world. Labor-intensive solutions are massively expensive and require huge amounts of constant maintenance.

IBM made this work for Jeopardy because there was no realistic budget or timelines (it ended up taking them five years), and all it had to do was win one game. In the real world of business, budgets matter. And high maintenance costs will destroy any ROI.

IBM Watson Server Farm

Figure 1: Wikipedia-in-a-box

So, as you can see, from the get-go Watson had every advantage possible. While it wasn’t allowed to connect to the internet (remember, that would be cheating), it didn’t have to. IBM had already spent five years indexing the parts of the internet they knew they needed.

And then there were the rules of Jeopardy. These played directly into the plexiglass “hand” of Watson. Anyone who has seen the show knows how it works. The “answer” is displayed on the screen, then the host of the show reads it out loud. When he has finished, a light appears on a big screen and the competitors can press a button. The first person to press the button gets to state what the “question” is.

But here’s why a machine has a massive built-in advantage. On average it takes 6 seconds for the host to actually read out loud the question. That means that both Watson and the humans get six seconds to figure out if they know the answer or not. Six seconds for a room full of computers is a huge amount of time. Try typing in a question using Google on your phone and see how fast it is. Yes, less than one second.

What this means is that by the time the light comes up on the board, Watson already knows the answer (almost all of the time). But, oftentimes, so do really smart humans. So, theoretically, when the light flashes, it’s possible that everyone knows the answer (and that night the questions weren’t particularly hard).

Who gets to answer the question is determined simply by who can press the buzzer fastest.

What the TV audience didn’t get to see was that, to make this seem kind of human, the Watson team had rigged the machinery with a plexiglass hand that would automatically depress the buzzer the exact moment the light appeared on the board. This took just 10 milliseconds to happen. The best any human can expect is about 100 milliseconds (try running this test yourself and see how fast you can do it). Which meant that it was technically impossible for any human to click the buzzer faster than Watson.
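The arithmetic of that buzzer race is worth spelling out. The figures below are the approximations quoted in this article (6 seconds of reading time, ~10 ms for the plexiglass hand, ~100 ms for an elite human), not measured data:

```python
# Back-of-envelope check of the buzzer race.
host_read_time_s = 6.0   # time both sides have to find the answer
watson_buzz_s = 0.010    # plexiglass hand: ~10 ms after the light
human_buzz_s = 0.100     # ~100 ms: near the floor of human reaction time

advantage_ms = (human_buzz_s - watson_buzz_s) * 1000
print(f"Watson buzzes ~{advantage_ms:.0f} ms before the fastest human")
```

Six seconds is an eternity for a room full of servers, so both sides usually know the answer when the light appears; the ~90 ms buzzer gap then decides every question, unless the human gambles by anticipating the light.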

The only real way for the humans to stand a chance was to anticipate the light going on (though if they clicked too early they invoked a quarter of a second delay before they could click again). But to help out the humans, Watson was programmed not to anticipate the light.

If you watch the show, you can see that Ken Jennings clicked his buzzer on almost every question that Watson won on. He just wasn’t fast enough. No human could be, unless they got really lucky with their anticipation of the light going on.

Also, what wasn’t apparent to the viewers was that Watson wasn’t “listening” to any questions. An ASCII text file was transmitted to Watson the moment the question appeared on the board. Watson then parsed that question (which actually was a very impressive feat by IBM) to figure out the true intent, and IBM then used a synthesized voice to read out the answer.

In 2011 this was a genuine achievement by IBM, and we do salute the team that worked on it. What they did was not easy and did advance our understanding of AI. But not really in the way the world understood it. Watson was one small (massively over-hyped) step forward, not the huge leap it appeared to be.

What IBM did was build a fantastic Jeopardy machine. It did use elements of AI, but wasn’t quite the miracle it seemed to be. Yes, it was part AI, but it was also part “Wizard of Oz”. And because it was pretty much a one-trick pony in 2011, IBM has subsequently struggled to make it work in the Enterprise. Though they have tried.

What has become very apparent since 2011 is that what “worked” for winning at Jeopardy doesn’t work today in the real world. AI has come a long way since those pioneering days, and approaches to creating the ultimate Q&A machine have altered dramatically.

While we commend Watson, and the awareness it created, we believe there are better ways to implement an AI solution that do not require an army of PhD graduates. AI is something that should be, and can be, implemented in a matter of weeks. And which can also be easily maintained – delivering ROI on day 1.

Please contact us to learn more…

Contact Us

In the 80’s and 90’s Sony ruled the world of portable music. Everyone remembers the game-changing Walkman series of products that revolutionized the portable music landscape and enriched our lives. Then digital music came along in the form of MP3s. Suddenly all the music online was in this format, with the only problem being that Sony supported only its proprietary ATRAC format on all its meticulously manufactured hardware. This was not a recipe for success, and it culminated in the dead-on-arrival Sony Network Walkman. In 1999 it was the smallest digital music player on the market. Beautifully crafted, like all Sony products were back then (though it kind of looks like a bicycle lock now), it defiantly played only ATRAC files.

Network Walkman

Figure 1: Beautifully crafted obsolescence

The death knell tolled for the Sony digital music division in October 2001 when Apple announced, in all its mock-turtlenecked glory, the iPod combined with iTunes. It was the ultimate MP3 player. It took Sony a further four years to fully realize what had happened (ironically, right before the iPhone was announced), but the rest of the world got it immediately.

Today we are seeing a parallel: chatbot vendors delivering solutions that require you to load all your data into their proprietary AI systems, with their proprietary formats.

These are AI systems that you have no control over, and, often, systems that are highly insecure. Why would any vendor require that when we have such a thing as APIs? Shouldn’t AI technology be smart enough to use APIs to look at your data in real-time and figure stuff out? The answer is simple: yes, it should.

In 2018, APIs are to AI what MP3 was to digital music in 2001.

Let’s go with the most basic of examples to illustrate this. All chatbot vendors provide what’s called an FAQ feature (though the better solutions do a LOT more). Meaning that you ask the chatbot a question and it gives you an answer. But the “bad” vendors require you to load all the answers into their proprietary AI solution, in their proprietary format. Why is this bad? Let us count the ways:

  1. Those answers already exist in some knowledge base in your current systems. Now you have to maintain the answers not just in your current systems, but also in the chatbot AI. This is called dual maintenance, and it’s bad. The chances of your web systems’ content being out of sync with your chatbot content just went through the roof.
  2. The tight control you have in your current content management systems (CMS), and all the sophisticated features they come with, almost certainly don’t exist within the chatbot vendor’s solution (which was never built to be a true CMS). So, you now have to deal with a knowledge base that is very loosely controlled, and one which would never pass muster as a CMS if you were actually in the market to purchase a new one. Which was never your intention in the first place (and nor should it be).
  3. From having your content tightly controlled on your systems (cloud or on-premise), you’ll now have your content stored on someone else’s server in an environment you have no control over.
  4. If you did decide to store all your Q&As within the chatbot vendor’s solution, then you now have what’s called vendor lock-in. The vendor will like that, but it doesn’t help you in any way at all.

The lesson to be learned is that you should not be so dazzled by the thought of AI that you thrust yourself into a world of disorder. As always in life, be aware of the Cynefin framework:

Cynefin illustrative sketch

Figure 2: Avoid selecting a solution formulated in the wrong domain (especially the fifth one – Disorder)

In simplest terms, the Cynefin framework exists to help us realize that all situations are not created equal and to help us understand that different situations require different responses to successfully navigate them.

So now that we know how things shouldn’t work, let’s look at how they should.

    1. Any true AI chatbot FAQ feature needs to be able to consume answers already stored in your potentially many knowledge-base repositories. These include:
      • Oracle Content & Experience Cloud
      • Salesforce
      • PeopleSoft Interaction Hub
      • Oracle Service Cloud
      • PeopleSoft HR Help desk
      • ServiceNow
      • Drupal
      • SharePoint
      • WordPress
      • Etc.
    2. Also, the answer for each question could be contained in any of those systems, and your chatbot should know exactly which one.
    3. It needs to be role/group aware. Such that it knows key demographic data about who is asking the question and is therefore able to deliver the correct answer for that person.
      Ex: If a French employee wants to know what the time off policy is, don’t reply with the USA policy.
    4. It needs to be able to understand questions asked in potentially more than 100 languages.
    5. It needs to be able to utilize AI to summarize long answers down to a digestible length.
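The federated, role-aware lookup described in the requirements above can be sketched as follows. The repository contents, audiences, and keyword scoring are invented stand-ins for real API calls into systems like ServiceNow or SharePoint; a production chatbot would query those systems live rather than hold any content itself.

```python
# Answers stay in their source systems; the chatbot queries them in real
# time and filters by audience (illustrative data, not a real integration).
REPOSITORIES = {
    "ServiceNow": [
        {"q": "time off policy", "audience": "US",
         "a": "US employees accrue 15 days/year."},
        {"q": "time off policy", "audience": "FR",
         "a": "French employees receive 25 jours de congés."},
    ],
    "SharePoint": [
        {"q": "expense report", "audience": "ALL",
         "a": "Submit expenses within 30 days."},
    ],
}

def faq_answer(question: str, user_country: str) -> str:
    q_words = set(question.lower().split())
    best = None
    for system, entries in REPOSITORIES.items():
        for entry in entries:
            if entry["audience"] not in (user_country, "ALL"):
                continue  # role/group awareness: skip other audiences' answers
            score = len(q_words & set(entry["q"].split()))
            if best is None or score > best[0]:
                best = (score, system, entry["a"])
    if best and best[0] > 0:
        _, system, answer = best
        return f"[{system}] {answer}"  # the chatbot knows which system answered
    return "Sorry, I couldn't find an answer."

print(faq_answer("What is the time off policy?", "FR"))
```

Because the answers are fetched on demand, there is no dual maintenance: updating the policy in ServiceNow updates the chatbot’s answer automatically.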

Doesn’t this sound better? It does to us too. And that’s why we built our chatbot this way. A chatbot that adapts to your content and your data on day one.

To summarize: an AI solution that requires a massive data and content dump of all your Enterprise systems is not a valid proposition that anyone should be considering. Other than the obvious data privacy issues, it’s also not a technically tenable solution. Yet in the rush to market by hundreds of vendors peddling chatbots, there is a general hope by them that all these giant red flags will be ignored.

Our advice is that organizations should be diligent, and only accept solutions that accommodate the scalability and security requirements of large Enterprise systems.

And as we have said in previous blogs, in the world of technology, all that glitters is not necessarily gold.

We hope that this blog has laid out a better way for your organization to take advantage of the next industrial revolution: Automation via AI.

To find out more, please contact us below…

Contact Us

Automation: the application of machines to tasks once performed by human beings

It’s hard to comprehend, but it was just over a hundred years ago when car manufacturers relied on horse-drawn carriages to deliver each frame to the workers. Then, in 1913, Henry Ford introduced the industry’s first moving assembly line at the Highland Park Plant in Detroit.

Now, complex robots with “laser eyes” perform everything from welding, to die casting and painting, alongside their human colleagues on the factory floor.

Meanwhile, automation in the so-called “hi-tech” world of software has been much slower coming. Servers need bouncing, patches must be applied, security alerts need investigating, performance needs to be tuned, and code needs to be developed. All by hand.

And this is just maintenance of the actual machines! If we then flip the script and look at the experience of the humans using the software, we see the same glaring inefficiencies.

Chatbot Service

Figure 1: The help desk of the future

Entire “call center” industries have been created, all predicated on the idea that Enterprise software is too difficult to use – and only manual intervention can fix that.

The advent of the Cloud was the first step in starting to solve this problem, but in itself was not the solution. What it was, was the enabler of the solution. Step one was about creating one infrastructure and one code-line supporting thousands of organizations.

Step one was great, but step two is where the Cloud starts to realize its true potential. Step two is all about automation.

And this is where the real benefits to organizations are realized, all powered by the driver of what will be the next industrial revolution: Artificial Intelligence (AI). Today we can confidently say that the future is autonomous. And that the future is now “open for business”.

Rosie from the Jetsons

Figure 2: The future, as described in The Jetsons, isn’t so far away!

But what is fascinating about this new era is that it will differ from every revolution before it in one key way: barrier to entry.

In the past, only the super-rich could take advantage of each industrial revolution. Want to build cars? Well, good luck finding the money to create a factory of robots and assembly lines. But want to fully automate your entire IT infrastructure and enable robots to automatically handle “help desk” calls? All you need is a subscription. Everything you need has already been built, and it’s in the Cloud.

The new industrial revolution will be priced on a pay-on-demand basis, and will be available and attainable for all organizations, big or small. There will be almost no barrier to entry. The playing field will be leveled by the most egalitarian revolution in the history of mankind.

APIs and AI will combine to enable automation in the same way a faucet, when turned on, provides water.

A water tap

Figure 3: Once upon a time this was a revolutionary means of transporting water!

So, why is the autonomous revolution now market-ready? Two things:

1. Oracle’s Autonomous Cloud Platform
A self-driving, self-securing, self-repairing stack that exists at each level: IaaS, PaaS, SaaS.

Autonomy across the entire platform is key for many reasons:

  • Lower downtime
  • Improved performance and stability
  • Lower operational costs
  • Ease of compliance
  • Greater security

Note: the security aspect cannot be overstated. Many Cloud systems are hardened at the perimeter only, like an egg with a hard shell. Unfortunately, once the perimeter is breached, there is little to prevent further intrusion.

With Oracle autonomously baking security into every level, there are multiple levels of security that are all monitored, patched, and repaired in real-time.  This means the entire system is always in compliance without the need for human intervention.

85% of all security breaches occur when a patch is available but not applied.

2. IntraSee’s Autonomous Chatbot Solution (which resides solely on the Oracle Autonomous Cloud platform)
A chatbot that autonomously builds itself utilizing pre-built Enterprise skills that provide a fully-formed chatbot from day one. Also, because it resides on the Oracle Autonomous Cloud platform, it is the most secure chatbot solution available.

Figure 4: Automation, the new “Easy” button

What also stands out with Oracle’s and IntraSee’s offerings is the low barrier to entry. At IntraSee we pride ourselves on making AI a turnkey solution. Which is why both Oracle and IntraSee provide pilot options, where you can quickly see the value of automation for your organization.

With our chatbot solution we can implement and configure in just four weeks. Let your department kick the tires for another four weeks. Then roll it out to a larger pilot audience for another four weeks.

If you like it? Then roll it out to your entire organization in another four weeks.

Using IntraSee’s Chatbot 4×4 implementation methodology, the benefits will be instant: chatbots handling complex calls for less than one dollar a conversation, compared to $5 to $200 for an actual human. Plus there’s the added bonus of higher customer, employee, and student satisfaction.
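The cost arithmetic is simple enough to run yourself. The per-conversation figures come from this article; the monthly volume below is invented purely for illustration:

```python
# Rough monthly savings estimate from the per-conversation costs above.
conversations_per_month = 10_000               # hypothetical volume
chatbot_cost = 1.00                            # dollars per conversation
human_cost_low, human_cost_high = 5.00, 200.00 # dollars per conversation

savings_low = conversations_per_month * (human_cost_low - chatbot_cost)
savings_high = conversations_per_month * (human_cost_high - chatbot_cost)
print(f"Monthly savings: ${savings_low:,.0f} to ${savings_high:,.0f}")
```

Even at the conservative end of the range, the savings scale linearly with call volume, which is why the ROI case is strongest for organizations with busy help desks.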

The future will belong to those organizations that embrace autonomy. And those that don’t will struggle with never-ending spiraling operational costs that will hinder their ability to compete.

Imagine if Ford were still trying to transport car frames via horse-drawn carriages. That’s right, they wouldn’t exist today. The lesson in life, as always, is adapt or perish.

typewriter vs laptop

Figure 5: Old vs. New

If you’d like to know more, please contact us.

Contact Us

Since the dawn of time (almost), people have been communicating via conversations. And this worked great for many thousands of years. Then computers happened, and suddenly we all had to learn a different mechanism for communicating. This was loosely referred to as computer-speak. And over the years it changed as technologies changed. From mainframe, to MS-DOS, Windows/Mac, client-server, and then the web. But the one constant over the years was that it was still computer-speak. Meaning that it had nothing to do with how humans actually spoke to each other.

And this is the crux of why your Enterprise systems are so difficult for your organization to use today, and why it costs you so much money to support everyone and everything.

Implementing a chatbot solution for your Enterprise, if done right, will save your organization millions of dollars per year, increase organizational efficiency, and improve user satisfaction with your systems.

The ROI is dramatic. A conversation with a chatbot should cost less than 50 cents. A conversation with a human, by contrast, costs your organization 10+ times as much, and, if HR professionals are involved, often 100+ times as much.
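To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch. The per-conversation costs come from the figures above; the annual conversation volume is a hypothetical assumption for a mid-size organization, not a real customer number.

```python
# Back-of-the-envelope ROI sketch. Per-conversation costs come from the
# figures in the text; the annual volume is a hypothetical assumption.
CHATBOT_COST = 0.50               # dollars per chatbot conversation (upper bound)
HUMAN_COST = 5.00                 # dollars per human conversation (10x the chatbot)
CONVERSATIONS_PER_YEAR = 100_000  # hypothetical volume for a mid-size organization

annual_chatbot_cost = CHATBOT_COST * CONVERSATIONS_PER_YEAR
annual_human_cost = HUMAN_COST * CONVERSATIONS_PER_YEAR
annual_savings = annual_human_cost - annual_chatbot_cost

print(f"Chatbot: ${annual_chatbot_cost:,.0f}  "
      f"Human: ${annual_human_cost:,.0f}  "
      f"Savings: ${annual_savings:,.0f}")
```

Even at the conservative 10x multiple, deflecting 100,000 conversations a year saves $450,000; at the 100x multiple the savings are an order of magnitude larger.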

But to get there everyone needs to be comfortable with what this new era will look like, and how to get it started. To that end, the recommendation of Gartner (and we wholeheartedly agree) is to pilot the solution to ensure that you don’t embark on a failed IT journey.

So, here’s our recommended 10 steps to ensure that you hit this one out of the park on the first pitch.

Step 1: Understand how much it will cost

First off, pilots should be cheap, and they should be short (and if they are not, then that’s a red flag). But it’s also super-important to understand what the cost structure looks like once you’ve rolled this out across your entire organization. So, ask the vendor that’s pitching the pilot to fully explain what things will cost after the pilot is done and you’ve decided to roll the solution out to everyone in your organization.

The last thing you want to tell your CEO is that the pilot was a success, but it’s not cost-effective to roll out the solution organization-wide. Remember, the #1 reason for implementing a chatbot solution is to reduce operational costs.

Step 2: Understand how much of your team’s time will be needed, and what is required to support it

All implementations will require some time from your internal teams. That’s normal and desirable. But you really should avoid solutions that could tie up teams of your people for months on end.

Also, once implemented, you should expect support and maintenance to be minimal too. If you suddenly discover you are now performing “dual maintenance” of content, then this means you chose the wrong solution.

A chatbot solution should require minimal involvement from your team to set up, monitor, and support.

Step 3: Ensure that infrastructure/security is discussed and vetted early on

It feels like every day we hear about a security breach at some of the biggest internet companies in the world. Concerns about security and infrastructure are very real and must be addressed. It’s critical to understand the architecture that the chatbot provider is proposing, as you may discover that your data is not only flowing through channels you’d prefer it not to, but is also actually being stored in places it shouldn’t be.

At IntraSee we take infrastructure and security very seriously. Which is why we reside solely within the fully certified Oracle Cloud.

Step 4: Identify your pilot audience

We would recommend around 200 people for an average chatbot pilot. And while it may be tempting to select them all from the IT department, we would suggest drawing from a broad range of your organization, focusing on employees/managers or students/faculty (in the higher education world).

Try to select a broad demographic that most accurately represents your organization, including speakers of any languages you would want the chatbot to support.

Step 5: Understand any language requirements

Chatbots are capable of conversing in many languages, some better than others. So be aware of which languages the people in your organization will be conversing in, and check with your vendor to get feedback on how competent they believe the chatbot is in those languages.

Adjustments can be made for any languages that the chatbot may not be completely proficient in. Just let your vendor know up front.

Step 6: Have a pilot-to-production plan in place

Ok, so the pilot went great. Everyone loved it and now you want everyone in your organization to get to use it. Be sure that you have a pilot-to-production plan in place before you start your pilot. That way you’ll know what to do next, and how much it should cost.

The key to pilot-to-production is that it should be a relatively quick process. We would say that four weeks should be attainable, but that anything more than twelve weeks would be excessive.

Step 7: Think in “conversational” terms when solving for use-cases: what do people need help with?

Talk to your help desk team and your HR Generalists; they can provide you with the top 100 questions they get asked most frequently. That’s a great start. But also get insight into how the conversations go.

  • How are the questions phrased?
  • Are there follow-up questions based on specific answers?
  • Are discovery questions needed?
  • Would the inclusion of specific data help with the conversation?
  • What’s the best way to answer a question?
  • Does the answer vary based on location, seniority, etc.?
  • Is a follow-up required after the answer has been given?

This will help with configuring how the chatbot interacts with your people.

The idea is that the chatbot understands your best practices and follows them every single time. Just like a perfect employee would.
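One way to capture those help-desk insights is as a structured definition the chatbot follows every single time. The sketch below is purely illustrative: the field names, the “time off” example, and the matching function are our own assumptions, not any particular vendor’s schema.

```python
# Illustrative sketch of capturing help-desk best practices as a reusable
# intent definition. All field names and values here are hypothetical.
time_off_intent = {
    "name": "request_time_off",
    "phrasings": [                    # how people actually ask the question
        "how do I take time off?",
        "I need vacation days",
    ],
    "discovery_questions": [          # data to gather before answering
        "What dates are you requesting?",
    ],
    "answer_varies_by": ["location", "seniority"],
    "follow_up": "Would you like me to submit the request for you?",
}

def phrasing_matches(intent, utterance):
    """Naive check that an utterance matches one of the known phrasings."""
    return utterance.strip().lower() in (p.lower() for p in intent["phrasings"])

print(phrasing_matches(time_off_intent, "How do I take time off?"))  # True
```

A real Enterprise chatbot would use natural-language understanding rather than exact phrase matching, but the point stands: the conversation’s best practice is encoded once and applied consistently, just like a perfect employee would.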

Step 8: Understanding context

It’s critical that your chatbot understands context, because then it can make helpful suggestions during a conversation. Just like a well-trained support person would. Plus, it’s also important that the chatbot understands who you are and where you are. This way a meaningful conversation can take place, as opposed to a simple Q&A.

So, for example, if someone asks a question about taking time off work, the chatbot should already know that, if the person resides in Germany, it should respond with the German policy (which may be different from the American policy). And it should also offer to help the person complete a leave of absence request, if they so wish.
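The time-off example above boils down to a context lookup: the same question yields a different answer depending on who is asking and where they work. Here is a minimal sketch; the policy table, placeholder policy text, and user records are all hypothetical.

```python
# Sketch of context-aware answering: the user's location selects the
# policy, and the chatbot offers the next action. All data is hypothetical.
LEAVE_POLICIES = {
    "DE": "Here is your company's German time-off policy.",
    "US": "Here is your company's American time-off policy.",
}

def answer_time_off(user):
    """Pick the policy for the user's country, then offer a follow-up action."""
    policy = LEAVE_POLICIES.get(user["country"], LEAVE_POLICIES["US"])
    return f"{policy} Would you like help completing a leave of absence request?"

print(answer_time_off({"name": "Anna", "country": "DE"}))
```

The key design point is that context (country, role, seniority) is resolved from the user’s identity before the answer is composed, so the person never has to explain who they are.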

Step 9: Think big, but be agile

There is a school of thought that says that chatbot pilots should only include a small number of “intents” (aka functionality). We at IntraSee do not subscribe to that point of view. A pilot using this methodology may appear to be a success – but that could all be an illusion.

The main (but not only) purpose of a pilot should be to see what would happen in a full rollout to your entire user population, but with a smaller group of people, so you can monitor what works and what doesn’t. And the only way to do that properly is to make your pilot scope close to what you expect your full production scope to be.

This means you also need to be agile during the pilot, adjusting to what you see, so that by the end of the pilot you have a close match to your pilot-to-production path. HINT: This is why you need a configurable and fully automated solution.

Step 10: Don’t try and “roll your own” chatbot pilot

There’s no doubt that chatbots are the most exciting thing to happen in the software industry in the past 20 years. And there are tons of examples on the internet about how easy it is to create a chatbot that can accept pizza orders. But Enterprise chatbots are highly complex and sophisticated creatures, and building one properly from scratch would take many years. So, the strong recommendation would be to use something already built off-the-shelf that can quickly be configured for all your needs. This isn’t just our recommendation; it’s Gartner’s too, from last year’s Gartner Application Summit.

Gartner’s clear advice for IT is to stop building things that are already built. Ultimately, it’s not just a waste of time and money; it also keeps IT primarily focused on “maintenance mode” instead of “innovation mode”.

If you’d like to know more, fill out the following form and we will send you our white paper.