Higher education is going through a major shift as institutions adapt to changing environments and student demands. Today’s students are looking for options outside of, or coupled with, a traditional four-year degree. The desire for lifelong education and the demand for lower-cost options have many traditional schools turning to online offerings. The shift has met resistance from administrations, but minds are starting to change. Education, once believed to be effective only in a classroom, is now available online, 24 hours a day, seven days a week, all over the world.

“This year 73% of schools made a decision to offer online programs based on growth potential for overall enrollment”

– 2018 Online Education Trends Report

Online education can open many doors for students who otherwise would never be able to attend a certain school. For example, Stanford rolled out 150 online courses this year. Students who never dreamt of attending Stanford, whether due to cost or distance, now can. Access to high-quality education is changing before our eyes.

This wave was never more apparent than when Purdue University purchased for-profit Kaplan University in 2017 and turned it into Purdue Global with the aim of serving post-secondary education. A traditional, nonprofit land-grant university is shifting to meet the demands of present-day students. Being mentioned in the same press release as a for-profit education company was unfathomable, until it wasn’t.

We aren’t talking only about adult students here, either. The trends point to traditional-age students (18-24 years old) turning toward online education as well.

“Students aged 18-24 saw the greatest year-over-year increase in online education enrollment at 115%”

– 2018 Online Education Trends Report

Unintended Consequences

However, no good deed comes without unintended consequences. Learning from a remote location can be a challenge. There is no building to walk into. There is no teaching assistant (TA) to sit down with. There is no residence hall advisor to check in on you. And rarely any other students to remind you of deadlines and schedules. 

Being online means you need online self-service mechanisms for support and help. A student’s success depends on it.

Many institutions have attempted to use their traditional telephone-based student support services. However, phone support is slow, produces inconsistent answers, isn’t available 24×7, and is mostly an experience students dread.

Also, help desks are not cheap to operate. Each call can generate a cost of at least $5 and oftentimes much more. 

Time has always been a student’s scarcest commodity. Whether it is balancing a full course load, or juggling work and family, students simply can’t waste time on hold waiting for someone to answer a basic question like: when am I allowed to register?

If perceptions of phone support weren’t bad enough, imagine how your worldwide student body feels when that support speaks only English. English-as-a-second-language learners require a special approach.

A 2018 study showed how many international students struggle with relationships, finances, and feelings of isolation and belonging, all of which affect their educational experience. For example, only 35% of respondents reported feeling a part of the university.

– The International Journal of Higher Education Research

Digital Assistant to the Rescue!

Superman chatbot with a graduation cap

Able to help thousands of students in a single bound!

Digital assistants are a type of enterprise chatbot that not only answer questions and provide support, but also help you complete tasks such as changing your email address on file with the registrar, or even signing you up for a class. They know who you are and can personalize their service to you. Today’s digital assistant is not your parents’ MovieFone.

Your digital assistant can be that self-service help, 24 hours a day, 7 days a week. Whether it is questions for the registrar, financial aid office or student services, the digital assistant provides consistent answers at speed. 

Average response time from the digital assistant is usually sub-second. With student demand for instant answers and their dislike for waiting on hold, chatbots are not only an essential tool, but oftentimes the preferred communication method of students.

Supporting students in their native language can bring a comfort to someone reaching out for help. Digital assistants can provide that multi-lingual help.

Figures speaking multiple languages

Speak all of your students’ languages

Our chatbot can speak over 100 languages automatically. That is a level of service that would otherwise be very expensive, and almost impossible, to provide with traditional support centers.

No conversation about student systems can be had without considering the impact on student success. Digital assistants open up all sorts of ways to help students along their academic journey. While we have touched on support functions already, sometimes students need a more proactive nudge.

The digital assistant knows, for example, when a student has an assignment coming up or an advising appointment so it can make sure the student is reminded. If the student has a hold placed on their account which can interfere with graduation, the digital assistant can pop up and help them resolve the issue. 
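As a rough illustration of that proactive behavior, here is a minimal Python sketch of a "nudge" collector. The student record and field names are invented for this example; a real digital assistant would pull this data from the SIS/LMS through its integrations.

```python
import datetime

# Hypothetical student record, invented for illustration.
student = {
    "name": "Jordan",
    "assignments": [{"title": "Essay 2", "due": datetime.date(2019, 3, 4)}],
    "holds": ["transcript hold"],
}

def proactive_nudges(student, today, horizon_days=3):
    """Collect the reminders the assistant should push to the student."""
    nudges = []
    # Remind about assignments due within the horizon.
    for a in student["assignments"]:
        days_left = (a["due"] - today).days
        if 0 <= days_left <= horizon_days:
            nudges.append(f"Reminder: '{a['title']}' is due in {days_left} day(s).")
    # Surface any account holds and offer to help resolve them.
    for hold in student["holds"]:
        nudges.append(f"You have a {hold} on your account. Want help resolving it?")
    return nudges

for msg in proactive_nudges(student, today=datetime.date(2019, 3, 2)):
    print(msg)
```

The point of the sketch is the shape of the logic, not the data model: the assistant watches upcoming dates and account state, and initiates the conversation instead of waiting to be asked.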

You may be wondering… what about complicated problems with nuanced solutions or those that really need the personal touch? Chatbots don’t replace all personal interactions. The chatbot can sense when the student is stuck and transfer them to a live person or have someone such as their advisor follow up with a phone call or personal visit. 

An even better consequence of deploying a digital assistant is that it frees up time for key roles like academic advisors, who no longer need to answer common, mundane questions and can refocus on activities that help students succeed. Plus, it negates the need to increase help desk staff to support an increasingly online student body.

And, of course, digital assistants don’t go to sleep, and they never call in sick.

Because digital assistants are extremely cheap to run, they are the key to keeping operational costs down while student enrollment rises.

AI is driving, and supporting, a new era of technological disruption

When you see the big names such as Purdue University, Stanford, MIT and Harvard getting into online education, you know the winds of change are blowing. And with those winds, we can’t lose sight of supporting our students in these new models and ensuring they are successful. Digital assistants can address many of the real problems presented by changing models. Consider that in the next decade, the incoming class of students will have never known life without Alexa or Siri or Google Assistant. This group will expect AI to be in place to support their needs.

This wave of change and the promise of cost savings, expanded enrollment, and better student success are compelling enough that CIOs in higher education surveyed by Gartner designated artificial intelligence as the top game changer for 2019.

Getting started with a digital assistant for your institution is as easy as our 12-week pilot program which has no long-term commitments. Contact us to learn more and see a demo for yourself. 

Contact Us

There’s a very old joke in the software industry:

What’s the difference between a car salesman and a software salesman?
A car salesman knows when he is lying.

Unfortunately, there’s a huge amount of truth to this joke, and the explanation is simple: software is complicated, and cars are straightforward. With a car, you can generally read up on everything you need to know (enough to sell it, anyway) in a matter of hours, while software can take months to really understand. Then factor in the myriad ways it can be used, and the business requirements people may have of it, and even the best salespeople can be stumped by a question.

Oftentimes they really do believe they know the answer. And that’s the source of the joke.

And this leads us to the new era of software: Artificial Intelligence (AI). And, of course, this means a whole bunch more woefully inadequate answers to very reasonable questions.

How does the chatbot know what to do when we ask it a question?
Sales Person:
It learns using AI.
But how?
Sales Person:
It just does. It’s called deep learning.
But what if it makes mistakes?
Sales Person:
It learns from its mistakes.
But how?
Sales Person:
It uses deep learning.

Obviously, this isn’t how any of this works at all. But given the mystery that shrouds all things AI, it’s not a surprise that these types of conversations take place.

So, to add transparency to what will be a very challenging subject to evaluate for many organizations, we’ve created a list of five facts that are critical to aid the understanding and implementation of a chatbot solution in the Enterprise.

Fact 1: AI is like a garden, it needs seeding & cultivation

Robotic hand gardening

Figure 1: Automation of nature and nurture

Out of the box, chatbot engines come with a general understanding of language and grammatical constructs. They also have a limited understanding of entities. For example, I can ask a chatbot to do something “next Tuesday” and it will know what that date is, because it has knowledge of an entity that defines what a date can be. It also understands “today” and “tomorrow”. It may also understand people’s names and the cities in a country: “Is the Chicago office open tomorrow?”

What chatbots generally don’t know out of the box are the things particular to your domain. They don’t understand HR jargon or campus terminology. They don’t know which departments you have, or your job titles. Terms like “leave of absence”, “expense reimbursement” and “travel auth” aren’t considered entities with specific meanings, the way “next Friday” or “tomorrow” are.
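To make the idea concrete, here is a minimal Python sketch of what "seeding" domain entities might look like: a value-list matcher that maps your organization's jargon and synonyms to canonical entity values. The entity names and synonym lists are invented for illustration; real platforms offer far richer matching, but the seeding principle is the same.

```python
# Hypothetical domain entity seed: canonical values mapped to synonyms.
DOMAIN_ENTITIES = {
    "LEAVE_TYPE": {
        "leave of absence": ["unpaid leave"],
        "adoption leave": ["adoption time off"],
    },
    "HR_PROCESS": {
        "expense reimbursement": ["expense report", "reimbursement"],
        "travel authorization": ["travel auth", "travel approval"],
    },
}

def extract_entities(utterance):
    """Return (entity_type, canonical_value) pairs found in the utterance."""
    text = utterance.lower()
    found = []
    for entity_type, values in DOMAIN_ENTITIES.items():
        for canonical, synonyms in values.items():
            # Match the canonical value or any of its seeded synonyms.
            if any(term in text for term in [canonical] + synonyms):
                found.append((entity_type, canonical))
    return found

print(extract_entities("How do I submit a travel auth for next Tuesday?"))
# → [('HR_PROCESS', 'travel authorization')]
```

Without the seed data, “travel auth” is just two words; with it, the chatbot can treat the phrase as a meaningful entity, the same way it already treats “next Tuesday”.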

So it’s important to “seed” the AI on day one of your implementation. In many ways, it’s just like how a farmer cultivates a field. The farmer doesn’t just hope that nature will turn the field into a spectacular crop of wheat. Nature can only do so much; the farmer must do his or her part as well. The soil must be prepared, the seed planted, and the field inspected and tended each day to ensure growth is going to plan.

For AI, it’s critical to plant the seed of domain knowledge on day one, then monitor usage to identify where coverage needs to expand and where additional training and seeding are needed. If the chatbot is HR-focused, it needs an entire vocabulary injected and trained in preparation for use by actual humans.

If your chatbot doesn’t understand the difference between an adoption reimbursement program and an adoption leave program, it is destined to disappoint.

Fact 2: It’s not deep learning and big data that will be the key to success, it’s smart algorithms and neural networks

Last year we wrote a blog on AlphaGo Zero, and talked about how it wasn’t deep learning that made it so smart. The same thing is true of Enterprise chatbot implementations. Deep learning is a very powerful tool, but it isn’t the answer to everything. Neural networks and smart algorithms are the real engine behind a successful chatbot implementation.

Figure 2: Monte Carlo Tree Search in AlphaGo Zero, guided by neural networks

The lesson AlphaGo Zero taught the world was that AI is at its most powerful when it can map out its own neural network, while also readjusting decision points based on actual outcomes. This is why creating an incredible Chess or Go master is much easier than creating AI that cures cancer. 

In the Enterprise chatbot world, sophisticated decision networks don’t just create themselves, and deep learning doesn’t build them. They need to exist on day one, and they need to have been pre-built with domain knowledge and stacked with business rules that determine flows.

In the same way AlphaGo Zero needed to be aware of the rules of Go, your Enterprise chatbot needs to be aware of the best practice rules of the Enterprise. Only then can it be trusted by your employees, managers, students and faculty members.

In the Enterprise chatbot world, this equates to massively complex and sophisticated dialog flows that come pre-built and configurable for your business requirements. And that have over a decade of domain knowledge built into them.

Fact 3: AI is not a data warehouse

ones and zeros being inspected

Figure 3: AI is not a data warehouse

Those who remember IBM’s Watson winning Jeopardy may be disappointed to know that what they were really watching was a massive data warehouse stored in memory, with a search feature that had been built manually in order to meet the needs of one game.

There are many reasons why putting sensitive data into a proprietary AI engine in the Cloud isn’t a good idea: security, data privacy, dual maintenance, and conversion effort are just some of them. Your data belongs where it is right now.

It’s the obligation of the AI vendor to be able to plug into your data, not the other way round. As always in life, the tail should not be wagging the dog.

Of course, there are reasons why many vendors require this: laziness and lack of knowledge top the list.

Creating sophisticated data adapters that broker between your data and the AI isn’t easy and takes a lot of domain knowledge. We know this because we’ve spent ten years doing it.

With many vendors jumping into a market they have no knowledge of, shortcuts have been taken. But that doesn’t change the fact that your data needs to be protected, and chatbot implementations shouldn’t turn into massive integration projects.

Fact 4: A concierge chatbot is a requirement, not a nice to have

Tug of war between robots

Figure 4: Avoid chatbot confusion

Does anyone remember what a link farm is? Yes, they were awful. Quite possibly the worst manifestation of web-based technologies. And the problem was obvious, all those link farms did was sow confusion and frustration with the poor users who had to deal with them.

It’s 2019 now, and we face a similar conundrum. One chatbot that knows everything. Or hundreds of chatbots that know bits of information, but no way for the user to know which ones know what. Imagine being handed 100 help desk numbers and being asked to guess which one was the right one based on your area of need.

Fortunately, the problem has a solution. A concierge chatbot can be used as the focal point for all questions. All the concierge is responsible for is knowing which chatbot knows the answer to which question, and then seamlessly managing the handoff in the conversation, such that to the human it feels like one conversation with one chatbot.

This way the humans only ever need to start a conversation with the concierge. The ultimate one-stop bot.
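A minimal Python sketch of the concierge pattern might look like the following. The keyword-overlap "confidence" scorer is a toy stand-in (a real platform would use a trained intent classifier), and the skill names and threshold are invented for illustration.

```python
class Skill:
    """One function-focused chatbot that can score its own relevance."""
    def __init__(self, name, keywords):
        self.name = name
        self.keywords = keywords

    def confidence(self, utterance):
        # Toy scorer: fraction of this skill's keywords in the utterance.
        text = utterance.lower()
        hits = sum(1 for k in self.keywords if k in text)
        return hits / len(self.keywords)

class Concierge:
    """Routes every utterance to the most confident skill, or to a human."""
    def __init__(self, skills, threshold=0.3):
        self.skills = skills
        self.threshold = threshold

    def route(self, utterance):
        best = max(self.skills, key=lambda s: s.confidence(utterance))
        if best.confidence(utterance) >= self.threshold:
            return best.name
        return "human-handoff"

skills = [
    Skill("registrar", ["register", "enroll", "transcript"]),
    Skill("financial-aid", ["fafsa", "loan", "scholarship"]),
]
bot = Concierge(skills)
print(bot.route("When can I register for fall classes?"))  # → registrar
```

The user only ever talks to `Concierge`; which skill actually answers is an implementation detail, and a low-confidence question goes to a person rather than a wrong bot.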

Having worked extensively with Oracle’s chatbot framework, not only can we recommend it highly, but we can also attest to its concierge capabilities. Oracle is rolling out lots of small, function-focused bots (they call them “skills”), and all of these bots/skills can be managed by one concierge chatbot automatically. That means you can have one concierge that includes Oracle-delivered bots, IntraSee-delivered bots, and custom bots created by you.

Oracle uses the term “Oracle Digital Assistant” for this: concierge chatbot capability under one technology stack.

Fact 5: AI that requires massive human intervention, and coding development, isn’t AI

Storm trooper legos with guy at laptop

Figure 5: We like Joe, but Joe shouldn’t be creating Enterprise chatbots

AI that requires 95% of all its functionality to be created by the human hand isn’t really AI. It’s cool, but it’s not AI. It’s also not supportable or maintainable. The weakest link will be the human hand that pieced it all together. And if that hand has no domain knowledge, it won’t just be buggy, it will be stupid.

The real key to AI is not just automation of the task a chatbot can complete. It’s automation of the creation of the chatbot itself.

This blog has been about the facts as we see them at IntraSee. So let’s look at the facts of what a sample chatbot pilot generated. The background: 200 FAQs, 16 view-data intents, 6 transactions (promotions, transfers, etc.), and 10 reports (yes, a chatbot can run reports). Here are the technical numbers:

  • 24 Custom Entities
  • 690 Custom Component Invocations
  • 2,696 System Component Invocations
  • 3,386 States
  • 101,609 Transitions between States

We firmly believe that automation of creation is the key to AI success. Manually coding over 100,000 state transitions creates inherent instability and leads to what we would call a Frankenbot.

At IntraSee we have automated the creation of a chatbot, such that with one push button we can generate hundreds of thousands, even millions, of chatbot states, transitions, invocations, and entities. We do this for multiple reasons:

  • We remove human error from the equation.
  • We simplify the management and maintenance, such that a business user can easily deploy any changes.
  • We massively shorten implementation times to just a few weeks.
  • We can deliver more in four weeks than would normally be possible in over one year.
  • We can deploy mass changes, risk free, to a chatbot in a matter of minutes.
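To show what "automation of creation" means in miniature, here is a hedged Python sketch that generates dialog states and transitions from a declarative config instead of hand-coding them. The config format and state types below are invented for illustration, not any vendor's actual schema.

```python
# Hypothetical declarative config: intents and their canned answers.
faq_config = [
    {"intent": "RegistrationDates", "answer": "Registration opens March 1."},
    {"intent": "FinancialAidStatus", "answer": "Check your aid portal."},
]

def generate_flow(config):
    """Emit a dialog state machine: one answer state per intent, plus a
    shared router and fallback state, all wired automatically."""
    states = {
        "router": {"type": "intent-router"},
        "fallback": {"type": "human-handoff"},
    }
    transitions = []
    for item in config:
        state_name = f"answer_{item['intent']}"
        states[state_name] = {"type": "output", "text": item["answer"]}
        transitions.append(("router", item["intent"], state_name))
    # Anything the router can't resolve goes to a person.
    transitions.append(("router", "unresolved", "fallback"))
    return states, transitions

states, transitions = generate_flow(faq_config)
print(len(states), len(transitions))  # → 4 3
```

Scale the config from 2 intents to 200 FAQs, transactions, and reports, and the generated graph quickly reaches the thousands of states and tens of thousands of transitions quoted above; the generator, not a human hand, keeps them consistent.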

Please contact us if you’d like to learn more…

Contact Us

It was a pleasure to attend another fantastic Gartner conference in Las Vegas (November 26-29, 2018). And while Amazon had their own mega conference (re:Invent 2018) down the road at the Venetian, the smart set were taking a broader look at the future with the team from Gartner.

So, what we’d like to do is break down the key messaging that we got from the conference, based on the tracks that we followed.

So here we go: Gartner’s key messaging from November 2018.

“IT organizations need to stop thinking in terms of projects, and start thinking in terms of products.”

– Gartner

This was the keynote theme for 2018. Last year’s theme was that IT needed to discover the word “yes” in its vocabulary. This year, Gartner focused its messaging on redefining how IT needs to partner with the business community. Primarily, the advice was that IT should stop seeing the world in terms of projects and instead embrace products as the means of implementing solutions. And, of course, it wouldn’t be a technology conference without the introduction of a new catchy phrase: PRODUCTology.

The concept is straightforward and seeks to explain a lot of bad history in how IT has attempted to implement the dreams of the business community over many decades. It can be summed up as: projects are bad, products are good.

And the explanation makes a lot of sense. Projects are, by their nature, things of finite duration. Risk needs to be managed. Scope needs to be controlled. Expectations need to be set. And then, when the project is complete, everyone moves on to the next project, leaving in their wake a sterile, half-baked solution that in a matter of months begins to age and crumble. The technology equivalent of a potted plant that never gets watered.

Meanwhile products are forever (well, at least until their replacement comes along). By their nature they are born of innovation and designed to be an entity that continues to grow and morph over the years as demands change and new ideas come to mind. Products have owners that care about them and lovingly tend to their good health, while also making sure they are meeting the requirements of the people using them. Products have roadmaps, they have interested parties, and they have a purpose.

In the consumer space, products are what make the world go round. While in the IT world it’s projects. And that is what needs to change according to Gartner (and we agree). If the business world wants to see their Enterprise systems become more consumer-like, then PRODUCTology is where it all starts.

“IT needs to come to the business community with ideas, as a trusted partner, and not be seen as an order taker.”

– Gartner

Gartner also spent a good deal of time urging the IT community to be more proactive with how they engage with the business community. Instead of waiting to be told what they needed to deliver, IT should be coming to the business community with innovative ideas on how to meet the demands of the era of disruption that we are now entering. Once the business community sees the IT group as an engaged and enthusiastic partner, then the nature of the relationship will completely change. And in ways that will benefit the entire organization. Gartner’s observation was that when IT and the business community collaborate well together, good things happen.

“IT can shape demand and become a thought leader.”

– Gartner

For IT to become a thought leader, Gartner recommended the use of external sources for inspiration. From Google, to Github, to Gartner themselves. There’s a plethora of information that IT can make use of, plus lots of vendors only too willing to demonstrate what they can bring to the table (which includes IntraSee by the way). IT should be looking to bring these resources and vendors to the attention of the business community as a means of creating a dialog about the art of the possible.

“If IT focuses on successful delivery, without trying to create everything themselves, then the business community will fund their initiatives.”

– Gartner

Gartner also believes that IT needs to stop trying to recreate wheels that have already been built. It’s not the job of IT to build anything. But it is the job of IT to ensure that “things” (ideally products) are built, and implemented, correctly. And that may mean a collaboration with a vendor that has a solution, but which needs configuration and extension that IT needs to be involved in. But that does not necessarily mean that IT needs to be building the code (which now has to be maintained). Once IT gets out of the code maintenance world, and into the innovation and enablement world, great things will happen for the organizations they support. What’s important isn’t how things get done, it’s what gets done that counts.

“Don’t try and build chatbots yourself. Building bad chatbots is easy. Building great chatbots is very hard. Find a vendor that understands your domain and can demonstrate excellence.”

– Gartner

And if there’s one thing that Gartner strongly recommended IT should not be trying to build, it’s chatbots. Instead, IT should be evaluating chatbot vendors and, through a process of evaluation and demonstration, figuring out which ones truly match the hype, have the domain knowledge, and work securely with your existing Enterprise systems. Trying to build a “brain” from scratch may lead to a “Frankenbot” that consumes your organization’s resources for many years.

The more research that IT does in this area, the less the chance that expensive, embarrassing, and time-consuming mistakes will be made.

“Don’t create ‘Technical debt’”

– Gartner

This isn’t a new concept. “Technical debt” refers to code added now that will take more work to fix later, typically for the sake of rapid gains. Shortcuts, hacks, and poor design choices all lead to huge costs later on. Those costs aren’t just financial, but reputational too. IT often creates technical debt for itself out of a desire to build things it doesn’t need to build, then gets sucked into a maintenance and rewrite cycle that stymies its ability to take on new requirements from the business community.

Gartner very strongly believes that taking on unnecessary technical debt causes IT many issues that it needs to avoid.

“95% of bots in the market are s***”

– Gartner quoting Chatbot Summit 2018

Microsoft Clippy

Figure 1: Don’t implement your own chatbot version of “Clippy”

At IntraSee we would concur with this statement by Gartner 100%. The chatbot market right now is flooded with vendors who have massively subpar solutions. Many of them don’t have any experience in the Enterprise space, and have no domain expertise at all. Even an industry stalwart like IBM, with its Watson product, has failed to take a good idea and turn it into a viable Enterprise chatbot.

At IntraSee we firmly believe that a chatbot that is built by automated means, that can plug into your existing Enterprise systems, and comes delivered from day one with domain expertise, is the only way to deliver a chatbot solution.

And, we would say that this is something you need to see to believe. So, while you are looking at other chatbots in the market (which we encourage you to do), we would strongly advise you look at what we do too. You’ll see the difference immediately.

So please contact us to arrange an online demonstration of an Enterprise chatbot in action. And welcome to the world of PRODUCTology!

Contact Us


The week of October 22nd was a fun time to be in San Francisco at Oracle OpenWorld. As usual there was an overriding theme that dominated the conference, and this year it was robots. Robots that manage entire Cloud architectures, and robots (aka chatbots) that engage in complex conversations with humans.

2019 appears to be set as the year autonomous robots take hold of the Enterprise, making it more secure than ever, and cheaper to operate than ever.

As is often the case, Larry Ellison led the charge by calling out all the features that differentiate a gen 1 Cloud vs. a gen 2 Cloud.

“Today I want to talk about the second generation of our cloud, featuring Star Wars cyber defenses to protect our Generation 2 platform. We’ve had to re-architect it from the ground up. We’ve introduced Star Wars defenses, impenetrable barriers, and autonomous robots. The combination of those things protect your data and protect our Generation 2 Cloud.”

– Larry Ellison

Having worked with Oracle’s Cloud architecture for a number of years now, we can say that we’ve seen a massive change from Oracle’s gen 1 (aka classic) Cloud architecture to today’s automated gen 2 architecture. As Larry went on to say:

“I’m not talking about a few software changes here and a few software changes there. I’m talking about a completely new hardware configuration for the cloud. It starts with the foundations of the hardware. We had to add a new network of dedicated independent computers to basically surround the perimeter of our cloud. These are computers you don’t find in other clouds. They form this impenetrable barrier. It not only protects the perimeter of the cloud, these barriers also surround each individual customer zone in our cloud. Threats cannot then spread from one customer to another.”

– Larry Ellison

And of course, the key to all this is AI and autonomous bots.

“Then we use the latest AI machine learning technology to build autonomous robots that go out, search and destroy threats. We’ve added lots and lots of more robots to protect every aspect of the cloud. It’s got to be a case of it being completely automated, completely autonomous.”

– Larry Ellison

Naturally, it wouldn’t be an Oracle conference if Larry didn’t call out Amazon for all their failings (price, performance, reliability, and security).

“They [AWS] don’t have self-tuning, they have no autonomous features, it’s not available. They don’t have active data guard. They have no disaster recovery. They have no server failure recovery. They have no software failure recovery. They’ve got no automatic patching. They’ve got none of that. We automatically patch and the system keeps running. In that case, we are infinitely faster and infinitely cheaper.”

– Larry Ellison

This is, most definitely, important stuff. As organizations are recognizing now, infrastructure matters. With Oracle owning the SaaS, PaaS, and IaaS layers, it can ensure security and reliability at every level.

What is also very significant is Oracle’s commitment to innovation and the empowerment of its client base. It now has a massively advanced PaaS layer that customers can take advantage of to flourish in an era of change. This is in complete contrast to Workday’s approach, which locks its clients into a technological alley that stifles any attempt at UX innovation via automation. Workday’s euphemism for this is “curation”. But in an era of change, curation is the enemy of progress.

And this brings us to the other hero of Oracle OpenWorld: Chatbots! In this new era of automation Oracle has now released its gen 2 chatbot technology. Now wrapped up in a package called Oracle Digital Assistant. This is a lot more than just a rebrand of what was called Oracle Intelligent Bots (OIB). It’s now a technology platform that enables true chatbot concierge capabilities.

This means that one Oracle chatbot can now seamlessly be a broker (concierge) for many Oracle chatbots. Such that the human user need only converse with one chatbot for any question it may have, regardless of how many chatbots and systems there are “behind the scenes”.

At IntraSee we specifically chose the Oracle chatbot framework for this and many other reasons (including being able to run on a secure infrastructure). Because we can automate the actual creation of an Oracle chatbot, we can also automate the creation of a concierge chatbot, while also being a service chatbot to another Oracle concierge chatbot.

In summary, we couldn’t be happier with Oracle’s direction for its infrastructure (IaaS) and its chatbot technology framework (PaaS). 2019 will undoubtedly be the year for automation in the Enterprise. And for that you need automation at all layers of the Enterprise, which Oracle now has (IaaS, PaaS, and SaaS). So we would say that this was a terrific conference that sets the stage for an absolutely fascinating 2019.

Our prediction is that by the end of 2019, chatbots will be considered the standard Enterprise UI for almost all self-service and help desk features, and web-based applications will start to be seen as the province of the “back office”.

Also, on a personal note, we did get to speak at the conference jointly with Oracle on the subject of chatbots. It was a fun time and if you’d like to get hold of a copy of the presentation, you can now request it.

And, of course, if you’d like to see a demo of what the future (2019) looks like, please let us know and we’d be happy to oblige.

Contact Us

In January 2011, IBM amazed the entire world by pitting its AI computer, named Watson, against two of the best champions of the television show Jeopardy, Ken Jennings and Brad Rutter. Its victory was both shocking and exciting at the same time.

Ken Jennings (who almost beat Watson), magnanimously stated, “I, for one, welcome our new computer overlords”. It was official, machines had finally usurped humans. It appeared that IBM had achieved the impossible dream that science-fiction novelists had been predicting for decades.

One of Watson’s lead developers, Dr. David Ferrucci, added to the quickly escalating hype with:

“People ask me if this is HAL in ‘2001: A Space Odyssey.’ HAL’s not the focus; the focus is on the computer on ‘Star Trek,’ where you have this intelligent information seek dialogue, where you can ask follow-up questions and the computer can look at all the evidence and tries to ask follow-up questions. That’s very cool.”

Seven years later, the hype train appears to still be stuck at the station. All the promise of 2011 has yet to materialize. How could something so utterly amazing back in 2011 still be struggling to find its place in the business world of today?

To answer that question, we need to go back in time to 2011 and look at exactly how Watson beat two extremely brilliant humans. What was its secret? And exactly how much AI was really involved?

Once you look at how Watson won, it’s not really that surprising that it was never able to capitalize on that victory in future years.

Those watching the show in February 2011 (when it aired) were told that Watson would not be allowed to connect to the internet. That would be cheating, right?

But what was not explained was that the team that built Watson had spent the previous five years downloading the internet onto Watson. Or, to be more precise, the parts of the internet that they knew Jeopardy took its questions from.

This was why the show was hosted at IBM offices. “Watson” was actually a massive room full of IBM hardware. Specifically, 90 IBM Power 750 servers, each of which contains 32 POWER7 processor cores running at 3.55 GHz, according to the company. This architecture allows massively parallel processing on an embarrassingly large scale: as David Ferrucci told Daily Finance, Watson can process 500 gigabytes per second, the equivalent of the content in about a million books.

Moreover, each of those servers was equipped with 256 GB of RAM so that Watson could retain about 200 million pages’ worth of data about the world. (During a game, Watson didn’t rely on data stored on hard drives, because they would have been too slow to access.)
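The headline numbers above are easy to sanity-check. This quick sketch uses only the figures quoted in the text (the real cluster’s exact configuration varied by account), totaling the cores and converting the quoted throughput into a per-book size:

```python
# Sanity check of the published Watson hardware figures (from the text above).
servers = 90               # IBM Power 750 servers
cores_per_server = 32      # POWER7 cores per server, running at 3.55 GHz

total_cores = servers * cores_per_server
print(total_cores)         # 2880 cores across the cluster

# "500 gigabytes per second, the equivalent of about a million books"
throughput_gb = 500
books = 1_000_000
kb_per_book = throughput_gb * 1_000_000 / books  # GB -> KB, decimal units
print(kb_per_book)         # 500.0 KB per book, a plausible size for a text-only book
```

In other words, the quoted throughput works out to roughly half a megabyte per book, which is consistent with the “million books per second” comparison.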

So, how did the Watson team know which parts of the internet to download and index? Well, that was the easy part. As anyone who has studied the game can tell you, Jeopardy answers can mostly be found from either Wikipedia or specific encyclopedias. “All” they had to do was download what they knew would be the source of the questions (called “answers” on Jeopardy) and then turn unstructured content into semi-structured content by plucking out what they felt would be applicable for any question (names, titles, places, dates, movies, books, songs, etc.). However, doing that was no simple feat, and one of the reasons why it took them five years to make Watson a formidable opponent.
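To give a flavor of what “plucking out” names, dates, and titles from unstructured text involves, here is a deliberately toy Python sketch. The `extract_candidates` helper and its regexes are our own illustration, not IBM’s pipeline; the real DeepQA system used far more sophisticated statistical NLP, which is part of why it took five years:

```python
import re

# Toy illustration of turning unstructured encyclopedia text into
# semi-structured candidate entities (names/titles and dates).
def extract_candidates(text):
    return {
        # Runs of capitalized words as rough name/title candidates
        "names_titles": re.findall(r"\b(?:[A-Z][a-z]+ )+[A-Z][a-z]+\b", text),
        # Four-digit years as rough date candidates
        "years": re.findall(r"\b(?:1[0-9]{3}|20[0-9]{2})\b", text),
    }

sample = "Herman Melville published Moby Dick in 1851."
print(extract_candidates(sample))
# {'names_titles': ['Herman Melville', 'Moby Dick'], 'years': ['1851']}
```

Multiply this kind of extraction across millions of pages, with dozens of entity types and the error handling real text demands, and the scale of the manual effort becomes clear.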

It was a massively labor-intensive operation that took a large IBM staff of PhDs many years to accomplish. This was the first red flag the Watson team should have been aware of.

Unfortunately, their goal at the time was to build a computer that could win a TV quiz show. What they should have been building was something that could be implemented in any environment without the need for an army of highly paid professors. Instead, they were told, “Build something that can win on Jeopardy”. And that’s exactly what they did.

To this day, Watson is still notoriously picky about the kind of data it can analyze and still needs a huge amount of manual intervention. In the world of AI, automation is a key requirement of any solution that hopes to be accepted in the business world. Labor-intensive solutions are massively expensive and demand constant, costly maintenance.

IBM made this work for Jeopardy because there were no realistic budget or timeline constraints (it ended up taking them five years), and all Watson had to do was win one game. In the real world of business, budgets matter, and high maintenance costs will destroy any ROI.

Figure 1: Wikipedia-in-a-box (the IBM Watson server farm)

So, as you can see, from the get-go Watson had every advantage possible. While it wasn’t allowed to connect to the internet (remember, that would be cheating), it didn’t need to. The team had already spent five years indexing the parts of the internet they knew they needed.

And then there were the rules of Jeopardy. These played directly into the plexiglass “hand” of Watson. Anyone who has seen the show knows how it works: the “answer” is displayed on the screen while the host reads it aloud. When he has finished, a light appears on a big screen and the competitors can press a button. The first person to press the button gets to state what the “question” is.

But here’s why a machine has a massive built-in advantage. On average, it takes six seconds for the host to read the clue out loud. That means both Watson and the humans get six seconds to figure out whether they know the answer. Six seconds is a huge amount of time for a room full of computers. Try typing a question into Google on your phone and see how fast the answer comes back: yes, less than one second.

What this means is that by the time the light comes up on the board, Watson already knows the answer (almost all of the time). But, oftentimes, so do really smart humans. So, theoretically, when the light flashes, it’s possible that everyone knows the answer (and that night the questions weren’t particularly hard).

The determining factor in who gets to answer the question is who can press the buzzer the fastest.

What the TV audience didn’t get to see was that, to make this seem somewhat human, the Watson team had rigged the machinery with a plexiglass hand that automatically pressed the buzzer the exact moment the light appeared on the board. Electromechanically, this took about 10 milliseconds. The best any human can expect is around 100 milliseconds (try a reaction-time test yourself and see how fast you can do it). That made it technically impossible for any human to click the buzzer faster than Watson.

The only real way for the humans to stand a chance was to anticipate the light going on (though if they clicked too early they invoked a quarter of a second delay before they could click again). But to help out the humans, Watson was programmed not to anticipate the light.

If you watch the show, you can see that Ken Jennings clicked his buzzer on almost every question that Watson won on. He just wasn’t fast enough. No human could be, unless they got really lucky with their anticipation of the light going on.
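The buzzer mechanics described above can be sketched as a small simulation. The 10 ms machine latency and the roughly 100 ms human reaction time come from the text; the spread we give the human is our own assumption:

```python
import random

# Toy simulation of the buzzer race: Watson's plexiglass hand fires a fixed
# ~10 ms after the light, while even an excellent human needs ~100 ms on
# average (we assume a 15 ms standard deviation around that figure).
def buzzer_race(trials=10_000, seed=1):
    rng = random.Random(seed)
    watson_ms = 10
    watson_wins = sum(
        1 for _ in range(trials) if watson_ms < rng.gauss(100, 15)
    )
    return watson_wins / trials

print(buzzer_race())  # Watson wins essentially every single trial
```

Under these assumptions the human effectively never gets there first, which matches what viewers saw: Jennings clicking furiously, always a beat behind.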

Also, what wasn’t apparent to the viewers was that Watson wasn’t “listening” to any questions. An ASCII text file was transmitted to Watson the moment the clue appeared on the board. Watson then parsed that text (which actually was a very impressive feat by IBM) to figure out the true intent, and IBM did use a voice synthesizer to read out the answer.

In 2011 this was a genuine achievement by IBM, and we do salute the team that worked on it. What they did was not easy and did advance our understanding of AI. But not really in the way the world understood it. Watson was one small (massively over-hyped) step forward, not the huge leap it appeared to be.

What IBM did was build a fantastic Jeopardy machine. It used elements of AI, but it wasn’t quite the miracle it seemed to be: part AI, part “Wizard of Oz”. And because it was pretty much a one-trick pony in 2011, IBM has subsequently struggled to make it work in the Enterprise. Though they have tried.

What has become very apparent since 2011 is that what “worked” for winning at Jeopardy doesn’t work today in the real world. AI has come a long way since those pioneering days, and approaches to creating the ultimate Q&A machine have altered dramatically.

While we commend Watson, and the awareness it created, we believe there are better ways to implement an AI solution that do not require an army of PhD graduates. AI is something that should be, and can be, implemented in a matter of weeks, and easily maintained, delivering ROI from day one.

Please contact us to learn more…

Contact Us