In January 2011, IBM amazed the world by hosting the television show Jeopardy and pitting its AI computer, Watson, against two of the show’s greatest champions, Ken Jennings and Brad Rutter. Watson’s victory was both shocking and exciting at the same time.

Ken Jennings (who almost beat Watson) magnanimously stated, “I, for one, welcome our new computer overlords”. It was official: machines had finally usurped humans. It appeared that IBM had achieved the impossible dream that science-fiction novelists had been predicting for decades.

One of Watson’s lead developers, Dr. David Ferrucci, added to the quickly escalating hype with:

“People ask me if this is HAL from ‘2001: A Space Odyssey.’ HAL’s not the focus; the focus is on the computer on ‘Star Trek,’ where you have this intelligent information-seeking dialogue, where you can ask follow-up questions and the computer can look at all the evidence and tries to ask follow-up questions. That’s very cool.”

Seven years later, the hype train appears to be still stuck at the station. All the promise of 2011 has yet to materialize. How could something so utterly amazing back in 2011 still be struggling to find its place in today’s business world?

To answer that question, we need to go back in time to 2011 and look at exactly how Watson beat two extremely brilliant humans. What was its secret? And exactly how much AI was really involved?

Once you look at how Watson won, it’s not really that surprising that it was never able to capitalize on that victory in future years.

Those watching the show when it aired in February 2011 were told that Watson would not be allowed to connect to the internet. That would be cheating, right?

But what was not explained was that the team that built Watson had spent the previous five years downloading the internet onto Watson. Or, to be more precise, the parts of the internet that they knew Jeopardy took its questions from.

This was why the show was hosted at IBM offices. “Watson” was actually a massive room full of IBM hardware. Specifically, 90 IBM Power 750 servers, each of which contains 32 POWER7 processor cores running at 3.55 GHz, according to the company. This architecture allows massively parallel processing on an embarrassingly large scale: as David Ferrucci told Daily Finance, Watson can process 500 gigabytes per second, the equivalent of the content in about a million books.

Moreover, each of those processors was equipped with 256 GB of RAM so that Watson could retain about 200 million pages’ worth of data about the world in memory. (During a game, Watson didn’t rely on data stored on hard drives because they would have been too slow to access.)

So, how did the Watson team know which parts of the internet to download and index? Well, that was the easy part. As anyone who has studied the game can tell you, Jeopardy answers can mostly be found from either Wikipedia or specific encyclopedias. “All” they had to do was download what they knew would be the source of the questions (called “answers” on Jeopardy) and then turn unstructured content into semi-structured content by plucking out what they felt would be applicable for any question (names, titles, places, dates, movies, books, songs, etc.). However, doing that was no simple feat, and one of the reasons why it took them five years to make Watson a formidable opponent.

It was a massively labor-intensive operation that took a large IBM staff of PhDs many years to accomplish. This was the first red flag that the Watson team should have been aware of.

Unfortunately, their goal at the time was to build a computer that could win a TV quiz show. What they should have been building was something that could be implemented in any environment without the need for an army of highly paid professors. Instead they were told, “Build something that can win on Jeopardy”. And that’s exactly what they did.

To this day, Watson is still notoriously picky about the kind of data it can analyze, and it still needs a huge amount of manual intervention. In the world of AI, automation is a key requirement of any solution that hopes to be accepted in the business world. Labor-intensive solutions are expensive to build and require constant maintenance, which drives costs even higher.

IBM made this work for Jeopardy because there were no realistic budget or timeline constraints (it ended up taking them five years), and all Watson had to do was win one game. In the real world of business, budgets matter. And high maintenance costs will destroy any ROI.

IBM Watson Server Farm

Figure 1: Wikipedia-in-a-box

So, as you can see, from the get-go Watson had every advantage possible. While it wasn’t allowed to connect to the internet (remember, that would be cheating), they didn’t have to. They’d already spent five years indexing the parts of the internet they knew they needed.

And then there were the rules of Jeopardy. These played directly into the plexiglass “hand” of Watson. Anyone who has seen the show knows how it works. The “answer” is displayed on the screen, then the host reads it out loud. When he has finished, a light appears on a big screen and the competitors can press a button. The first person to press the button gets to state what the “question” is.

But here’s why a machine has a massive built-in advantage. On average it takes six seconds for the host to read the question out loud. That means both Watson and the humans get six seconds to figure out whether they know the answer. Six seconds for a room full of computers is a huge amount of time. Try typing a question into Google on your phone and see how fast the answer comes back. Yes, less than one second.

What this means is that by the time the light comes up on the board, Watson already knows the answer (almost all of the time). But, oftentimes, so do really smart humans. So, theoretically, when the light flashes, it’s possible that everyone knows the answer (and that night the questions weren’t particularly hard).

Who gets to answer the question comes down to one thing: who can press the buzzer the fastest.

What the TV audience didn’t get to see was that, to make this seem vaguely human, the Watson team had rigged the machinery with a plexiglass hand that would automatically press the buzzer the exact moment the light appeared on the board. Mechanically, this took just 10 milliseconds. The best any human can expect is around 100 milliseconds (try a reaction-time test yourself and see how fast you can do it), which meant it was physically impossible for any human to click the buzzer faster than Watson.

The only real way for the humans to stand a chance was to anticipate the light going on (though if they clicked too early they invoked a quarter of a second delay before they could click again). But to help out the humans, Watson was programmed not to anticipate the light.

If you watch the show, you can see that Ken Jennings clicked his buzzer on almost every question that Watson won on. He just wasn’t fast enough. No human could be, unless they got really lucky with their anticipation of the light going on.

Also, what wasn’t apparent to the viewers was that Watson wasn’t “listening” to any questions. An ASCII text file was transmitted to Watson the moment the question appeared on the board. Watson then parsed that question (which genuinely was a very impressive feat by IBM) to figure out its true intent, and a text-to-speech feature was used to read out the answer.

In 2011 this was a genuine achievement by IBM, and we do salute the team that worked on it. What they did was not easy and did advance our understanding of AI. But not really in the way the world understood it. Watson was one small (massively over-hyped) step forward, not the huge leap it appeared to be.

What IBM did was build a fantastic Jeopardy machine. It did use elements of AI, but it wasn’t quite the miracle it seemed to be. Yes, it was part AI, but it was also part “Wizard of Oz”. And because it was pretty much a one-trick pony in 2011, IBM has subsequently struggled to make it work in the Enterprise. Though they have tried.

What has become very apparent since 2011 is that what “worked” for winning at Jeopardy doesn’t work today in the real world. AI has come a long way since those pioneering days, and approaches to creating the ultimate Q&A machine have altered dramatically.

While we commend Watson, and the awareness it created, we believe there are better ways to implement an AI solution that do not require an army of PhD graduates. AI is something that should be, and can be, implemented in a matter of weeks, and easily maintained, delivering ROI from day one.

Please contact us to learn more…


In the ’80s and ’90s, Sony ruled the world of portable music. Everyone remembers the game-changing Walkman series of products that revolutionized the portable music landscape and enriched our lives. Then digital music came along in the form of MP3s, and suddenly all the music online was in that format. The only problem was that Sony supported only its proprietary ATRAC format on all its meticulously manufactured hardware. This was not a recipe for success, and it culminated in the dead-on-arrival Sony Network Walkman. In 1999 it was the smallest digital music player on the market. Beautifully crafted, like all Sony products were back then (though it looks a bit like a bicycle lock now), it defiantly played only ATRAC files.

Network Walkman

Figure 1: Beautifully crafted obsolescence

The death knell tolled for the Sony digital music division in October 2001 when Apple announced, in all its mock-turtlenecked glory, the iPod combined with iTunes. It was the ultimate mp3 player. It took Sony a further four years to fully realize what had happened to them (ironically right before the iPhone was announced), but the rest of the world got it immediately.

Today we are seeing a parallel: chatbot vendors delivering solutions that require you to load all your data into their proprietary AI systems, in their proprietary formats.

These are AI systems that you have no control over, and, often, systems that are highly insecure. Why would any vendor require that when we have such a thing as APIs? Shouldn’t AI technology be smart enough to use APIs to look at your data in real time and figure things out? The answer is simple: yes, it should.
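As a simple illustration, here is a minimal sketch of what API-driven question answering could look like: the chatbot fetches the answer from the system that already owns it, at the moment the question is asked. The endpoint, parameters, and response shape below are hypothetical, not a description of any particular vendor’s API.

import requests

def answer_faq(question: str, session_token: str) -> str:
    """Look up an FAQ answer in the existing knowledge base at question time.
    The endpoint and response fields are illustrative assumptions."""
    response = requests.get(
        "https://kb.example.com/api/v1/answers",   # hypothetical knowledge-base search API
        params={"q": question},
        headers={"Authorization": f"Bearer {session_token}"},
        timeout=5,
    )
    response.raise_for_status()
    best_match = response.json()["results"][0]     # assume results are sorted by relevance
    return best_match["answer"]                    # the content never leaves the source system

The point is that the answer stays in your knowledge base, under your control; the chatbot simply asks for it when it needs it.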

In 2018, APIs are to AI what MP3 was to digital music in 2001.

Let’s go with the most basic of examples to illustrate this. All chatbot vendors provide what’s called an FAQ feature (though the better solutions do a LOT more), meaning that you ask the chatbot a question and it gives you an answer. But the “bad” vendors require you to load all the answers into their proprietary AI solution, in their proprietary format. Why is this bad? Let us count the ways:

  1. Those answers already exist in some knowledge base in your current systems. Now you have to maintain the answers not just in your current systems but also in the chatbot’s AI. This is called dual maintenance, and it’s bad. The chances of your web content being out of sync with your chatbot content just went through the roof.
  2. The tight control you have in your current content management systems (CMS), and all the sophisticated features they come with, almost certainly don’t exist within the chatbot vendor’s solution (which was never built to be a true CMS). So you now have to deal with a knowledge base that is very loosely controlled, and one that would never pass muster as a CMS if you were actually in the market to purchase a new one. Which was never your intention in the first place (nor should it be).
  3. From having your content tightly controlled on your systems (cloud or on-premise), you’ll now have your content stored on someone else’s server in an environment you have no control over.
  4. If you did decide to store all your Q&As within the chatbot vendor’s solution, you now have what’s called vendor lock-in. The vendor will like that, but it doesn’t help you in any way at all.

The lesson to be learned is that you should not be so dazzled by the thought of AI that you thrust yourself into a world of disorder. As always in life, be aware of the Cynefin framework:

Cynefin illustrative sketch

Figure 2: Avoid selecting a solution formulated in the wrong domain (especially the fifth one – Disorder)

In simplest terms, the Cynefin framework exists to help us realize that all situations are not created equal and to help us understand that different situations require different responses to successfully navigate them.

So now that we know how things shouldn’t work, let’s look at how they should.

    1. Any true AI chatbot FAQ feature needs to be able to consume answers already stored in your (potentially many) knowledge base repositories. These include:
      • Oracle Content & Experience Cloud
      • Salesforce
      • PeopleSoft Interaction Hub
      • Oracle Service Cloud
      • PeopleSoft HR Help desk
      • ServiceNow
      • Drupal
      • SharePoint
      • WordPress
      • Etc.
    2. Also, the answer for each question could be contained in any of those systems, and your chatbot should know exactly which one.
    3. It needs to be role/group aware. Such that it knows key demographic data about who is asking the question and is therefore able to deliver the correct answer for that person.
      Ex: If a French employee wants to know what the time off policy is, don’t reply with the USA policy.
    4. It needs to be able to understand questions asked in potentially more than 100 languages.
    5. It needs to be able to utilize AI to summarize long answers down to a digestible length.

Doesn’t this sound better? It does to us too. And that’s why we built our chatbot this way. A chatbot that adapts to your content and your data on day one.

To summarize: an AI solution that requires a massive data and content dump of all your Enterprise systems is not a valid proposition that anyone should be considering. Beyond the obvious data privacy issues, it’s also not a technically tenable solution. Yet in the rush to market by hundreds of vendors peddling chatbots, there is a general hope that all these giant red flags will be ignored.

Our advice is that organizations should be diligent, and only accept solutions that accommodate the scalability and security requirements of large Enterprise systems.

And as we have said in previous blogs, in the world of technology, all that glitters is not necessarily gold.

We hope that this blog has laid out a better way for your organization to take advantage of the next industrial revolution: Automation via AI.

To find out more, please contact us…


Automation: the application of machines to tasks once performed by human beings

It’s hard to comprehend, but it was just over a hundred years ago when car manufacturers relied on horse-drawn carriages to deliver each frame to the workers. Then, in 1913, Henry Ford introduced the industry’s first moving assembly line at the Highland Park Plant in Detroit.

Now, complex robots with “laser eyes” perform everything from welding, to die casting and painting, alongside their human colleagues on the factory floor.

Meanwhile, automation in the so-called “hi-tech” world of software has been much slower coming. Servers need bouncing, patches must be applied, security alerts need investigating, performance needs to be tuned, and code needs to be developed. All by hand.

And this is just maintenance of the actual machines! If we then flip the script and look at the experience of the humans using the software, we see the same glaring inefficiencies.

Chatbot Service

Figure 1: The help desk of the future

Entire “call center” industries have been created, all predicated on the idea that Enterprise software is too difficult to use – and only manual intervention can fix that.

The advent of the Cloud was the first step in starting to solve this problem, but it was not the solution in itself; it was the enabler of the solution. Step one was about creating one infrastructure and one code-line supporting thousands of organizations.

Step one was great, but step two is where the Cloud starts to realize its true potential. Step two is all about automation.

And this is where the real benefits to organizations are realized, all powered by the driver of what will be the next industrial revolution: Artificial Intelligence (AI). Today we can confidently say that the future is autonomous. And that the future is now “open for business”.

Rosie from the Jetsons

Figure 2: The future, as described in The Jetsons, isn’t so far away!

But what is fascinating about this new era is that it will differ from every revolution before it in one key way: barrier to entry.

In the past, only the super-rich could take advantage of each industrial revolution. Want to build cars? Well, good luck finding the money to create a factory of robots and assembly lines. But want to fully automate your entire IT infrastructure and enable robots to automatically handle “help desk” calls? All you need is a subscription. Everything you need has already been built, and it’s in the Cloud.

The new industrial revolution will be priced on a pay-on-demand basis, and will be available and attainable for all organizations, big or small. There will be almost no barrier to entry. The playing field will be leveled by the most egalitarian revolution in the history of mankind.

APIs and AI will combine to enable automation in the same way a faucet, when turned on, provides water.

A water tap

Figure 3: Once upon a time this was a revolutionary means of transporting water!

So, why is the autonomous revolution now market-ready? Two things:

1. Oracle’s Autonomous Cloud Platform
A self-driving, self-securing, self-repairing stack that exists at each level: IaaS, PaaS, SaaS.
https://www.oracle.com/autonomouscloud/index.html

Autonomy across the entire platform is key for many reasons:

  • Lower downtime
  • Improved performance and stability
  • Lower operational costs
  • Ease of compliance
  • Greater security

Note: the importance of security cannot be overstated. Many Cloud systems are hardened at the perimeter only, like an egg with a hard shell. Unfortunately, once the perimeter is breached, there is little to prevent further intrusion.

With Oracle autonomously baking security into every level, there are multiple layers of security that are all monitored, patched, and repaired in real time. This means the entire system is always in compliance without the need for human intervention.

85% of all security breaches occur when a patch is available but not applied.

2. IntraSee’s Autonomous Chatbot Solution (which resides solely on the Oracle Autonomous Cloud platform)
A chatbot that autonomously builds itself utilizing pre-built Enterprise skills that provide a fully-formed chatbot from day one. Also, because it resides on the Oracle Autonomous Cloud platform, it is the most secure chatbot solution available.

Figure 4: Automation, the new “Easy” button

What also stands out with Oracle’s and IntraSee’s offerings is the low barrier to entry. At IntraSee we pride ourselves on making AI a turnkey solution. Which is why both Oracle and IntraSee provide pilot options, where you can quickly see the value of automation for your organization.

With our chatbot solution we can implement and configure in just four weeks. Let your department kick the tires for another four weeks. Then roll it out to a larger pilot audience for another four weeks.

Like it? Then roll it out to your entire organization in another four weeks.

Using IntraSee’s Chatbot 4×4 implementation methodology, the benefits will be instant: chatbots handling complex calls for less than one dollar a conversation, compared to $5 to $200 for a human agent, with the added bonus of higher customer, employee, and student satisfaction, among many other benefits.

The future will belong to those organizations that embrace autonomy. And those that don’t will struggle with never-ending spiraling operational costs that will hinder their ability to compete.

Imagine if Ford were still trying to transport car frames via horse-drawn carriages. That’s right, they wouldn’t exist today. The lesson in life, as always, is adapt or perish.

typewriter vs laptop

Figure 5: Old vs. New

If you’d like to know more, please contact us.


Since the dawn of time (almost), people have been communicating via conversations. And this worked great for many thousands of years. Then computers happened, and suddenly we all had to learn a different mechanism for communicating. This was loosely referred to as computer-speak. And over the years it changed as technologies changed. From mainframe, to MS-DOS, Windows/Mac, client-server, and then the web. But the one constant over the years was that it was still computer-speak. Meaning that it had nothing to do with how humans actually spoke to each other.

And this is the crux as to why your Enterprise systems are so difficult for your organization to use today. And why it costs you so much money to support everyone, and everything.

Implementing a chatbot solution for your Enterprise, if done right, will save your organization millions of dollars per year, increase organizational efficiency, and improve user satisfaction with your systems.

The ROI is dramatic. A chatbot conversation should cost less than 50 cents, whereas a conversation with a human costs your organization at least 10 times as much, and, if HR professionals are involved, often 100+ times as much.
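As a back-of-the-envelope illustration of that math (the annual volume below is a purely hypothetical assumption; the per-conversation costs are the figures quoted above):

# Assumed annual conversation volume for a mid-sized organization (hypothetical).
conversations_per_year = 100_000

chatbot_cost_each = 0.50   # dollars per chatbot conversation (upper bound quoted above)
human_cost_each = 5.00     # dollars per human-handled conversation (the 10x figure)
hr_cost_each = 50.00       # dollars per HR-professional conversation (the 100x figure)

print(f"Chatbot:     ${conversations_per_year * chatbot_cost_each:,.0f} per year")
print(f"Human agent: ${conversations_per_year * human_cost_each:,.0f} per year")
print(f"HR-handled:  ${conversations_per_year * hr_cost_each:,.0f} per year")

At those assumed volumes, the gap is roughly $50,000 versus $500,000 to $5,000,000 a year, which is where the “millions of dollars per year” figure comes from.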

But to get there everyone needs to be comfortable with what this new era will look like, and how to get it started. To that end, the recommendation of Gartner (and we wholeheartedly agree) is to pilot the solution to ensure that you don’t embark on a failed IT journey.

So, here are our recommended 10 steps to ensure that you hit this one out of the park on the first pitch.

Step 1: Understand how much it will cost

First off, pilots should be cheap and they should be short (if they are not, that’s a red flag). But it’s also super-important to understand what the cost structure will look like once you’ve rolled the solution out across your entire organization. So have the vendor that’s pitching the pilot fully explain what things will cost after the pilot is done and you’ve decided to roll the solution out to everyone in your organization.

The last thing you want to tell your CEO is that the pilot was a success, but it’s not cost-effective to roll out the solution organization-wide. Remember, the #1 reason for implementing a chatbot solution is to reduce operational costs.

Step 2: Understand how much of your team’s time will be needed, and what is required to support it

All implementations will require some time from your internal teams. That’s normal and desirable. But you really should avoid solutions that could tie up teams of your people for months on end.

Also, once implemented, you should expect support and maintenance to be minimal too. If you suddenly discover you are now performing “dual maintenance” of content, then this means you chose the wrong solution.

A chatbot solution should require minimal involvement from your team to setup, monitor, and support.

Step 3: Ensure that infrastructure/security is discussed and vetted early on

It feels like every day we hear about a security breach at some of the biggest internet companies in the world. Concerns about security and infrastructure are very real and must be addressed. It’s critical to understand the architecture that the chatbot provider is proposing, as you may discover that your data is not only flowing through channels you’d prefer it not to, but is also actually being stored in places it shouldn’t be.

At IntraSee we take infrastructure and security very seriously. Which is why we reside solely within the fully certified Oracle Cloud.

Step 4: Identify your pilot audience

We would recommend around 200 people for an average chatbot pilot. And while it may be tempting to select them all from the IT department, we would recommend a broad range of your organization, but would focus on employees/managers or students/faculty (in the higher education world).

Try to select a broad demographic that most accurately represents your organization, including those who speak any languages you want the chatbot to support.

Step 5: Understand any language requirements

Chatbots are capable of conversing in many languages, some better than others. So be aware of which languages the people in your organization will be conversing in, and check with your vendor to get feedback on how competent they believe the chatbot is in those languages.

Adjustments can be made for any languages that the chatbot may not be completely proficient in. Just let your vendor know up front.

Step 6: Have a pilot-to-production plan in place

Ok, so the pilot went great. Everyone loved it and now you want everyone in your organization to be able to use it. Be sure that you have a pilot-to-production plan in place before you start your pilot. That way you’ll know what to do next, and how much it should cost.

The key to pilot-to-production is that it should be a relatively quick process. We would say that four weeks should be attainable, but that anything more than twelve weeks would be excessive.

Step 7: Think in “conversational” terms when solving for use-cases: what do people need help with?

Talk to your help desk team and your HR generalists; they can provide you with the top 100 things they get asked most frequently. That’s a great start. But also get insight into how the conversations go:

  • How are the questions phrased?
  • Are there follow up questions based on specific answers?
  • Are discovery questions needed?
  • Would the inclusion of specific data help with the conversation?
  • What’s the best way to answer a question?
  • Does the answer vary based on location, seniority, etc.?
  • Is a follow up required after the answer has been given?

This will help with configuring how the chatbot interacts with your people.

The idea is that the chatbot understands your best practices and follows them every single time. Just like a perfect employee would.

Step 8: Understanding context

It’s critical that your chatbot understands context, because then it can make helpful suggestions during a conversation. Just like a well-trained support person would. Plus, it’s also important that the chatbot understands who you are and where you are. This way a meaningful conversation can take place, as opposed to a simple Q&A.

So, for example, if someone asks a question about taking time off work, the chatbot should already know that if the person resides in Germany, it should respond with the German policy (because that may be different from the American policy). It should also offer to help the person complete a leave of absence request, if they so wish.
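A minimal sketch of what that location-aware behavior could look like is shown below; the policy table, profile fields, and wording are illustrative assumptions:

# Hypothetical mapping from country code to the relevant time-off policy article.
TIME_OFF_POLICIES = {
    "DE": "Germany Leave of Absence Policy",
    "US": "United States Leave of Absence Policy",
    "FR": "France Leave of Absence Policy",
}

def handle_time_off_question(user_profile):
    """Answer with the policy for the user's own country, then offer the next step."""
    country = user_profile.get("country", "US")
    policy = TIME_OFF_POLICIES.get(country, TIME_OFF_POLICIES["US"])
    return (
        f"Here is the policy that applies to you: {policy}. "
        "Would you like me to start a leave of absence request for you?"
    )

print(handle_time_off_question({"name": "Anna", "country": "DE"}))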

Step 9: Think big, but be agile

There is a school of thought that says that chatbot pilots should only include a small number of “intents” (aka functionality). We at IntraSee do not subscribe to that point of view. A pilot using this methodology may appear to be a success – but that could all be an illusion.

The main (but not only) purpose of a pilot should be to see what would happen in a full rollout to your entire user population, but with a smaller group of people that you can monitor to see what works and what doesn’t. And the only way to do that properly is to make your pilot scope as close as possible to what you expect your full production scope to be.

This also means you need to be agile during the pilot, adjusting to what you see, so that by the end of the pilot you have a close match to what your pilot-to-production path will be. HINT: This is why you need a configurable and fully automated solution.

Step 10: Don’t try and “roll your own” chatbot pilot

There’s no doubt that chatbots are the most exciting thing to happen in the software industry in the past 20 years. And there are plenty of examples on the internet of how easy it is to create a chatbot that can accept pizza orders. But Enterprise chatbots are highly complex and sophisticated creatures, and building one properly from scratch would take many years. So the strong recommendation is to use something already built off the shelf that can quickly be configured for all your needs. This isn’t just our recommendation; it’s the recommendation of Gartner too, from last year’s Gartner Application Summit.

Gartner’s clear advice for IT is to stop building things that are already built. Ultimately, it’s not just a waste of time and money, it also contributes to IT being primarily focused on being in “maintenance mode”, instead of “innovation mode”.

If you’d like to know more, fill out the following form and we will send you our white paper.

With the proliferation of Amazon’s basic one-size-fits-all AI, millions of users have experienced Alexa playing music, summarizing the news, and providing information found on the internet. After the initial thrill is gone, many come to find out Alexa can’t really do much more without adding Alexa Skills to make her smarter. These “skills” are the bridge between your commands and actually getting something accomplished, like adjusting your thermostat or turning on a smart light bulb.

In the Enterprise world, IntraSee’s chatbot-automation bridges the gap between artificial intelligence and real-world business transactions, empowering AI platforms like Oracle Intelligent Bots to turn on the proverbial smart light bulb for your enterprise or institution.

When people ask us what we do at IntraSee, we explain that we are like Alexa Skills for the enterprise. These skills are what large organizations need to realize millions of dollars in savings. Instead of wasting time and money designing and maintaining websites that beat around the bush, IntraSee’s chatbot solution gets straight to the point and does the work users need to get done. There’s basically no web page, just a chat window. We’ve developed a means of automating chatbot implementations that makes configuring these solutions quick and simple, so the AI knows how to do the stuff that needs doing – right out of the box!

AI hand touching human hand

AI and Enterprise software finally collide

IntraSee’s suite of products empowers users to accomplish tasks like moving an employee to a different team, tracking a new hire’s to do list, aiding a nurse restocking medical supplies, or helping a student register for classes by phone. All the user has to do is ask straightforward questions, and the chatbot will do the rest. Without navigating through a website, a busy nurse can simply ask the chatbot to “Buy more bandages,” and the chatbot will guide them through the required steps: “What type of bandage? What size? How many? Would you like to add something else to your order?” College students can give the command, “Search for English classes,” and after choosing a class from a list the chatbot will ask, “Would you like to add English 101 to your cart?” Simplicity makes cents.
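Under the hood, that kind of guided exchange is what chatbot platforms usually call slot filling: the chatbot keeps asking for the details a transaction needs until it has them all. A heavily simplified, hypothetical sketch:

# Slots a hypothetical "order supplies" transaction needs before it can be submitted.
REQUIRED_SLOTS = {
    "item_type": "What type of bandage?",
    "size": "What size?",
    "quantity": "How many?",
}

def next_prompt(filled_slots):
    """Return the next question to ask, or None when the order is ready to submit."""
    for slot, prompt in REQUIRED_SLOTS.items():
        if slot not in filled_slots:
            return prompt
    return None

# Example exchange: the nurse has said "Buy more bandages" but given no details yet.
order = {}
print(next_prompt(order))      # -> "What type of bandage?"
order["item_type"] = "adhesive"
order["size"] = "large"
print(next_prompt(order))      # -> "How many?"
order["quantity"] = 20
print(next_prompt(order) or "Order ready to submit. Would you like to add something else?")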

At IntraSee we have taken our leadership in the Enterprise UX space and transformed it into an automated means of generating a chatbot that is multi-talented from day one. It is so advanced that it can immediately solve problems that your development teams have spent decades, and masses of money, working on. And while automatically turning on a light bulb is a nice trick (and saves someone having to push a button), our chatbot can immediately perform tasks that often take teams of people days to achieve. This is exponentially better than anything Alexa can do.

So, while programmers are churning out thousands of simple AI skills, none of these applications can handle complex business logic or the decision process required for multi-step transactions like tracking time in Kronos. Alexa can’t make intelligent decisions because she doesn’t know anything about the user. This is where IntraSee puts the “I” in business intelligence. We leverage all of the information we know about users without asking, utilizing known data such as reporting structure to logically streamline workflows. When the chatbot has all the information it needs it provides a summary of the transaction and executes the process securely and efficiently on the backend.

Many Alexa users worry about security and privacy because Alexa is “always listening,” and you would never want co-workers to hear you say something sensitive like, “Give Jimmy a big bonus.” Chatbots eliminate such security concerns by allowing users to type questions and commands. The entire suite of IntraSee products is designed to seamlessly work with highly secure data systems such as PeopleSoft, Oracle HCM Cloud, Oracle Student Cloud, Salesforce, Taleo, Kronos and many more. Each month we continue to add to the library of skills, so if it’s not there now, it will be soon.

In business it never makes sense to spend a dollar chasing a nickel. At IntraSee, our focus has always been to streamline business processes to save your organization money, increase productivity and reduce frustration. IntraSee has been helping organizations increase profitability by simplifying the user experience since our inception. To say that IntraSee is ahead in the AI skills game is an understatement. For nearly a decade IntraSee has been developing a suite of products that provide cost-saving self-service solutions straight out of the box. It turns out that our mature transactional architecture is the perfect fit for the next generation of AI, marrying complex transactional processes with the AI platforms of the future.

Artificial intelligence investment is intelligent business today, not something your organization should wait to do tomorrow. Building a library of AI assets is something that needs to start today. We encourage you to look at IntraSee chatbot solutions as you would Alexa Skills: the necessary smarts needed to get meaningful work done. Being a business that is ahead of the curve in AI integration will make the difference between winning and losing.

Launching artificial intelligence for your organization’s users is simple and straightforward. The AI platforms have matured, IntraSee has the skills covered, and our risk-free pilot takes the guesswork out of adoption timing. Contact us to start your Chatbot Pilot today.
