Paul Taylor and Andrew Bediz join Robbin Velayedam, Senior Director, Product Management – Oracle HCM, on this month’s PeopleSoft Chat podcast. The group talks about the Blueprint 4D conference, artificial intelligence (AI) in the enterprise, and the factors to consider in a customer’s transition to the cloud.

The podcast discussion covers a range of topics, including the challenges of getting users to adopt enterprise software, the importance of user experience, and the benefits of automation. The guests also discuss their experiences at Blueprint 4D, a PeopleSoft conference, and the future of PeopleSoft and enterprise software more broadly. They talk about the potential for integrating voice and conversation into enterprise software, as well as the benefits of cloud-based solutions. The podcast provides valuable insights into the challenges and opportunities of enterprise software and highlights the importance of user-centered design.

With the increasing popularity of AI, the issue of bias has become a significant concern. AI, after all, is designed to learn from data, and if the data it is trained on is biased, the AI will likely reflect that bias in its decisions.  

For example, when ChatGPT (produced by OpenAI) first debuted, people quickly noticed that it seemed to prefer one presidential candidate over another. This raised questions about whether an AI could have a political preference, and if so, where that preference might come from. 

Can a computer be biased? 

AI is not some kind of magical, all-knowing entity. At the end of the day, it’s just a computer program. It doesn’t have emotions or opinions, and it’s only as good as the data it’s trained on. As they say, garbage in, garbage out.  

Let’s take Jo, the Job Board Bot, as an example. Jo has been trained on resumes from people who have been hired for various positions. Based on this data, Jo can tell a candidate which job they’re most likely to succeed in, given their qualifications.

Now imagine Barbara launches Jo and asks, “Which position am I the best fit for?” Even if Barbara has all the relevant experience, if 84% of engineers are men, Jo might not suggest that Barbara would be a good fit for a computer engineering position. That’s not the outcome you’re looking for!
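To make the mechanics concrete, here is a minimal, hypothetical sketch of how a bot like Jo could end up biased. The “model” below does nothing sinister: it simply learns hiring probabilities from historical outcomes, and the skew in the history does the rest. (Jo is fictional; the data and code are illustrative, not any vendor’s actual implementation.)

```python
from collections import Counter

# Hypothetical hiring history: (gender, role) pairs, with 84% of
# engineering hires being men, mirroring the statistic above.
history = (
    [("M", "engineer")] * 84 + [("F", "engineer")] * 16 +
    [("M", "recruiter")] * 30 + [("F", "recruiter")] * 70
)

role_counts = Counter(role for _, role in history)   # hires per role
pair_counts = Counter(history)                       # hires per (gender, role)

def recommend(gender: str) -> str:
    """Rank roles by P(gender | role) learned from the biased history."""
    scores = {
        role: pair_counts[(gender, role)] / role_counts[role]
        for role in role_counts
    }
    return max(scores, key=scores.get)

print(recommend("F"))  # recruiter -- the skewed data, not the resume, decides
print(recommend("M"))  # engineer
```

Barbara’s qualifications never enter the picture; the historical gender ratio alone drives the recommendation.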

If the bot is just a computer, who is biased here? 

Now, let’s revisit the issue of bias in ChatGPT’s political preferences. ChatGPT relies on numerous public data sources, including Wikipedia, which is written by humans, and unpaid humans at that! Unfortunately, if these humans have any inherent biases in their writing, ChatGPT is likely to inherit those same biases. Essentially, ChatGPT’s biases reflect the collective biases of the content creators it relies on.  

Update: ChatGPT has since placed filters on certain outputs like those discussed above and instead will respond that the AI doesn’t have preferences. 

Who is teaching and training the AI? 

If AI is trained on biased data, it will produce biased outcomes, which can create a feedback cycle; for example, as Jo essentially screens out female applicants, the gender disparity increases, making new female applicants look even less likely to succeed. How do we break the cycle?  

The common technique is called supervised training, which OpenAI does employ. Per a Wired article on the subject, “[Supervised training] …was used to enhance ChatGPT. It involves having humans judge the quality of the model’s answers to steer it towards providing responses more likely to be judged as high quality.” However, the supervisors are humans, so how do we keep the human supervisors from creating their own bias? Oh boy!
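As a thought experiment, here is a minimal sketch of what “humans judging the quality of answers” looks like in code, and where the supervisors’ own bias slips in. (The raters, scores, and answers are invented for illustration; this is not OpenAI’s actual pipeline.)

```python
# Minimal sketch of human-in-the-loop feedback: raters score candidate
# answers, and the top-rated answer becomes the training target for the
# next round. The raters themselves become the new source of bias.

def rank_by_feedback(prompt, candidates, raters):
    """Average each rater's 1-5 score per answer; best answer first."""
    scored = [
        (sum(rater(prompt, answer) for rater in raters) / len(raters), answer)
        for answer in candidates
    ]
    return sorted(scored, reverse=True)

# A rater with a quiet preference (hypothetical):
def rater_a(prompt, answer):
    return 5 if "viewpoint X" in answer else 2

# A rater who finds everything acceptable:
def rater_b(prompt, answer):
    return 4

ranked = rank_by_feedback(
    "Summarize the debate.",
    ["A summary leaning toward viewpoint X", "A neutral summary"],
    [rater_a, rater_b],
)
print(ranked[0][1])  # the leaning summary wins -- rater_a's bias steers training
```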

When implementing this technology in the enterprise, the stakes are significantly higher. Some AI products don’t even use your organization’s data. Some do, but they are not trained by your people. Instead, the AI provider has a team of their own people, often in offshore centers, who monitor the AI’s decisions and provide feedback to supervise the training.

It is unlikely, then, that the vendor team teaching the AI has the same view of what is or isn’t appropriate as the people who will be using the AI. Knowing what you know now about bias in AI, do you think it is a good idea to have a team in a different part of the country, or another country, being the sole influencer of the AI? 

A better approach 

At Gideon Taylor, we prioritize meeting the specific enterprise needs of organizations we work with. Our digital assistant, Ida, uses a language model from Oracle, a company with extensive experience handling enterprise data. However, we don’t stop there. We recognize that each customer’s needs and data are unique, so we train their bot independently, rather than relying on a one-size-fits-all approach. 

We also understand the importance of our customers influencing their own AI. By allowing your people to provide feedback to Ida, we ensure that the AI is constantly learning and adapting to better represent your organization, your language, and your culture. If you haven’t seen Ida, contact us below and set up a personalized demo.

Contact Us

The biggest thing in tech the last few weeks has been GPT-3. GPT, or Generative Pre-trained Transformer, is a language model developed by OpenAI that uses deep learning to produce human-like text. It is based on the Transformer architecture, which was introduced by Google in 2017. GPT uses a self-attention mechanism to learn the context of a given sentence and generate text that is both coherent and natural sounding. It has been used to generate text for tasks such as summarization, dialog generation, machine translation, and more. GPT is a powerful tool for natural language processing and is one of the most widely used language models today.

What if I told you that the previous paragraph was not written by me, but by a machine? That “wow!” moment you just had is one of many to come, as we realize how powerful artificial intelligence (AI) is becoming. We are witnessing feats performed by machines where we can no longer easily tell the difference between human and computer.

May 2023 Update: OpenAI has now released GPT-4, which performs better with fewer hallucinations. OpenAI has not disclosed how many parameters GPT-4 uses. Further, ChatGPT plugins, which allow third-party data and internet access, are in beta.

What makes GPT-3 so good? 

To start, there is the sheer size. GPT-3 has 175 billion trainable parameters in its model; the next closest model has 17 billion. A training run uses 45 TB of data, almost 10 days of run time, and over $10M in compute power. To give you some perspective, the bot you are using on your job posting site probably trains in less than 3 minutes. GPT-3 then takes this sophisticated model and huge training dataset and mixes in the key ingredient: human expertise. In three waves, humans provide feedback to the machine, help it learn, and reinforce good outcomes. Yes, AI is not magic; humans play a critical role. Version 3 of this model took 3 years of engineering, training, feedback, and revisions.
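The machine-written paragraph earlier mentions the Transformer’s self-attention mechanism. For the curious, here is a toy NumPy sketch of the scaled dot-product attention at its core; GPT-3 stacks 96 layers of a multi-head version of this. (Dimensions below are toy-sized and illustrative only.)

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: learned projections."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # How strongly each token should attend to every other token:
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
    return weights @ V  # context-aware representation of each token

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8  # a 4-token "sentence"; GPT-3 uses far larger dimensions
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```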

Can businesses use GPT-3?

GPT-3 can be broken down into two primary functions: language understanding and generative AI (composing answers). With the former, GPT-3 excels at interpreting human language and deriving meaning from it. With the latter, it creates its own answers, but those responses can be wildly incorrect, making it a less-than-ideal choice for the enterprise. Additionally, GPT-3 is built on public information that is two years old, so it cannot handle current events, which makes it even less suited for enterprise use. Let’s compare a Google search to GPT-3.

Google’s real-time results
ChatGPT results based on 2021 information

In a business setting, when someone asks how much vacation time they get, the answer depends on their employment status, their union contract, and even the country they reside in. GPT-3 can’t handle that type of personalization. Further, you want your organization’s bot to understand your lingo, your acronyms, and your slang. GPT-3 is one model trained for everyone; its training data is not your organization’s data. Even though GPT-3 has limits in an enterprise setting, it will no doubt influence the future direction of IT departments.
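Here is a hypothetical sketch of what that personalization requires: the bot must resolve user attributes before it can even pick an answer. (The policy rules and numbers below are invented for illustration.)

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    country: str             # e.g., "US" or "FR"
    status: str              # "full_time" or "part_time"
    union: Optional[str]     # union contract code, if any

def vacation_answer(user: User) -> str:
    """Pick the answer matching this user's attributes (rules are made up)."""
    if user.union == "LOCAL_100":
        return "Per your union contract, you accrue 25 days per year."
    if user.country == "FR":
        return "In France, you are entitled to 25 paid vacation days per year."
    if user.status == "part_time":
        return "Part-time staff accrue 1 hour of PTO per 30 hours worked."
    return "Full-time staff accrue 15 days per year."

print(vacation_answer(User(country="US", status="full_time", union=None)))
```

GPT-3, trained once for everyone on public data, has no access to these attributes; an enterprise bot must.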

How will GPT-3 influence enterprise software?

Because our blog readers are most interested in enterprise software, we must examine the effect GPT-3 will have on the enterprise. To start, it will change user expectations. Let’s take a trip down memory lane: it was 2008, and Steve Jobs had just launched the App Store, followed soon after by the famous ad campaign, “There’s an app for that!” I remember this year well. Most of our customers never spent a moment thinking about mobile or mobile apps. After all, enterprise software was too complicated for that tiny screen!

Apple’s Famous Ad Campaign: There’s an App For That

We can sometimes forget that our enterprise users are consumers, too. They go to movies, buy groceries, go to restaurants, seek entertainment, and so on. Enterprise users, like everyone else, fell in love with mobile apps, came back to work or school with new expectations, and started demanding mobile apps for enterprise functions. Schools and organizations scrambled. Everyone wanted an app. New RFPs went out to vendors every day.

Do we think it will be any different this time? ChatGPT (the chatbot built on the GPT-3 model) will work its way into consumer life. Your users will relish the productivity gains it provides. Think about how we find and retrieve information today. We use Google and Google gives us pages and pages of links that all look the same. We have no idea where to click. If we ask that same question to GPT, we get a two-sentence answer that is right on the money. Imagine that instead of pointing, clicking, searching, reading, we just asked and got the answer…

The future of user interface is language

Students and employees are routinely on the search for information; be it about a policy, a form to fill out, a piece of data or analytics. Now imagine a user interface powered by chat AI that gives you an answer quickly, succinctly, and with accuracy that beats human beings! The old way of searching intranets or knowledge base articles is going the way of the dinosaur. (Or should I say punch card?) The flaws of the search result page UI are already being exposed.

GPT will also prove out the demand for one-stop shopping: one place to ask any question. While each of your SaaS applications may have some semblance of a bot, they all live on islands without any connection to each other; islands with no bridges. ChatGPT has shown users the value of a bot that can handle any question you throw at it.

Now Available: Ida and GPT

The GPT model, while not perfect, certainly has capabilities that are pushing the AI universe ahead by enormous leaps. Ida, IntraSee’s very own digital assistant, operates much like ChatGPT, but at a different scale, where we build and train to each client’s uniqueness. Starting with version 23.01, due in April 2023, Ida will introduce its first integration with GPT-3.5, tapping into the power of the GPT model to improve upon the secured, personalized answers Ida provides. More to come in the release notes for 23.01.

Contact Us

Despite the current buzz, artificial intelligence (AI) isn’t magic. AI is just a collection of probabilities, but those probabilities feed on data and human input. Yes, humans play a critical role in AI. For example, the newly hyped GPT-3 model has 3 phases of human input. Alexa and Siri have teams of humans helping the AI grow “smarter.” In short, your AI can’t perform without the human AI experts. But what if you don’t have a team that lives and breathes this technology every day?

Introducing Oracle Digital Assistant Tune-Up

Your digital assistant is a digital worker. Just like your human workers, it needs an annual review, training, and feedback. IntraSee is now offering our annual Oracle Digital Assistant (ODA) Tune-Up service to give your digital worker that annual review. The ODA Tune-Up will engage our highly trained AI teams to review your current digital assistant approach and training data to eliminate misunderstanding and increase accuracy in your bot. The Tune-Up service will provide actionable feedback using our empirical methods and AI testing tools to show you exactly where your bot may need optimization.

IntraSee has been building and running Oracle Digital Assistants since the product was released over three years ago. As one of the first companies to deploy these bots, we’ve learned quite a bit. To achieve success in your chatbot project, accuracy is job #1. We’ve spent the past three years figuring out just the right methods for maximizing bot accuracy on ODA.

Our clients are consistently achieving greater than 90% accuracy from their NLP (natural language processing) as a result. For your Machine Learning application to achieve high accuracy, you need a strong model and well-tuned training data; and to get those, you need AI experts. At IntraSee we use a team of data scientists, AI architects and computational linguists to produce our best-in-class accuracy results. 
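For the curious, here is a minimal sketch of the kind of empirical accuracy test behind a figure like “greater than 90%”: run held-out, human-labeled utterances through the NLP and count matches. (`classify_intent` below is a naive keyword stand-in for a real NLP engine, and the test set is invented.)

```python
# Held-out test set: (utterance, human-labeled intent). Invented examples.
test_set = [
    ("how many vacation days do I have left", "PTO_BALANCE"),
    ("update my direct deposit",              "DIRECT_DEPOSIT"),
    ("who do I report to",                    "MANAGER_LOOKUP"),
]

def classify_intent(utterance: str) -> str:
    """Stand-in for the platform's NLP call -- replace with the real engine."""
    if "vacation" in utterance:
        return "PTO_BALANCE"
    if "deposit" in utterance:
        return "DIRECT_DEPOSIT"
    return "MANAGER_LOOKUP"

def accuracy(test_set) -> float:
    """Fraction of utterances whose predicted intent matches the label."""
    hits = sum(classify_intent(text) == intent for text, intent in test_set)
    return hits / len(test_set)

print(f"NLP accuracy: {accuracy(test_set):.0%}")  # 100% on this toy set
```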

Schedule Your Bot “Check-Up” Today

Users tell us the most common reason they won’t use a bot is that they don’t believe it can help. In other words, the bot didn’t understand their questions. With improved understanding, adoption will increase, greater service will be provided, and more ROI will be unlocked. Just as humans see a doctor each year for a check-up, your bot needs an accuracy “check-up” from the leading team of experts in this field.

For a limited time, we are offering an attractive introductory rate. To learn more, please contact us below.

Contact Us

Gartner recently released its Hype Cycle for 2022. The Hype Cycle is a model describing the five phases emerging technologies pass through. Chatbots fell into Phase 3, the Trough of Disillusionment, this year. While that title sounds scary, it is a pivotal moment for an emerging technology, where the best will be separated from the rest. In this post, we will review the chatbot market: where it was, where it is now, and what is coming next.

Gartner 2022 Hype Cycle

Where we were

Around five years ago, Natural Language Processing (NLP) took a big leap. A breakthrough in artificial intelligence (AI) and machine learning techniques dramatically improved our ability to have machines understand human language. Suddenly, chatbots, as they were dubbed, were all the rage. This irrational exuberance, perhaps fueled by Alexa, created a flood of demand. Rest assured, supply always finds a way to meet demand. At this point, everyone and their sister built a chatbot.

Many of these bots were rushed. Some used fake AI, and their NLP accuracy was abysmal to match. The experience felt more like AOL Keywords than a machine that truly understood the human. Bots were released covering many singular use cases, from FAQs to recruiting to CRM systems and even enterprise applications like HCM. The investment range was so wide you could find a bot for $10k a year or build one on IBM Watson for $1M.

This rush just served to flood the market with confusion. Customers began looking at price as the most distinguishing factor. Suddenly, we were all apples and, boy, were there a lot of apples. How did we like them apples? Well, not so much… 

Where we are now

With the market flooded with poor-quality bots and confusing messages, customers have become disillusioned. Was this all hype? Will it ever work? This feels just like IVR (Interactive Voice Response)!

To add to the confusion, sales teams were proclaiming AI magic. It just learns! It is automatic! If your success is less than automatic, you are going to be disappointed. You may even write the technology off (though we would caution you to look again).

That AI breakthrough we spoke about earlier? Well, that happened when we learned (pun intended) to mimic how the human brain works. The human brain divides everything into buckets like apples, oranges, bananas, and so on. When we see a fruit we have never seen before, we still know it is a fruit. But this only works with lots of learning. To learn, we read, we attend class, we discuss, we experience. Bots learn by consuming data and instructions from data scientists.

So, while the cheap bots fell way short of our expectations, the enterprise bots seemed complicated to implement and manage. Organizations threw people without AI experience into AI projects. 

Reality then punched us in the face. And so the disillusionment phase has arrived. Let’s dive into where today’s chatbot projects have failed. If we can learn from this phase, we can be better for it. We can focus with more depth on what makes a magical conversational experience.

Why chatbot projects fail

Many chatbot projects failed because customers were caught at one of two extremes, both driven by the desire to minimize cost. On one side was the poor-quality bot, which required little effort yet never delivered on the salesperson’s “It’s magic!” promise. On the other extreme, customers put in significant effort to build their own bot, only to realize that building and running AI at scale is not a skillset they have historically possessed.

False No-Effort Narratives

  • “No-effort” implementations mean you are getting a generic product; too many customers have been sold this and failed. Your users see right through a generic bot, which really is no better than classic IVR.
  • While there is such a thing as AI which learns on its own, it is a major liability for your organization. Facebook and Microsoft have failed spectacularly in this regard.
  • Some customers attempt pilots to prove out the technology, but pilots rarely work. As an organization you have to be all-in on making automation work. Further, these are solutions that evolve over time and a short pilot is no time at all.

Crawlers and Links

  • Real AI personalized to your users and organization is hard. Some projects attempt to shortcut this hard work by having the bot “crawl” an existing website for content. Remember that salesperson? It’s automatic!
  • Crawlers are how search engines work, and if your name isn’t Google, those searches have been shown to perform poorly and offer no personalization.
  • Our focus groups have shown that users have no faith in search not named Google. Disguising search as chat will ensure a lack of adoption.
  • Users don’t want links to a website. They want the direct, 140-character answer. Just linking them elsewhere doesn’t actually serve their needs or save them time.

Lack of Personalization

  • Per Gartner CIO surveys, digital transformation is nothing without personalization.
  • Bots which do not know you, where you do not log in, just end up sounding like that IVR.
  • Every question can have dozens of answers solely based on the attributes of the user. Most bots today fail to personalize, and the results show.

No Breadth

  • Many of the early bots simply focused on a small sliver of questions, like recruiting or financial aid. As we talk about in Higher Ed Chatbot Usage: Perception vs. Reality, your users don’t care about your reasons to only tackle a sliver.
  • Our data shows users will ask every question that comes to mind, including questions completely unrelated to the page they’re on. 
  • If your bot cannot handle at least 250 intents, you stand little chance of success.

Comfort of Live Chat

  • Live-chatting features sound like a good idea. I can always talk to a human! Keep in mind, you need to staff/train/manage that human.
  • Live Chat is not the killer feature organizations should be requiring. AI accuracy is. Live chat just covers up the failure to do job #1.
  • Even more, vendors know Live Chat is a parachute, albeit one that eats ROI significantly. Knowing they have a parachute, they rest easy and don’t solve the hard problems.

Let’s jump into a use case related to this topic. Candidate.ly recently published some survey data on recruiting bots. This is just a peek into a particular vertical and what customers thought of their chatbots.

Candidate.ly Survey on Chatbot Effectiveness

You will notice some 71% thought their bots were average or below. If the parent thinks their child is average, then you are probably dealing with a D student. It is no wonder why organizations are disillusioned, but it doesn’t have to be this way.

According to the report, “Two in five people (42%) avoid chatbots when making a complex inquiry and 15% lack confidence in using technology to contact organisations”. And according to the Institute, if technologies like chatbots were “well designed and implemented”, most customers would be happy to use them.

This one report highlights what we have all experienced. We want chatbots to help us, but many of them are just falling short. Our expectations have shifted downward. However, this is the moment you can surprise and delight your users, if you really focus on designing the experience.

…if technologies like chatbots were “well designed and implemented”, most customers would be happy to use them.

What comes next: Enlightenment

We started with irrational exuberance and now find ourselves in the Age of Disillusionment. Disillusionment just means that customers are no longer buying into the hype without a critical eye. They are now in prove-it mode. The success of the early adopters who chose wisely will begin to show. Those who chose poorly will cautiously wade back in, but will be much better informed. Bad solutions and methodologies will fade away and the market will gain clarity. We will be on a steady incline that is more predictable and sustainable.

We welcome this phase, where the market will gain clarity and users will have their needs met. It is often forgotten that chatbots are really automation projects. You are replacing a function normally performed by a human. That means your best chance of success is to focus on outperforming the human. You have to be more accurate, more consistent, more available, and more accessible, including speaking your language.

When you pick a bot, remember that cheap is expensive when it comes to automation. If your bot does not offer any advantage when compared to speaking to a human, then user behavior won’t change. If behavior doesn’t change, you won’t be able to reduce spend and then you are simply paying for both the bot and the human. 

…cheap is expensive when it comes to automation

Good results take effort. Good results require personalization and NLP accuracy. The bot is mimicking a human and, just like a human, it needs to learn and grow. You are feeding it, and you should treat the bot as an evolving service, not a one-time project.

If you are ready to ramp up the bots, here are the top strategies we recommend you follow on your project:

  • Pick a platform/product that you can influence and that will learn about your organization. Be sure it is based on Machine Learning AI.
  • Commit to a three-year initial Agile cycle. Implement, monitor, learn and evolve in increments no longer than 3 months. Learn from your users!
  • Prioritize breadth and personalization. This is the key to user adoption. 
  • Create a funnel to maximize ROI. Drive users to the automated path first and let the bot pursue escalation paths, such as live chat, when needed; do this after the initial learning period of six months (see the sketch after this list).
  • Make the bot omnipresent. The bot should meet users where they are, not just on one website. That chat icon should be everywhere.
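
Here is a minimal sketch of the funnel idea from the fourth bullet above, assuming an NLP engine that returns an intent plus a confidence score. (The threshold and names are hypothetical, not any particular platform’s API.)

```python
# Funnel routing sketch: automated path first, live-chat escalation only
# when NLP confidence is low. (Threshold and names are hypothetical.)

CONFIDENCE_THRESHOLD = 0.70

def route(utterance: str, nlp) -> str:
    """nlp(utterance) -> (intent, confidence); route to bot or live chat."""
    intent, confidence = nlp(utterance)
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"bot:{intent}"        # automated answer: the ROI path
    return "escalate:live_chat"       # the parachute, used sparingly

# Toy NLP stand-ins:
print(route("reset my password", lambda u: ("PASSWORD_RESET", 0.93)))  # bot:PASSWORD_RESET
print(route("asdf qwerty",       lambda u: ("UNKNOWN", 0.12)))         # escalate:live_chat
```

The design point: escalation stays available, but the bot gets the first attempt, so live chat covers failures instead of hiding them.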

If you have any questions or want to see a demo of Ida, our own bot, please reach out below. As an industry, we are approaching the beginning of enlightenment; we believe our clients are already there. We hope that if you have experienced a failed chatbot project, you will try again with fresh eyes. If you need any help or just want to talk things through, just drop us an email! 😊

Contact Us