With the increasing popularity of AI, the issue of bias has become a significant concern. AI, after all, is designed to learn from data, and if the data it is trained on is biased, the AI will likely reflect that bias in its decisions.  

For example, when OpenAI’s ChatGPT debuted, people quickly noticed that it seemed to prefer one presidential candidate over another. This raised questions about whether an AI could have a political preference and, if so, where that preference might come from.

Can a computer be biased? 

AI is not some kind of magical, all-knowing entity. At the end of the day, it’s just a computer program. It doesn’t have emotions or opinions, and it’s only as good as the data it’s trained on. As they say, garbage in, garbage out.  

Let’s take Jo, the Job Board Bot, as an example. Jo has been trained on resumes from people who have been hired for various positions. From this data, Jo can tell a candidate which job they’re most likely to succeed in, given their qualifications.

Now imagine Barbara launches Jo and asks, “What position am I best fit for?” Even if Barbara has all the relevant experience, Jo might not suggest she would be a good fit for a computer engineering position if 84% of the engineers in its training data are men. That’s not the outcome you’re looking for!
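Jo and Barbara are invented, but a toy sketch shows how the math produces exactly this outcome. Below, a simple classifier is trained on made-up hiring data in which roughly 84% of engineering hires are men; an equally qualified woman then receives a lower predicted “fit” score. All names, features, and numbers here are fabricated for illustration.

```python
# Toy illustration of "garbage in, garbage out" -- all data is invented.
# We train a simple classifier on skewed hiring outcomes, then score two
# equally qualified candidates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
years_experience = rng.uniform(0, 10, n)
is_female = rng.integers(0, 2, n)

# Historical outcome: experience helps, but because past hiring skewed male,
# women were hired as engineers far less often at the same experience level.
p_hired = 1 / (1 + np.exp(-(years_experience - 5)))          # skill signal
p_hired = np.where(is_female == 1, p_hired * 0.19, p_hired)  # ~84/16 skew
hired_as_engineer = (rng.random(n) < p_hired).astype(int)

model = LogisticRegression().fit(
    np.column_stack([years_experience, is_female]), hired_as_engineer
)

# Barbara and Bob have identical experience; only the gender flag differs.
barbara = [[8.0, 1]]
bob = [[8.0, 0]]
print("Barbara's predicted fit:", model.predict_proba(barbara)[0, 1])
print("Bob's predicted fit:    ", model.predict_proba(bob)[0, 1])
# The model simply reproduces the historical skew: Barbara scores lower
# than Bob even though her qualifications are identical.
```

Note that simply dropping the gender column would not guarantee a fix, since other features correlated with gender can carry the same signal.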

If the bot is just a computer, who is biased here? 

Now, let’s revisit the issue of bias in ChatGPT’s political preferences. ChatGPT relies on numerous public data sources, including Wikipedia, which is written by humans, and unpaid humans at that! Unfortunately, if these humans have any inherent biases in their writing, ChatGPT is likely to inherit those same biases. Essentially, ChatGPT’s biases reflect the collective biases of the content creators it relies on.  

Update: OpenAI has since placed filters on certain outputs like those discussed above; ChatGPT now responds that, as an AI, it doesn’t have preferences.

Who is teaching and training the AI? 

If AI is trained on biased data, it will produce biased outcomes, which can create a feedback cycle: as Jo essentially screens out female applicants, the gender disparity increases, making new female applicants look even less likely to succeed. How do we break the cycle?

A common technique is supervised training, which OpenAI does employ. Per the Wired article linked, “[Supervised training] …was used to enhance ChatGPT. It involves having humans judge the quality of the model’s answers to steer it towards providing responses more likely to be judged as high quality.” However, the supervisors are humans, so how do we keep the supervisors from introducing their own biases? Oh boy!
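To make the idea concrete, here is a hypothetical sketch of that kind of human-in-the-loop supervision. This is not OpenAI’s actual pipeline; the questions, answers, and ratings are invented. Raters score candidate answers, and the highest-rated ones become the examples the model is steered toward.

```python
# Hypothetical sketch of human-in-the-loop supervision (not OpenAI's code).
# Raters score candidate answers; the best-rated pairs become the examples
# the model is further trained (or "steered") on.
from statistics import mean

candidate_answers = {
    "What is our PTO policy?": [
        "You get unlimited vacation, probably.",                        # vague
        "Full-time staff accrue 15 days per year; see the HR portal.",  # useful
    ],
}

# Each rater assigns a quality score from 1 (poor) to 5 (excellent).
ratings = {
    ("What is our PTO policy?", 0): [1, 2, 1],
    ("What is our PTO policy?", 1): [5, 4, 5],
}

def best_answer(question, answers):
    """Pick the answer with the highest average human rating."""
    scores = [mean(ratings[(question, i)]) for i in range(len(answers))]
    return answers[scores.index(max(scores))]

fine_tuning_examples = [
    (q, best_answer(q, answers)) for q, answers in candidate_answers.items()
]
print(fine_tuning_examples)
# The catch: if every rater shares the same blind spot, the "preferred"
# answers inherit that blind spot too -- the raters' bias becomes the bot's.
```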

When implementing this technology in the enterprise, the stakes are significantly higher. Some AI products don’t even use your organization’s data. Some do, but they are not trained by your people. Instead, the AI provider has a team of their own people, often in offshore centers, who monitor the AI’s decisions and provide feedback to supervise the training.

It is unlikely, then, that the vendor team teaching the AI has the same view of what is or isn’t appropriate as the people who will be using the AI. Knowing what you know now about bias in AI, do you think it is a good idea to have a team in a different part of the country, or another country, being the sole influencer of the AI? 

A better approach 

At Gideon Taylor, we prioritize meeting the specific enterprise needs of organizations we work with. Our digital assistant, Ida, uses a language model from Oracle, a company with extensive experience handling enterprise data. However, we don’t stop there. We recognize that each customer’s needs and data are unique, so we train their bot independently, rather than relying on a one-size-fits-all approach. 

We also understand the importance of our customers influencing their own AI. By allowing your people to provide feedback to Ida, we ensure that the AI is constantly learning and adapting to better represent your organization, your language, and your culture. If you haven’t seen Ida, contact us below and set up a personalized demo.

Contact Us

We are excited to announce the release of Ida version 23.01, packed with new features and enhancements to streamline your digital assistant. With a focus on improving user interactions and expanding data sources, this release brings exciting updates to Ida’s capabilities. Here’s a closer look at some highlights from Ida 23.01:

  1. New URL Answer Provider
    We understand the importance of delivering accurate and up-to-date information to users. In Ida 23.01, we have introduced a new URL answer provider that allows Ida to fetch answers from web pages, ensuring users always have access to the most current information from the web.
  2. New OpenAI GPT Answer Summary Provider
    To further enhance Ida’s ability to provide concise and informative answers, we have integrated an OpenAI GPT-based answer summary provider. This feature allows Ida to generate short summaries of lengthy answers, making it easier for users to quickly grasp the key information they need and saving them valuable time. (A simplified sketch of this kind of fetch-and-summarize pipeline appears after this list.)
  3. Enhancements to the Ida Live Chat Adapter
    We understand the importance of seamless communication, which is why Ida now delivers its own live chat channel for escalations. In Ida 23.01, we have made significant enhancements to the Ida Live Chat adapter to improve the overall chat experience; it is now more user-friendly and efficient than ever before. These enhancements ensure that users never get stuck talking to Ida and always have an escalation path to a human.
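For the curious, here is a simplified sketch of what a fetch-and-summarize pipeline like items 1 and 2 can look like. This is not Ida’s actual implementation; the URL, prompt, and model choice are placeholders, and it uses the public requests, BeautifulSoup, and OpenAI Python libraries.

```python
# Illustrative fetch-and-summarize sketch (not Ida's actual code).
# Fetch a web page, extract its visible text, and ask a GPT model to
# produce a short summary. Uses the pre-1.0 openai SDK interface.
import requests
from bs4 import BeautifulSoup
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def fetch_page_text(url: str) -> str:
    """Download a page and return its visible text."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return soup.get_text(separator=" ", strip=True)

def summarize(text: str) -> str:
    """Ask a chat model for a two-sentence summary of the page text."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "Summarize the user's text in two sentences."},
            {"role": "user", "content": text[:8000]},  # stay within context limits
        ],
    )
    return response.choices[0].message.content

# Hypothetical URL for illustration only.
page_text = fetch_page_text("https://example.com/benefits-policy")
print(summarize(page_text))
```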

In addition to these major updates, Ida 23.01 also includes various bug fixes and performance improvements to enhance the overall stability and reliability of the digital assistant.

To upgrade to Ida 23.01, please contact your account team to discuss a deployment schedule that works best for your organization. We are committed to continuously improving Ida to meet the evolving needs of our users, and we look forward to your feedback on this exciting new release.

Thank you for your continued support, and we are confident that Ida 23.01 will further enhance your digital assistant experience.


Release Notes

  • Enhancements to the Ida Live Chat adapter
  • New OpenAI GPT answer summary provider
  • New URL answer provider
  • Added answers link to both grid tabs in Maintain FAQs page
  • Improved and simplified High Training sampling
  • Improved Create/Delete FAQ+ pages
  • Improved Review Reminder functionality in federated mode
  • MS Teams channel authorization is now configurable
  • New report for “no match” utterances
  • Now providing friendlier messages when user doesn’t have access to remote data sources
  • Updated ODAChatUI to version 22.12
  • Updated student FAQ catalog
  • Updated user interface in IUC dashboard pages
  • Added ODA 500 errors to events table
  • Fixed an accessibility issue on Ida attention grabber slideout
  • Fixed an issue when building the main skill when using NLP segments
  • Fixed an issue where forced non-English mode was still returning certain messages in English
  • Fixed an issue where non-admins could add FAQs
  • Fixed an issue with case sensitivity breaking DE branching
  • Fixed an issue with NLP when determining secondary topic relevance
  • Fixed an issue with persistent chat logs on ODA ChatUI
  • Fixed issue with scrolling in the new web channel read more pane
  • Fixed webform mobile branding issues
  • Improved handling of CSS conflicts on the web channel UI
  • Improved performance of Environment Details console tile
  • Resolved an issue with translations for one word utterances
  • Resolved CSS conflicts in IUC pages
  • Various MS Teams channel fixes

Contact us below to learn more and set up your own personal demo

Contact Us

The biggest thing in tech the last few weeks has been GPT-3. GPT, or Generative Pre-trained Transformer, is a language model developed by OpenAI that uses deep learning to produce human-like text. It is based on the Transformer architecture, which was introduced by Google in 2017. GPT uses a self-attention mechanism to learn the context of a given sentence and generate text that is both coherent and natural sounding. It has been used to generate text for tasks such as summarization, dialog generation, machine translation, and more. GPT is a powerful tool for natural language processing and is one of the most widely used language models today.

What if I told you that the previous paragraph was not written by me, but by a machine? That “wow!” moment you just had is one of many to come, as we realize how powerful artificial intelligence (AI) is becoming. We are witnessing feats performed by machines where we can no longer easily tell the difference between human and computer.

May 2023 Update: OpenAI has now released GPT-4, which performs better and hallucinates less. OpenAI has not disclosed how many parameters GPT-4 uses. Further, ChatGPT plugins, which allow third-party data and internet access, are in beta.

What makes GPT-3 so good? 

To start, it is the sheer size. GPT-3 has 175 billion trainable parameters in its model; the next closest model has 17 billion. A training run uses 45 TB of data, almost 10 days of run time, and over $10M in compute power. To give you some perspective, the bot you are using on your job posting site probably trains in less than 3 minutes. GPT-3 then takes this sophisticated model and huge training dataset and mixes it with the key ingredient: human expertise. In three waves, humans provide feedback to the machine, help it learn, and reinforce good outcomes. Yes, AI is not magic; humans play a critical role. Version 3 of this model took three years of engineering, training, feedback, and revisions.

Can business use GPT-3?

GPT-3 can be broken down into two primary functions: language understanding and generative AI (composing answers). With the former, GPT-3 excels at interpreting human language and deriving meaning from it. With the latter, it creates its own answers, but those responses can be wildly incorrect. Additionally, GPT-3 is trained on public information that is roughly two years old, so it cannot handle current events. Both limitations make it a less-than-ideal choice for enterprise use on its own. Let’s compare a Google search to GPT-3.

Image: Google’s real-time results vs. ChatGPT results based on 2021 information

In a business setting, when someone asks how much vacation time they get, the answer depends on their employment status, their union contract, and even the country they reside in. GPT-3 can’t handle that type of personalization. Further, you want your organization’s bot to understand your lingo, your acronyms, and your slang. GPT-3 is one model trained for everyone; its training data is not your organization’s data. Even though GPT-3 has limits in an enterprise setting, it will no doubt influence the future direction of IT departments.
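To illustrate the difference, here is an invented sketch of the kind of lookup an enterprise bot can do before answering; a general-purpose public model has no access to this data. The records and policy numbers below are entirely made up.

```python
# Invented example: personalizing a vacation-time answer by who is asking.
# A public model trained on generic web text cannot perform this lookup.

employee_records = {
    "barbara": {"status": "full-time", "union": "Local 42", "country": "US"},
    "marco":   {"status": "part-time", "union": None,       "country": "IT"},
}

vacation_policy = {  # (status, country) -> days per year; all values invented
    ("full-time", "US"): 15,
    ("part-time", "US"): 0,
    ("full-time", "IT"): 20,
    ("part-time", "IT"): 10,
}

def vacation_answer(user_id: str) -> str:
    """Compose a personalized answer from the asker's own record."""
    rec = employee_records[user_id]
    days = vacation_policy[(rec["status"], rec["country"])]
    note = f" per your {rec['union']} contract" if rec["union"] else ""
    return (f"As a {rec['status']} employee in {rec['country']}, "
            f"you accrue {days} vacation days per year{note}.")

print(vacation_answer("barbara"))
print(vacation_answer("marco"))
```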

How will GPT-3 influence enterprise software?

Because our blog readers are most interested in enterprise software, we must examine the effect GPT-3 will have on the enterprise. To start, it will change user expectations. Let’s take a trip down memory lane: it was 2008, and Steve Jobs had just launched the App Store, followed by the famous ad campaign, “There’s an app for that!” I remember this year well. Most of our customers never spent a moment thinking about mobile or mobile apps. After all, enterprise software was too complicated for that tiny screen!

Apple’s Famous Ad Campaign: There’s an App For That

We can sometimes forget that our enterprise users are consumers, too. They go to movies, buy groceries, go to restaurants, seek entertainment, and so on. Enterprise users, like everyone else, fell in love with mobile apps, came back to work or school with new expectations, and started demanding mobile apps for enterprise functions. Schools and organizations scrambled. Everyone wanted an app. There were RFPs every day calling for proposals from vendors.

Do we think it will be any different this time? ChatGPT (the chatbot built on the GPT-3 model) will work its way into consumer life. Your users will relish the productivity gains it provides. Think about how we find and retrieve information today: we use Google, and Google gives us pages and pages of links that all look the same. We have no idea where to click. If we ask that same question to GPT, we get a two-sentence answer that is right on the money. Imagine that instead of pointing, clicking, searching, and reading, we just asked and got the answer…

The future of user interface is language

Students and employees are routinely searching for information, be it a policy, a form to fill out, a piece of data, or analytics. Now imagine a user interface powered by chat AI that gives you an answer quickly, succinctly, and with accuracy that beats human beings! The old way of searching intranets or knowledge base articles is going the way of the dinosaur. (Or should I say punch card?) The flaws of the search result page UI are already being exposed.

GPT will also prove the demand for one-stop shopping: one place to ask any question. While individual SaaS applications may each have some semblance of a bot, they all live on islands without any connection to each other; islands with no bridges. ChatGPT has shown users the value of a single assistant that can handle any question you throw at it.

Now Available: Ida and GPT

The GPT model, while not perfect, certainly has some capabilities that are pushing the AI universe ahead by enormous leaps. Ida, IntraSee’s very own digital assistant, operates much like ChatGPT, but at a different scale: we build and train to each client’s uniqueness. Starting with version 23.01, due in April 2023, Ida will introduce its first integration with GPT-3.5, tapping into the power of the GPT model to improve upon the secured, personalized answers Ida already provides. More to come with the release notes of 23.01.

Contact Us

The next release of Ida, our digital assistant, is now available. Clients can talk to their account teams about a deployment schedule that works for them.

22.04 in Summary

Ida 22.04 delivers many housecleaning items and general fixes. New features start with improved support for a new ODA Web Channel, complete with chat transcript storage, voice channel support, autosuggest, improved read-more functionality, and much more.

Next, we have many new reports available, plus new and improved intents and PeopleSoft Campus data sources for Exams, Deadlines, Course Catalog, and Registration Appointments.

Finally, we are adding an advanced Find/Replace tool for fixing answers in one swift click, as well as the ability to schedule bot downtime, which brings the entire bot down and presents users with a message about its status and when it will be available again.


Release Notes

  • Automated scheduled downtime for entire bot for support of maintenance windows
  • Configurable list of allowed languages for translation
  • New Class Deadlines Intent
  • New configuration options for Help Escalation
  • New Course Catalog Search intent
  • New dedicated config pages for adapters
  • New Find/Replace Answer Text Tool
  • New frequent Users report
  • New Ida bundled Live Chat integration
  • New Rater Progress Report(s)
  • New Registration Appointments Intent
  • New Suggestions reports
  • New Question Confidence Regression Report
  • Salesforce adapter ticket logging enhancements
  • Better automated duplicate feedback handling in High Training/Value modes
  • Fixed an issue where FBL was showing the wrong sub-org’s answer text
  • Fixed issue with long topic labels in KPI tile
  • Improved Dashboard Conversation Log
  • Improved disambiguation help support
  • Improved error messages for unexpected remote answers for Salesforce
  • Improved FAQ Read More styling
  • Revised Exam Schedule Intent
  • Streamlined client user IUC console access permissions
  • User counts in reports now more closely reflect distinct guest users
  • Web channels: Replaced lightboxes with Slide Out support
  • Web channels: removed debug web console messages
  • Web channels: dynamic down message support

Contact us below to learn more and set up your own personal demo

Contact Us

Despite the current buzz, artificial intelligence (AI) isn’t magic. AI is just a collection of probabilities, but those probabilities feed on data and human input. Yes, humans play a critical role in AI. For example, the newly hyped GPT-3 model has 3 phases of human input. Alexa and Siri have teams of humans helping the AI grow “smarter.” In short, your AI can’t perform without the human AI experts. But what if you don’t have a team that lives and breathes this technology every day?

Introducing Oracle Digital Assistant Tune-Up

Your digital assistant is a digital worker. Just like your human workers, it needs an annual review, training, and feedback. IntraSee is now offering our annual Oracle Digital Assistant (ODA) Tune-Up service to give your digital worker that annual review. The ODA Tune-Up will engage our highly trained AI teams to review your current digital assistant approach and training data to eliminate misunderstandings and increase accuracy in your bot. The Tune-Up service will provide actionable feedback using our empirical methods and AI testing tools to show you exactly where your bot may need optimization.

Oracle Digital Assistants AI and Natural Language tuning service

IntraSee has been building and running Oracle Digital Assistants (ODA) since the product was released over three years ago. As one of the first companies to deploy these bots, we’ve learned quite a bit. To achieve success in your chatbot project, accuracy is job #1. We’ve spent the past three years figuring out just the right methods for maximizing bot accuracy on ODA. 

Our clients are consistently achieving greater than 90% accuracy from their NLP (natural language processing) as a result. For your machine learning application to achieve high accuracy, you need a strong model and well-tuned training data, and to get those, you need AI experts. At IntraSee we use a team of data scientists, AI architects, and computational linguists to produce our best-in-class accuracy results.
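As a rough illustration of what that empirical approach can look like (the utterances, intent labels, and stand-in classifier below are hypothetical, not our actual tooling), accuracy is typically measured by running a held-out set of labeled utterances through the bot’s NLP and comparing its predictions against the expected intents, intent by intent.

```python
# Hypothetical sketch of measuring intent-classification accuracy.
# A held-out set of labeled utterances is run through the bot's NLP and
# the predictions are compared against the expected intents, per intent.
from collections import defaultdict

test_set = [  # invented utterances and intent labels
    ("how many vacation days do I have", "PTO_BALANCE"),
    ("what's my time off balance", "PTO_BALANCE"),
    ("when is my next exam", "EXAM_SCHEDULE"),
    ("show me the course catalog", "COURSE_SEARCH"),
]

def classify(utterance: str) -> str:
    """Stand-in for the bot's NLP; a real test harness would call its API."""
    if "exam" in utterance:
        return "EXAM_SCHEDULE"
    if "course" in utterance or "catalog" in utterance:
        return "COURSE_SEARCH"
    return "PTO_BALANCE"

def per_intent_accuracy(pairs):
    """pairs: iterable of (expected_intent, predicted_intent)."""
    totals, correct = defaultdict(int), defaultdict(int)
    for expected, predicted in pairs:
        totals[expected] += 1
        correct[expected] += expected == predicted
    return {intent: correct[intent] / totals[intent] for intent in totals}

results = [(expected, classify(utterance)) for utterance, expected in test_set]
print(per_intent_accuracy(results))  # intents well below 90% need attention
```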

Schedule Your Bot “Check Up” Today

Users tell us the most common reason they won’t use a bot is that they don’t believe it can help. In other words, the bot didn’t understand their questions. With improved understanding, adoption will increase, greater service will be provided, and more ROI will be unlocked. Just as humans see a doctor each year for a check-up, your bot needs an accuracy “check-up” from the leading team of experts in this field.

For a limited time, we are offering an attractive introductory rate. To learn more, please contact us below.

Contact Us