We are thrilled to announce the release of Ida 23.03, the latest version of our digital assistant, bringing a host of new features, enhancements, and optimizations aimed at strengthening user engagement and streamlining administrative tasks. Building on our prior release (Ida 23.02), we've further refined the user experience and introduced functionality that adds a new dimension to interaction and content management within the Ida Console. Let's delve into the spotlight features of this release.
Optimized Alerts and Tasks User Experience: Experience a seamless flow of alerts and tasks with our optimized user interface for Ida Alerts, making it easier to stay updated and take action on critical items without the usual hassle.
New Ida Alerts API: Unleash the power of real-time alerts with our new API, allowing for more integrated and automated alert management.
New and Improved Rich Editor in Ida Console for FAQs: Crafting FAQ responses has never been easier with our enhanced rich editor, ensuring that your answers are both informative and engaging.
New Page to Review and Accept FAQ Suggestions: Manage FAQ suggestions effectively with a dedicated review page, streamlining the process of keeping your FAQ section updated and relevant.
New Support for Dynamic Entities: Elevate your interaction with dynamic content, allowing for a more personalized and contextual user engagement.
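To make the new Alerts API highlight above more concrete, here is a purely hypothetical sketch of how a client might assemble an alert payload. The endpoint URL, field names, and structure below are all assumptions for illustration; they are not documented in this release announcement.

```python
import json

# Hypothetical sketch only: the real Ida Alerts API endpoint, auth scheme,
# and field names are not documented here, so everything below is assumed.
IDA_ALERTS_ENDPOINT = "https://example.edu/ida/api/alerts"  # placeholder URL

def build_alert_payload(title, message, audience, due_date=None):
    """Assemble a JSON payload for creating an alert via a REST-style API."""
    payload = {
        "title": title,
        "message": message,
        "audience": audience,          # e.g. a group or role identifier
    }
    if due_date:
        payload["dueDate"] = due_date  # ISO 8601 date string
    return payload

payload = build_alert_payload(
    "FAFSA deadline approaching",
    "Submit your FAFSA by March 1 to be considered for aid.",
    audience="students",
    due_date="2024-03-01",
)
print(json.dumps(payload, indent=2))
```

An integration could then POST this payload to the alerts endpoint to trigger notifications programmatically rather than creating them by hand in the console.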
Full Feature List
Here’s a summarized list of all the features, enhancements, and fixes included in the Ida 23.03 release:
Improvements to greeting users by name with support for multiple systems of record
Certified latest Oracle Web SDK 23.08
Improved handling when student is not activated in any term
Optimized Alerts and Tasks User Experience
Suggestion frequency update for guest users
Optimized process for ingesting production utterances for rating
New utterance exclusion list for reports
New Ida Alerts API
New universal update phone skill
New quick links tile in the console
Data storage optimizations and archive KB article now available
New Help fallback option for users to suggest a question for Ida to learn
Thumbs rating responses are now configurable
New and improved rich editor now available in Ida console for FAQs
Resolved issue where Topic Accuracy KPI was sometimes blank
New page to review and accept FAQ suggestions
New support for Dynamic Entities
Improved UI for ratings while in HVL Mode
Auto Tests now automatically purge old data
Support for running smalltalk in a multi-skilled digital assistant
New source field for FAQs
Ability to make decisions based on which Web Channel is being used
Fixed Tracking Id field on skill exports
Optimized NLP thresholds
Define list of utterances to omit from reports and rating pages
Override-allowed questions now allow sub-orgs to opt out of the question
The Ida 23.03 release embodies our continuous effort to refine, innovate, and deliver a digital assistant that meets the growing demands of interactive, efficient user engagement. With each update, we move a step closer to making Ida an indispensable asset for your organizational needs. Stay tuned for more updates as we continue to roll out more features and improvements in the near future!
Guided by Cristian Durca and his forward-thinking team, Seneca collaborated with IntraSee’s Gideon Taylor division to create ‘Sam’—a top-tier enterprise chatbot. The results? Transformative benefits for the campus community.
259,000+ questions answered during last 12 months
93.99% accuracy: the average confidence that the AI has the right answer to the user's question
Almost 50% of questions answered outside of business hours
Seneca’s Chatbot Transformation: The Power of ‘Sam’
Ubiquitous Assistance: Whether it’s students, prospects, parents, or staff, Sam is omnipresent, addressing a vast array of queries round-the-clock.
Operational Efficiency: Not only does Sam answer questions outside standard business hours, but it also communicates in multiple languages and provides instant access to enterprise policies and data, with no synchronization lag between systems.
Centralized Knowledge: With Sam, the need for multiple bots is a thing of the past. All information flows from a unified Salesforce system, ensuring up-to-date content, reduced redundancy, and simpler maintenance.
“The collaboration with IntraSee proved indispensable for the project’s success. We take great pride in our current achievements and eagerly await what the future holds. Stay tuned for more exciting developments! Thank you IntraSee for your guidance and support.”
Cristian, Director of Information Solutions at Seneca
Why does someone use a digital assistant (aka AI, ChatGPT, chatbots, et al)?
Before you glide past that question, take a moment and really think about it: Why would your users/customers/team members use a chatbot?
The seemingly obvious answer: To get a quick and accurate answer to a question they have.
Would you be surprised to learn that many chatbot products aren’t focused on providing relevant, accurate answers to users’ questions?
According to IntraSee founder Andrew Bediz, most organizations introduce an artificial intelligence/chatbot platform expecting it to help users quickly find answers to their questions, only to experience poor usage rates and negative user feedback. “We often see organizations with a chatbot strategy of focusing on a single purpose (i.e., recruiting, financial aid, etc.) and they expect people will only ask questions relating to those pre-built topics,” Andrew shared. “But this approach isn’t accurate. Research data that IntraSee has collected finds that the most popular search topics on a chatbot max out around 20% of the total volume and the average topic sees only single digit percentages. Translation: If you implement a bot that has really narrowed its focus to the top question, you are only helping 20% of those asking questions, leaving 8 out of 10 frustrated. That’s a sure way to never get 80% of your audience back.”
Andrew often cautions prospective customers that “cheap is expensive” when it comes to implementing a digital assistant. “Without some effort and research, you’re not going to have a positive user experience,” Andrew said. “And if you don’t get good results, people won’t use your chatbot. If your organization isn’t committed to really investing in the project, you’re setting yourselves up for failure. Next thing you know, your three-year contract is up, and you discover that people don’t like your chatbot and that they’re not using it. They don’t think it’s accurate. They don’t think it’s relevant. They don’t think it covers enough of the questions they might be asking, or they might not think it’s personalized to them or have anything to do with their needs. That’s when cheap becomes very expensive.”
Another way chatbot projects become expensive is when a customer buys a solution and deploys it only with built-in questions and answers. At first this can seem like a great way to keep costs down if, for example, a customer buys a chatbot pre-loaded only with student financial aid Q & A. But what if a new student asks for help finding the library? In response to this concern, the vendor might counter with "We have a web crawler that will crawl your website and pull that data into your chatbot." Crawling a website falls far short of delivering a personalized experience. Keep in mind that the user expects a different experience interacting with a chatbot than they do with classic web search. When that experience is disappointing, they stop using the offering. We've seen organizations spend $200,000+ over a three-year contract with very little to show for their investment, and they are now considering starting from scratch. Cheap is expensive.
“If you don’t get good results, people won’t use your chatbot. If your organization isn’t committed to really investing in the project, you’re setting yourselves up for failure.”
Start Right, Start Now
Gideon Taylor CEO and Founder Paul Taylor often talks about how hard it can be to "catch up" on innovation. In Paul's words, "It's not as simple as thinking 'We can start whenever we want and quickly arrive at the same place as Organization X, who started innovating months or years before us.' Enterprise-grade chatbot technology is constantly learning. It is building a database of understanding about your organization and what your users are asking. You can't overcome that head-start without a lot of cost and sustained effort."
Andrew adds that asking, "What does 'cost' mean to me?" can make a material difference in the outcome of your digital assistant investment. "What's important is net cost. If I spend more on the bottom line but make more on the top line, then it's not necessarily more expensive. Gartner metrics show that when you use a bot, at minimum, every interaction could represent a savings opportunity of $18 (live support is $19 per interaction and a chatbot is $1 per interaction). Let's imagine the difference between a good chatbot and a bad one. A good one has more adoption, with more people using it to get answers instead of sending an email or making a phone call. Let's say this difference leads to 1,000 more interactions per month. By that measurement, you're saving almost $20K per month vs. the bad bot. Let's presume the bad bot costs $70K per year and the better solution costs twice that much. Based on this comparison, you are making up the cost difference in just over three months. The remaining months represent pure savings. It becomes a wiser economic decision to choose the better chatbot that is meeting more users' needs and is being used much more often."
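The back-of-envelope math in the quote above can be checked directly. All dollar figures come from the quote itself; the 1,000-interaction monthly uplift is the assumption Andrew poses in his example.

```python
# Figures quoted in the article (Gartner per-interaction costs).
live_cost = 19        # $ per live-support interaction
bot_cost = 1          # $ per chatbot interaction
savings_per_interaction = live_cost - bot_cost           # $18

# Assumption from the quoted example: the better bot drives
# 1,000 additional interactions per month.
extra_interactions_per_month = 1_000
monthly_savings = extra_interactions_per_month * savings_per_interaction

# Annual license costs from the quoted example.
bad_bot_annual = 70_000
good_bot_annual = 140_000              # "twice that much"
annual_cost_difference = good_bot_annual - bad_bot_annual

months_to_break_even = annual_cost_difference / monthly_savings
print(f"Monthly savings: ${monthly_savings:,}")        # $18,000
print(f"Break-even: {months_to_break_even:.1f} months")
```

The arithmetic lands at roughly four months to break even, after which the higher-priced bot's extra adoption is pure savings for the rest of the year.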
“Chatbot technology is constantly learning. It is building a database of understanding about your organization and what your users are asking. You can’t overcome that head-start without a lot of cost and sustained effort.”
Here are two important questions that can help you to avoid having a negative and expensive chatbot project:
Will my chatbot be accurate?
Will my chatbot be personalized?
“All chatbot vendors will tell you they provide these two experiences but have a hard time backing these up with objective measures and analytics,” Andrew says. “Customers don’t always understand that there is a difference in chatbots because that difference can be hard to see at first. On RFPs they’ll include features they want or that would be nice to have, such as ‘Does the bot have the ability to go to live chat?’ or ‘Can it be placed on any website?’ or ‘Can it use SMS?’ They then confirm if the vendor checks these and other boxes but miss the most important question – ‘Can your bot quickly and accurately answer user questions?’ If it can’t do this, all the other features don’t matter. It’s like you bought a fancy shiny car with nice rims and a great sound system but you can’t get it to drive faster than 25 miles per hour.”
Accuracy is Job #1
Achieving accuracy is a matter of algorithms and training models. It is not easy to evaluate in a controlled environment like a sales demo, and it is something a website or salesperson may gloss over when pitching the benefits of their chatbot platform.
At IntraSee, we use Oracle's Digital Assistant (ODA) PaaS service for machine learning driven natural language processing (NLP). The language model is seeded by Oracle, one of the largest enterprise data companies of the past forty years, which gives them access to massive amounts of enterprise-level data. Oracle has used this data to inform its language models so that misspellings, alternative word choices, and grammar mistakes are more easily tolerated. They also employ machine-learning artificial intelligence, which means the algorithms aren't looking for specific words, strings, or characters in a certain sequence. Instead, this type of artificial intelligence is designed to learn over time and can handle inputs it has never seen before, leading to remarkable results. You can ask ODA questions it has never seen and was never explicitly programmed to handle and still receive accurate responses. The AI technology Oracle uses is similar to what Tesla uses to tell the difference between a red light, a green light, a yield sign, or a line in the road.
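The difference between brittle keyword matching and learning-based NLP can be illustrated with a toy sketch. This is emphatically not Oracle's actual algorithm; it just uses a simple fuzzy-similarity score (Python's standard-library `difflib`) as a stand-in for the typo tolerance the paragraph describes, against a made-up two-entry FAQ.

```python
from difflib import SequenceMatcher

# Toy illustration only -- NOT ODA's actual NLP. It contrasts brittle
# exact-keyword matching with a similarity score that tolerates typos,
# which is (very loosely) the property described above.
faq = {
    "financial aid": "Visit the Financial Aid office in Building A.",
    "library hours": "The library is open 8am-10pm on weekdays.",
}

def exact_keyword_answer(question):
    """Return an answer only if a topic appears verbatim in the question."""
    for topic, answer in faq.items():
        if topic in question.lower():
            return answer
    return None

def fuzzy_answer(question, threshold=0.6):
    """Return the answer for the most similar topic, if similar enough."""
    best_topic, best_score = None, 0.0
    for topic in faq:
        score = SequenceMatcher(None, topic, question.lower()).ratio()
        if score > best_score:
            best_topic, best_score = topic, score
    return faq[best_topic] if best_score >= threshold else None

misspelled = "libary hours"               # typo: "libary"
print(exact_keyword_answer(misspelled))   # None -- keyword match fails
print(fuzzy_answer(misspelled))           # finds "library hours" despite typo
```

A real ML-driven NLP engine goes far beyond string similarity, generalizing to paraphrases and word choices it has never seen, but the contrast in failure modes is the same in spirit.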
“Can your bot quickly and accurately answer user questions? If it can’t do this, all the other features don’t matter. It’s like you bought a fancy shiny car with nice rims and a great sound system but you can’t get it to drive faster than 25 miles per hour.”
How’s Your Training Model?
Another frequent problem Andrew mentioned is the chatbot training model that some vendors employ. "Not all of these products use true machine-learning AI, and those that do often save money by training one model to generically serve all customers. The downside to this approach is that the model knows nothing about your unique environment, not to mention any bias a vendor might be introducing. It doesn't know your slang or acronyms. Up front it sounds like a good idea. 'Oh cool! We don't have to do any of this work.' But the result is that the bot has become less personalized to you. It doesn't understand you or your organization. Personalization in a chatbot is so important. When users encounter what they think is overly generic engagement, they just see it as spam. They don't see it as a valuable interaction and won't repeat it. Think of the value lost in that moment and then multiply that many times over each day. Once again, cheap is expensive."
So, how is IntraSee’s Ida enterprise chatbot providing a measurably different outcome to customers?
We use the top-tier backend technology: Oracle Digital Assistant or ODA.
Security is always at the top of our list of priorities.
Each customer has a unique training model for their language. Few vendors, if any, will give you that. This means the bot gets to know you and your organization, and that learning can make all the difference in user outcomes.
We know most customers don't have an AI architect, a data scientist, and a computational linguist on their team. That's not their day job or focus, nor should it be. AI architects can cost $200K or more in today's market, and they are hard to find because they are in high demand. We couple our tech with a managed service team that's always monitoring a customer's data, accuracy, and more. If we see a dip in accuracy because people are asking new questions about parking, for example, and we haven't trained the bot to handle those, we don't just say "good luck with that." Our work doesn't end when someone buys a license. We have a team of people constantly engaged, always making sure an organization is successful and always calling attention to pitfalls and roadblocks.
“Personalization in a chatbot is so important. When users encounter what they think is overly generic engagement, they just see it as spam. They don’t see it as a valuable interaction and won’t repeat it.”
Right Model, Right Results
Add all these elements together and you have an offering that is much deeper, much broader, and always learning. Accuracy is top-level. Although the initial cost may be at a premium, the value you get in return is orders of magnitude better. When you net it all out, you may even call it cheap! We think it's a good idea to pay a little more in exchange for far more value. If you choose the right investment, the return should be far greater than what you put in. The purchase price doesn't matter if you get a great return on it; it matters a lot if you lose what you invested.
New on-demand webinar. Get the latest on enterprise AI.
In an era driven by digital innovation, the current business landscape is witnessing a significant revolution with the emergence of A.I. chatbots. This presentation explores the current market for A.I. enterprise-class chatbots, focusing on their unprecedented potential to enhance the experiences of employees, managers, applicants, retirees and more.
Learn from this insightful new webinar as we explore how A.I. is revolutionizing businesses both large and small. Discover the untapped value chatbots hold for all users and how Oracle’s strategic partnership with Gideon Taylor unlocks this technology’s full potential.
We’ll discuss the current market landscape and the transformative influence of generative AI and Large Language Models.
We’ll also look at real-world client case studies that showcase the benefits institutions have already gained with A.I. chatbots, from streamlined administrative processes to personalized, secure user experiences.
Witness a demo of an Oracle-powered enterprise chatbot in action, seamlessly integrating with enterprise systems. Prepare to reimagine the future of HCM and embrace the boundless possibilities that A.I. chatbots bring to the enterprise!
Introducing Ida 23.02: Embracing the Future of Digital Assistance with Generative AI and Alert Features
We’re thrilled to unveil the newest version of our advanced digital assistant, Ida 23.02. With a series of transformative enhancements, crucial fixes, and forward-looking features, Ida 23.02 is poised to push the conversational UI to new levels. In this release, we’re particularly excited to highlight three key features that set a new benchmark in AI-assisted functionalities.
New Generative AI to Suggest FAQ Answers: This powerful feature is a game-changer in the realm of digital assistance. Our newly implemented generative AI technology takes Ida beyond just retrieving information—it predicts and suggests answers to your questions, significantly improving the efficiency of managing Ida.
New Generative AI Training Assistant: We understand the importance of training AI models efficiently, and our new generative AI training assistant is designed to make this process smoother. By assisting in the AI model training process, this feature enhances overall performance and results in more accurate, reliable predictions and responses while saving you tons of time.
New Ida Alerts Feature: Stay on top of important notifications and updates with our new Ida Alerts feature. This feature ensures your users are alerted to timely information and deadlines to drive your business processes.
Full Feature List
Here’s a summarized list of all the features, enhancements, and fixes included in the Ida 23.02 release:
Ability to disallow suborg overrides on certain questions
FAQ file importer now handles answer providers more completely
Fixed issues with click tracking while in federated mode
Fixed issues with suggestions snooze
Fixed performance issue with large lists
Fixed issue where summary provider produced blank summary
Fixed issue where thumbs up was being deleted when using the ODA Web UI [IUC]
Fixed issue with person name recognition
Improved configuration options around user identity determination
Improved efficiency of sorting question lists
Improved Help intent UX to match Related Intents UX
Improved utterance passing to livechat adapter
Improvements to ODA and Ida WebUI
Incremental builds are now federated mode aware
New feature to capture any User IDs denied access to chat
New filter for question type on ratings pages
New In-Chat Form seamless UX degradation based on security
New options for debugging cloud functions
Stability improvements to OpenAI integration
Update Ida support for date entity/values in new ODA versions
With these enhancements, we are reimagining what digital assistance can do. Join us on this exciting journey and explore the full potential of Ida 23.02 today! Contact us below to learn more and set up your own personal demo.