The next release of Ida, our digital assistant, is now available. Clients can talk to their account teams about a deployment schedule that works for them.

22.03 in Summary

Ida 22.03 focuses on continued automation, improvements to non-authenticated chat analytics, and UX improvements to translated conversations, along with the routine bug fixes and other minor enhancements. Our goal is to fully automate the training and deployment of Ida bots, and 22.03 moves us a couple of steps forward.

Additionally, this release introduces beta support for the Oracle SDK web channel UI. This web channel offers a different look and feel as well as some new features. The first is the ability to voice chat with Ida over the web: Ida can understand your voice and respond with its own. The new channel also supports persistent chat logs as you move from page to page, along with auto-suggested questions that improve accuracy beyond its already stellar performance.

Release Notes

  • Entity event handler support
  • Streamlined UI Kit JavaScript library used in Ida web channel
  • Beta support for ODA SDK ChatUI, including voice support, type-ahead, saved conversation summaries and more
  • Improved bot training automation (beta)
  • Improved top topic chart in IUC console
  • Sub-org rating suggestions now filter out peer orgs
  • Feedback loop now lists the rating history of users who have rated/approved/rejected
  • Fixed an issue with top questions report in Federated Mode
  • FAQ upload supports additional answer providers
  • Improved same intent dialog while in DA context
  • Improved conversation tracking for guest users
  • Support for fuzzy entity matching
  • Various federated mode fixes
  • Consolidated conversation creation logic
  • Maintenance mode for content/data providers (take systems offline with graceful bot responses)
  • Resolved an issue with duplicate rows in feedback loop
  • New dedicated config pages for integration adapters
  • Improved FBL display for utterances sent to help
  • Fixed an issue where interactions were not showing in Feedback Loop
  • Corrected behavior with feedback loop with legacy data
  • Export/import now available for all training data
  • Safeguards preventing FAQ use of certain question types (Frustration, Greeting, Help)
  • Pre-translation text warning users that the response is auto-translated
  • Fixed an issue with Thumbs ratings on non-content based FAQs
  • Additional safeguards to training data in production
  • Stopped enforcing “list max check” for FAQ Question Inputs
  • UI support for FAQ question overrides

Contact us below to learn more and set up your own personal demo.

Contact Us

Gartner recently released their Hype Cycle for 2022. The Hype Cycle is a model describing the five phases of emerging technologies. Chatbots fell into Phase 3, the Trough of Disillusionment, this year. While that title sounds scary, it is a pivotal moment for an emerging technology, where the best will be separated from the rest. In this post, we will review the chatbot market: where it was, where it is now, and what is coming next.

Gartner 2022 Hype Cycle

Where we were

Around five years ago, Natural Language Processing (NLP) took a big leap. A breakthrough in artificial intelligence (AI) and machine learning techniques dramatically improved our ability to have machines understand human language. Suddenly, chatbots, as they were dubbed, were all the rage. This irrational exuberance, perhaps fueled by Alexa, created a flood of demand. Rest assured, supply always finds a way to meet demand. At this point, everyone and their sister built a chatbot.

Many of these bots were rushed. Some used fake AI, and their NLP accuracy was abysmal to match. The experience felt more like AOL Keywords than a machine that truly understood the human. Bots were released covering many singular use cases, from FAQs, to recruiting, to CRM systems, and even enterprise applications like HCM. The investment ranges were so wide you could find a bot for $10k a year or build one on IBM Watson for $1M.

This rush just served to flood the market with confusion. Customers began looking at price as the most distinguishing factor. Suddenly, we were all apples and, boy, were there a lot of apples. How did we like them apples? Well, not so much… 

Where we are now

With the market flooded with poor quality and confusing messages, customers have become disillusioned. Was this all hype? Will it ever work? This feels just like IVR (Interactive Voice Response)!

To add to the confusion, sales teams were proclaiming AI magic. It just learns! It is automatic! If your success is less than automatic, you are going to be disappointed. You may even write the technology off (though we would caution you to look again).

That AI breakthrough we spoke about earlier? Well, it happened when we learned (pun intended) to mimic how the human brain works. The human brain divides everything into buckets like apples, oranges, bananas and so on. When we see a fruit we have never seen before, we still know it is a fruit. But this only works with lots of learning. To learn, we read, we attend class, we discuss, we experience. Bots learn by consuming data and instructions from data scientists.
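
To make the "buckets" idea concrete, here is a toy sketch of learning intents from example phrases: a bag-of-words classifier that files a new utterance into the closest known bucket. Everything here (intent names, training phrases) is hypothetical, and real NLP engines are far more sophisticated than this:

```python
from collections import Counter
import math
import re

# Hypothetical training data: a few example phrases per intent bucket.
TRAINING = {
    "time_off_balance": ["how much time off do I have", "show my leave balance"],
    "update_phone":     ["update my phone number", "change my phone on file"],
}

def vectorize(text):
    """Turn an utterance into a bag-of-words vector (word -> count)."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# "Training" here is just summing word counts into one centroid per bucket.
CENTROIDS = {}
for intent, examples in TRAINING.items():
    total = Counter()
    for e in examples:
        total.update(vectorize(e))
    CENTROIDS[intent] = total

def classify(utterance):
    """Assign a new utterance to the most similar intent bucket."""
    vec = vectorize(utterance)
    return max(CENTROIDS, key=lambda i: cosine(vec, CENTROIDS[i]))

print(classify("what is my time off balance?"))  # → time_off_balance
```

A phrase never seen verbatim still lands in the right bucket because it shares words with the training examples, which is the same reason a never-before-seen fruit still registers as "fruit."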

So, while the cheap bots fell way short of our expectations, the enterprise bots seemed complicated to implement and manage. Organizations threw people without AI experience into AI projects. 

Reality then punched us in the face. And so the disillusionment phase has arrived. Let’s dive into where today’s chatbot projects have failed. If we can learn from this phase, we can be better for it. We can focus with more depth on what makes a magical conversational experience.

Why chatbot projects fail

Many chatbot projects failed because customers were caught at one of two extremes, both driven by the desire to minimize cost. On one side, the poor-quality bot which required little effort yet never delivered on the salesperson’s “It’s magic!” promise. On the other extreme, customers put in significant effort to build their own bot, only to realize that building and running AI at scale is not a skillset they have historically possessed.

False No-Effort Narratives

  • “No-Effort” implementations mean you are getting a generic product; too many customers have been sold this and failed. Your users see right through a generic bot, which really is no better than classic IVR.
  • While there is such a thing as AI which learns on its own, it is a major liability for your organization. Facebook and Microsoft have failed spectacularly in this regard.
  • Some customers attempt pilots to prove out the technology, but pilots rarely work. As an organization you have to be all-in on making automation work. Further, these are solutions that evolve over time and a short pilot is no time at all.

Crawlers and Links

  • Real AI personalized to your users and organization is hard. Some projects attempt to shortcut this hard work by having the bot “crawl” an existing web site for content. Remember that salesperson? It’s automatic!
  • Crawlers are how search engines work, and if your name isn’t Google, those searches have been shown to perform poorly and offer no personalization.
  • Our focus groups have shown that users have no faith in any search engine not named Google. Disguising search as chat will ensure a lack of adoption.
  • Users don’t want links to a web site. They want the direct, 140-character answer. Just linking them elsewhere doesn’t actually serve their needs or save them time.

Lack of Personalization

  • Per Gartner CIO surveys, digital transformation is nothing without personalization.
  • Bots which do not know you, where you do not log in, just end up sounding like that IVR.
  • Every question can have dozens of answers solely based on the attributes of the user. Most bots today fail to personalize, and the results show.

No Breadth

  • Many of the early bots simply focused on a small sliver of questions, like recruiting or financial aid. As we discuss in Higher Ed Chatbot Usage: Perception vs. Reality, your users don’t care about your reasons for only tackling a sliver.
  • Our data shows users will ask every question that comes to mind, including questions completely unrelated to the page they’re on. 
  • If your bot cannot handle at least 250 intents, you stand little chance of success.

Comfort of Live Chat

  • Live-chatting features sound like a good idea. I can always talk to a human! Keep in mind, you need to staff/train/manage that human.
  • Live Chat is not the killer feature organizations should be requiring. AI accuracy is. Live chat just covers up the failure to do job #1.
  • Even more, vendors know Live Chat is a parachute, albeit one that significantly eats into ROI. Knowing they have a parachute, they rest easy and don’t solve the hard problems.

Let’s jump into a use case related to this topic: recently published survey data on recruiting bots. This is just a peek into one particular vertical and what customers thought of their chatbots. Survey on Chatbot Effectiveness

You will notice some 71% thought their bots were average or below. If the parent thinks their child is average, then you are probably dealing with a D student. It is no wonder why organizations are disillusioned, but it doesn’t have to be this way.

According to the report, “Two in five people (42%) avoid chatbots when making a complex inquiry and 15% lack confidence in using technology to contact organisations”. And according to the Institute, if technologies like chatbots were “well designed and implemented”, most customers would be happy to use them.

This one report highlights what we all have experienced. We want chatbots to help us, but many of them are just falling short. Our expectations have shifted downward. However, this is the moment you can surprise and delight your users if you really focus on designing the experience.

…if technologies like chatbots were “well designed and implemented”, most customers would be happy to use them.

What comes next: Enlightenment

We started with irrational exuberance and now find ourselves in the Age of Disillusionment. Disillusionment just means that customers are no longer buying into the hype without a critical eye. They are now in prove-it mode. The success of the early adopters who chose wisely will begin to show. Those who chose poorly will cautiously wade back in, but will be much better informed. Bad solutions and methodologies will fade away and the market will gain clarity. We will be on a steady incline that is more predictable and sustainable.

We welcome this phase, where the market will gain clarity and users will have their needs met. It is often lost that chatbots are really automation projects. You are replacing a function normally performed by a human. That means your best chance of success is to focus on outperforming the human. You have to be more accurate, more consistent, more available, and more accessible, including speaking the user’s language.

When you pick a bot, remember that cheap is expensive when it comes to automation. If your bot does not offer any advantage when compared to speaking to a human, then user behavior won’t change. If behavior doesn’t change, you won’t be able to reduce spend and then you are simply paying for both the bot and the human. 

…cheap is expensive when it comes to automation

Good results take effort. Good results require personalization and NLP accuracy. The bot is mimicking a human, and just like a human, it needs to learn and grow. You are feeding the bot, and you should treat it as an ongoing evolution, not a one-time project.

If you are ready to ramp up the bots, here are the top strategies we recommend you follow on your project:

  • Pick a platform/product that you can influence and that will learn about your organization. Be sure it is based on Machine Learning AI.
  • Commit to a three-year initial Agile cycle. Implement, monitor, learn and evolve in increments no longer than three months. Learn from your users!
  • Prioritize breadth and personalization. This is the key to user adoption. 
  • Create a funnel to maximize ROI. Drive users to the automated path first and let the bot pursue escalation paths such as live chat when needed. Do this after the initial learning period of six months.
  • Make the bot omnipresent. The bot should meet users where they are, not just on one website. That chat icon should be everywhere.

If you have any questions or want to see a demo of Ida, our own bot, please reach out below. As an industry, we are approaching the beginning of enlightenment; we believe our clients are already there. We hope that if you have experienced a failed chatbot project, you will try again with fresh eyes. If you need any help or just want to talk things through, just drop us an email! 😊

Contact Us

Whenever you are in a conversation about chatbots and digital assistants in higher ed, without fail the topic of Financial Aid comes up. Just as a good deep-dish pizza conjures up thoughts of Chicago, Financial Aid and chatbots are often linked. The prevailing wisdom is that Financial Aid questions are the most highly demanded amongst students. What if I told you the prevailing wisdom was wrong?

It makes perfect sense why we assume Financial Aid questions are ripe for bots when you consider that a) the questions are seasonal, with volume peaks that are hard to manage, b) Financial Aid is a complicated topic that provokes questions, and c) the answers to these questions are common across all schools and easily automated.

These explanations just didn’t feel right to us, so we were curious to test this thesis. What does the actual data say regarding the demand for Financial Aid answers? We dove into our data to find the real story. We think you will be surprised by how far perception is from reality.

Financial Aid

Ida can categorize questions into topics. For the purpose of answering our question about the popularity of Financial Aid questions, we analyzed the occurrences of any financial question, be it aid, account balances, fees, and so on. We narrowed in on a particular client who serves a wide range of questions across all the common topics a student may need help with. It wouldn’t be accurate to look at a client who only deployed their bot to Financial Aid pages or Admissions pages. We call that selection bias 😉.

We want to understand the totality of the student experience when the bot can help them with all their questions. Additionally, this user base has been exposed to the bot for at least two academic years, so the adoption curve isn’t introducing its own bias.

Immediately we noticed an expected pattern in the demand curve (pictured below). The curve’s peaks hit right when you would expect them: when the money comes! Outside of those two times a year, students hardly ask about financials and financial aid at all.

Financial Questions as a Percentage of Total

Perhaps the more surprising finding is that even during the peaks, the percent of total volume is quite low—around 2%! But wait, isn’t the whole reason to implement a bot because students had so many Financial Aid questions? Maybe not that many students get financial aid, so we took a look at that. Some 98% of undergrads get some form of aid, so it is relevant to virtually all users of the chat.

Now we know students are asking other questions at least 49 times more often (other topics account for at least 98% of volume, and 98 ÷ 2 = 49). Let’s dig in and see what they are actually doing during a recent full academic year…

Popular Trends

Using Ida’s AI Categorization and Analytics, we can paint some broad strokes around what these same users are interested in. Over the same twelve-month period, we can see the distribution as pictured below.

Percent of Question by Topic
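
For the curious, the aggregation behind a chart like this is simple to sketch. Here is a minimal Python example, assuming each question has already been labeled with a topic; the function name and sample data are hypothetical, not Ida's actual output:

```python
from collections import Counter

def topic_percentages(categorized_questions):
    """Given (question, topic) pairs, return each topic's share of
    total volume as a percentage, rounded to one decimal place."""
    counts = Counter(topic for _, topic in categorized_questions)
    total = sum(counts.values())
    return {topic: round(100 * n / total, 1) for topic, n in counts.items()}

# Hypothetical sample of already-categorized questions
sample = [
    ("How do I declare a major?", "Academics"),
    ("When is add/drop?", "Academics"),
    ("What are the dining hours?", "Student Life"),
    ("When does my aid disburse?", "Financial"),
]

print(topic_percentages(sample))  # → {'Academics': 50.0, 'Student Life': 25.0, 'Financial': 25.0}
```

Run over a full academic year of conversations, this kind of roll-up is what exposes the gap between perception (Financial Aid dominates) and reality (Academics does).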

Let’s start with the winner and still champion: Academics! Academics are questions about things such as programs, policies, GPAs and getting your degree—things not specific to a particular course. These are questions you would ask at the Registrar’s or Advisor’s Office. Maybe we forgot that the whole reason students are in school is to get a degree (at least at this 4-year institution!). No wonder it is by far the most popular topic.

Moving down the list, we have Student Life and Residence Life breaking the 10% barrier. These are matters that affect a student’s day-to-day life, so it’s easy to see why they are popular. This popularity lasts all year and not just seasonally like Financial Aid.

My Information and Health and Wellbeing are next up. Health and Wellbeing is obvious, having just come out of a pandemic and dealing with ever-changing policies. Do I need a booster? How long is quarantine now? Has Monkeypox been detected on campus?

My Information covers a student’s personal records such as name, address, emails, phones and so on.

The big irony here is that Financial questions are just about as popular as studying abroad (International and Travel), and they both come in at the bottom of the list.

In Conclusion

At IntraSee and Gideon Taylor, we prefer a data-driven approach. So, while Financial Aid sounds like a logical focus, the data tells us we need to be broad. We need to be a one-stop shop for all sorts of questions across different topics if we want to maximize our service to the student. This post also highlights the importance of being agile. When you launch a bot, pay close attention to how it is used and get ready to adjust quickly. Instead of predicting where the bot should go next, let the bot tell you where the next need is. If you want to talk more on this topic or see a demo, you can contact us below.

Contact Us

The next release of Ida, our digital assistant, will be available in July 2022. Clients can talk to their account teams about a deployment schedule that works for them.

22.02 in Summary

This release of Ida covers a few notable areas: new adapters, bot building efficiency, and various bug fixes and improvements. Let’s start with the adapters. We have added or made major improvements in this area, including an adapter for Kase ticketing and one for handing over to SnapEngage’s live agent chat. The Salesforce adapter now uses the more modern REST APIs and can serve Salesforce fields as a summary answer. Finally, some additions and fixes were made to the PeopleSoft Campus adapter.

Bot training and building has undergone major efficiency improvements, which can decrease bot build time by over 90%. New reports and analytics have been added, as well as a new Analytics Center. The Analytics Center is a one-stop-shop page that gives you a cockpit view of how your digital assistant is running. Finally, various bug fixes are also included in this release.

Release Notes

  • New 22.02 Training Videos
  • Improved security packaging for on-prem packages
  • Error code exceptions now included in chat
  • New Campus Intent: Tell me about History 101
  • New opt-in/out option for incremental bot rebuilds
  • Livechat adapter for SnapEngage
  • Breakout collision fix when running in DA
  • Organization Fixes
  • Improved remote call request error logging
  • Various Suggestions fixes
  • Various On-Prem Security Sync Improvements
  • Improvements to FAQ summary answers
  • New Kase Adapter
  • Updated archiving process
  • Additional Long Term Trend KPIs
  • Month by month conversation Location Report
  • Optimizations to bot training and building
  • Ida Suggestions Usage Report
  • Passing of sub-org to remote DSPs
  • Improved help and low-confidence dialogue text
  • PeopleSoft environment refresh guide
  • Make Suggestions configuration client accessible
  • Student immunization answer source fixes
  • Added Salesforce summary provider
  • Bugfix to Topic Accuracy KPI Tile
  • Streamlined sub org answer overrides
  • Dynamic location entity and answer source

Contact us below to learn more and set up your own personal demo.

Contact Us

Today’s machine learning-driven AI (Artificial Intelligence) is a huge technological jump from just five years ago. However, when it comes to having a successful digital assistant, you need equal parts art and science. While the science is achieving substantial accuracy scores, clients often ask IntraSee, “What else can we do to increase adoption?” The answer to that question lies in the art of the bot response. This post will cover a few tips we have found to maximize effectiveness, drive adoption, and ultimately deliver ROI.


While it may not seem like a big deal, a bot’s personality is important. A clever name that is easy to recall with some witty responses will leave a lasting impression in a way a bland bot won’t. You will see this very technique with consumer bots like Siri or Alexa.

The bot should never pretend to be a human; instead, it can make light of the fact that you are talking to a machine. Further, it is important to have a conversational style that is not overly robotic. This element of fun can bring a smile to a user’s face and keep them coming back next time.

How’s the weather?

I wouldn’t know, I live to work all day and answer your questions.

Let’s say someone asks the bot how much time off they have. A poorly designed, robotic response may be:

how much time off do I have?

Here is your leave balance…

Paid Time Off: 143 Hours

Sick Time Off: 13 Hours

Compare that to a more conversational style response:

how much time off do I have?

Let me look up your time off balance for you. Everyone needs a day off!

Paid Time Off: 143 Hours

Sick Time Off: 13 Hours


All of us have had some bad experiences with a bot. Often poor AI training is at fault, but those bad experiences also happen when you get a distinct feeling that the bot doesn’t know you. Personalization is a fantastic way to build trust with the user. Consider an example in Higher Education where Students, Faculty, and Staff are all using the bot. If the user asks, “Where should I eat?”, would you be comfortable recommending a dorm’s dining room to a faculty member?

Knowing your user is key to adoption. This is a primary reason why it is important to integrate with the authentication and HCM/Student systems, as Ida does.

Nothing is a Yes/No Question

A common mistake in conversational design is to assume you know the question asked when constructing your answer. Natural Language Processing (NLP) engines can match hundreds or thousands of variations of questions and statements to a single answer. As such, don’t assume you know the form of the question that got the user to your response.

For example, let’s say you want your bot to respond to, “do you have my phone number on file?” You may construct a response such as:

do you have my phone number on file?

Yes, I can look that up for you. Here is what I found…

What if the user’s question was, “let’s update my phone number”? Well, in that case the response would feel disconnected, wouldn’t it? What is the bot saying “yes” to? Consider a response with more global application, which also repeats key words, such as:

do you have my phone number on file?

Let’s see what phone numbers I have for you. From here I can help you update your numbers as well.
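
The principle above can be sketched in code. This is a hypothetical illustration, not Ida's actual API: many phrasings resolve to a single intent, so the one response attached to that intent must be worded to fit all of them.

```python
# Hypothetical phrasings that an NLP engine might map to one "phone" intent.
PHONE_INTENT_UTTERANCES = [
    "do you have my phone number on file?",
    "let's update my phone number",
    "what phone numbers do you have for me?",
]

def respond_phone_intent(user_numbers):
    """One globally-worded response: it reads naturally whether the user
    asked to view their numbers or to update them."""
    listing = ", ".join(user_numbers)
    return (
        "Let's see what phone numbers I have for you: "
        f"{listing}. From here I can help you update your numbers as well."
    )

for utterance in PHONE_INTENT_UTTERANCES:
    # Same intent, same response, regardless of phrasing.
    print(respond_phone_intent(["555-0100"]))
```

Because the response never says "yes" to a question it cannot see, no phrasing leaves the user feeling the bot answered something else.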

Living in a 140-Character World

Technology everywhere is competing for the user’s attention; not to mention that people have day jobs or degrees they are focused on. The reason they came to chat is that browsing or searching web sites is inefficient and slow. Curate your responses with brevity in mind. Get right to the point and do it without requiring a lot of reading. You can always present a way to “Read more” or “Tell me more.” Start with the simple, succinct answer and allow users to opt in for the more verbose detail.

NLP vs. Menus

With most bots you’ll tend to see one of two user experiences (UX): an NLP-driven UX and a menu-driven UX. Bots present a menu-like experience by generating lists of links inside the chat. Menu styles (pictured below) do not scale like a wide-open NLP style, where a user can type anything they want into a message box. You can only show so many choices to a user, so the menu approach quickly becomes problematic. Further, it diminishes the entire point of asking in your own words. Not to pick on the MLB, but you can quickly get a feel for the drawbacks when looking at the Ballpark Digital Assistant.

Menu-based Bot Example

Menu-style bots are often employed to make up for poor NLP capability. When the bot is encouraging you to click menu links vs. allowing free-form typing, it is often because of NLP accuracy issues.

At IntraSee, we prefer a wide-open, type-anything-you-want user experience. This approach scales to thousands of use cases and the user benefits from the true power of AI. Menu styles often result in a user being confined to a small set of capabilities and never fully exploring all the bot has to offer.

Click-less Responses

One of the most frustrating user experiences is to ask a question only to be pointed elsewhere. Think about that feeling when you call for help and they say, “I need to transfer you to someone else, can you hold please?” Wouldn’t you have preferred to just get the answer right then, right there?

A click-less response is a response where the user doesn’t need to click: they get their answer directly, succinctly, and personalized to them. Giving someone a link may be convenient for the bot developer, but it is not a great experience for the user. When you link users to a page instead of giving the answer itself, they must click through and scan an entire page to find what may be only a small snippet of the information they are really looking for.

Channel and Accessibility Considerations

Be sure not to overlook the accessibility and portability of your bot’s responses. A bot can be one of the friendliest mediums for assistive devices. The experience is linear, chronological, and hyper-focused on one area of content at a time. This can be quickly ruined with the use of images, video or other rich content. While those mediums can be made accessible, they create a noisier experience on an assistive device.

If you do have links in your bot response, be mindful of which words are linked. For accessibility reasons, the link should surround the most descriptive text. For example, never have a response that says, “to view your records, click here.” Instead, the response should read, “You can view your records…” with “your records” as the linked text.
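
This guideline is also easy to check automatically. Below is a small, hypothetical lint sketch (not part of Ida) that flags vague link text in an HTML response:

```python
import re

# Link texts that carry no meaning for assistive-technology users
# who navigate by jumping from link to link.
VAGUE_LINK_TEXT = {"click here", "here", "read more", "more", "link", "this page"}

def vague_links(html_response):
    """Return the link texts in a response that should be rewritten."""
    texts = re.findall(r"<a\b[^>]*>(.*?)</a>", html_response,
                       flags=re.IGNORECASE | re.DOTALL)
    return [t for t in texts if t.strip().lower() in VAGUE_LINK_TEXT]

bad  = 'To view your records, <a href="/records">click here</a>.'
good = 'You can view <a href="/records">your records</a> online.'

print(vague_links(bad))   # flags "click here"
print(vague_links(good))  # nothing to flag
```

A check like this could run over a response catalog before publishing, catching the “click here” pattern before a screen-reader user ever hears it.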

Bots don’t only talk to you on web sites. You can have a conversation over Microsoft Teams, Slack, Voice or even SMS Texting. How will a response with links, images or videos work on all those channels? If your response needs channel-specific variations, that will increase your implementation effort and take you further away from a consistent experience on all channels. Keeping your responses in text/html maximizes reach and ease of use.


If understanding the human’s natural language is half the battle, then the other half is your conversational response design. With our platform, Ida, every response can be configured so you can curate the ultimate bot for your users with the personality you want. Ida is not one-size-fits-all; she can become who you need her to be. If you are interested in chatting more or would like to see a demo, you can contact us below.

Contact Us