The next release of Ida, our digital assistant, will be available in July 2022. Clients can talk to their account teams about a deployment schedule that works for them.

22.02 in Summary

This release of Ida covers a few notable areas: new adapters, bot building efficiency, and various bug fixes and improvements. Let’s start with the adapters. We have added a new adapter for Kase ticketing and one for handing over to SnapEngage’s live agent chat. The Salesforce adapter now uses the more modern REST APIs and can surface Salesforce fields as a summary answer. Finally, some additions and fixes were made to the PeopleSoft Campus adapter.

Bot training and building has undergone major efficiency improvements, which can decrease bot build time by over 90%. New reports and analytics have been added, as well as a new Analytics Center: a one-stop-shop page that gives you a cockpit view of how your digital assistant is running. Finally, various bug fixes are also included in this release.


Release Notes

  • New 22.02 Training Videos
  • Improve security packaging for on-prem packages
  • Error code exceptions now included in chat
  • New Campus Intent: Tell me about History 101
  • New opt-in/out option for incremental bot rebuilds
  • Livechat adapter for SnapEngage
  • Breakout collision fix when running in DA
  • Organization Fixes
  • Improved remote call request error logging
  • Various Suggestions fixes
  • Various On-Prem Security Sync Improvements
  • Improvements to FAQ summary answers
  • New Kase Adapter
  • Updated archiving process
  • Additional Long Term Trend KPIs
  • Month-by-month conversation Location Report
  • Optimizations to bot training and building
  • Ida Suggestions Usage Report
  • Passing of sub-org to remote DSPs
  • Improved help and low-confidence dialogue text
  • PeopleSoft environment refresh guide
  • Make Suggestions configuration client accessible
  • Student immunization answer source fixes
  • Added Salesforce summary provider
  • Bugfix to Topic Accuracy KPI Tile
  • Streamlined sub-org answer overrides
  • Dynamic location entity and answer source

Contact us below to learn more and set up your own personal demo

Contact Us

Today’s machine learning-driven AI (Artificial Intelligence) is a huge technological jump from just five years ago. However, when it comes to having a successful digital assistant, you need equal parts art and science. While the science is achieving substantial accuracy scores, clients often ask IntraSee, “What else can we do to increase adoption?” The answer to that question lies in the art of the bot response. This post will cover a few tips we have found to maximize effectiveness, drive adoption, and ultimately deliver ROI.

Personality

While it may not seem like a big deal, a bot’s personality is important. A clever name that is easy to recall, paired with some witty responses, will leave a lasting impression in a way a bland bot won’t. You will see this very technique in consumer bots like Siri and Alexa.

The bot should never pretend to be a human; instead, it can make light of the fact that you are talking to a machine. It is also important to have a conversational style that is not overly robotic. This element of fun can bring a smile to a user’s face and keep them coming back next time.

How’s the weather?

I wouldn’t know, I live to work all day and answer your questions.

Let’s say someone asks the bot how much time off they have. A poorly designed, robotic response may be:

how much time off do I have?

Here is your leave balance…

Paid Time Off: 143 Hours

Sick Time Off: 13 Hours

Compare that to a more conversational style response:

how much time off do I have?

Let me look up your time off balance for you. Everyone needs a day off!

Paid Time Off: 143 Hours

Sick Time Off: 13 Hours

Personalize

All of us have had some bad experiences with a bot. Often poor AI training is at fault, but bad experiences also happen when you get the distinct feeling that the bot doesn’t know you. Personalization is a fantastic way to build trust with the user. Consider an example in Higher Education where Students, Faculty, and Staff are all using the bot. If the user asks, “where should I eat?”, would you be comfortable recommending a dorm’s dining room to a faculty member?

Knowing your user is key to adoption. This is a primary reason why it is important to integrate with the authentication and HCM/Student systems, as Ida does.

Nothing is a Yes/No Question

A common mistake in conversational design is to assume you know the question asked when constructing your answer. Natural Language Processing (NLP) engines can match hundreds or thousands of variations of questions and statements to a single answer. As such, don’t assume you know the form of the question that got the user to your response.

For example, let’s say you want your bot to respond to, “do you have my phone number on file?” You may construct a response such as:

do you have my phone number on file?

Yes, I can look that up for you. Here is what I found…

What if the user’s question was, “let’s update my phone number”? In that case the response would feel disconnected, wouldn’t it? What is the bot saying “yes” to? Consider a response with more global application, one which also repeats key words, such as:

do you have my phone number on file?

Let’s see what phone numbers I have for you. From here I can help you update your numbers as well.
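
To see why this matters in implementation terms, here is a minimal sketch in Python (hypothetical intent names and a toy keyword matcher; a real NLP engine scores phrasings statistically) of many differently shaped utterances resolving to one phrasing-agnostic response:

    # Hypothetical training data: many phrasings map to ONE intent.
    PHONE_INTENT_UTTERANCES = [
        "do you have my phone number on file?",    # yes/no form
        "let's update my phone number",            # imperative form
        "what phone numbers do you have for me?",  # wh-question form
        "change my cell number",                   # command form
    ]

    # One response serves every phrasing, so it cannot assume a yes/no question.
    PHONE_INTENT_RESPONSE = (
        "Let's see what phone numbers I have for you. "
        "From here I can help you update your numbers as well."
    )

    def respond(user_utterance: str) -> str:
        """Toy matcher; a real engine would score against all trained utterances."""
        text = user_utterance.lower()
        if "phone" in text or "number" in text:
            return PHONE_INTENT_RESPONSE
        return "I'm not sure I understood that one yet."

    # Two very differently shaped questions land on the same response.
    print(respond("do you have my phone number on file?"))
    print(respond("let's update my phone number"))

Because the response cannot know which phrasing triggered it, wording like “Yes, I can look that up” is risky, while the globally applicable wording works for all of them.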

Living in a 140-Character World

Technology everywhere is competing for the user’s attention, and people have day jobs or degrees to focus on. The reason they came to chat is that browsing or searching websites is inefficient and slow. Curate your responses with brevity in mind. Get right to the point, without requiring a lot of reading. You can always present a way to “Read more” or “Tell me more.” Start with the simple, succinct answer and allow users to opt in to the more verbose detail, as in the sketch below.
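
One simple way to structure this (a sketch of the pattern, not Ida’s actual answer format) is to store each answer with a short lead plus optional detail the user can opt into:

    # Hypothetical answer record: lead with the short version, and only
    # surface the longer detail when the user asks to "Tell me more."
    ANSWER = {
        "short": "You have 143 hours of Paid Time Off.",
        "more": (
            "You have 143 hours of Paid Time Off and 13 hours of Sick Time. "
            "Accrual rules, carryover limits, and request steps are below..."
        ),
    }

    def reply(answer: dict, wants_more: bool = False) -> str:
        if wants_more:
            return answer["more"]
        return answer["short"] + ' Say "Tell me more" for the details.'

    print(reply(ANSWER))                    # succinct by default
    print(reply(ANSWER, wants_more=True))   # verbose only on request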

NLP vs. Menus

With most bots you’ll tend to see one of two user experiences (UX): an NLP-driven UX or a Menu-driven UX. Bots present a menu-like experience by generating lists of links inside the chat. Menu styles (pictured below) do not scale like a wide-open NLP style, where a user can type anything they want into a message box. You can only show so many choices to a user, so the Menu approach quickly becomes problematic. Further, it diminishes the entire point of asking in your own words. Not to pick on the MLB, but you can quickly get a feel for the drawbacks by looking at the Ballpark Digital Assistant.

Menu-based Bot Example

Menu-style bots are often employed to make up for poor NLP capability. When a bot encourages you to click menu links rather than type freely, it is often because of NLP accuracy issues.

At IntraSee, we prefer a wide-open, type-anything-you-want user experience. This approach scales to thousands of use cases and the user benefits from the true power of AI. Menu styles often result in a user being confined to a small set of capabilities and never fully exploring all the bot has to offer.

Click-less Responses

One of the most frustrating user experiences is to ask a question only to be pointed elsewhere. Think about that feeling when you call for help and they say, “I need to transfer you to someone else, can you hold please?” Wouldn’t you have preferred to just get the answer right then, right there?

A click-less response is a response where the user doesn’t need to click: they get their answer directly and succinctly, personalized to them. Giving someone a link may be convenient for the bot developer, but it is not a great experience for the user. When you link users to where the real answer lives, they must click and scan an entire page to find what may be only a small snippet of information.

Channel and Accessibility Considerations

Be sure not to overlook the accessibility and portability of your bot’s responses. A bot can be one of the friendliest mediums for assistive devices: the experience is linear, chronological, and hyper-focused on one area of content at a time. This can quickly be ruined by the use of images, video, or other rich content. While those media can be made accessible, they create a noisier experience on an assistive device.

If you do have links in your bot response, be mindful of which words are linked. For accessibility reasons, the link should surround the most descriptive text. For example, never have a response that says, “to view your records, click here.” Instead the response should read, “You can view your records…”, with the descriptive words carrying the link.
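
As a quick illustration (HTML carried in Python strings; the /records URL is made up for the example), compare the two patterns:

    # Poor: "click here" means nothing when a screen reader announces
    # links out of context.
    bad_response = 'To view your records, <a href="/records">click here</a>.'

    # Better: the descriptive words carry the link, so it stands on its own.
    good_response = 'You can <a href="/records">view your records</a> online.'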

Bots don’t only talk to you on websites. You can have a conversation over Microsoft Teams, Slack, voice, or even SMS texting. How will a response with links, images, or videos work on all those channels? If your response needs channel-specific variations, that increases your implementation effort and takes you further away from a consistent experience on all channels. Keeping your responses in text/html maximizes reach and ease of use.

Conclusion

If understanding the human’s natural language is half the battle, then the other half is your conversational response design. With our platform, Ida, every response can be configured so you can curate the ultimate bot for your users with the personality you want. Ida is not one-size-fits-all; she can become who you need her to be. If you are interested in chatting more or would like to see a demo, you can contact us below.

Contact Us

The next release of Ida, our digital assistant, will be available in April 2022. Clients can talk to their account teams about a deployment schedule that works for them.

22.01 in Summary

There are two big, new features in this release of Ida. The first is called Ida Suggestions. Users of digital assistants tend to ask questions only when they have a problem. It is a very reactive pattern that is hard to break, and this behavioral trend can be a barrier to discovering new ways the digital assistant can help. We are focused on flipping this dynamic to a more proactive model where Ida routinely adds value to your users’ days. Ida Suggestions is a new, proactive feature which suggests ways in which Ida can help. Whether it is what is popular lately or what is seasonally relevant, Ida will give you that little nudge to solve your issue before you even know you have a question.

The second feature of note is a new Feedback Loop mode called High Value Mode. When High Value Mode is enabled, Ida will algorithmically target certain interactions where you can focus your ratings/annotations for maximum value to the machine learning. We expect this mode to provide 10x more value per hour spent rating, which will ultimately save our clients a ton of time while keeping accuracy very high.

The release also includes routine fixes, new reports, training materials and catalog updates.


Release Notes

  • PeopleSoft on-prem environment refresh guide
  • New 22.01 Training Videos
  • Breakout collision fix when running in DA
  • Improve security packaging for on-prem
  • Dynamic location entity and answer source
  • Improved remote call request error logging
  • Added “Who built you?” intent
  • Feedback loop High Value Mode
  • Improved error handling when no questions available
  • Friendlier admin previews for remote answers
  • Thumbs Results Report
  • Long Term Trend KPIs
  • Cloud-based, real-time thumbs satisfaction data collection
  • Update ChatUI to support embedding in ServiceNow
  • Add DA specific metadata fields to automated deployment
  • New live NLP data reports
  • Updated Convo Dashboard to use Convo Log Summary Table
  • Updated Convo Log reporting table
  • Added Question Type component for client use
  • Resolved an issue where phones/addresses weren’t using self-service display flag
  • Better handling of step-up authentication when user doesn’t exist in IUC
  • Added “incorrectly presented” to auto test output
  • Improved performance of chat locations report
  • Mobile MS Teams task module fixes
  • Ida Suggestions (what’s new, not tried, popular)
  • FBL simplified ignored outcome option
  • FBL filter by topic option
  • Report: Monthly Active Users (by Org)
  • Fixed an issue with an excessive margin on reporting pages
  • Support for groups in FAQ import file
  • Corrected FBL match calculation in Metrics Report
  • Added clarity to some intro text
  • Removed dependencies on IntraSee WebUX modules for address in-chat form
  • Added consistency to labels and naming
  • Ability to add an FAQ directly from FAQ Search page
  • Updated non-auth-to-auth handoff response

Contact us below to learn more and set up your own personal demo

Contact Us

Today we have a big announcement: IntraSee has joined the Gideon Taylor family. Both companies have been stalwarts in the Oracle ecosystem for more than 15 years. While IntraSee’s focus has been on the user’s experience in the enterprise, Gideon Taylor has been known for the automation of business processes. It was natural to join the two together. Our customers now benefit from the back end to the front end, with a focus on driving real ROI, whether they are on premises, in the cloud, or on SaaS.

My co-founder, Paul Isherwood, and I started IntraSee in 2005, and what a ride it has been, growing from a consulting company to a software company and ultimately a SaaS cloud company. We have successfully navigated major shifts in the enterprise software market, the financial crisis of 2007, the beginning of the cloud era, and most recently the pandemic. No matter what was thrown at us, we adapted to serve our clients. 2021 was no exception, with the sudden passing of Paul.

In this next chapter, IntraSee becomes a new division of Gideon Taylor, where we will continue to serve our existing clients and, with our digital assistant, Ida, carve out an exciting path for both companies. I will lead that division and look forward to a long partnership with Paul Taylor and his leadership team. You can read all about our announcement in the press release issued today.

I would like to take a moment to address all the important people who got IntraSee to this point.

To our customers:

Thank you for believing in IntraSee. It has been an absolute pleasure to help you improve your experiences for your employees, managers, students and faculty. We are only getting stronger from here with a broader cloud portfolio, the benefits of scale, and even greater investment in Ida, our digital assistant. We know many of you are planning major investments in the next ten years. We are excited for your future and to help get you there.

To our employees:

The IntraSee family is the reason we are here today. Each one of you, past and present, has contributed to our mission of bringing great usability to enterprise software. I owe you all a heartfelt thank you for your hard work and dedication. The support from the current team over the last year in particular has been more than I could have imagined. I, and our clients, have been lucky to work with you, and I look forward to continuing the IntraSee journey with you as my colleagues.

To Paul Isherwood:

I remember the first presentation I saw you give back at PeopleSoft. The entire presentation was built with dynamic HTML, and this was around 1999. When I asked you, “Why not use PowerPoint?” you simply responded, “Why would I use PowerPoint? This is so much cooler.” Throughout the 15+ years we were partners, you always helped us imagine something so much cooler. In your memory, I and the rest of the combined IntraSee/Gideon Taylor team are going to push this mission to the next level like only we know how.

Sincerely,
Andrew Bediz

If 2021 has taught us anything, it is that what sounds like a consensus online often is not. Our world is dominated by algorithms whose output has shown the ability to skew our realities. Bad actors have discovered they can influence algorithms, and they do so for financial gain or just a laugh. Artificial Intelligence (AI) can provide great value, but AI with bias and/or inaccuracy is something we must actively guard against. This post explores the traps related to user feedback and how overreliance on that dataset can result in poor outcomes for any AI, but especially for chatbots and digital assistants, which are your first line of support for your users.

For the purposes of this post, we will focus our examples on use cases we typically see our customers facing. Users, in this context, are the ones chatting with the bot and looking for support.

What is User Feedback?

User Feedback is a broad term meant to cover both direct and indirect feedback. Direct feedback is when users are asked for their opinion directly and they reply. You will see this in various forms. For example, the thumbs up and down icons are meant to collect user feedback. You may be asked, “Did this solve your issue?” or “How would you rate this experience?” Have you seen those buttons at a store’s exit with a smiley face, a sad face, and something in between? That is a form of direct user feedback.

customer satisfaction buttons

The other type of feedback is far more subtle and indirect. We can look at a user’s actions and infer some level of feedback from them. These patterns can also be called user cues. An example of such a cue is when the user gets an answer and responds, “you stink!” The implication is that the user is unhappy with the previous answer. Another cue can be the circumstances under which a user clicks a help button or even asks to speak to a live agent. All of these indicate something may have gone wrong.

The Feedback Challenge

There is no problem with asking for feedback. In general, it is a good practice. There are some challenges, however, so let’s explore those.

Interpreting User Intent

Interpreting the user’s intended meaning is no easy task. Let’s focus on a simple interaction to illustrate this point. With many help desk systems, upon completion of the experience the user will be asked: Did this solve your issue?

Let’s imagine a digital assistant experience…

How much PTO can I borrow?

All our policies can be found in the Policy Center.

The user gives a resounding “NO” to the follow-up question, “did this solve your issue?” The problem is, we don’t really know why it didn’t solve their issue. If we present them with a big, long survey trying to find out why…well, you know no one is spending time on that. Back to the point at hand: there could be all sorts of reasons for the “NO”. For example…

  • They are annoyed because the bot didn’t answer directly. It simply gave a link and it is now the user’s problem to find the answer.
  • The user may have found the policy on borrowing PTO, but disagreed with the policy itself, thereby not solving the issue at hand.
  • The user may be unhappy that they are getting an answer about policies in general, seemingly unrelated to the question, which was specifically about how much PTO can be borrowed.
  • The user is a bad actor and intentionally provides inaccurate feedback.

Experts say the key to effective user feedback is acting on it. However, the confusion around user intent puts you on shaky ground when trying to act.

Selection

The next problem with user feedback is that many studies suggest the data is not representative of the user community.

Anecdotal evidence from across the web suggests a typical response rate for an online survey is much lower than 10%. That means the vast majority of your customers (90%+) are not telling you what they think. You might be able to argue that away statistically, but in reality are you happy that so many of your customers don’t have a voice?

customerthermometer.com

We know user feedback tends to have a self-selecting effect. That is to say, the people who participate skew the data away from a true representation of the whole community. The most basic example of this is that unhappy people provide more feedback than happy people. This makes it very difficult to act on a dataset which lacks representation.

Intentional Manipulation

Famously, Microsoft released a bot powered by its AI on Twitter in 2016, a time when we didn’t fully understand the world of unintended consequences in AI. Without going into too much detail, let’s just say this experiment did not go well. “The more you chat with Tay,” said Microsoft, “the smarter it gets.” Have you heard this before?

It is a case where users figured out they could influence the AI, and they knowingly did so. We know humans are capable of this manipulation and, whatever their intentions, we need to actively guard against it. So how does one know the difference between this manipulation and genuine user feedback? If Facebook and Twitter haven’t been able to tell the difference, we should be cautious in thinking we can.

IntraSee’s Feedback Data

Across our customers, many have deployed quick feedback mechanisms like thumbs or star ratings. This feedback is non-interruptive, and the user is not forced to answer. For this type of asynchronous feedback, we are seeing a 3%-4% response rate.

We also collect feedback that is more synchronous: the user can ignore it, but it is not readily obvious that they can continue without providing feedback. This method is getting about a 40% response rate, +/- 7%. Clearly, more feedback is gathered with this method, but it can be annoying. To counter that potential frustration, no user is asked too often. There is a delicate balance between getting feedback and being bothersome, but we feel throttling is necessary here even though it reduces the statistical significance of the data.

For one customer, the asynchronous feedback opportunity (thumbs/stars) occurs 7.5 times as often. Doing the math, we get almost the same amount (+/- 5%) of feedback data from both models!
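
To make that arithmetic concrete, reading the figures above as rough inputs: every 7.5 asynchronous opportunities at a ~4% response rate yield about 0.3 ratings, while the one synchronous prompt shown over the same span, at a ~35% response rate, yields about 0.35. The two channels end up contributing nearly equal volumes of feedback data.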

Automating AI with User Feedback

We now understand that feedback, while valuable, can produce bad outcomes if you are not careful. It is hard to collect, it is often not representative, and its interpretation is rife with miscalculations. In the chatbot industry, there is a technique which takes user feedback data and feeds it into the AI model, but doesn’t that sound problematic when our confidence in this feedback is on such shaky ground? Remember how Microsoft said to just use it and it will get better?

Machine learning AI is the most powerful type of engine behind enterprise-grade digital assistants. That AI uses a model trained with data, just as a Tesla uses pictures of stop signs to understand when to stop. When we hear, “just use it and it will get better,” what is really happening is that the training data is improving, which should yield better outcomes. That is, of course, if the training data is of high quality.

How does training data improve? Two traditional ways: manually by a data scientist or automatically. How do you automatically update training data? You need to draw upon data sources, so why not use user feedback? For example, if a user clicks the thumbs down, we can assume the AI had a bad outcome, right?
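
In its simplest form, that automation might look something like the sketch below (Python, all names hypothetical, deliberately naive):

    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class Interaction:
        question: str              # what the user typed
        matched_intent: str        # the answer the NLP engine chose to show
        thumbs_up: Optional[bool]  # direct feedback; None for the ~96% who never rate

    def auto_label(interactions: List[Interaction]) -> List[Tuple[str, str]]:
        """Naively convert thumbs data into new training rows.

        Every thumbs-up becomes a "correct match" training example, reinforcing
        whatever answer happened to be shown, even if a better answer existed
        that the user never saw. Unrated and thumbs-down rows are simply dropped.
        """
        return [(i.question, i.matched_intent) for i in interactions if i.thumbs_up]

Note that nothing in this loop checks whether the matched intent was actually the best available answer.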

It sounds like a good idea, but it can be a trap! As previously discussed, we see this data collected in fewer than 4% of interactions. Imagine you have 1,000 questions in your bot and get 10,000 user questions in a month. At a 4% feedback rate, that is only about 400 ratings spread across 1,000 questions; if every question were asked an equal number of times, that would be less than one piece of feedback per question per month! How many months do you need to wait before the feedback has statistical significance? This effect is even more pronounced if the question is not a top-20 popular question.

It's a trap!

Now consider you wait 6 months to have enough feedback to act on it automatically. What has changed in 6 months? The pandemic has taught us that everything can change! By the time you have enough data, that same data may be stale or, worse, incorrect.

This math all assumes the feedback data is good and evenly representative, but as discussed above, we know it is not. Oh my, what a mess! We have limited data, it is overrepresented by the unhappy, and we are considering automatically amplifying their voices into the AI model?

Time for another practical example.

Do I need another vaccine?

Information about health and wellness can be found by contacting the Wellness Center at 800-555-5555.

This answer isn’t wrong, but there is a better answer which specifically covers booster shot requirements. The user doesn’t know that answer exists, so they logically click thumbs up or answer “yes” to the question, “did this answer your question?”

If we took this user feedback and automatically fed it into the AI, we would be telling the AI it was right to give this less-than-perfect answer. The system is then automatically reinforcing the wrong outcome. Now amplify this across thousands of interactions and what happens? The AI drowns out the more helpful answer about booster shots. The end result of this slippery slope is continual degradation in the quality of service the user receives.

What’s the Solution?

This is a nuanced problem we spend time thinking about so our customers don’t have to. One solution is not to abandon the human touch. The dirty little secret about Alexa and Siri is that they have thousands of people contributing to the AI by tagging real-life interactions. If Apple and Amazon still need the human touch in their AI, it is probably for good reason.

When teachers teach students, they are curating the experience. Teachers don’t simply ask students, “do you feel you got this test question correct?” They grade those tests based on their expertise. Asking students to be the grader is flawed.

While we cannot discuss all our tricks, at IntraSee we will be introducing new technology in 2022 aimed directly at this challenge. The lesson here is that while automating the data that feeds an AI model can be powerful, it is a power that comes with great responsibility. Ask your AI vendors how they solve this challenge. For our customers, these challenges are our problem at IntraSee, not yours. Rest assured, we are all over them so you don’t have to spend a minute on them 😀

Contact Us