Statistical analysis revolutionized the game of baseball, to the extent that every major league team now employs a sizable analytics staff that drives almost every decision the organization makes. The days of a “good eye” and a “gut feel” being the deciding factors are long over. Now, it’s all about the numbers.

And the parallels to the world of digital assistants are remarkably close. Successful implementation of a digital assistant solution entails applying advanced statistical techniques to both measure and improve performance. The premise being:

“You can’t manage what you can’t measure”

– Peter Drucker

So, let’s start with the concept, and then examine the details behind the ways that the worlds of baseball and digital assistants collide.

1.   Moneyball

The lesson from Moneyball was that smart organizations could compete with the likes of the Yankees if they used their limited financial resources in ways that were much more efficient. If they could generate a hundred runs a year by spending $1M, while the Yankees were spending $10M, then they would level the playing field. Doing more with less was possible if organizations could change their traditional ways of thinking. 

And so it is true in the world of digital assistants. If somebody offered you something ten times better than what you already were paying for, and it happened to also be twenty times cheaper (at least), wouldn’t it make sense to switch to it? 

Human vs. Ida support performance
Figure 1: The new reality

This is the new world we live in. Humans are fantastic at doing certain types of things, and woefully inefficient at doing others. And digital assistants just happen to be great at what humans are poor at. Humans can’t infinitely scale, they can’t remember thousands of key facts, and they can’t update hundreds of records in a few seconds. Humans forget things, are not always fully motivated, and no matter how much you invest in them, humans will eventually leave you.

Digital assistants are the new Moneyball. If you choose the right one, and pay heed to how you implement and grow your new digital worker, it will allow your organization to provide a better service at a tiny fraction of the cost you are paying now. 

Moneyball cartoon: digital assistants are better and cheaper
Figure 2: Moneyball for the Enterprise & Campus

2.   What is a good accuracy score?

It has been said that hitting a ball in major league baseball is the hardest thing to do in all of sports. Yes, hitting a gently tossed baseball in your back yard is something almost anyone can do. But hitting a 92 mph four-seam fastball from Clayton Kershaw? Well, that is something very few people can do. 

In the world of baseball, making contact with the ball is considered job #1 of the batter. In the world of digital assistants, being able to accurately respond to a human request is likewise considered job #1. 

In baseball, a batting average of over .300 is considered excellent, over .350 is elite, and anything close to .400 is other-worldly. With digital assistants, over .800 is excellent, .850 is elite, and .900 is other-worldly. 

Note: there are caveats to this that we will address in subsequent points.

The following chart summarizes extensive testing (2,500 questions) of the major consumer-facing digital assistants, with voice recognition removed as a factor so that it was purely a test of knowledge matching. Even more interesting, the test measured not only how accurate each assistant was when it attempted an answer, but also how often it attempted to answer at all.

Comparing the results below, you can see that some digital assistants attempt to answer (swing the bat) far more than others, and that accuracy (making contact with the ball) varies widely too. Google and Ida both hit in the 80s. Cortana attempts to answer just over 80% of the questions, but gets them right only slightly more than 50% of the time. Meanwhile, Siri swings at only 40% of the pitches, and even then makes contact just 70% of the time, which is extremely poor.

Accuracy rates for Ida vs. consumer virtual assistants
Figure 3: Accuracy comparisons

Please note: the scores for Ida are results from production environments across multiple customers and are an accumulated average. Also, Ida is being asked questions that are specific to a client’s organization, and often to individual employees, managers, students and advisors, so the degree of complexity is much higher.

3.   Batting Cage Averages

There’s a reason that published batting averages only include stats captured during actual games. All players look great in the batting cage, where the degree of difficulty is much lower and, generally, the player knows exactly which kind of pitch is coming.

Ted Williams, in 1941, was the last player to hit over .400 for a season. Almost any player can hit over .400 in a batting cage. This is why, when you see published stats for digital assistant accuracy, it’s important to know whether those stats came from actual usage in a production environment, and weren’t just the result of testing in a QA environment with a bunch of teed-up utterances that didn’t truly test the ability to match accurately.

As an FYI, we run over 10,000 test utterances against Ida any time we make a change. In QA, Ida scores over 97% accuracy; in production, in client environments, Ida scores around 85%, and that’s the number we publish.

Kershaw pitching vs. pitching machine
Figure 4: Hitting against Clayton Kershaw is a lot more difficult than a pitching machine.

4.   The Curve Ball

While it’s obvious why Clayton Kershaw is harder to hit than a pitching machine, it may not be so obvious why utterances from real humans are so much more difficult for a digital assistant to handle than test utterances in a QA environment. So, let’s illustrate with some examples:

Ida Dialogue: someone asking about paycheck
Figure 5: Typical training/testing utterance

Now, let’s see an example that a human actually typed:

Ida Dialogue: someone asking about paycheck
Figure 6: Sample of an utterance by a human in a production environment

As you can see, as much as vendors try to emulate real utterances in their DEV and QA environments, what actual people say often comes out of left field and can really fool a digital assistant that lacks the technological maturity to filter out the essence of what is being communicated. Complex utterances are really the equivalent of the curve ball in baseball (or splitter, slider, etc.).

5.   Swinging the Bat

So, exactly how does a digital assistant know when to try to answer a question (swing the bat) and when to claim ignorance (lay off the ball)? For many, it’s a simple matter of confidence levels. When an utterance is passed to an NLP engine, what comes back is a list of possible matches, each with a confidence level attached. For a basic digital assistant, it’s then just a case of selecting the one with the highest ranking and presenting that to the human. Sometimes, as an act of disambiguation, the digital assistant will instead present a list of the candidates whose confidence levels are closely grouped.

For really advanced digital assistants like Ida that use nested NLP techniques, much more complex algorithms are used to determine what to present and how to present it. But, ultimately, everything is a combination of either single or multiple confidence levels that may or may not be passed into even more complex algorithms to further establish what the human is asking the robot. 
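
To make those mechanics concrete, here is a minimal sketch of confidence-based selection. The thresholds, intent names and return structure are illustrative assumptions for the sketch, not Ida’s actual settings or algorithm:

```python
ANSWER_THRESHOLD = 0.70      # below this, lay off the ball and fall back
DISAMBIGUATION_BAND = 0.05   # candidates this close to the top score are "too close to call"

def choose_response(candidates):
    """candidates: list of (intent, confidence) tuples returned by an NLP engine."""
    if not candidates:
        return {"action": "fallback"}

    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    top_intent, top_score = ranked[0]

    # Not confident in anything: don't swing at a pitch a foot outside the plate.
    if top_score < ANSWER_THRESHOLD:
        return {"action": "fallback"}

    # Several intents bunched near the top score: ask the human to disambiguate.
    close = [intent for intent, score in ranked if top_score - score <= DISAMBIGUATION_BAND]
    if len(close) > 1:
        return {"action": "disambiguate", "options": close}

    # A clear winner: swing the bat.
    return {"action": "answer", "intent": top_intent, "confidence": top_score}

print(choose_response([("payroll.next_paycheck", 0.91), ("payroll.view_w2", 0.62)]))
# -> {'action': 'answer', 'intent': 'payroll.next_paycheck', 'confidence': 0.91}
```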

A really good digital assistant doesn’t just have a high accuracy rating, it also has a high rate of responding to questions (swinging the bat). A digital assistant that only replies to really obvious requests will score high on accuracy, but low on satisfaction, and low on its ability to truly solve problems. 

6.   The Pinch Hitter

As we roll into 2021, a subject that will keep coming up at all organizations is the concept of the digital assistant “concierge”. Given the inevitable plethora of bots being implemented at many organizations, typically on different technology platforms, the question that will be constant in 2021 is:

“Can’t we just have one digital assistant that everyone interacts with? It’s too complicated for our people to know which one to go to.” And the simple answer is yes.

Organizations have begun to realize that one digital assistant should be the “face” to the organization, and that it is the job of the “face” to handle integration with all the other bots in the organization. Even those on a different technology stack. This is called a “concierge” solution, where the digital assistant can reach out to different bots at runtime to get the answers to questions it knows nothing about. Kind of like how the concierge in a hotel operates. 

Technically, and to keep with our baseball theme, “concierging” entails swapping different bots in and out of the lineup based on the question the digital assistant is faced with, just like bringing in your lefty to face the right-handed pitcher. So, for example, if the digital assistant’s key strength is HR-based requests and the human asks a finance question, the digital assistant will reach out to the bot, or skill, that can best handle it, and act as an intermediary, relaying the responses back to the human.
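
As a rough illustration of the routing involved, here is a minimal sketch of a concierge that delegates each request to the bot best suited to handle it. The adapter class, domain labels and handlers are illustrative stand-ins, not Ida’s actual concierge interface:

```python
class BotAdapter:
    """Wraps a single back-end bot (e.g. an HR skill, or a finance bot on another stack)."""
    def __init__(self, name, domains, handler):
        self.name = name
        self.domains = set(domains)   # the topics this bot is good at
        self.handler = handler        # callable that actually produces the answer

    def handle(self, utterance):
        return self.handler(utterance)


class Concierge:
    """The single 'face' the human talks to; it swaps bots in and out of the lineup."""
    def __init__(self, bots, classify_domain):
        self.bots = bots
        self.classify_domain = classify_domain  # e.g. an NLP call returning "hr", "finance", ...

    def ask(self, utterance):
        domain = self.classify_domain(utterance)
        for bot in self.bots:
            if domain in bot.domains:
                # Pinch hitter: delegate to the specialist and relay its answer back.
                return bot.handle(utterance)
        return "I don't know the answer to that, but here are the topics I can help with..."


# Usage with stand-in handlers and a toy domain classifier:
hr_bot = BotAdapter("hr", ["hr", "payroll"], lambda u: "Your next paycheck is on the 15th.")
finance_bot = BotAdapter("finance", ["finance"], lambda u: "Starting an expense report for you...")
concierge = Concierge([hr_bot, finance_bot],
                      classify_domain=lambda u: "finance" if "expense" in u.lower() else "hr")

print(concierge.ask("How do I file an expense report?"))   # handled by the finance bot
```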

Ida’s underlying technology stack is based on Oracle technology, and so it is easy for Ida to concierge with any skill built on the Oracle Digital Assistant technology stack. Plus, because of the advanced nature of the stack and the middleware Ida uses to integrate with the stack, it is also possible to plug other completely different technology-based bots into the solution too. Like Microsoft LUIS or IBM Watson. 

7.   Laying Off the Ball

Of course, no digital assistant should be attempting to answer every question or request that it gets. If someone wants to know how tall Tom Brady is, that’s probably not a question it should be trying to respond to. But what it shouldn’t be doing is just responding with a flat “I don’t know the answer to that”. In baseball, if the ball is pitched a foot outside the plate, the hitter knows better than to swing; chasing pitches like that is how baseball players, and digital assistants, make fools of themselves.

The best way for a digital assistant to lay off the ball is for it to politely say, “I don’t know the answer to that. But here are the topics I do know a lot about. You can browse these topics, or I can pass you to a live agent if you need more help”. 

8.   Slugging Percentage

For most of this article we have focused on contact with the ball as being the key metric for judging a digital assistant. Was the human utterance correctly matched to what the digital assistant was capable of responding to? Which is why batting average is used as a direct parallel. In reality there is another dimension to all of this that is best described with the comparison to baseball slugging percentages. 

In baseball not all hits have the same value. One hit may get the player to first base, while another may be a home run. And, as everyone knows, hitting a home run is massively valuable, so slugging percentage is used as a means to measure the power of a hitter. If you combine a hitter’s ability to get on base with the power they generate with each hit, you get a stat called OPS. And if your OPS is greater than 1.000, you are a superstar.

Digital assistants are exactly the same. Just being accurate (getting on base) isn’t sufficient. It’s the quality of the response that really counts (slugging percentage). 

So, what exactly do we mean by that? Let’s see an example to illustrate the point:

An example of just getting to first base:

Ida Dialogue: simply link to other website
Figure 7: Barely getting on first base

An example of the home run:

Ida Dialogue: complex transfer
Figure 8: A home run hit out of the park

As you can see, the first example provides a link to a web page, where all you can really do is initiate a request for someone to get this done for you. The second example clarifies any outstanding questions, completes the task, and triggers any appropriate workflow. This is called a home run, and it speaks to the deep integration capabilities of the digital assistant.

As is very clear in the examples above, the first example is not much better than an improved means of navigating crude FAQ web pages. Whereas the second example shows the value a superstar digital assistant can bring to your organization. 

9.   Statistics, statistics, statistics

The world of baseball is measured with every statistic you could ever imagine. And, these days, all organizations use advanced statistics to make pretty much every decision. No human, no matter how good their “eye” is, can process 250,000 data points and make an informed decision without some kind of analytics engine to provide guidance. The same is true in the world of digital assistants. Despite almost every salesperson in the AI world telling their prospects that AI “just learns”, that’s simply not true, nor is it advisable. Once a digital assistant is introduced to your organization, every decision you make in terms of growing its knowledge and improving its capabilities needs to be driven from hard data.

Yes, aspects of AI are a black box, but any black box can be measured, and by precise measurement you can predict behavior, and also alter it when needed. 

At IntraSee we ask Ida over 10,000 questions every time we add more knowledge to the corpus, plus examine a quarter of a million data points. And, because this is impossible to do by hand, we automate the entire process. This allows us to ensure Ida gets smarter each week, and that there is no regression. Without all this data it would be impossible to do that.
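
For illustration, here is a minimal sketch of the kind of automated regression check this implies. The corpus file format, column names and the ask callable are assumptions for the sketch, not IntraSee’s actual tooling:

```python
import csv

def load_corpus(path):
    """Load test utterances and the intent each one is expected to match."""
    with open(path, newline="", encoding="utf-8") as f:
        return [(row["utterance"], row["expected_intent"]) for row in csv.DictReader(f)]

def run_regression(ask, corpus, baseline_accuracy):
    """Run every utterance through the assistant and compare against the last release."""
    misses = []
    for utterance, expected in corpus:
        predicted = ask(utterance)          # e.g. returns the top-ranked intent name
        if predicted != expected:
            misses.append((utterance, expected, predicted))

    accuracy = 1 - len(misses) / len(corpus)
    return {"accuracy": accuracy, "misses": misses, "regressed": accuracy < baseline_accuracy}

# Usage: results = run_regression(ida.ask, load_corpus("test_utterances.csv"), baseline_accuracy=0.97)
# A "regressed" result blocks the release until the new training data has been reviewed.
```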

The orchestration of supervised training and measurement is the key to continuous healthy growth. Like humans, digital assistants start life being great at some things, and not so great at others. And it’s important for all organizations to understand exactly what those strengths and weaknesses are. And, just like humans, this information can be used to further train the digital assistant, and also further enhance the quality of the responses. 

With an array of statistics that are both high level for your executives, and detailed enough for your business analysts, you will have the tools to effectively manage your digital assistant. 

10.   Player Development

Most people assume that the use of statistics is confined to trading, drafting, and game strategy. But the most advanced organizations also use statistics to drive player development. It’s one thing to draft or trade for a great prospect, but unless they are coached correctly, you’ll never get the best out of them. Sometimes it will be a case of tiny fractions of adjustment to the length or angle of a swing that makes the difference between a good player and a great player. 

Similarly, digital assistants are not a “set and forget” solution. How you develop your digital assistant with continuous learning is what turns a good digital assistant into a great one. Supervised machine learning via automation, and rigorous automated testing, is the key to ensuring that your digital assistant gets smarter every single week.

In many ways you have to view your “digital worker” in the same light as when you hire a human worker. Performance appraisals, continuous feedback, and the setting of goals are just as important to your digital assistant. And, in some ways, more so. However, unlike your human hire, your digital worker can continue to get smarter and more knowledgeable each week, with no plateau. It will work 24/7 for you, and never call in sick. Plus, it will never leave you for a different organization, taking its knowledge with it. In short, it’s an investment that keeps on paying back. 

If you would like more information, or would like to see a demo, please contact us below.

Contact Us

It was late July 2019 when Loyola University Chicago reached out to us about launching a digital assistant by Labor Day, just six short weeks away. Loyola had established a five-year plan to advance the Loyola Digital Experience (LDE) strategy in part by leveraging artificial intelligence (AI). A digital assistant would be a perfect fit: digitizing previously analog services, like student support, by deploying AI. Loyola, with their long tradition in basketball, would call that an “and 1”!

The digital assistant would be piloted during the fall semester and eventually rolled out to 16,000+ undergraduate students. They named it LUie, and it would provide hundreds of answers to common questions while leveraging information sourced from bookstores, Google Maps, and PeopleSoft Campus. This post will take you through the pilot, a sample of use cases, the data collected, and next steps for Loyola.

LUie is a clever play on Loyola University’s initials, LU, giving their digital assistant a personality while paying homage to its parent, the university.

The end result of this journey was an award-winning project, receiving an Innovator’s Award from Oracle. The accuracy of LUie has even surpassed Alexa’s, and on October 7, 2020 you can see LUie for yourself at Reconnect during the session “Succeed with Chatbots in PeopleSoft”.

LUie dialogue in action: grades and buildings

LUie helping a student look up what matters most…their grades!

The Pilot

In a mere six weeks, LUie was configured with a mix of Ida’s pre-built campus answers and skills, plus Loyola’s own questions. It was then launched to students who were just arriving on campus. The only way to meet this timeline was to leverage the Ida platform and the IntraSee implementation methodology. Three departments took part in the pilot: the IT service desk, the bursar’s office and advising. Those departments would identify the most common questions fielded, which would then initiate automated testing and NLP learning sessions. Equipped with years of knowledge, LUie was then able to service hundreds of different undergraduate student and advisor requests. 

The pilot focused on a fully authenticated digital assistant because the point of the pilot was to prove LUie could handle the sophisticated features contained in authenticated campus systems like PeopleSoft. LUie would look up secure enterprise data and perform transactions on behalf of the student and advisor, so it needed to be fully authenticated and aware of permissions and roles. Upon login, LUie provided personalized responses applicable to the person’s role at the university. The pilot started with 50 users, but gradually built to around 1,000 users by the end of the semester. The goals were to show LUie in action with real users, prove the integration of enterprise information was successful, and collect a ton of usage data to prove its usefulness.

Pilot Results

During the pilot, LUie hit an accuracy rate of 86%. That is much better than most humans, with the added benefit of being available 24 hours a day and responding almost instantly. LUie never even had to take a lunch. Interestingly, the top categories of user questions were those asking about the student’s own information and academic data (44% of all questions asked). This showed that students didn’t want a basic chatbot or a link to go somewhere else. They wanted a digital assistant with personalized responses right within the conversation flow, returned in an instant.

During the pilot, responses were provided by LUie at an accuracy rate of 86%.

Another interesting observation was that 28% of all questions asked during the pilot were on subjects LUie was never trained to answer. An example of one of these was, “how do I get my broken dorm window fixed?” AI is much like a young student in that it needs to be taught to understand a topic. Unlike a student, however, LUie only needs to be trained once: it never forgets, it maintains its accuracy, and it never leaves for greener pastures. The types of questions asked were so broad that it was clear students expected their digital assistant to do it all. Even if a user was on the Bursar’s page when they launched LUie, it didn’t mean they kept their questions limited to student financials. This is an important fact to consider when launching a digital assistant vs. a chatbot.

On occasion, LUie asked users for feedback about how it performed. 91% of that feedback was positive. It’s hard to imagine having a 91% satisfaction rate on any support experience, but LUie’s speed, availability and accuracy made for happy users.

Full Launch and COVID

In the spring, LUie launched to all undergraduates and a portion of advisors, for a total of over 16,000 users. During the full rollout, the COVID pandemic hit, and we quickly realized how key a digital assistant would be during this time. There has never been a moment in recent history with so many questions from students, faculty and staff, and the answers to those questions seem to change daily. Speed and accuracy are king in this environment. There is no room for false information, and LUie excels at just this task.

LUie took in almost a hundred new questions from Ida’s library concerning institutional policy related to COVID, covering topics such as symptoms, living on campus, essential employee status, Zoom tech support, drop deadline changes, grading basis, building access and much more.

LUie promotional sign with sample questions

Sign used to promote LUie around campus

As with any good AI, Loyola now monitors how LUie is doing and provides it with weekly feedback (just as a teacher would for a student). That feedback automatically evolves LUie’s understanding of the campus such that it is getting better at its job every week.

Not Just Students

There is a lot of focus among chatbot providers on the student, but LUie went beyond students to help advisors too. After all, advisors are key to student success, the ultimate goal. If we make the advisor’s life easier and save them time, they can serve the students better. LUie’s benefit to advisors can be illustrated by examining one innovative use case at Loyola.

Students are limited to taking 18 credit hours a semester. Loyola doesn’t want them to take on too much and impact the student’s success. However, there are certainly cases where a student can handle taking more, or they simply need to do so in order to graduate in four years. Traditionally, the student would approach their advisor and the advisor had to:

  1. Look up their GPA to ensure they were around a 3.0 or higher and then…
  2. Update two different pages in the student system including about 10 different fields. 

This process could easily take 15-20 mins, and it was easy for the advisor to miss a step, or a field, which would create further problems.

LUie took a 15-20 minute advising process and automated it down to under 30 seconds

LUie now automates this whole process. An advisor simply asks LUie, “can you approve John Doe’s overloaded schedule?” LUie then automatically knows to look up the student’s GPA and ask the advisor for approval to proceed. Once approved, LUie automates the filling in of those 10 fields across two pages. The whole dialogue takes less than 30 seconds! All that time goes back to the advisor to spend on strategic work.
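
To give a feel for what is being automated, here is a hypothetical sketch of that flow. The helper names (get_student_gpa, update_overload_fields, trigger_workflow) and the confirmation step are illustrative stand-ins, not LUie’s or PeopleSoft’s actual APIs:

```python
GPA_THRESHOLD = 3.0  # the "around a 3.0 or higher" rule of thumb mentioned above

def approve_overload(student_id, advisor, campus_api):
    """Automate the two-page, ~10-field overload update after an eligibility check."""
    gpa = campus_api.get_student_gpa(student_id)

    # Step 1: the GPA check the advisor used to perform manually.
    if gpa < GPA_THRESHOLD:
        return f"The student's GPA is {gpa:.2f}, below {GPA_THRESHOLD}. Do you still want to proceed?"

    # Step 2: confirm with the advisor before touching the student record.
    if not advisor.confirm(f"GPA is {gpa:.2f}. Approve the overload for {student_id}?"):
        return "Okay, no changes were made."

    # Step 3: fill in the fields across both pages and kick off any downstream workflow.
    campus_api.update_overload_fields(student_id, approved_by=advisor.advisor_id)
    campus_api.trigger_workflow("overload_approved", student_id=student_id)
    return "Done! The overload has been approved and the student record updated."
```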

Next Steps

LUie will not stand still. Different departments at Loyola see the potential and a line is starting to form to get LUie serving those departmental needs. LUie has a developing roadmap, with the following next phases being planned:

  1. Non-authenticated LUie: the ability for users to ask LUie for help and not have to sign in. As long as LUie’s answers are not sensitive or secured, it will provide the help. This includes parents, prospects and community members who can now use LUie. As of September 2020, this feature is now live!
  2. Graduate Students: the audience will expand to include graduate students, who will have their own questions and even graduate-specific answers to current questions.
  3. Employees: the HR department wants to serve employee specific questions, including integration into HR applications and COVID related staff questions.

Conclusion

Industry surveys of Higher Education CIOs have highlighted the two top priorities in recent years: digital transformation and artificial intelligence. LUie has helped Loyola address both of these high priority areas while improving student service and helping to navigate a pandemic. 

LUie pilot statistics, 86% accuracy, 91% positive feedback, 24/7 and fast response times.

See LUie for yourself this October at the Reconnect conference on Wednesday, October 7 at 5pm ET. Just add Session ID 102210 to your schedule. If you can’t make Reconnect, just contact us below and we can set up a personal demo and show you how digital assistants can help your students, faculty and staff.

Contact Us

Every industrial revolution has been defined by increased efficiency and reduced costs. The new digital revolution we are embarking upon is no different. Things that took days to do can now be done in seconds, and things that used to cost hundreds of dollars can now be accomplished by spending less than one dollar.

Conversational AI is cool, but that’s not why it will change the world. It will change the world because it will be better and cheaper than many of the things we pay humans to do today.

In this blog we will focus on the impact of digital assistants in the world of human resources (HR), and how they will change the way organizations service requests and questions from employees and managers, reducing organizational costs while improving the level of service. We will break down the two areas that should see large reductions in operating costs: the HR help desk and HR staffing levels.

What you will see is that even the most conservative approach to saving costs with a digital assistant will realize a 10%-30% reduction in help desk and HR costs in one year. And that can be doubled in two years. Plus, you’ll be providing better service to your employees and managers too!

As Larry Ellison pointed out last year at Oracle OpenWorld, it’s not the software that is the most expensive item; it’s the cost of all the people who have to deal with the ramifications of running the software.

1.   HR Help Desk Costs

It has been said that help desks are the cost of (a lack of) quality. Scattered, and often misleading, information and complex processes inevitably force employees to reach out to live agents to help them solve their problems, answer their questions, or complete a task. Help desks are, often, the cost organizations pay for failures elsewhere in their internal systems. 

So, let’s break down the staffing costs of a help desk in order to drive to an expected cost saving:

The average number of service agents per 1,000 seats ranges from 5.4 in the healthcare industry to 21.9 in the financial services industry. This is the metric that defines the staffing levels of an organization’s help desk. 10 per 1,000 would be a conservative average across all industries, and so we will use that for the model in this exercise.

This means that for an organization with around 20,000 employees, the number of agents is around 20. In North America the average salary for a service desk analyst is $41,000. Multiply that by 20 and you get $820,000 per year. But that in itself is not the complete picture.

The ratio of agents to total service desk headcount is a measure of managerial efficiency. The average for this metric worldwide is about 78%. What this means is that 78% of service desk personnel are in direct, customer facing service roles. The remaining 22% are supervisors, team leads, trainers, schedulers, QA/QC personnel, etc. And those people are even more expensive. This takes the headcount up to 25.

The average salary for a service desk supervisor is $61,000, and the average for a service desk manager is $75,000, which means those extra 5 people push the staffing costs up by at least $305,000. That drives the total salary cost of staffing the service desk for an organization of 20,000 up to $1,125,000. And when you factor in utilities, technology and facility expenses, the number rises to over $1,325,000.

And one final statistic to keep in mind: while the average overall employee turnover rate for all industries is 15%, inbound customer service centers have an average turnover rate of 30-45%. It should come as no surprise that service center turnover is at least double what you’d see in other businesses.

Based on age, the differences are stark: workers age 20-24 stay in the job usually just 1.1 years, while workers 25-34 stay 2.7 years on average. 

And the key metric here is that it costs on average around $12,000 to replace agents that leave. Why? The costs of turnover include the following:

  • Recruiting
  • Hiring time (HR time, interview time)
  • Training, including materials and time
  • Low-productivity time when employees first start out
  • Supervisory time
  • Overtime (remaining staff may have to cover extra shifts)

So, going back to our original metric of 20 service desk agents, if 40% leave each year, that equals 8 annual replacements at a total turnover cost of 8*$12,000=$96,000. 

So, as a grand total, an organization of 20,000 employees has to pay around $1,421,000 a year to staff its service desk.

In terms of how cost per ticket is calculated (a key metric), this also depends on the number of tickets closed per agent per year. Again, this varies a lot by industry.

Help desk tickets per agent, per month by industry

Figure 1: Tickets closed per agent per month

The average number of tickets closed per month per agent is around 120 cross-industry, or 1,440 per year. This means that with 20 agents the expected number of cases closed (not always successfully) is 28,800.

This means that the average cost per service ticket is around the $49 mark ($1,421,000 / 28,800) if you take into account a broader range of costs than just service agent salaries.  So, while generally published average costs per ticket are estimated to be around $19-$20 per ticket, the true cost is much higher, but with massive variance based on industry. 
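
Pulling those figures together, here is a minimal sketch of the baseline cost model using the article’s illustrative averages. The $200,000 overhead line is inferred from the gap between the $1,125,000 and $1,325,000 totals above:

```python
agents = 20                      # front-line service desk agents
agent_salary = 41_000            # average North American service desk analyst salary
support_staff = 5                # supervisors, leads, trainers, QA (the other ~22% of headcount)
support_salary = 61_000          # conservative: supervisor-level pay (managers cost more)
overhead = 200_000               # utilities, technology and facility expenses (inferred, see above)
turnover_rate = 0.40             # 40% of agents leave each year
replacement_cost = 12_000        # recruiting, hiring, training and ramp-up per replacement

salaries = agents * agent_salary + support_staff * support_salary     # 1,125,000
turnover = round(agents * turnover_rate) * replacement_cost           # 8 * 12,000 = 96,000
total_annual_cost = salaries + overhead + turnover                    # 1,421,000

tickets_closed = agents * 120 * 12                                    # ~120 tickets/agent/month = 28,800
cost_per_ticket = total_annual_cost / tickets_closed                  # ~$49

print(f"Total annual cost: ${total_annual_cost:,}  Cost per ticket: ${cost_per_ticket:,.2f}")
```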

The good news is that the actual logistics of achieving ROI are therefore pretty straightforward. Instead of hiring 40% new staff every year due to attrition, just have the digital assistant pick up the slack and do not hire any new staff. This immediately saves your organization $96,000 in turnover/onboarding costs, and allows you to drop the salary costs by around $450,000 (40% of a $1,125,000 payroll).

It also allows you to reduce other costs associated with your help desk. Utilities, technology costs, and facility space (you can downsize based on the reduced headcount). 

For a digital assistant, the average cost per ticket is less than $1, which means that if you replaced 40% of your help desk calls with digital assistant calls, the resulting digital assistant costs would be less than $11,000. Factor in a reduction in headcount and other expenses, plus a hiring freeze, and you would see an overall reduction in costs from $1,421,000 to $806,000 in just one year (see diagram below). And even greater savings after two years.
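
The year-one arithmetic can be reproduced the same way. The 40% reduction applied to the overhead line below is an assumption made to reconcile with the $806,000 figure:

```python
baseline_cost = 1_421_000

salary_savings = 0.40 * 1_125_000        # 40% of payroll not re-hired after attrition = 450,000
turnover_savings = 96_000                # no replacement hiring means no onboarding costs
overhead_savings = 0.40 * 200_000        # downsized facilities, utilities, technology (assumed) = 80,000
digital_assistant_cost = 0.40 * 28_800 * 0.95   # ~40% of tickets at under $1 each ≈ 11,000

year_one_cost = (baseline_cost
                 - salary_savings
                 - turnover_savings
                 - overhead_savings
                 + digital_assistant_cost)

print(f"Year one help desk cost: ${year_one_cost:,.0f}")   # ≈ $806,000
```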

Help desk cost infographic

Figure 2: HR Help Desk ROI using a Digital Assistant

Also, and just as importantly, the quality and accuracy of the digital assistant will continue to increase each subsequent year and will not plateau (as it does with humans). This is due to two factors:

  • Digital assistants don’t leave your organization. There is zero turnover. 
  • Digital assistants benefit from machine learning. The more they see and the better training they are given, the more accurate they get. As an investment, they are a win-win all round. You teach them something once, and they remember forever. And they’ll never leave you or call in sick. And they’ll work 24/7, 365 days of the year. And can even speak multiple languages. 

But this is not where the story of ROI ends, it’s really where it begins. Help desks are really designed to handle the easy, first level stuff. Once you get to the next level (where the agent can’t handle the ticket because it’s too complex for them), the costs are in the hundreds of dollars per ticket as you are now dealing with a more expensive level of staffing and more minutes required to solve the problem or meet the request. This is where HR staffing levels come into play. 

2.   HR Staffing Costs

In the world of HR, HR experts handle many of the day-to-day HR activities and employee/manager requests in an organization. In the same way that there are agent staffing levels per industry, there are also HR staff to employee ratios, and this ratio does vary per industry. Typically, the more complex the organization, the higher the staffing level. But size matters too. There are economies of scale that kick in once an organization gets really big. But being global, having a mix of full-time and part-time employees, union and non-union, blue collar and white collar, will dictate higher ratios than a company where most people fit a similar profile.

But this does not mean that the ratio is stuck and cannot be changed. There is one key aspect of the staffing ratio that is completely within HR’s control and, therefore, has a huge capacity for change. And by change we mean reduction!

The role of HR is a key variable that influences the HR staff to employee ratio. A highly operational HR department will do different work and require a larger HR workforce compared to a highly strategic HR department. So, what specifically does this mean? How can HR move from being mostly operational to being mostly strategic (a much more fun and productive role, by the way)?

The answer is to move traditional HR admin tasks from humans to a digital assistant.  HR admin work is probably the least popular thing that any highly educated and highly paid HR expert has to do, so removing this onerous work from their plate is a good thing! 

Running reports, answering requests for data, following up with managers to ensure key tasks were performed, entering data into the HCM system. These are all repetitive operational tasks that can be automated and handled by a digital worker. 

All this stuff is boring and repetitive to humans, and it takes a lot of time. But to a digital assistant it is fun and can be done extremely quickly. And the “right” digital assistant, with the proper skillset and training, can do almost all the HR administrative tasks that an HR expert can do. Often better, as they don’t forget obscure details and business rules, they don’t make mistakes, and they bring their “A” game every single second of the day. And, as stated before, they don’t leave your organization, turnover is zero, so wisdom is accumulated and not lost via natural attrition. 

So let’s get into the math of the ROI. Bloomberg Law’s 2018 HR Benchmarks Report states that HR departments have a median of 1.5 employees per 100 people in the workforce. At the time, this represented an all-time high as it had long been around 1.0 per 100. Both the Society for Human Resource Management (SHRM) and Bloomberg numbers were very similar, so this number is considered very accurate.

SHRM also noted a clear reduction in the ratio based on organization size (an economy of scale). However, as the size of the company rises, so does the average compensation to HR staff (which explains why published averages are very misleading). Working in HR for a large company can be twice as financially rewarding as for a very small company. The reason being complexity (on many levels). If you want HR people who understand complex organizations then you have to pay a premium. 

HR to Employee Ratio Graph

Figure 3: HR staff to employee ratios cross-industry

Using our example of an average organization with 20,000 employees and a ratio of 0.4 HR staff for every 100 employees, the HR staffing level would be around 80. At this size of organization the average level of HR compensation would be around $100,000, making the total spend equal to $8,000,000 per year. Note: that’s a lot more than the $1,125,000 spent on service desk salaries.

In the world of HR staffing, turnover is more in line with other industries, at around 15% per year, though the cost of hiring HR staff is much higher than the $12,000 for service agents. For HR staff it costs roughly $30,000 per person to replace those who leave (recruiting, interviewing, training, etc.). So, in our example, 12 new staff are required every year at a turnover cost of $360,000, making the total annual cost $8,360,000.

The big question then is how much of this work can be taken over by a digital assistant? The answer isn’t quite as clear as with the service desk. It all depends on the skillset of the digital assistant, and HR taking a proactive approach to how it replaces natural attrition of HR staff. 

But the expectation, based on the results of early projects, is that the best digital assistants can take over at least 10-30% of HR admin work from HR staff. And that is just for 2020; this number should leap forward each year for the top digital assistant performers.

Using a conservative approach, if a company decided to hire just 5% new HR staff each year instead of the usual 15%, and used the digital assistant to pick up the slack of the 10% of the positions left unfilled, the savings would still be considerable. Let’s examine the resulting cost savings and see how this looks in detail. 

In this scenario, HR staffing costs would drop in year one from $8,360,000 to $7,320,000, as the staffing level would fall from 80 to 72 (12 people would leave and only 4 new people would be hired), while digital assistant operational costs of roughly $360,000 would cover the slack. That brings the total to $7,680,000, for a net saving in year one of $680,000 (excluding implementation and configuration costs), with huge potential for much bigger savings in future years.

Implementation and configuration of a digital assistant that could handle both help desk and HR admin tasks would likely cost in the realm of $100,000 to $250,000. But this is a one-time fee, and even after accounting for it, the year-one saving on HR staffing costs would still be in the range of $800,000 to $900,000.

Year two would see greater savings, as there would be no implementation costs, and the new hire rate would again be capped at 5%, taking the HR staffing level from 72 to 65.

Year two HR staffing costs would therefore be $6,500,000, with a turnover cost of $90,000 (10 people would leave and only 3 new people would be hired), giving a grand total of $6,590,000. Because the digital assistant would be taking on more work in year two, its cost would rise to $396,000, resulting in a total year-two cost of $6,986,000 (see graph below for details).
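
Here is a minimal sketch of that two-year model, using exactly the staffing and cost assumptions stated above:

```python
AVG_COMP = 100_000          # average HR compensation at this organization size
REPLACEMENT_COST = 30_000   # recruiting, interviewing and training per HR hire
DA_COST = {1: 360_000, 2: 396_000}   # digital assistant operational cost by year

def year_cost(starting_staff, leavers, hires, year):
    """Remaining payroll + onboarding for new hires + digital assistant operations."""
    staff = starting_staff - leavers + hires
    total = staff * AVG_COMP + hires * REPLACEMENT_COST + DA_COST[year]
    return staff, total

baseline = 80 * AVG_COMP + 12 * REPLACEMENT_COST                       # $8,360,000 at 15% turnover

staff_y1, total_y1 = year_cost(80, leavers=12, hires=4, year=1)        # 72 staff, $7,680,000
staff_y2, total_y2 = year_cost(staff_y1, leavers=10, hires=3, year=2)  # 65 staff, $6,986,000

print(f"Baseline: ${baseline:,}  Year 1: ${total_y1:,}  Year 2: ${total_y2:,}")
print(f"Year 1 net saving: ${baseline - total_y1:,}")                  # $680,000
```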

HR professional cost infographic

Figure 4: HR Staffing ROI using a Digital Assistant

In summary, correct implementation of a digital assistant solution that can handle both HR help desk AND HR admin requests is by far the best approach to achieve maximum ROI.  Done effectively it will also realize superior service levels by providing faster and more accurate turnarounds for your entire workforce in a way that is far more convenient for them. 

Welcome to the world of high ROI, and welcome to the next industrial revolution. It’s ready and available now. 

Contact Us

At IntraSee we are super excited to announce that the latest version of Ida (IDA-20.6.1) is currently being rolled out to all our customers. As announced last week, IntraSee and Ida are now two separate entities, but it’s still business as usual for how we service our clients.

As usual, many thanks to Oracle for all their support and collaboration as we embed their excellent Oracle Digital Assistant (ODA) technology into the core of Ida, via Ida’s Hybrid-Cloud compatible, GDPR compliant, and world-leading metadata-driven middleware architecture.

Our goal of automating every aspect of the conversational Enterprise & Campus would not be possible without having such an awesome partner to work with. So, with that said, here are the highlights for Ida IDA-20.6.1:

  1. Automation of monitoring and learning into one fully integrated feedback loop.
  2. Structured unsupervised learning based on defined outcomes.
  3. Improved algorithm to accommodate and resolve ambiguity.
  4. Advanced statistics to better evaluate performance and automate learning.
  5. Enhanced reporting features to support decentralized reporting from a denormalized reporting structure that can be imported into a client’s environment.
  6. Support for concierge services for bots on different technology platforms (e.g. Microsoft LUIS, IBM Watson, etc.).
  7. Advanced multi-lingual support to accommodate different levels of language support.
  8. Additions to the Skills Library (more delivered HCM & Campus skills), including COVID-19 skills for both HR and Campus.
  9. Support for new Oracle ODA 20.x features.

Product Update Notes

The focus for this release was the automation of how Ida is rated, reported on, and improved via machine learning. With one click by a subject matter expert, an entire chain of events is initiated to measure accuracy, add to NLP training (when necessary), test for regression against over 250,000 data points, and track time to resolution in the production environment. What would take a team of 6 people 5 days to accomplish is now boiled down to 5 hours a week for 2 people.

Adding multiple new data points for monitoring also provides unprecedented insight into daily performance of Ida. By comparison, typical human performance of the same things that Ida handles is only monitored with roughly 5% of the data points, and at much lower levels of accuracy.

As always, we constantly make advances with utterance matching. This is job #1 in our world. If you can’t match an utterance to the correct question/answer then it won’t matter how elegant the bot flow is. So, we added more layers to the NLP engine and also implemented sophisticated ways of reanalyzing data from the NLP engine to improve the probability of success. This demonstrably improved the matching capability in real-world environments.
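
As a purely generic illustration of what a second pass over NLP results can look like (not Ida’s actual algorithm), a weak first-pass ranking can be re-scored with additional evidence pulled from the raw utterance:

```python
def rescore(candidates, utterance, keyword_boosts):
    """Blend engine confidence with simple keyword evidence from the raw utterance."""
    utterance = utterance.lower()
    rescored = []
    for intent, confidence in candidates:
        boost = sum(0.05 for kw in keyword_boosts.get(intent, []) if kw in utterance)
        rescored.append((intent, min(1.0, confidence + boost)))
    return sorted(rescored, key=lambda c: c[1], reverse=True)

# A weak, ambiguous first pass gets nudged toward the intent the wording supports.
candidates = [("payroll.next_paycheck", 0.55), ("benefits.enrollment", 0.52)]
boosts = {"payroll.next_paycheck": ["paid", "paycheck", "payday"]}
print(rescore(candidates, "when do I get paid??", boosts))
```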

As mentioned in previous product updates, you can’t focus too much on the training, testing, and measuring of digital assistant performance. Your “downloadable worker” needs to be treated like a human worker in many respects. Only instead of bi-annual performance reviews, you need to be evaluating performance every day, and providing weekly statistics to your organization on its growth and maturation. Ida does this out of the box now, making the daily review a point-and-click exercise.

One thing that is becoming apparent to many forward-thinking organizations is that a “concierge bot” isn’t a nice-to-have, it’s a necessity. Given the proliferation of bots in every organization, your workforce or campus users need one place to ask a question. You can’t expect them to figure out which bot to talk to. Because of that, and the technical challenges of having different bots on different technology platforms, Ida now has the capability of being technology-agnostic when it comes to being a concierge. Ida doesn’t just speak over 100 human languages; Ida can also speak to multiple bot platforms too, e.g. Microsoft LUIS, IBM Watson, etc.

Finally, as always, we added more skills to the library of Ida. Ida can now perform literally hundreds of things out of the box, and has the capability to be quickly configured to add hundreds more. At IntraSee, we used Ida to add multiple skills (for HR and Campus) to help with the COVID-19 situation in the past two months, and we can implement them in just 1-2 weeks. 

Contact us below to learn more and set up your own personal demo:

Contact Us

SaaS Technology Company Spun Out of IntraSee

SANTA MONICA, CA, June 17, 2020 — IntraSee, Inc., announces it has established Ida Artificial Intelligence, Inc. (‘Ida’), a technology company providing market-leading solutions for enabling the Conversational Enterprise and the Conversational Campus. 

IntraSee will continue to provide global implementation services in both the conversational UI and web UI markets.  Existing clients of IntraSee will see no material change in terms of how they have always worked with the company.

Ida provides SaaS-based technology services to companies, universities and colleges, and governments undertaking conversational UI, digital agent and chatbot initiatives to enable their users to more easily engage and interact with the organization through human-like, digital experiences. 

By establishing Ida, the two companies are respectively able to better serve the increasing market demand and diverse use cases for conversational UI deployments by large organizations.  

Existing and new clients will be able to take advantage of Ida’s capacity to develop and manage advanced solutions for increasingly sophisticated cognitive services, middleware, autonomics and the data analytics necessary for robust conversational UI deployments within large enterprises.  

Ida’s technologies lead the industry in their use of automation for conversational UI services, including automated configuration, testing, user feedback, remediation and learning. 

IntraSee will continue to provide the necessary professional services which support the unique demands, customization requirements, design and growth needs for such deployments of Ida.  

More can be found about Ida at https://meet-ida.ai.

Contact us below to learn more:

Contact Us