With the release of PeopleSoft Picaso, our clients are asking a lot of questions. That is not surprising: the world of AI is full of murkiness created by sales and marketing messages. This post will bring some clarity so you can be well informed.

Many years ago, our digital assistant, Ida, shifted its natural language processing (NLP) to Oracle Digital Assistant (ODA), Oracle’s PaaS service, because of its flexibility and power. ODA is often unheralded, but its NLP draws on 40 years’ worth of enterprise data from “The” data company, Oracle. That is a unique advantage not held by competing products.

ODA is a stand-alone platform running in the cloud and has no requirement to be used with Oracle Applications; an entirely separate team at Oracle runs the ODA division. As with Oracle’s database, Oracle’s applications can provide value by leveraging the company’s own technology. HCM Cloud, ERP Cloud and now PeopleSoft products are delivering functionality, called “skills”, for ODA customers.

PeopleSoft’s skills are delivered under the brand “Picaso”. This post is all about Picaso: what it is, who should use it and the value it provides.

What is Picaso?

Picaso is the name given to the chat UI widget (the actual window you chat in within PeopleSoft), which consumes multiple PeopleSoft-delivered skills (currently from HCM and FSCM). Just like Workday’s or Salesforce’s chatbots, it is meant to extend the application’s functionality. PeopleSoft Picaso, however, is built on a far more powerful platform than those products.

The purpose of Picaso is to provide a conversational user interface (UI) to PeopleSoft: users can access PeopleSoft data and even complete a few self-service transactions, such as registering a PTO day.

What do I get?

Currently, Picaso comes with a web-based channel inside Fluid pages; Classic is not supported. When PeopleSoft users log in, they will see a chat icon they can engage with. PeopleSoft delivers the skills via the ODA Skill Store. The current skills as of Fall 2021 are:

  • Absence Skill (HCM)
  • Benefits Skill (HCM)
  • Payroll for North America Skill (HCM)
  • Requisition Inquiry Skill (FSCM)
  • Expense Inquiry Skill (FSCM)
  • Employee Directory Skill (HCM)

More information about these skills can be found in the Picaso documentation. These skills are built to access data from these modules and, in some cases, process transactions as pictured below. There are minimum PeopleTools version and PUM image requirements, so be sure to check those out.

PeopleSoft Picaso HCM Absence Skill

How much does it cost?

In the PeopleSoft world, you pay for the application (HCM) and you get the technology included (PeopleTools). Digital Assistants work in the opposite manner. Picaso and its skills are free for PeopleSoft customers, but you need to purchase the PaaS cloud service for ODA. Implementing Picaso may come with additional consulting fees should you need assistance.

How do I implement Picaso?

Implementing Picaso involves a few key steps:

  1. License ODA
  2. Set up your OCI tenancy and ODA instance, and provision access
  3. Install the PeopleSoft skills and set up a channel
  4. Set up connectivity to your PeopleSoft environment (it must be accessible on the internet)
  5. Configure PeopleSoft to allow integration with ODA
  6. Create a Digital Assistant, add the Picaso skills, and set up the channel integration
  7. Train and test the bot
  8. Deploy

In our experience, step 4 tends to be the biggest stumbling block for customers and requires a Cloud Architect to fully understand. Additionally, to get the most out of Picaso, having AI specialists on hand is invaluable. If you don’t have these roles available, you can enlist an Oracle partner, such as IntraSee, to help.

What is the difference between Ida and Picaso?

Customers are coming to us and asking: should I implement Ida or Picaso? This is really a false comparison. Each product has a purpose and goes after a different value proposition. They are complements to each other more than they are an either/or choice.

Let’s start with what is not different between Ida and Picaso. Both use the ODA NLP engine as a basis for the machine’s understanding of human language. They are seeded with very different training data, but the engine is the same. Both require licensing of the ODA PaaS service (Ida embeds this license within its pricing). 

Picaso was built with a focus on PeopleSoft. It is present in the Fluid Web UI and helps PeopleSoft users get to data, pages and transactions. It is a great step into digital assistants if your needs are focused on PeopleSoft. With Picaso you will need to handle the AI Ops such as log/analytics monitoring, migrations, uptime, learning, etc. You do have the option of using a partner to manage these services and IntraSee is one such option.

Ida is meant to be a one-stop shop for users, covering policies, content, data, workflow, transactions, integrations and analytics. PeopleSoft is merely a sliver of Ida’s value proposition. Ida has customers whose usage is approaching 1,000 skills, so the audience is broader, with the scale to match. Ida is often found on web pages outside PeopleSoft, including SharePoint or CMS systems, and on channels like Microsoft Teams. Ida is also available whether the user is authenticated or not.

Ida’s integrations are a key part of the one-stop philosophy. Ida has a catalog of integrations including Salesforce, ServiceNow, HCM Cloud, PeopleSoft, Google, Microsoft Teams and Office 365, Canvas, Taleo, Kronos and others including the ability to configure custom integrations.

Finally, AI Ops is a critical part of your project’s success. AI Ops teams are often made up of Data Scientists, AI Architects and Computational Linguists. Despite what some marketing teams may tell you, AI isn’t magic; it needs human cultivation to achieve superior accuracy. With Ida, we automate many of these human tasks and include managed services to fill these roles, so you don’t have to (illustrated below). The budget saved on these salaries or consulting fees alone makes an Ida project an ROI winner.

An illustration of Ida’s Value Added features as of 21.03

Can Picaso and Ida work together?

Because both Picaso and Ida run on ODA, Ida can consume skills from Picaso and include them in a one-stop-shop chat. You can get the best of both options and they can leverage the same ODA license, so the cost is only incremental.

The most efficient path is to roll out Picaso and Ida skills at the same time. This path allows you to tune the machine learning model with the broader scope in mind from the get-go. The alternative requires regression analysis and re-tuning you could have avoided. That doesn’t mean you can’t have a gap between implementing Picaso and Ida, but it is not the most cost/time efficient path.

Which one is the right choice for me?

If your objective is to start with a focused use-case, get some experience and add functionality to a PeopleSoft Fluid deployment, Picaso can be a great fit for you. If your mission is to drive ROI through automation at an enterprise level, then check out Ida. For most clients, we implement in 6 weeks and are in production after 12 weeks.

Contact Us

The next release of Ida, our digital assistant, will be available for customers beginning at the end of September, 2021. This post contains the highlights of this release which focused on automation and improving the ROI of digital assistants.

21.03 in Summary

21.03 includes many fixes and enhancements with a focus on the Feedback Loop process and the new Oracle Digital Assistant (ODA) NLP module. We are making the process of giving the machine feedback easier, more streamlined and efficient so it takes you less time each cycle. Ida’s Feedback Loop is a differentiating feature which drives its high accuracy marks.

Ida also adds improved automation to accuracy tracking and will now warn you when accuracy starts to slip or a concern is growing. This saves you from having to analyze reports; instead, Ida grabs your attention only when it is needed.

Improved user experience features are also part of this release, such as the new ability to rate, trap and provide custom outcomes for utterances that express frustration. For example, you can hand users over to a live agent when you sense they are growing frustrated.

The Ida library was also updated and we now have hundreds of pre-built skills for employees, managers, students, faculty, advisors and guests. You can read more detailed release notes below.

Release Notes

  • Bulk Import/Export Utility for FAQ+
  • Streamlined CV Configuration Page
  • Streamlined bot build process
  • Updated Ida catalog & training data
  • Autotest support for negative testing
  • Improved Feedback Loop labeling/help
  • Ability to record who initiated a chat via handoff
  • Improvements to Lightbox UX in Feedback Loop pages
  • Adjust bot export process to allow “empty” NLP Segments
  • Automated topic-level training data management
  • Improved QA autotest output
  • Last Accuracy KPI Dashboard Tile
  • Accuracy Leak KPI Dashboard Tile
  • Streamlined UI and choices for Feedback Ratings
  • Feedback Loop: New unrated option for ‘too vague to rate’
  • Trap frustration utterances and direct accordingly
  • Fixed show utterances in FBL in some use cases
  • Improved thumbs rating UI
  • Metrics Report: Sort location data by conversation count
  • Add DE branching logic support for auto-suggest inputs
  • Friendly not authorized message when guest-to-auth handoff fails
  • MS Teams Authentication updates
  • Improved translation performance for HTML responses
  • Improved filtering and defaults for Feedback Loop
  • Allow small talk decoupling in a digital assistant setting
  • Support for custom disambiguation response pre-text
  • Updated Feedback Loop Metrics Report to accommodate data model changes
  • Scheduled archiving of old Feedback Loop and Autotest data
  • jQuery conflict resolution for chat UI

Contact us below to learn more and setup your own personal demo:

Contact Us

Ida 21.02 will be released and available for customers beginning at the end of this month. Here are the highlights in this release as we continue to fine-tune the incredible accuracy performance we see from Ida as well as make various bug fixes and improvements.

  • No-match text is now configurable
  • Feedback Loop language toggle button styling changes (see both English and native languages used in this tool)
  • Capturing auto-utterance & initiator analytics to better understand who is handing off to Ida
  • Add Oracle ODA 20.09+ NLP model support for improved accuracy
  • Manual chat-ui language setting to force a specific language vs. auto-detection
  • Pruning features for audit data
  • Pruning & archive features for chat data
  • Check skill version against IUC version prior to testing
  • FBL Row Padding Fixes
  • Clean up Thumbs UI/CSS/HTML
  • Support separate DE processes for Help FAQ and NLP Failure process
  • Easy On/Off for Thumbs user satisfaction ratings
  • Audit reports for blank lines in answers
  • New 80/20 split administration page for training/testing data sets
  • Feedback loop now filters at server for improved filtering
  • Fixed MS Teams Variable Error
  • Now Capturing MS Bot User ID values
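The 80/20 training/testing split mentioned in the notes above is a standard machine-learning practice: hold out a portion of your labeled utterances so the bot is tested on data it never trained on. A minimal sketch, with hypothetical utterance data (this is not Ida's internal implementation):

```python
import random

def split_utterances(utterances, train_ratio=0.8, seed=42):
    """Shuffle labeled utterances and split them into training and testing sets."""
    rng = random.Random(seed)      # fixed seed so the split is reproducible
    shuffled = utterances[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# Hypothetical labeled utterances: (text, intent)
data = [("show my last paycheck", "payroll"),
        ("book a PTO day", "absence"),
        ("what are my benefits", "benefits"),
        ("status of my expense report", "expenses"),
        ("find Jane in the directory", "directory")] * 20  # 100 rows

train, test = split_utterances(data)
print(len(train), len(test))  # 80 rows for training, 20 held out for testing
```

Keeping the held-out set untouched during training is what makes the accuracy measured on it an honest estimate rather than a "batting cage" score.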

Product Update Notes

The focus of this release was to continue to improve NLP and utterance-matching performance, even beyond the 90% mark most of our clients are seeing. Central to this improvement is support for updated NLP models. As part of that support, the automated regression testing was significantly changed to more closely model real life, thereby ensuring better quality assurance.

A series of features were added to understand how Ida plays in the larger context of an enterprise by tracking any referrals it gets, where it is being used and what channels it is running on (such as Microsoft Teams).

Next we have features added for better multi-language support such as now having a choice between a configured language for a user vs. auto-detection. Additionally, the language being used can now be passed to integrated systems for an end-to-end experience in your language.

We continue to add more skills to the library of Ida. Clients can get up and running quickly by using this catalog. Recently we have been adding content around the return to work/campus. Finally, many bug fixes, performance improvements and other minor updates are included.

Contact us below to learn more and setup your own personal demo:

Contact Us

At IntraSee, we have been proponents of automation for a very long time. Every new industrial revolution has taken place because a new way of automating things was discovered. In the world of digital assistants (aka extremely advanced chatbots), the benefits of automating both conversations and back office activities were obvious: reduced operating costs and a better user experience (UX).

But despite the glaring need for more automation in the workplace, many organizations have been slow to make it a priority. And then COVID happened. In a matter of weeks, automation went from a low priority to mission-critical for many executives.

A recent study by Bain & Company brought to light a dramatic switch in how organizations now view automation as a way to create business resilience and mitigate risk during a crisis.

Survey results: most important automation goal pre-COVID
Figure 1: Automation Survey of 500 Executives

As you can see, the COVID crisis has acted as a major jolt to companies globally. The ability to successfully navigate any crisis with the least impact to how the organization is run is now deemed as the number one reason for implementing an automated solution.

Business continuity during any crisis is best ensured by automation. And if this also brings lower operating costs and higher quality, then it should be a slam dunk decision for any organization.

And because of this, barriers to automation are coming down, right across the board, due to the current pandemic.

Figure 2: Barriers before and after the pandemic

What is very clear is that executives have made a massive shift in how they intend to use their technology budgets. For the upcoming year, resources will be focused on automating the Enterprise, and previous barriers are being swept aside.

And this shift shows up in what has actually occurred since the pandemic began. Looking at the chart below, you can see that the adjustments made so far are primarily workforce actions, which is what you would expect. But you can also see that automation has become a key focus among delivery-based actions.

Figure 3: Automation shown as a clear outcome of COVID

Even before COVID, automation was on the roadmap across all industry sectors. Only now the focus has intensified.

Figure 4: Automation plans across industry sectors

Bain’s conclusion as to the impact of automation as a pandemic risk mitigation strategy:

While Forrester proclaimed:

“The COVID-19 health crisis is on everybody’s mind. Once it passes — which it eventually will — one of its lasting legacies will be a renewed focus on automation.”

Meanwhile IDC noted:

“I haven’t talked to anyone who’s not doing automation as a way to become more competitive, and more resilient.”

AI in the workplace is set to explode in 2021 as part of the fourth industrial revolution and the move to AI-fueled automation. COVID-19 has not slowed this move; quite the contrary, the pandemic has accelerated the shift to automation as a means of ensuring business-as-usual during a crisis.

Digital assistants are the most logical next step for automating how your entire organization interacts with the Enterprise. Given that, it’s clearly time to implement a robotic means of ensuring business resilience in the COVID and post-COVID world. And if you can reduce operating costs, and improve the UX, all the better.

Chatbot Service
Figure 5: The face of business resilience, risk management, reduced operating costs, and improved UX

If you would like more information, or would like to see a demo of how to automate your Enterprise, please contact us below.

Contact Us

Statistical analysis revolutionized the game of baseball, to the extent that every single Major League Baseball team now employs a huge stats group that drives almost every decision the organization makes. The days of a “good eye” and a “gut feel” being the deciding factor are long over. Now, it’s all about the numbers.

And the parallels to the world of digital assistants are remarkably close. Successful implementation of a digital assistant solution entails applying advanced statistical techniques to both measure and improve performance. The premise being:

“You can’t manage what you can’t measure”

– Peter Drucker

So, let’s start with the concept, and then examine the details behind the ways that the worlds of baseball and digital assistants collide.

1.   Moneyball

The lesson from Moneyball was that smart organizations could compete with the likes of the Yankees if they used their limited financial resources in ways that were much more efficient. If they could generate a hundred runs a year by spending $1M, while the Yankees were spending $10M, then they would level the playing field. Doing more with less was possible if organizations could change their traditional ways of thinking. 

And so it is in the world of digital assistants. If somebody offered you something ten times better than what you are paying for now, and it also happened to be at least twenty times cheaper, wouldn’t it make sense to switch?

Human vs. Ida support performance
Figure 1: The new reality

This is the new world we live in. Humans are fantastic at doing certain types of things, and woefully inefficient at others. Digital assistants just happen to be great at what humans are poor at. Humans can’t infinitely scale; they can’t remember thousands of key facts or update hundreds of records in a few seconds. Humans forget things, are not always fully motivated, and no matter how much you invest in them, they will eventually leave you.

Digital assistants are the new Moneyball. If you choose the right one, and pay heed to how you implement and grow your new digital worker, it will allow your organization to provide a better service at a tiny fraction of the cost you are paying now. 

Moneyball cartoon: digital assistants are better and cheaper
Figure 2: Moneyball for the Enterprise & Campus

2.   What is a good accuracy score?

It has been said that hitting a ball in major league baseball is the hardest thing to do in all of sports. Yes, hitting a gently tossed baseball in your back yard is something almost anyone can do. But hitting a 92 mph four-seam fastball from Clayton Kershaw? Well, that is something very few people can do. 

In the world of baseball, making contact with the ball is considered job #1 of the batter. In the world of digital assistants, being able to accurately respond to a human request is likewise considered job #1. 

In baseball, a batting average of over .300 is considered excellent, over .350 is elite, and anything close to .400 is other-worldly. With digital assistants, over .800 is excellent, .850 is elite, and .900 is other-worldly. 

Note: there are caveats to this that we will address in subsequent points.

The following chart comes from extensive testing (2,500 questions) of the major consumer-facing digital assistants, with voice recognition removed as a factor so that it was purely a test of knowledge matching. Even more interesting, the test measured both how often each assistant attempted an answer and how accurate it was when it did.

Comparing the results below, you can see that some digital assistants attempt to answer (swing the bat) more than others, and the accuracy (making contact with the ball) varies widely too. Google and Ida both hit in the 80% range. Cortana attempts just over 80% of the questions but gets them right only a little over 50% of the time. Meanwhile, Siri swings at the ball 40% of the time and even then only makes contact 70% of the time, which is extremely poor.

Accuracy rates for Ida vs. consumer virtual assistants
Figure 3: Accuracy comparisons

Please note: the scores from Ida are results in production environments across multiple customers and are an accumulated average. Also, Ida is being asked questions that are specific to a client’s organization that also may be specific to individual employees, managers, students and advisors. Therefore, the degree of complexity is much higher.
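In baseball terms, the chart separates “swing rate” (how often an assistant attempts an answer) from “contact rate” (how often an attempt is correct), and the two multiply into overall accuracy. A quick sketch of the arithmetic, using counts that approximate the percentages quoted above (these are illustrative numbers, not the study’s raw data):

```python
def assistant_stats(attempted, correct, total):
    """Compute attempt rate, accuracy-when-attempting, and overall accuracy."""
    swing_rate = attempted / total      # how often the bot "swings"
    contact_rate = correct / attempted  # accuracy when it does answer
    overall = correct / total           # correct answers across all questions
    return swing_rate, contact_rate, overall

# Illustrative counts from a 2,500-question test
total = 2500
cortana = assistant_stats(attempted=2025, correct=1050, total=total)
siri = assistant_stats(attempted=1000, correct=700, total=total)

print(f"Cortana: swings {cortana[0]:.0%}, contact {cortana[1]:.0%}, overall {cortana[2]:.0%}")
print(f"Siri:    swings {siri[0]:.0%}, contact {siri[1]:.0%}, overall {siri[2]:.0%}")
```

The point the arithmetic makes: a bot that rarely swings can look accurate while actually answering very few of its users’ questions.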

3.   Batting Cage Averages

There’s a reason that published batting averages only include stats captured during actual games. All players look great in the batting cage: the degree of difficulty is much lower and, generally, the player knows exactly what kind of pitch is coming.

Ted Williams was the last player to hit over .400 for a season, in 1941; almost any player can hit over .400 in a batting cage. That is why, when you see published stats for digital assistant accuracy, it’s important to know whether the numbers came from actual usage in a production environment, or were merely the result of testing in a QA environment with a bunch of teed-up utterances that didn’t truly test the ability to match accurately.

Just as an FYI, we run over 10,000 test utterances against Ida any time we make any change. And in QA Ida scores over 97% accuracy. Whereas in production, in client environments Ida scores around 85%, and that’s the number we publish. 

Kershaw pitching vs. pitching machine
Figure 4: Hitting against Clayton Kershaw is a lot more difficult than a pitching machine.
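The regression runs described above boil down to a simple gate: push every test utterance through the bot, compare the predicted intent to the expected one, and fail the build if accuracy slips below a threshold. A minimal sketch, where the keyword-matching `classify` function is a stand-in for a real NLP call (a real suite would call the bot’s API and run thousands of utterances):

```python
def classify(utterance):
    """Stand-in for a real NLP call; a real suite would query the bot's engine."""
    keywords = {"paycheck": "payroll", "vacation": "absence", "dental": "benefits"}
    for word, intent in keywords.items():
        if word in utterance.lower():
            return intent
    return "unresolved"

def regression_accuracy(test_cases):
    """Fraction of utterances whose predicted intent matches the expected one."""
    hits = sum(1 for utterance, expected in test_cases
               if classify(utterance) == expected)
    return hits / len(test_cases)

# Hypothetical regression suite: (utterance, expected intent)
suite = [("When is my next paycheck?", "payroll"),
         ("I want to book vacation", "absence"),
         ("Does my plan cover dental?", "benefits"),
         ("Reset my password", "it_help")]  # intentionally unmatched

accuracy = regression_accuracy(suite)
assert accuracy >= 0.70, f"Accuracy regression: {accuracy:.0%}"  # gate the release
```

Running a gate like this on every change is what catches regressions before users do.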

4.   The Curve Ball

While it’s obvious why Clayton Kershaw is harder to hit than a pitching machine, it may not be so obvious why humans interacting with a digital assistant is so much more difficult to handle than test utterances in a QA environment. So, let’s do this with some examples:

Ida Dialogue: someone asking about paycheck
Figure 5: Typical training/testing utterance

Now, let’s see an example that a human actually typed:

Ida Dialogue: someone asking about paycheck
Figure 6: Sample of an utterance by a human in a production environment

As you can see, as much as vendors try to emulate utterances in their DEV and QA environments, what actual people say often comes out of left field and can really fool a digital assistant that lacks the technological maturity to filter out the essence of what is being communicated. Complex utterances are the equivalent of the curve ball (or splitter, slider, etc.) in baseball.

5.   Swinging the Bat

So, exactly how does a digital assistant know when to try and answer a question (swinging the bat) and when to claim ignorance (lay off the ball)? For many it’s a simple case of confidence levels. When an utterance is passed to an NLP engine, what gets returned is a list of possible things it thinks it has a match to, plus a confidence level attached to each one. Then, for a basic digital assistant, it’s just a case of selecting the one with the highest ranking and presenting that to the human. Sometimes the digital assistant will also include, as an act of disambiguation, a list of those things that are closely grouped in confidence levels. 

For really advanced digital assistants like Ida that use nested NLP techniques, much more complex algorithms are used to determine what to present and how to present it. But, ultimately, everything is a combination of either single or multiple confidence levels that may or may not be passed into even more complex algorithms to further establish what the human is asking the robot. 
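The basic confidence-level logic described above can be sketched in a few lines. The thresholds, intent names and tuple format here are hypothetical simplifications; real engines such as ODA return richer result objects, and advanced assistants layer further algorithms on top:

```python
def choose_response(matches, answer_threshold=0.7, disambiguation_band=0.1):
    """Decide whether to answer, disambiguate, or lay off the ball.

    `matches` is a list of (intent, confidence) pairs from the NLP engine.
    """
    ranked = sorted(matches, key=lambda m: m[1], reverse=True)
    top_intent, top_conf = ranked[0]

    if top_conf < answer_threshold:
        return ("no_match", [])                    # lay off the ball
    close = [i for i, c in ranked if top_conf - c <= disambiguation_band]
    if len(close) > 1:
        return ("disambiguate", close)             # "Did you mean...?"
    return ("answer", [top_intent])                # swing the bat

print(choose_response([("payroll", 0.91), ("benefits", 0.45)]))
# -> ('answer', ['payroll'])
print(choose_response([("payroll", 0.75), ("absence", 0.72)]))
# -> ('disambiguate', ['payroll', 'absence'])
print(choose_response([("payroll", 0.40)]))
# -> ('no_match', [])
```

Tuning the two thresholds is exactly the swing-rate versus contact-rate trade-off: lower them and the bot swings more often but makes contact less reliably.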

A really good digital assistant doesn’t just have a high accuracy rating, it also has a high rate of responding to questions (swinging the bat). A digital assistant that only replies to really obvious requests will score high on accuracy, but low on satisfaction, and low on its ability to truly solve problems. 

6.   The Pinch Hitter

As we roll into 2021, a subject that will continue to keep coming up at all organizations will be the concept of the digital assistant “concierge”. Given the inevitable plethora of bots being implemented at many organizations, typically on different technology platforms, the question that will be constant in 2021 is:

“Can’t we just have one digital assistant that everyone interacts with? It’s too complicated for our people to know which one to go to.” And the simple answer is yes.

Organizations have begun to realize that one digital assistant should be the “face” to the organization, and that it is the job of the “face” to handle integration with all the other bots in the organization. Even those on a different technology stack. This is called a “concierge” solution, where the digital assistant can reach out to different bots at runtime to get the answers to questions it knows nothing about. Kind of like how the concierge in a hotel operates. 

Technically, and to keep with our baseball theme, “concierging” entails swapping different bots in and out of the lineup based on the question the digital assistant is faced with, just like bringing in your lefty to face the right-handed pitcher. For example, if the digital assistant’s key strength is HR requests and the human asks a finance question, the digital assistant will reach out to the bot, or skill, that can best handle it, acting as an intermediary that relays the responses to the human.

Ida’s underlying technology stack is based on Oracle technology, so it is easy for Ida to concierge with any skill built on the Oracle Digital Assistant stack. Plus, because of the advanced nature of the stack and the middleware Ida uses to integrate with it, it is also possible to plug in bots built on completely different technologies, such as Microsoft LUIS or IBM Watson.

7.   Laying Off the Ball

Of course, no digital assistant should attempt to answer every question or request it gets. If someone wants to know how tall Tom Brady is, that’s probably not a question it should try to answer. But it also shouldn’t respond with a bare “I don’t know the answer to that”. In baseball, if the ball is pitched a foot outside the plate, the hitter knows not to swing; swinging at those pitches is how baseball players, and digital assistants, make fools of themselves.

The best way for a digital assistant to lay off the ball is for it to politely say, “I don’t know the answer to that. But here are the topics I do know a lot about. You can browse these topics, or I can pass you to a live agent if you need more help”. 

8.   Slugging Percentage

For most of this article we have focused on contact with the ball as the key metric for judging a digital assistant: was the human utterance correctly matched to what the digital assistant was capable of responding to? That is why batting average makes a direct parallel. In reality there is another dimension to all of this, best described by comparison to baseball slugging percentages.

In baseball, not all hits have the same value. One hit may get the player to first base, while another may be a home run. And, as everyone knows, hitting a home run is massively valuable, so slugging percentage is used as a means to measure the power of a hitter. Combine a hitter’s ability to get on base with the power they generate on each hit and you get a stat called OPS; an OPS greater than 1.000 means you are a superstar.

Digital assistants are exactly the same. Just being accurate (getting on base) isn’t sufficient. It’s the quality of the response that really counts (slugging percentage). 
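The arithmetic behind the analogy is simple: OPS is on-base percentage plus slugging percentage. A quick sketch (the season numbers are made up for illustration, and the on-base formula is simplified to times-on-base over plate appearances):

```python
def ops(times_on_base, plate_appearances, total_bases, at_bats):
    """OPS = on-base percentage + slugging percentage (simplified OBP)."""
    obp = times_on_base / plate_appearances  # how often the hitter gets on base
    slg = total_bases / at_bats              # how much power each hit carries
    return obp + slg

# Illustrative superstar season: 260 times on base in 650 plate appearances,
# 350 total bases in 550 at-bats
print(round(ops(260, 650, 350, 550), 3))  # -> 1.036, over the superstar line
```

For a digital assistant, the analogue is the same two factors: how often it matches correctly (getting on base) times how much value each response delivers (a link is a single; a completed, workflow-triggering transaction is a home run).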

So, what exactly do we mean by that? Let’s see an example to illustrate the point:

An example of just getting to first base:

Ida Dialogue: simply link to other website
Figure 7: Barely getting on first base

An example of the home run:

Ida Dialogue: complex transfer
Figure 8: A home run hit out of the park

As you can see, the first example merely provides a link to a web page, where all you can really do is initiate a request for someone else to get the task done for you. The second example clarifies any outstanding questions, completes the task, and triggers the appropriate workflow. That is a home run, and it speaks to the deep integration capabilities of the digital assistant.

As is very clear in the examples above, the first example is not much better than an improved means of navigating crude FAQ web pages. Whereas the second example shows the value a superstar digital assistant can bring to your organization. 

9.   Statistics, statistics, statistics

The world of baseball is measured with every statistic you could ever imagine, and these days all organizations use advanced statistics to make pretty much every decision. No human, no matter how good their “eye” is, can process 250,000 data points and make an informed decision without some kind of analytics engine to provide guidance. The same is true in the world of digital assistants. Despite almost every salesperson in the AI world telling prospects that AI “just learns”, that’s simply not true, nor is it advisable. Once a digital assistant is introduced to your organization, every decision you make about growing its knowledge and improving its capabilities needs to be driven by hard data.

Yes, aspects of AI are a black box, but any black box can be measured, and by precise measurement you can predict behavior, and also alter it when needed. 

At IntraSee we ask Ida over 10,000 questions every time we add more knowledge to the corpus, plus examine a quarter of a million data points. And, because this is impossible to do by hand, we automate the entire process. This allows us to ensure Ida gets smarter each week, and that there is no regression. Without all this data it would be impossible to do that.

The orchestration of supervised training and measurement is the key to continuous healthy growth. Like humans, digital assistants start life being great at some things, and not so great at others. And it’s important for all organizations to understand exactly what those strengths and weaknesses are. And, just like humans, this information can be used to further train the digital assistant, and also further enhance the quality of the responses. 

With an array of statistics that are both high level for your executives, and detailed enough for your business analysts, you will have the tools to effectively manage your digital assistant. 

10.   Player Development

Most people assume that the use of statistics is confined to trading, drafting, and game strategy. But the most advanced organizations also use statistics to drive player development. It’s one thing to draft or trade for a great prospect, but unless they are coached correctly, you’ll never get the best out of them. Sometimes it will be a case of tiny fractions of adjustment to the length or angle of a swing that makes the difference between a good player and a great player. 

Similarly, digital assistants are not a “set and forget” solution. How you develop your digital assistant through continuous learning is what turns a good digital assistant into a great one. Supervised machine learning via automation, plus rigorous automated testing, is the key to ensuring that your digital assistant gets smarter every single week.

In many ways you have to view your “digital worker” in the same light as when you hire a human worker. Performance appraisals, continuous feedback, and the setting of goals are just as important to your digital assistant. And, in some ways, more so. However, unlike your human hire, your digital worker can continue to get smarter and more knowledgeable each week, with no plateau. It will work 24/7 for you, and never call in sick. Plus, it will never leave you for a different organization, taking its knowledge with it. In short, it’s an investment that keeps on paying back. 

If you would like more information, or would like to see a demo, please contact us below.

Contact Us