Using AI to Enrich XR Training

AI can take your XR training program to the next level

AI is truly taking the technology world by storm, and for good reason. It offers a fundamental shift in how we can use computers.

Despite the huge leaps the technology has made over the last two years, a lot of people are still wondering how to take advantage of AI day to day. At Motive we’ve been working at the intersection of emerging technologies for years, and we’re always looking for ways to fuse them to solve real-world problems.

Through a lot of prototyping and experimentation, we’ve found two especially effective ways to leverage AI within XR training experiences, and Motive XMS now offers both out of the box.

Conversational and Soft Skills Training

Although generative AI and large language models (LLMs) are getting most of the attention, more “old school” AI like natural language processing (NLP) can provide enormous value in building out conversational and soft skills scenarios.

Briefly, NLP processes voice data to detect the meaning of what someone is saying. It scores the user’s speech based on the likelihood that it matches a particular “intent.” For example, if you say, “How’s it going?” it may determine that this matches the “Greeting” intent with a score of 80%.
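To make the intent/score idea concrete, here is a toy sketch. Motive’s actual NLP engine is not shown; this illustrative version scores an utterance against hand-written example phrases per intent using simple string similarity, mimicking the output described above.

```python
# Illustrative sketch only -- a real NLP engine uses trained language
# models, not string similarity. The intents and examples are made up.
from difflib import SequenceMatcher

INTENT_EXAMPLES = {
    "Greeting": ["how's it going", "hello there", "good morning"],
    "AskForHelp": ["what should i do next", "i'm stuck", "can you help me"],
}

def score_intents(utterance: str) -> dict[str, float]:
    """Return a 0-1 score per intent for the spoken utterance."""
    text = utterance.lower().strip("?!. ")
    return {
        intent: max(SequenceMatcher(None, text, ex).ratio() for ex in examples)
        for intent, examples in INTENT_EXAMPLES.items()
    }

scores = score_intents("How's it going?")
# "Greeting" scores highest for this utterance.
```

A production system would return calibrated confidences from a trained model, but the shape of the output (intent names mapped to scores) is the same.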

With this scoring system at your disposal, you can now build out sophisticated, branching scenarios in Storyflow driven entirely by the user’s voice. Because it uses NLP and not just word-matching, the user can speak more freely and naturally as opposed to reading options from a list. The end result is training that feels very fluid, open, and natural, but that keeps the scenario on a well-defined path so that you can extract useful assessment information.
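The branching logic itself can be sketched as a graph of scenario nodes keyed by intent. This is a hypothetical illustration, not Storyflow’s real API: each node maps recognized intents to the next node, and a confidence threshold keeps low-quality matches from derailing the scenario.

```python
# Hypothetical sketch of voice-driven branching; node names, intents,
# and the threshold value are all invented for illustration.
CONFIDENCE_THRESHOLD = 0.6

SCENARIO = {
    "start": {"Greeting": "introductions", "AskForHelp": "hint"},
    "introductions": {"AskForHelp": "hint"},
}

def next_node(current: str, intent_scores: dict[str, float]) -> str:
    """Pick the next scenario node from the highest-scoring intent."""
    intent, score = max(intent_scores.items(), key=lambda kv: kv[1])
    branches = SCENARIO.get(current, {})
    if score >= CONFIDENCE_THRESHOLD and intent in branches:
        return branches[intent]
    return "clarify"  # fall back: ask the learner to rephrase
```

The fallback node is what keeps the experience feeling natural: instead of failing silently on an unrecognized utterance, the character can ask the learner to rephrase.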

Generative Question and Answer System

LLMs and generative AI of course offer exciting new ways to deliver information to learners. We used the XMS plugin framework to integrate with an AI system that can answer a user’s questions based on a set of documents. The retrieved answers are passed through an LLM, where we can configure the tone of the response (e.g., “more formal” or “friendly and cheerful”) as well as translate the result into the learner’s language.

How does this work in practice? You create an agent and feed it any number of documents in just about any format (PDFs, slideshows, web links, etc.). The AI processes these and extracts the most relevant information from them based on what the user is asking. We usually have the responses voiced by a character in the environment, giving the user an intuitive and friendly interface to a wealth of information.
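The pattern described here is retrieval followed by generation. The sketch below is a minimal stand-in: the `Chunk` store, keyword-overlap retrieval, and `ask_llm()` stub are all hypothetical (a real pipeline would use vector embeddings and an actual LLM client), but the shape of the flow matches what’s described above.

```python
# Minimal retrieval-then-generate sketch; all names here are
# illustrative stand-ins, not the real system's API.
from dataclasses import dataclass

@dataclass
class Chunk:
    source: str  # e.g. "safety_manual.pdf"
    text: str

def _words(text: str) -> set[str]:
    return {w.strip("?!.,") for w in text.lower().split()}

def retrieve(question: str, chunks: list[Chunk], top_k: int = 3) -> list[Chunk]:
    """Rank chunks by keyword overlap with the question (real systems
    would use vector embeddings instead)."""
    q = _words(question)
    ranked = sorted(chunks, key=lambda c: len(q & _words(c.text)), reverse=True)
    return ranked[:top_k]

def ask_llm(prompt: str) -> str:
    """Hypothetical stub; swap in a real LLM client here."""
    return "[LLM response to: " + prompt[:40] + "...]"

def answer(question: str, chunks: list[Chunk],
           tone: str = "friendly and cheerful", language: str = "English") -> str:
    """Retrieve relevant chunks, then have the LLM phrase the answer
    in the configured tone and language."""
    context = "\n".join(c.text for c in retrieve(question, chunks))
    prompt = (f"Using only this context, answer in a {tone} tone, "
              f"in {language}:\n{context}\n\nQuestion: {question}")
    return ask_llm(prompt)
```

Note how tone and language are just instructions folded into the prompt, which is what makes them easy to configure per deployment.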

Why Not Both?

These two AI systems offer similar but distinct functionality.

  1. NLP is great for structured scenarios where the user’s inputs are expected to follow a certain pattern
  2. LLMs are great for open-ended scenarios where the range of possible questions from the user would be too great to define ahead of time

The big question we asked ourselves was: is there benefit in bringing these two systems together? And the answer turned out to be a very resounding “yes.”

Mixing NLP and the Q&A system allows us to build “helper” characters that can answer a wide array of user questions. In our testing, we’ve found that users very quickly get accustomed to asking the characters for help. The help flow works as follows:

  1. The NLP system listens for very specific questions about the training scenario the user is currently running. For example, the user may ask “What should I do next?” This information would be difficult to extract from the Q&A system (it doesn’t know how far along the user is in the scenario, or what randomizations have been applied to it). This context, however, is easily embedded in a Storyflow script. In this case, the character responds with the specifics of the next step the user should take.
  2. If, however, the question isn’t recognized by the NLP system, the query is routed to the Q&A system. This might be the case if the user is asking more general questions, like “What should I do if there’s a fire?” or “Can I wear a hoodie to work?” Assuming the Q&A system has been primed with the right documentation, it will respond with the relevant information.
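The two-step routing above can be sketched in a few lines. Here `score_intents()`, `scripted_response()`, and `qa_agent()` are hypothetical stand-ins for the real NLP engine, the Storyflow script, and the Q&A agent; the point is the order of precedence and the confidence threshold.

```python
# Sketch of the routing described above; every name and threshold
# here is an invented stand-in for the real components.
NLP_THRESHOLD = 0.7

def score_intents(utterance: str) -> dict[str, float]:
    # Toy stand-in: recognizes only one scenario-specific question.
    u = utterance.lower().rstrip("?!. ")
    return {"NextStep": 1.0 if u == "what should i do next" else 0.0}

def scripted_response(intent: str, state: dict) -> str:
    # The Storyflow script knows the learner's current step.
    return f"Next step: {state['next_step']}"

def qa_agent(utterance: str) -> str:
    return "[answer retrieved from the primed documents]"

def handle_question(utterance: str, scenario_state: dict) -> str:
    scores = score_intents(utterance)   # step 1: scenario-aware NLP
    intent, score = max(scores.items(), key=lambda kv: kv[1])
    if score >= NLP_THRESHOLD:
        return scripted_response(intent, scenario_state)
    return qa_agent(utterance)          # step 2: fall back to Q&A
```

Because the NLP check runs first, scenario-specific questions always get the precise, scripted answer, while everything else falls through to the document-grounded agent.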

Motive XMS + Storyflow lets you mix and match these systems in any number of ways, even running multiple agents in parallel that understand and respond in different languages!

Want to learn more about AI and Motive XMS? Schedule a discovery call here.


Ryan Chapman
