Potentials & Advantages of Knowledge Graph Based Conversational AI in Manufacturing & Industry — Examples, Use Cases, Best Practises

Ontotext
7 min read · Jul 21, 2023


This is an abbreviated and updated version of a presentation from Ontotext’s Knowledge Graph Forum 2022 titled “Potentials & Advantages of Knowledge Graph Based Conversational AI in Manufacturing & Industry — Examples, Use Cases, Best Practises” by Jürgen Umbrich, Senior Data Scientist and Head of Data, Onlim

We are Onlim, a leading Conversational AI company. Whether it’s a chatbot, a voicebot, or an enterprise search, our knowledge-based Conversational AI platform enables personalized interactions to increase customer satisfaction and improve the efficiency of businesses. We specialize in automated customer communication via chatbots and voice assistants. Our state-of-the-art Conversational AI platform serves customers across various domains such as tourism, finance, retail, energy, and manufacturing.

What makes us attractive to our customers is that we centralize corporate knowledge in our Conversational AI platform, where everyone can access corporate data and information easily via natural language. With our technology, we can cover most use cases in one go: our platform is data agnostic and handles both text and speech in different languages. Most importantly, our knowledge modeling is done with knowledge graphs, which lets us merge different data types and answer complex questions.

We often say that just because a chatbot can understand a question, it doesn’t mean it can answer it. Text understanding is fairly straightforward, but getting to the deeper meaning is tricky, and that’s where knowledge graphs come into play. We take data from any number of data sources, model it in a knowledge graph, train our chatbots on it, and use it to dynamically build dialogs in natural language.

Why Combine Knowledge Graphs and Conversational AI in Manufacturing and Industry?

There are many data sources in Manufacturing and Industry, and they are usually well structured. But even when data is structured, it usually still sits in silos because different departments of an organization maintain their own data.

Our customers also struggle with limited knowledge access. Traditional web search interfaces make life difficult for non-domain experts, who usually use their own terms. Typically, for example, they wouldn’t know the exact product ID and may just use a generic name. Domain experts, on the other hand, use strict terminology but often have complex requests. As a result, they need to spend a lot of time browsing through a lot of details (only to figure out that the first 10 results are not what they need). So, users want to access information in natural language (either by typing it into an interface or by asking a voice assistant) and they want explorative knowledge access.

Knowledge graphs solve the problem of having too much data and not being able to find anything in it by connecting and merging this data. The Conversational AI modules on top of the knowledge graph help access the knowledge locked in the data quickly and efficiently. These modules are built on trained Natural Language Understanding components: they provide synonym and context awareness and convert language into a query. In addition, they provide data-driven dialogue search and exploration, which helps users find the right information in a multi-step dialog.
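
To make the “convert language into a query” step concrete, here is a minimal sketch in Python with rdflib: an entity mention recognized by the NLU is turned into a SPARQL lookup that also matches alternative labels, which is one simple way to get the synonym awareness described above. The namespace, properties, and toy data are purely illustrative, not Onlim’s actual schema.

```python
# Minimal sketch of the "language to query" step over an RDF product graph.
# The schema and data below are illustrative only.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/products#> .
ex:hood-x100 a ex:ExtractorHood ;
    ex:label "X100 extractor hood" ;
    ex:altLabel "kitchen hood X100" .
""", format="turtle")

def question_to_sparql(entity_mention: str) -> str:
    """Turn an entity mention recognized by the NLU into a SPARQL lookup.

    Matching alternative labels as well as preferred labels is one simple
    way to handle synonyms.
    """
    return f"""
    PREFIX ex: <http://example.org/products#>
    SELECT DISTINCT ?product ?label WHERE {{
        ?product ex:label ?label ;
                 ex:label|ex:altLabel ?name .
        FILTER(CONTAINS(LCASE(?name), LCASE("{entity_mention}")))
    }}
    """

# "kitchen hood" is not the official product name, but the altLabel matches.
for row in g.query(question_to_sparql("kitchen hood")):
    print(row.product, row.label)
```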

Challenges Facing Conversational AI

One of the main challenges when building Conversational AI is domain modeling. We have to model the data in a way that reflects how domain experts and other users organize it and how they want to access the knowledge in it. We need to have support for different languages. We need to consider the potential complexity of the queries. We also have to ensure that the Conversational AI returns answers fast and that these answers are compact but expressive. This involves a lot of feedback loops with the domain experts and the data stewards in the company.

Another challenge is unstructured information sources. As mentioned at the beginning, our customers usually have many structured sources available to them. But there is also a lot of valuable information locked in unstructured text and tables. Extracting this information can be laborious and time-consuming.

For these unstructured information sources, we will use Large Language Models to extract information according to the ontology and add it as structured data to the knowledge graph. When we need this data in a conversation, we can also use it directly with the Large Language Model. As of June 2023, we provide an app in our marketplace that allows customers to upload any unstructured information they want us to process. We can generate structured data for the knowledge graph or run conversations directly over this unstructured data after we have added the information to our vector store and knowledge graph.
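
The paragraph above compresses two paths: extraction into the knowledge graph, and direct conversation over the text via the vector store. A rough sketch of both in Python with rdflib follows; the LLM call, the embedding step, and the ontology terms are placeholders invented for illustration, not a specific vendor API or Onlim’s actual pipeline.

```python
# Rough sketch of the two ingestion paths: LLM-extracted facts become
# triples in the knowledge graph, and the raw text chunks go into a
# vector store so conversations can also run directly over them.
from rdflib import Graph, Literal, Namespace, URIRef

EX = Namespace("http://example.org/manufacturing#")  # hypothetical ontology namespace

def call_llm(prompt: str) -> str:
    """Placeholder for whichever Large Language Model API is used."""
    raise NotImplementedError

def extract_triples(text: str) -> list[tuple[str, str, str]]:
    """Ask the LLM for subject|predicate|object facts restricted to known predicates."""
    prompt = (
        "Extract facts from the text as 'subject|predicate|object' lines, using "
        "only the predicates maxPressureBar, material and compatibleWith.\n\n" + text
    )
    lines = call_llm(prompt).splitlines()
    return [tuple(line.split("|", 2)) for line in lines if line.count("|") == 2]

def ingest(text: str, graph: Graph, vector_store: list) -> None:
    # Structured facts become triples in the knowledge graph ...
    for s, p, o in extract_triples(text):
        graph.add((URIRef(EX + s.strip()), URIRef(EX + p.strip()), Literal(o.strip())))
    # ... and the original chunk stays retrievable for LLM-generated answers.
    vector_store.append({"text": text, "embedding": None})  # embedding step omitted
```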

Use Cases in Manufacturing and Industry

Now let’s have a quick look at two of our customers and how they use Conversational AI.

Bora is a German manufacturer of extractor hoods and cooktops. Our Bora Bot is connected to their knowledge database and can answer questions about products in real time. Users can find out where to buy the products, book service appointments, or go through simple FAQs.

Wagner is another customer. They produce tools and accessories for do-it-yourself projects. The Wagner Spraybot enables users to search for products and supports several languages. The company plans to expand the bot’s functionality.

Pilot Projects in Manufacturing and Industry

We are also working on several pilot projects in Manufacturing and Industry. For example, accessing information about operational instructions can be really challenging, but we can convert user manuals into knowledge graphs. We can model installation or maintenance instructions according to the iiRDS ontology, which provides well-structured and highly connected information. Then, our Conversational AI can point technicians to the right section in a 400-page manual, tell them what spare parts and tools they need, and guide them through the instructions step by step.

With Large Language Models, we can directly generate the concrete answer out of these manuals.
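
As a sketch of how these two pieces could fit together: a SPARQL query over the modeled instructions retrieves the relevant section together with the required spare parts and tools, and the Large Language Model then phrases the concrete answer. The property names below are simplified stand-ins rather than the actual iiRDS vocabulary, and the file name and answer function are hypothetical.

```python
# Simplified sketch: retrieve the relevant maintenance section and its
# required parts from the knowledge graph, then let an LLM phrase the
# concrete answer. Property names are stand-ins, not actual iiRDS terms.
from rdflib import Graph

g = Graph()
g.parse("maintenance-manual.ttl", format="turtle")  # hypothetical export of the modeled manual

FIND_SECTION = """
PREFIX ex: <http://example.org/manual#>
SELECT ?section ?title ?sparePart ?tool WHERE {
    ?section a ex:MaintenanceInstruction ;
             ex:title ?title ;
             ex:covers ?task .
    OPTIONAL { ?section ex:requiresSparePart ?sparePart . }
    OPTIONAL { ?section ex:requiresTool ?tool . }
    FILTER(CONTAINS(LCASE(STR(?task)), "filter replacement"))
}
"""

def answer_with_llm(question: str, rows) -> str:
    """Placeholder: hand the retrieved section, parts and tools to the LLM
    so it can generate the concrete step-by-step answer."""
    raise NotImplementedError

rows = list(g.query(FIND_SECTION))
print(answer_with_llm("How do I replace the filter?", rows))
```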

Another of our pilot projects deals with information about product specifications. We are working on modeling and creating a knowledge graph that contains detailed information about products and product groups. The data there is very homogeneous, with lots of attributes (e.g., size, diameter, temperature, material, connectivity, pressure, etc.) and different levels in the group hierarchy.

Here Conversational AI helps users find products or parts with a very narrow specification (typically a guided search) and enables them to ask very detailed questions (e.g., “Can I use a valve on the terrace in winter if it has a pressure of X bar?”).
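
One way such a question could map onto the product knowledge graph: the NLU extracts the constraints (outdoor use, sub-zero temperatures, the required pressure) and they become filters in a SPARQL query. The attribute names, units, and toy data below are hypothetical.

```python
# Hypothetical mapping of the valve question onto the product graph:
# the extracted constraints become SPARQL filters over product attributes.
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/catalog#> .
ex:valve-42 a ex:Valve ;
    ex:label "Frost-proof outdoor valve" ;
    ex:minOperatingTemperatureC -20 ;
    ex:maxPressureBar 10 ;
    ex:outdoorRated true .
""", format="turtle")

PRODUCT_FILTER = """
PREFIX ex: <http://example.org/catalog#>
SELECT ?valve ?label WHERE {
    ?valve a ex:Valve ;
           ex:label ?label ;
           ex:minOperatingTemperatureC ?minTemp ;
           ex:maxPressureBar ?maxPressure ;
           ex:outdoorRated ?outdoor .
    # outdoor use in winter at 6 bar, for example
    FILTER(?outdoor = true && ?minTemp <= -10 && ?maxPressure >= 6)
}
"""

for row in g.query(PRODUCT_FILTER):
    print(row.valve, row.label)
```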

Conversational AI in the post-LLM/ChatGPT Era

Conversational AI will become a game-changing technology for accessing information, executing tasks, or simply chatting. ChatGPT has shown how comfortable a proper conversation can be. Nevertheless, the facts underlying a conversation will remain very important, in the business world as well as in private search and conversation. This is where the knowledge graph comes in. We provide a seamless user experience where we either access facts from the knowledge graph or call the Large Language Model to produce a proper answer from an unstructured document. Technically, we will combine intent-based and intentless conversational AI. This concept mitigates the hallucinations of the Large Language Model, provides facts together with proper natural language generation, and supports complex information retrieval via the knowledge graph.
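
A rough sketch of that intent/intentless split: questions with a confidently recognized intent are answered from the knowledge graph, and everything else falls back to retrieval plus grounded LLM generation. All component names and the confidence threshold below are placeholders for illustration.

```python
# Routing sketch: intent-based answers come from the knowledge graph,
# intentless questions are answered by an LLM grounded in retrieved
# document chunks. All components and the threshold are placeholders.
from dataclasses import dataclass
from typing import Optional

@dataclass
class NLUResult:
    intent: Optional[str]
    confidence: float
    entities: dict

def answer(question: str, nlu, kg, retriever, llm) -> str:
    result: NLUResult = nlu(question)
    if result.intent is not None and result.confidence >= 0.7:
        # Intent path: deterministic query templates, facts from the graph.
        return kg.answer(result.intent, result.entities)
    # Intentless path: ground the LLM in retrieved context instead of
    # letting it answer from memory, which limits hallucination.
    context = retriever(question, top_k=3)
    return llm(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
```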

The future of conversational AI will offer a seamless combination of both technologies — knowledge graphs and Large Language Models — and will have use case implementations in any type of business application.

To Wrap It Up

So, what are the advantages of a knowledge graph-based Conversational AI in Manufacturing and Industry? As already discussed, there’s a big variety of data sources in this domain and some of the data is very complex. Users want to be able to access the data quickly and efficiently. They also want to be able to perform complex searches, to access knowledge exploratively, and to use natural language by typing in or speaking to an interface.

Knowledge graphs allow smooth integration of data from multiple sources as well as rich interlinking. Their data model is very flexible, and one can easily add new data and change the model as the use case evolves. Knowledge graphs also enable powerful reasoning, which derives new facts from the existing data. On top of this graph, our Conversational AI enables users to access the data in natural language. It’s synonym and context-aware. It’s multilingual and can distinguish between regional dialects. And it offers explorative knowledge access with multi-turn dialogs.
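
To illustrate the reasoning point, here is a small rdflib/owlrl example: with an RDFS class hierarchy in the graph, the reasoner derives a fact that was never stated explicitly. The toy schema is invented for the example.

```python
# Reasoning illustration with rdflib and owlrl: after computing the RDFS
# closure, ex:hood-x100 is a ex:KitchenAppliance even though only its
# membership in ex:ExtractorHood was asserted.
from rdflib import Graph
from owlrl import DeductiveClosure, RDFS_Semantics

g = Graph()
g.parse(data="""
@prefix ex: <http://example.org/products#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
ex:ExtractorHood rdfs:subClassOf ex:KitchenAppliance .
ex:hood-x100 a ex:ExtractorHood .
""", format="turtle")

DeductiveClosure(RDFS_Semantics).expand(g)  # materialize inferred triples

q = """
PREFIX ex: <http://example.org/products#>
SELECT ?item WHERE { ?item a ex:KitchenAppliance . }
"""
for row in g.query(q):
    print(row.item)  # includes ex:hood-x100, derived by the reasoner
```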

Besides that, we see high potential in knowledge graph technologies as we can use them to transform unstructured data into structured data. This means that conversations can be conducted directly in natural language on the basis of unstructured data. In other words, the future of conversational AI lies in the combination of knowledge graphs and Large Language Models.

Jürgen Umbrich, Senior Data Scientist and Head of Data at Onlim

Originally published at https://www.ontotext.com on July 21, 2023.
