Enhancing Knowledge Discovery: Implementing Retrieval Augmented Generation with Ontotext Technologies

Ontotext
10 min read · Feb 21, 2024

Explore how Ontotext harnesses Retrieval Augmented Generation (RAG) and Natural Language Query (NLQ) to transform data interaction, using semantic technology products that simplify database accessibility for non-technical users. Learn how, with GraphDB and Ontotext’s advanced methodologies, you can index complex data in vector representation, develop smart information retrieval systems, and construct a conversational interface to power data exploration and discovery.

This is part of Ontotext’s AI-in-Action initiative aimed at enabling data scientists and engineers to benefit from the AI capabilities of our products.

Natural Language Query (NLQ) has gained immense popularity due to its ability to empower non-technical individuals to extract data insights just by asking questions in plain language. This dramatically simplifies the interaction with complex databases and analytics systems.

In this blog post, we dive into the capabilities of Ontotext’s semantic technology products and solutions that facilitate NLQ, specifically in the form that has gained prominence over the last year: Retrieval-Augmented Generation, or simply RAG. Join us as we demystify the methodologies empowering such implementations, shed light on their range of capabilities, and detail how Ontotext is harnessing these technologies to bring transformative enhancements to our data interaction landscape.

RAG and Ontotext offerings: a perfect synergy

RAG is an approach for enhancing an existing large language model (LLM) with external information provided as part of the input prompt, also known as the grounding context. Most frequently, it uses a vector database indexed with proprietary content, which a retrieval component queries at question time. Let’s see how we can achieve our own RAG using Ontotext’s RDF database GraphDB.

GraphDB integrates with the open-source ChatGPT Retrieval Plugin via the ChatGPT Retrieval Plugin Connector, which transforms RDF data into embeddings and stores them in a vector database of your choice. This lets us easily integrate information from both textual documents and structured RDF entities into an LLM-driven application.

Example use case: Ontotext Knowledge Graph

For illustration, we will use a project developed internally by Ontotext that we call “Ontotext Knowledge Graph”, or OTKG for short. Like many organizations, we want to get the most out of the content we produce: technical documentation about our products, capabilities, past and current projects, research publications, marketing content, presentations, and webinars. So we have built a dataset using schema.org to model and structure this content into a knowledge graph.
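As a minimal sketch of what such modeling can look like (the IRIs and property values below are made up for illustration and are not taken from the actual OTKG), a single piece of content can be described as a schema:CreativeWork:

PREFIX schema: <http://schema.org/>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX otkg: <https://www.ontotext.com/otkg/resource/>

INSERT DATA {
    # Hypothetical instance data; the real OTKG model is richer than this sketch.
    otkg:blog-rag-post a schema:CreativeWork ;
        schema:name "Implementing Retrieval Augmented Generation with Ontotext Technologies" ;
        schema:author "Ontotext" ;
        schema:datePublished "2024-02-21"^^xsd:date ;
        schema:text "Explore how Ontotext harnesses Retrieval Augmented Generation (RAG)..." .
}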

To answer questions based on our content, we have created a ChatGPT Retrieval Plugin Connector, called creativeWorks, which contains embeddings for the text content of our Creative Work documents:

PREFIX : <http://www.ontotext.com/connectors/retrieval#>
PREFIX inst: <http://www.ontotext.com/connectors/retrieval/instance#>

INSERT DATA {
    inst:creativeWorks :createConnector '''
{
    "fields": [
        {
            "fieldName": "text",
            "propertyChain": [
                "http://schema.org/text"
            ],
            "indexed": true,
            "multivalued": false,
            "objectFields": []
        }
    ],
    "languages": [],
    "types": [
        "http://schema.org/CreativeWork"
    ],
    "readonly": false,
    "skipInitialIndexing": false,
    "retrievalUrl": "",
    "retrievalBearerToken": "",
    "bulkUpdateBatchSize": 1000
}
''' .
}

With a similar configuration, we can index not only textual descriptions but also structured information about entities: by listing all the fields of an entity, the connector generates a textual description of it.
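For instance, a connector along the following lines (the connector name, field names, and property chains are hypothetical and purely illustrative) would index the name and description of organization entities:

PREFIX : <http://www.ontotext.com/connectors/retrieval#>
PREFIX inst: <http://www.ontotext.com/connectors/retrieval/instance#>

# Hypothetical connector that indexes structured fields of schema:Organization entities.
INSERT DATA {
    inst:organizations :createConnector '''
{
    "fields": [
        {
            "fieldName": "name",
            "propertyChain": [
                "http://schema.org/name"
            ],
            "indexed": true,
            "multivalued": false,
            "objectFields": []
        },
        {
            "fieldName": "description",
            "propertyChain": [
                "http://schema.org/description"
            ],
            "indexed": true,
            "multivalued": false,
            "objectFields": []
        }
    ],
    "languages": [],
    "types": [
        "http://schema.org/Organization"
    ],
    "readonly": false,
    "skipInitialIndexing": false,
    "retrievalUrl": "",
    "retrievalBearerToken": "",
    "bulkUpdateBatchSize": 1000
}
''' .
}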

Once we have our desired data in a vector database, GraphDB provides two out-of-the-box approaches for taking advantage of the RAG pattern to query the data in natural language. The examples below use OpenAI’s ChatGPT, but they can be applied against other LLM chatbots, including self-hosted ones.

Talk to Your Graph

GraphDB 10.4 introduced a new Lab functionality called “Talk to Your Graph”. It is directly embedded in the GraphDB Workbench and enables users to ask questions about their own RDF data in natural language. It is a suitable interface for people without a strong technical understanding of SPARQL who still need to explore and understand data encoded in RDF.

Talk to Your Graph is a RAG approach for NLQ that comes out of the box once a ChatGPT Retrieval Plugin Connector has been set up. Unlike the standard RAG approach, which first selects the relevant content and then queries ChatGPT, Talk to Your Graph instructs ChatGPT to generate queries against the vector database in order to collect the information it needs to answer the user’s question. Talk to Your Graph uses an iterative approach: it asks ChatGPT to generate a query, sends the query to the vector database, and then sends the result back to ChatGPT. This is repeated until ChatGPT has all the necessary knowledge to answer the question or the limit of iterations is reached. This has several advantages:

  • ChatGPT has the freedom to construct the queries it needs. It can collect information in several iterations and can break down complex tasks into simpler ones.
  • ChatGPT is not restricted to the information we send it. The prompt instructs ChatGPT to generate queries only if it doesn’t already have the necessary knowledge for the question. So it is possible that the answer contains information outside the scope of our repository. For example, our OTKG has no explicit information about Joe Biden, yet Talk to Your Graph manages to answer when we ask:

We can modify the query template that ChatGPT uses for query generation to include additional filters, such as the author or the creation time of the texts. Talk to Your Graph supports a conversation, not just a single question-answer exchange, thus keeping context across the dialogue. You can even take advantage of the available knowledge to infer insights that were not initially modeled in the knowledge graph, as can be seen with the second question below:

The following are some ideas we consider as possible future improvements:

  • Enhancing explainability: We know that the trustworthiness of such systems is a key requirement. Users need to understand whether results come directly from the data the LLM has accessed or from its own internal knowledge. So, we are aiming at greater transparency.
  • Expanding accessibility: Currently, Talk to Your Graph is available through the Workbench interface. Integrating it into custom applications requires programmatic access, for example via SPARQL, which would enable more versatile usage options.
  • Optimizing query flexibility: Building flexible queries requires a rich model. This means refining the query templates to offer more comprehensive filtering options, ensuring you can navigate your data more effectively.

Talk to Your Graph is user-friendly and easy to set up. It significantly lowers the barriers to exploring your RDF data, allowing for an intuitive experience without the need for writing complex SPARQL queries.

RAG with SPARQL

Let’s now see how we can implement our own RAG using just GraphDB and SPARQL. We have our data indexed in the vector database and we want to answer a targeted question: “What are some common applications of knowledge graphs?”. We first query the index to select the most relevant information, in our case the top 10 chunks (referred to as snippets below):

PREFIX retr: <http://www.ontotext.com/connectors/retrieval#>
PREFIX retr-index: <http://www.ontotext.com/connectors/retrieval/instance#>
PREFIX gpt: <http://www.ontotext.com/gpt/>

SELECT ?answer {
    {
        SELECT ?question (GROUP_CONCAT(?snippetText; separator=". ") AS ?chunks) {
            BIND ("What are some common applications of knowledge graphs?" AS ?question)
            [] a retr-index:creativeWorks ;
               retr:query ?question ;
               retr:limit 10 ;
               retr:entities ?entity .
            ?entity retr:snippets ?snippet .
            ?snippet retr:snippetField ?field ;
                     retr:snippetText ?snippetText .
            FILTER (str(?field) = "chunk")
        }
        GROUP BY ?question
    }
    BIND(CONCAT("Answer the question taking into account the following information I will provide next. The question is: \"", ?question, "\". And the information is: ", ?chunks) AS ?prompt)
    ?answer gpt:ask ?prompt .
}

The query uses the “ask” SPARQL GPT function to send our prompt to ChatGPT. We can further tune the number of chunks to select and the prompt we use. A great advantage of this approach is that we have visibility into which snippets were included in the prompt, so we know the source of the generated answer.

Although this approach is very simple to implement, it has some challenges and drawbacks:

  • What limit for the snippets should we choose?
    This is not something that can be easily determined or measured, and it depends on the particular question. Sometimes, sending more information to the LLM leads to more noise and a less accurate answer. On the other hand, we are always limited by the size of the prompt we can send to the LLM, so for some questions we might not be able to send all the required background information.
  • How to identify the most relevant context?
    Direct similarity to the question is not always the best way to select the most relevant data for the LLM. For example, let’s consider the question: What US government agencies has Ontotext worked with? In this case, we might not be able to successfully match mentions of NASA working with Ontotext products, because they are not necessarily close in the vector space to the asked question.
  • How to avoid restricting the LLM to the provided information only?
    The LLM encodes a vast amount of information that we might want to take advantage of. With the prompt from the example above, we get the following answer to our Joe Biden question, because our OTKG index doesn’t contain a description of who Joe Biden is:
Q: "Has Joe Biden been president?" A: "Based on the given information, it is not stated or implied whether Joe Biden has been president."

RAG in a wider context

The presented approaches are a generic foundation that applies with varying success to different use cases. Depending on the complexity of the data model and the specific requirements for the NLQ interface (such as speed and accuracy of the responses, context awareness, and recommendations), organizations often need to build on top of what is available out of the box. In such scenarios, the Ontotext Solutions division can assist with how to best model and transform the data, how to index it in the most appropriate storage, and how to extract the most value for the desired applications.

Let’s see how we’ve approached this with our Ontotext Knowledge Graph project.

How Ontotext uses RAG

We have built on top of our products GraphDB and Ontotext Metadata Studio to develop a content enrichment system. It interlinks and organizes our internal knowledge, with the help of semantic modeling and metadata, into a well-connected knowledge graph. The system exposes an interface where knowledge is easy to find and understand and can be used as the source of knowledge-driven insights. At its foundation lies the semantic metadata, which powers structured faceted search and an NLQ interface:

We consider trustworthiness central to our system. We provide visibility of where the source information for each fact stated in the generated answer came from. As shown in the diagram below, we build on top of the presented RAG approaches by enhancing them with semantic metadata for an initial narrowing of the relevant data. After that, we can ask targeted questions to extract the desired knowledge with higher accuracy:
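As a rough sketch of this idea (this is not our production query; the question and the topic IRI are made up, and here the narrowing simply restricts the retrieved snippets to documents tagged with that topic before building the prompt), such a combined query could look like this:

PREFIX retr: <http://www.ontotext.com/connectors/retrieval#>
PREFIX retr-index: <http://www.ontotext.com/connectors/retrieval/instance#>
PREFIX gpt: <http://www.ontotext.com/gpt/>
PREFIX schema: <http://schema.org/>

SELECT ?answer {
    {
        SELECT ?question (GROUP_CONCAT(?snippetText; separator=". ") AS ?chunks) {
            BIND ("How are knowledge graphs applied in healthcare?" AS ?question)
            [] a retr-index:creativeWorks ;
               retr:query ?question ;
               retr:limit 10 ;
               retr:entities ?entity .
            # Keep only documents tagged with the (hypothetical) topic of interest
            ?entity schema:about <https://www.ontotext.com/topics/healthcare> ;
                    retr:snippets ?snippet .
            ?snippet retr:snippetField ?field ;
                     retr:snippetText ?snippetText .
            FILTER (str(?field) = "chunk")
        }
        GROUP BY ?question
    }
    BIND(CONCAT("Answer the question using only the following information. The question is: \"", ?question, "\". The information is: ", ?chunks) AS ?prompt)
    ?answer gpt:ask ?prompt .
}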

This combination is an enabler even for users who are not familiar with the specific terminology of the system. We further return suggestions for relevant questions, enrich the response with semantic metadata, and depict the connectedness between the user’s interest and the relevant portions of the knowledge graph:

In this way, users can easily find additional information related to their inquiry — either from the topics detected in their responses or via the recommended content based on the semantic metadata fingerprint. You can check out our demo system of the Ontotext Knowledge Graph and specifically the search and chat functionality on kg.ontotext.com.

Even with these improvements, the set of questions you can cover with RAG hugely depends on how you’ve indexed your data. In cases where we want to answer questions that require broader visibility over the data (such as counts and aggregations combined with filters), we can consider another approach: translating the natural language question into a database query that selects the requested information. This can be a SPARQL, Elasticsearch, or GraphQL query, which is then executed against the database. Check out our NLQ with LangChain blog post for more ideas. Depending on the nature of the data, a combination of approaches might be applicable to tackle the desired coverage of various types of questions.
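For illustration only (the class and property usage below is hypothetical and depends on how the content is actually modeled), a question such as “How many webinars were published in 2023?” might be translated into an aggregation query along these lines:

PREFIX schema: <http://schema.org/>

SELECT (COUNT(DISTINCT ?work) AS ?webinarCount) {
    ?work a schema:CreativeWork ;
          schema:genre "Webinar" ;    # hypothetical way of marking webinars
          schema:datePublished ?date .
    FILTER (YEAR(?date) = 2023)
}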

Below is a table that compares the three RAG approaches we’ve presented. Naturally, better traceability and fewer hallucinations come at the cost of restricting the answer to the provided reliable grounding context:

Conclusion

GraphDB makes RAG easy. Through our products, we enable users to benefit from the latest advancements in AI to unlock the value hidden in their data. We believe knowledge graphs and RDF data provide the richness, connectedness, and flexibility required to power various NLQ use cases, and with our research and solutions we are determined to explore more and more ways to benefit from this advantage.

You can always contact us to explore additional means for natural language-driven data discovery and exploration. Or you can take matters into your own hands directly.

Krasimira Bozhanova, Solutions Architect at Ontotext

Originally published at https://www.ontotext.com on February 21, 2024.


Ontotext

Ontotext is a global leader in enterprise knowledge graph technology and semantic database engines.