Matching Skills and Candidates with Graph RAG

Ontotext
7 min read · May 31, 2024


How we used Ontotext GraphDB and LLMs to improve CV selection

Motivations and Setup

Asking “business” questions in natural language and having an oracle produce the perfect answer has been a sought-after feature since antiquity. Where once people would confide in divine oracles, golems, or fairies, today we trust our search platforms to digest encyclopaedic knowledge and make it easily available.

In reality, the perfect answer may not exist, but there can surely be useful answers, especially if formulating the question is made very easy. At the end of 2022, with the popularisation of ChatGPT, this feature seemed to come within reach overnight. But it was immediately clear that the various flavours of generative AI can only be great at… generating. It is a great human-machine interface, and an awe-inspiring creative co-pilot, less so a reliable store of knowledge.

Grounding Large Language Models (LLMs) in factual knowledge of the world (or of a specific world, like corporate knowledge) helps mitigate hallucinations and lets us profit from private, up-to-date knowledge without having to re-train (fine-tune) an LLM every time.

RAG (Retrieval Augmented Generation) provides this grounding and lets users ask the LLM questions over their documents of choice. What we particularly like about RAG is that it can easily be set up with free, offline tooling, which makes it especially useful for companies wanting to experiment with it, or even to try and cautiously use it on real use cases.

Technically speaking, RAG is an architectural pattern for LLM applications whereby, before answering, the application retrieves relevant knowledge from a Knowledge Base (structured or unstructured) and adds it to the LLM’s input context. In its basic form, it is composed of a system prompt (the general instruction for the LLM), a user query, and a retriever able to find relevant content to add to the prompt.
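As a minimal sketch of how these three pieces fit together (in Python; the prompt wording below is our illustrative assumption, not a fixed standard):

SYSTEM_PROMPT = (
    "You are a recruitment assistant. Answer the question using only "
    "the context provided below."
)

def build_prompt(user_query: str, retrieved_chunks: list[str]) -> str:
    # Combine the general instruction, the retrieved context and the question.
    context = "\n\n".join(retrieved_chunks)
    return f"{SYSTEM_PROMPT}\n\nContext:\n{context}\n\nQuestion: {user_query}"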

RAG over unstructured Knowledge Bases (for example, PDF documents) is a well-explored pattern: text is extracted from documents, divided into chunks, vectorised (for smarter retrieval) and stored in a vector database. Then, when a user asks a question, the question is vectorised too, the most similar chunks of text are retrieved, and they are added as context for the LLM to respond.
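To make the retrieval step concrete, here is a small sketch using the open-source sentence-transformers library as a stand-in for whatever embedding model you deploy; the model name and the naive fixed-size chunking are illustrative choices, not our production setup:

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # a small, offline-friendly model

def chunk(text: str, size: int = 500) -> list[str]:
    # Naive fixed-size chunking; real pipelines split on document structure.
    return [text[i:i + size] for i in range(0, len(text), size)]

def top_k_chunks(question: str, chunks: list[str], k: int = 3) -> list[str]:
    # Embed chunks and question, rank by cosine similarity, keep the top k.
    chunk_vecs = model.encode(chunks, normalize_embeddings=True)
    q_vec = model.encode([question], normalize_embeddings=True)[0]
    scores = chunk_vecs @ q_vec  # cosine similarity, since vectors are normalised
    return [chunks[i] for i in np.argsort(scores)[::-1][:k]]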

At Semantic Partners, we explored this well-known pattern and took it a step further by combining it with so-called Graph RAG over a Knowledge Graph, using Ontotext GraphDB. By combining structured data and unstructured pieces of content, we can let users query their own documents while being backed by the contextual knowledge in the graph. To demonstrate the power of the approach, we chose a popular real-world use case: candidate selection over resumes.

The Use Case

Suppose we have a good number of CVs (from dozens to thousands) and a job opening for our organisation. Equivalently, we might want to map our people and their skills matrix to a project. Ideally, we should be able to assemble some clever search solution (aided by an LLM) to help us find the “best” candidates (or a decent set of them) for our job description. Besides a good search solution, we also need some specific knowledge about our domain: the disciplines, tools and skills relevant to our job opening. If we are selecting candidates for a maritime organisation, we need to know all about cargo boats and logistics, whereas if we need a graphic designer, we had better know the terminology and skills associated with design and web development.

To use a metaphor, our perfect recruiting team should include someone very good at finding information (our RAG over documents, using vector search), but also a domain expert who knows the lingo, the terminology and the particular skills that matter for the current job opening.

For our use case, in particular if we are a large enough organisation, we need smart search, but also background knowledge for the required sector. Knowledge that may come from our own Intellectual Property. Knowledge that the LLM in use might simply not have.

To stay with the metaphor: just as Nature does not give us an omniscient recruiter, generative AI does not give us an omniscient LLM. This is why we will introduce RAG over a Knowledge Graph to help our RAG over documents.

It is important to note that we are not so naïve, or inhumane, as to claim that we can replace recruiters, or select people on the basis of some obscure AI algorithm: the approach we describe here is similar to what a smart search engine, aided by semantics, could do, but with the ease of natural language interaction given by LLMs.

Let us first look at search using document RAG only. Then we will introduce the graph and show what improvements it can bring.

RAG Over CVs: Running Example

Suppose we have a set of CVs stored somewhere, in various text formats. Setting up RAG over them is very easy. There are platforms offering this out of the box (for a free one, see GPT4All), but we like to build our research cases using LangChain (for the application itself) and small, offline LLMs served by Ollama. Our RAG will try to match the job description with the CVs, and it will find a number of chunks from various CVs; we decide how many.
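For readers who want to reproduce the setup, a minimal sketch with LangChain and Ollama might look like the following. The module paths and the model names ("nomic-embed-text", "mistral") depend on your LangChain and Ollama installation, and the two toy CV strings are illustrative; this is not our exact code:

from langchain_community.llms import Ollama
from langchain_community.embeddings import OllamaEmbeddings
from langchain_community.vectorstores import FAISS
from langchain.chains import RetrievalQA

# Toy corpus: in practice, load and chunk the real CV files.
cv_texts = [
    "CV1: worked with semantic web languages (RDF, OWL, SPARQL).",
    "CV2: more than 5 years of experience with graph databases.",
]

store = FAISS.from_texts(cv_texts, OllamaEmbeddings(model="nomic-embed-text"))

qa = RetrievalQA.from_chain_type(
    llm=Ollama(model="mistral"),
    retriever=store.as_retriever(search_kwargs={"k": 6}),  # how many chunks to take
)
print(qa.invoke({"query": "Find candidates for a semantic technologies opening."})["result"])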

(For reasons of space, we will limit the example to a few words.)

Notice that the embedding model used to vectorise and search the CVs was able to recognise “graph database” as relevant to “semantic technologies”. It is a good, non-trivial result of vector search. Now that the prompt is complete, the LLM will be able to formulate a response along the lines of:

“CV1 and CV2 are both good candidates for the job opening. CV1 has worked with semantic web languages, while CV2 has more than 5 years of working experience with graph databases”.

Enter the Expert

With RAG over our CVs we get decent results. However, we know that our set of CVs contains other candidates with the right skills, and they are not found. Perhaps they expressed their experience in terms of the particular tools or languages used (GraphDB, rdflib, RDF, SHACL, etc.), which the embedding model might not be able to associate with the discipline of semantics.

We could train our embedding model or the LLM to know more about our space, but training a model is not a trivial task. What if we also need to cover another sector entirely? Or our situation changes, and we suddenly need our models to know everything about another business department? This is one of the reasons why, instead of expecting trained models to know everything, we can solve this using RAG again.

Let us add an “Expert in the field” to our RAG. Our expert will be a Knowledge Graph containing high-quality knowledge about vendors, their semantic technologies, the features they offer, and the skills associated with them. At Semantic Partners, we do have such an internal Knowledge Base to help us keep track of all our partners’ offers. We express it in RDF, so that we can use inference to derive all the logically implied links between our entities. We loaded it into Ontotext GraphDB, which is able to vectorise a graph (its labels, descriptions, etc.) with the click of a button.

(Figure: some of the information attached to the entity GraphDB in the graph.)
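Once the graph is loaded, consulting the expert can be as simple as a SPARQL query against the GraphDB repository endpoint. The repository name and the sp: vocabulary below are illustrative assumptions, not our actual schema:

from SPARQLWrapper import SPARQLWrapper, JSON

# GraphDB exposes each repository as a SPARQL endpoint.
sparql = SPARQLWrapper("http://localhost:7200/repositories/skills")
sparql.setReturnFormat(JSON)
sparql.setQuery("""
    PREFIX sp: <http://example.com/skills/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?label WHERE {
        sp:GraphDB sp:relatedSkill ?skill .
        ?skill rdfs:label ?label .
    }
""")
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["label"]["value"])  # e.g. "RDF", "SPARQL", "SHACL"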

We can then apply RAG to our graph (we describe how we do it in another article) to find clear pieces of knowledge that are relevant to the concepts mentioned in our job opening. We used some “magic tricks” to combine facts and textual chunks in the proper way, but the result is that the knowledge from the graph can now inform our retrieval of CVs: we intercept more of the right candidates, and score them with more precision.
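One simple form such a trick can take (our actual approach is more involved) is query expansion: enrich the job description with skill labels pulled from the graph before running the vector search. Here, top_k_chunks is the retrieval helper sketched earlier, and the skill list is assumed to come from a query like the one above:

def graph_informed_search(job_description: str,
                          related_skills: list[str],
                          cv_chunks: list[str]) -> list[str]:
    # Enrich the query with graph-derived skill terms, then vector-search.
    enriched = job_description + " Related skills: " + ", ".join(related_skills)
    return top_k_chunks(enriched, cv_chunks, k=10)

# e.g. graph_informed_search(job_ad, ["RDF", "SHACL", "rdflib"], chunks)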

In our example, the result now includes the less generically phrased work experiences too! And we do not have to limit our result set to a couple of CVs: we can take less risk, let the RAG return many facts from the graph and many chunks from various CVs, and let the LLM do its work of synthesis. Productionising this approach is non-trivial, but hopefully we have given an idea of how combining RAG over unstructured and structured knowledge can be a win. Intuitively, the graph here acts as an Expert in the field who helps the LLM/RAG application navigate the hidden subtleties behind the words it finds.

Statistical models go very far by brute force, but having an Expert ensures you are not missing the obvious links.

Conclusions

We started from a RAG search over private documents and added a Graph RAG to provide additional background domain information to better search and interpret the documents. This is the second type of Graph RAG, “Graph as a Subject Matter Expert” according to Ontotext’s classification.

In the case of selecting CVs, it is like giving our HR team a domain expert to work alongside them. We kept the presentation at its simplest, but we think this use case gives an idea of how RAG over a graph and documents together makes a good composite pattern.

Organisations that want to improve their content and document management can contact Semantic Partners to get an end-to-end solution or help for their IT teams. Enterprise architects, data scientists and engineers can read this blog post to understand how the different Graph RAG approaches can be implemented and what the principal differences are regarding explainability, hallucination, grounding context visibility, and other aspects.

Matteo Casu, Semantic Engineer at Semantic Partners

Originally published at https://www.ontotext.com on May 31, 2024.

