

LLM-based mapping of natural language statements into knowledge graph updates




In a recent project, I had the exciting opportunity to explore the intersection of AI and knowledge management by integrating large language models (LLMs) with a knowledge graph. The goal was to enable natural language queries to perform real-time updates on a knowledge graph through custom CRUD (Create, Read, Update, Delete) operations. And I wasn't alone: this was a collaborative effort with my girlfriend, Rossella Tritto, and we built on the solid foundation of the application framework provided by SWOT, a research team from Politecnico di Bari. Here's how we tackled the project together.


The Challenge: From Natural Language to Knowledge Graph Updates


At the heart of the project was the idea of making an LLM interact with a knowledge graph built using the Web Ontology Language (OWL). The knowledge graph contained entities (devices, sensors, etc.), and we wanted the LLM to understand user queries like "Add a new device to the living room" or "What's the status of lamp 1?" and automatically convert them into the corresponding knowledge graph operations.
This is where the application framework provided by the SWOT research team came into play. They had built a robust system architecture supporting the knowledge graph operations, which let us focus on extending it and integrating our CRUD functionalities. Rossella and I worked together to build these operations, making sure they could interface smoothly with the LLM.
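
To make the target concrete, here is a minimal sketch of the contract we wanted the LLM to honour: every request should resolve to one structured CRUD call that the framework can execute against the graph. The function names, argument keys, and validation logic below are illustrative assumptions, not the SWOT framework's actual API.

```python
# Minimal sketch of the intended mapping: each natural-language request resolves
# to one structured CRUD call. Names and schema are illustrative assumptions.
import json

# Example (query, expected structured call) pairs.
EXAMPLES = [
    ("Add a new device to the living room",
     {"function": "create_entity", "args": {"type": "Device", "location": "LivingRoom"}}),
    ("What's the status of lamp 1?",
     {"function": "read_entity", "args": {"id": "lamp1", "property": "status"}}),
]

ALLOWED_FUNCTIONS = {"create_entity", "read_entity", "update_entity", "delete_entity"}

def parse_llm_call(raw_output: str) -> dict:
    """Validate the JSON emitted by the LLM before touching the knowledge graph."""
    call = json.loads(raw_output)
    if call.get("function") not in ALLOWED_FUNCTIONS:
        raise ValueError(f"Unknown function: {call.get('function')}")
    if not isinstance(call.get("args"), dict):
        raise ValueError("Arguments must be a JSON object")
    return call

if __name__ == "__main__":
    for query, expected in EXAMPLES:
        print(query, "->", parse_llm_call(json.dumps(expected)))
```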


Building CRUD Functions


The first step was to implement the basic CRUD functions within the existing framework; a minimal stand-in sketch is shown below.
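
The snippet below is a small, self-contained stand-in for that CRUD layer, using rdflib in place of the framework's own knowledge-graph API. The namespace, entity names, and property names are illustrative assumptions rather than the project's real schema.

```python
# Stand-in CRUD layer over an RDF/OWL graph using rdflib. The ex: namespace and
# property names are illustrative, not the SWOT framework's actual schema.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace("http://example.org/smarthome#")
g = Graph()
g.bind("ex", EX)

def create_entity(entity_id: str, entity_type: str, location: str) -> URIRef:
    """Create: add a new individual with its type and location."""
    node = EX[entity_id]
    g.add((node, RDF.type, EX[entity_type]))
    g.add((node, EX.locatedIn, EX[location]))
    return node

def read_entity(entity_id: str, prop: str):
    """Read: return the value of a single property of an entity."""
    return g.value(subject=EX[entity_id], predicate=EX[prop])

def update_entity(entity_id: str, prop: str, value: str) -> None:
    """Update: overwrite the property value (set() removes old triples first)."""
    g.set((EX[entity_id], EX[prop], Literal(value)))

def delete_entity(entity_id: str) -> None:
    """Delete: remove every triple whose subject is the entity."""
    g.remove((EX[entity_id], None, None))

if __name__ == "__main__":
    create_entity("lamp1", "Device", "LivingRoom")
    update_entity("lamp1", "status", "on")
    print(read_entity("lamp1", "status"))  # -> "on"
    delete_entity("lamp1")
```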


Testing and Evaluating the System


Once the CRUD functions were integrated with the LLM, it was time to test how well the system performed. Rossella and I created a set of 100 test queries, divided into several categories. Using the automated testing framework we had built, we ran these queries and evaluated the system against three criteria (a sketch of the evaluation loop follows below). The system performed exceptionally well in most categories, achieving over 90% accuracy for function calls and parameters. The advanced queries, especially those conditioned on sensor data, were more challenging for the LLM to handle, which was expected given their complexity.
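
For illustration, here is a rough sketch of that evaluation loop: for each query we compare the function the LLM chose and the parameters it filled in against a hand-written expectation, then report per-criterion accuracy. The test cases and the call_llm() hook are placeholders, not the real 100-query suite.

```python
# Sketch of the automated evaluation: compare the LLM's chosen function and its
# parameters against hand-written expectations. Test data here is made up.

def call_llm(query: str) -> dict:
    """Placeholder for the actual model call; should return a structured CRUD call."""
    raise NotImplementedError

TEST_CASES = [
    {"query": "Add a new device to the living room",
     "expected": {"function": "create_entity",
                  "args": {"type": "Device", "location": "LivingRoom"}}},
    # ... the real suite contained 100 queries across several categories
]

def evaluate(test_cases) -> dict:
    correct_function = correct_args = 0
    for case in test_cases:
        predicted = call_llm(case["query"])
        if predicted.get("function") == case["expected"]["function"]:
            correct_function += 1
            if predicted.get("args") == case["expected"]["args"]:
                correct_args += 1
    n = len(test_cases)
    return {"function_accuracy": correct_function / n,
            "parameter_accuracy": correct_args / n}
```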


Lessons Learned and What's Next


This project highlighted the incredible potential of combining LLMs with structured systems like knowledge graphs. The ability to manipulate complex data using natural language opens up countless opportunities, especially in domains like smart homes and IoT.

However, we also recognized areas for improvement. Handling more advanced queries, for instance, requires either more powerful LLMs or targeted optimizations, which we plan to explore. We also discussed experimenting with few-shot prompting to further improve accuracy in more complex scenarios (a sketch of the idea follows below).

With the solid foundation provided by the SWOT framework, we're optimistic about the future. Offline LLMs, like the one we used (dolphin-mistral), offer valuable privacy and cost advantages in smart home applications. Collaborating on this project with Rossella and having the support of the SWOT research team has been an amazing experience. Together, we've created a system that truly highlights the future potential of AI in everyday life, and we can't wait to keep pushing these boundaries in future projects!
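
As a rough illustration of the few-shot idea, the sketch below prepends a couple of worked examples to the prompt before asking a local dolphin-mistral model (here via the ollama Python client) to translate a new request. The example pairs and the JSON format are assumptions made for the sketch, not the prompts we actually evaluated.

```python
# Few-shot prompting sketch with a local model via the ollama client.
# The worked examples and output format are illustrative assumptions.
import ollama

FEW_SHOT = """You translate smart-home requests into one JSON CRUD call.

Request: Add a new device to the living room
Call: {"function": "create_entity", "args": {"type": "Device", "location": "LivingRoom"}}

Request: What's the status of lamp 1?
Call: {"function": "read_entity", "args": {"id": "lamp1", "property": "status"}}
"""

def map_query(query: str) -> str:
    """Ask the local model to emit a structured call for a new request."""
    prompt = FEW_SHOT + f"\nRequest: {query}\nCall:"
    response = ollama.chat(
        model="dolphin-mistral",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["message"]["content"]

if __name__ == "__main__":
    print(map_query("Turn off the kitchen lamp"))
```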

You can read about the project in more detail in this paper:


Download the paper here.