Introducing innovation in any sector always brings new questions and challenges. In today’s rapidly changing world of technology, it is essential not only to adopt new tools that can make our work easier, but also to carefully consider how these tools affect the protection and privacy of our data.

As part of our ongoing efforts to innovate and improve the Evolio system used by law firms, we have integrated OpenAI’s GPT technology. This integration allows us to provide advanced services, but we also recognise that it can raise data protection issues.


Data protection

OpenAI, the maker of GPT technology, places great emphasis on protecting user data. As stated in its official documentation, OpenAI does not train its models on inputs and outputs submitted through its API. This ensures that data passed to GPT via the API is not used for any other purpose, such as model training.

User control over the use of artificial intelligence

In Evolio, a GPT call is always initiated by a direct user instruction. The call is triggered by the click of a button and is never sent automatically in the background. This ensures that data is only sent to GPT when the user explicitly requests it. You can see this in more detail in the following video.
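As a simplified illustration of this explicit-trigger design (all names here, such as GPTClient and on_summarise_click, are hypothetical and not Evolio's actual code), a request is only constructed and sent inside the button's click handler, so nothing leaves the system until the user acts:

```python
class GPTClient:
    """Stub standing in for a real GPT API client; counts outgoing requests."""
    def __init__(self):
        self.calls_made = 0

    def complete(self, prompt):
        # In a real integration this would send the prompt to the API.
        self.calls_made += 1
        return "summary of: " + prompt[:30]

client = GPTClient()

def on_summarise_click(document_text):
    """Runs only when the user clicks the button; no data is sent otherwise."""
    return client.complete("Summarise this document:\n" + document_text)
```

Because the request is built inside the handler, merely opening or editing a document causes no API traffic; the counter increments only after an explicit click.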

How artificial intelligence is trained

An LLM, or large language model, is a type of artificial intelligence trained to understand and generate natural language. Training an LLM involves using large amounts of textual data to teach the model to recognize patterns in language, sentence structure, and word meaning. This process is typically performed on existing data such as books, articles, and websites, and requires complex algorithms and substantial computing power. The result is a model that can generate text that is understandable and relevant to a specific query or topic. It is important to note that in the case of OpenAI and its GPT models, the training data does not include data sent through their API, which provides an additional level of privacy protection.
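As a toy illustration of what "recognizing patterns in language" means (a real LLM uses neural networks and vastly more data; this sketch only shows the underlying idea of learning word-to-word statistics from text):

```python
from collections import Counter, defaultdict

# A tiny "training corpus" standing in for the books and articles
# a real model would be trained on.
corpus = "the court ruled that the contract was valid and the contract was binding"

# "Training": count which word follows which (bigram statistics).
following = defaultdict(Counter)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Generate the statistically most likely next word seen in training."""
    return following[word].most_common(1)[0][0]
```

Here `predict_next("contract")` returns "was", because that pairing occurred most often in the training text; a real LLM does something analogous, but over billions of documents and with far richer representations of context.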