Local interface with anonymization rules that filters personal information to securely interact with LLMs

The Hack

In this hack, we were concerned about how much data we feed into cloud-hosted LLM services (e.g., ChatGPT from OpenAI) and the distrust we have of chatting with these models: they could use personal data to train future models, or they could leak your personal data. Often, we are not even aware of the information we share. That’s why we created AI Wall, a layer that helps you identify and modify sensitive information before it is sent to the AI model.

How it works

AI Wall intercepts any prompt you send to ChatGPT and, locally on your computer, strips out all private data, replacing it with placeholders. For instance, it detects your name and social security number and substitutes something like "John Smith" and "1234". This modified prompt is then sent to ChatGPT, so your real data never leaves your machine. ChatGPT replies in terms of the placeholders; for example, it might say, "For John Smith and his social security number 1234, I would advise doing XYZ."
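The substitution step above can be sketched in a few lines of Python. This is an illustrative approximation only, not the project's actual code: the function names, the SSN regex, and the "John Smith"-style placeholder scheme are all assumptions for demonstration.

```python
import re

# Matches US social-security-number-shaped strings (illustrative pattern).
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def anonymize(prompt: str, real_name: str, alias: str = "John Smith") -> tuple[str, dict]:
    """Replace a known name and any SSN-like numbers with placeholders.

    Returns the scrubbed prompt plus a mapping so the reply can be
    restored locally afterwards.
    """
    mapping = {}
    scrubbed = prompt
    if real_name in scrubbed:
        scrubbed = scrubbed.replace(real_name, alias)
        mapping[alias] = real_name
    for i, ssn in enumerate(SSN_RE.findall(scrubbed), start=1):
        placeholder = str(1000 + i)  # a "1234"-style dummy number
        scrubbed = scrubbed.replace(ssn, placeholder)
        mapping[placeholder] = ssn
    return scrubbed, mapping

def restore(reply: str, mapping: dict) -> str:
    """Put the real values back into the model's reply, entirely locally."""
    for placeholder, original in mapping.items():
        reply = reply.replace(placeholder, original)
    return reply
```

Only the scrubbed prompt goes over the network; the mapping stays on your machine, so the model's reply can be de-anonymized after it comes back.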

We focus on safeguarding privacy when working with large language models (LLMs), which we believe should be hosted locally rather than on the internet. Running the anonymization step locally means sensitive data is handled before it ever leaves your machine.

To identify and replace sensitive information with placeholders, it's essential to use an LLM that runs locally. For this purpose, we use Ollama, an open-source GitHub project that lets users download and run LLMs on their own machines.
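Getting a local model running with Ollama looks roughly like this (assuming Ollama is installed; the model name below is an example, not necessarily the one used in the hack):

```shell
# Download a model for fully local use, then query it.
ollama pull llama3
ollama run llama3 "Does this text contain personal information? ..."
```

Ollama also exposes a local HTTP API, which is what makes it practical to call from another program rather than only from the terminal.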

Additionally, we employ Open WebUI (formerly Ollama WebUI), which provides a web-based user interface for these locally hosted models. This interface simplifies interaction with the LLMs, making them more user-friendly.

Building on top of the Open WebUI repository, we developed a custom layer that intercepts requests to the LLM. This layer lets us modify and manage the data before it reaches the model, preserving privacy and security throughout the process.

Check it now!