Introduction

Welcome to ReLLM! ReLLM provides an API for building secure, context-aware natural language interfaces for your users.

What is ReLLM?

ReLLM was created to fill a need we ran into while developing a separate tool. We needed a way to give our users long-term memory and context, but we also needed to account for permissions and who can see what data.

Standard Workflow

Storing/Embedding Data

When your application receives a piece of data that you would like to embed, do the following (a Python sketch follows the list):

  • Send a POST request to the ReLLM embedding API
  • Include metadata so that you can target the data at a later point for deleting, etc.
  • Include any permission strings in an array, e.g. ["permission 1", "permission 2", ...]
    • Users will only have access to this data if they hold every permission in the array
  • Upcoming: include any access_array permissions, e.g. ["access permission 1", "access permission 2", ...]
    • Users will only have access to this data if they hold at least one of the access_array permissions
  • Note: Data is always protected by both checks. Users must have all of the permissions in the permissions array, and at least one in the access_array.
    • You can send an empty array for either check to leave that protection unused for the data
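
As a concrete illustration, here is a minimal sketch of an embedding request in Python. The base URL, endpoint path, field names, and auth header are assumptions for illustration, not the documented contract:

    import requests

    # Hypothetical base URL and auth header, shown for illustration only.
    RELLM_API = "https://rellm.ai/api"
    HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}

    # Embed a piece of data with metadata and both permission checks.
    response = requests.post(
        f"{RELLM_API}/embeddings",  # assumed endpoint path
        headers=HEADERS,
        json={
            "data": "Q3 revenue grew 12% year over year.",
            # Metadata lets you target this data later for deletion, etc.
            "metadata": {"source": "quarterly-report", "doc_id": "rpt-2024-q3"},
            # Users must hold ALL of these permissions...
            "permissions": ["finance", "reports"],
            # ...and at least ONE of these. Send [] to skip either check.
            "access_array": ["analyst", "executive"],
        },
    )
    response.raise_for_status()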

Permission Setup

In order for users to have access to the embedded data, you will need to use the permissions API to tell ReLLM what each user can access. By default, users have no access to contextual data.

Adding a permission

When a user gains a permission within your application, send a POST request to ReLLM to store the permission (see the sketch after this list). The request requires:

  • User id, e.g. "1111-11111-1111-1111111111"
  • Permission, e.g. "Permission 1"
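
A minimal sketch of granting a permission, under the same assumptions about the base URL, endpoint path, and field names:

    import requests

    RELLM_API = "https://rellm.ai/api"  # hypothetical base URL
    HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # assumed auth header

    # Grant "Permission 1" to a user; endpoint and field names are assumptions.
    requests.post(
        f"{RELLM_API}/permissions",
        headers=HEADERS,
        json={
            "user_id": "1111-11111-1111-1111111111",
            "permission": "Permission 1",
        },
    ).raise_for_status()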

Deleting a permission

When a user loses a permission, make a DELETE request with the same data as the add request in order to remove the permission.
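
Sketched with the same assumed endpoint and fields, the removal mirrors the add:

    import requests

    RELLM_API = "https://rellm.ai/api"  # hypothetical base URL
    HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # assumed auth header

    # Revoke "Permission 1" using the same payload that was used to add it.
    requests.delete(
        f"{RELLM_API}/permissions",
        headers=HEADERS,
        json={
            "user_id": "1111-11111-1111-1111111111",
            "permission": "Permission 1",
        },
    ).raise_for_status()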

Chat Handling

When a user wants to ask the Large Language Model a question (a sketch of the full flow follows the list):

  • Make a POST request to the ReLLM chat API
    • Include the user id
    • Include any initial messages
  • Make PATCH requests to the ReLLM chat API when new user messages arrive
    • The LLM's responses will be returned
  • GET requests can be used to read the chat history
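
A sketch of the full chat lifecycle in Python. The endpoint paths, field names, and the shape of the responses (e.g. a chat "id" field) are assumptions for illustration:

    import requests

    RELLM_API = "https://rellm.ai/api"  # hypothetical base URL
    HEADERS = {"Authorization": "Bearer YOUR_API_KEY"}  # assumed auth header

    # 1. POST: create a chat with the user id and any initial messages.
    chat = requests.post(
        f"{RELLM_API}/chats",
        headers=HEADERS,
        json={
            "user_id": "1111-11111-1111-1111111111",
            "messages": [{"role": "user", "content": "What were the Q3 results?"}],
        },
    ).json()

    # 2. PATCH: append a new user message; the LLM response is returned.
    reply = requests.patch(
        f"{RELLM_API}/chats/{chat['id']}",  # "id" field is an assumption
        headers=HEADERS,
        json={"message": {"role": "user", "content": "How does that compare to Q2?"}},
    ).json()

    # 3. GET: read back the stored chat history.
    history = requests.get(f"{RELLM_API}/chats/{chat['id']}", headers=HEADERS).json()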

When using this system, the user's questions have permission-filtered context applied before the message is sent to the LLM. This allows the LLM to answer the question as well as possible.

ReLLM stores and manages these chats so you don't have to.

How Does ReLLM Secure Data?

All data that you store within ReLLM is encrypted and access controlled. Data is only decrypted when it needs to be sent to the LLM as context while processing a request.

Live Site

ReLLM is hosted at https://rellm.ai