GenAI Context Explanation

What is it?

GenAI is a platform from AB InBev, a global alcoholic beverage company with a presence in South America, Central America, and Europe. The platform, currently under construction, aims to provide a Retrieval-Augmented Generation (RAG) system used for internal purposes such as training, daily sales insights, etc.

How does it work technically?

Initially, there are two agents:

  1. RAG Connector
  2. Orchestrator

RAG Connector

The RAG Connector has several responsibilities. It ingests PDF files, reads them, and generates the chunks used to create question/answer pairs for training a large language model (LLM). It also generates metrics for files referred to as "groundtruth": these benchmarks measure heuristics for each question/answer produced by the Orchestrator and, in the future, through event sourcing, should help measure possible AI hallucination.

Example PDF flow

  1. A PDF document is ingested (e.g., documentation about Python); its context is defined and it is sent to the Connector.

  2. The document is divided into chunks, and metadata is stored in a Postgres database.

  3. The API (FastAPI) registers a background task that handles chunk splitting and sends the chunks to Table Storage, interacting with Cognitive Search (Azure environment) to generate 5 questions and 5 answers (see the sketch after this list).

  4. Trigger the Orchestrator for the learning process.
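
A minimal sketch of how such an ingestion endpoint could look with FastAPI's BackgroundTasks. The helper and route names are assumptions, and the Postgres/Table Storage/Cognitive Search calls are stubbed out:

```python
from fastapi import BackgroundTasks, FastAPI, UploadFile

app = FastAPI()

def split_into_chunks(content: bytes, size: int = 2000) -> list[bytes]:
    # Placeholder chunker; the real Connector splits by layout/tokens.
    return [content[i:i + size] for i in range(0, len(content), size)]

def process_pdf(filename: str, content: bytes) -> None:
    chunks = split_into_chunks(content)
    # Real service: store chunk metadata in Postgres, push chunks to Table
    # Storage, call Cognitive Search to generate 5 questions/5 answers per
    # chunk, then trigger the Orchestrator. Stubbed here as a print.
    print(f"{filename}: {len(chunks)} chunks ready for QA generation")

@app.post("/ingest/pdf")
async def ingest_pdf(file: UploadFile, background_tasks: BackgroundTasks) -> dict:
    content = await file.read()
    # Respond immediately; chunking and QA generation continue afterwards.
    background_tasks.add_task(process_pdf, file.filename, content)
    return {"status": "accepted", "filename": file.filename}
```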

Example CSV/TXT flow - benchmark generation for groundtruth documents

  1. In the next step, TXT/CSV documents containing the same context are ingested. Questions and answers are generated by humans (initially) and synthetically. Ingestion stores the raw document in blob storage exactly as sent by the user and then hands off to the background task.

  2. The API receives the document, splits questions and answers, and stores them as JSON in Postgres. Each document has a unique ID and a "data" column holding [questions, answers, answers from the Orchestrator, the distance between the answers sent and those received from the Orchestrator, the embedding generated for the sent answer, and the embedding generated for the Orchestrator's answer].

  3. After completion, metrics are generated for the document using BERTScore, which is currently disabled due to excessive memory consumption (see the sketch below).
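
To make the record layout concrete, here is a hedged illustration of one groundtruth entry and its metrics step. The field names are assumptions based on the description above; the BERTScore call uses the bert-score package already in the stack, and the distance uses SciPy:

```python
from bert_score import score
from scipy.spatial.distance import cosine

# One entry of the "data" column (field names are illustrative).
record = {
    "question": "What is a Python decorator?",
    "answer": "A callable that wraps another function.",            # human/synthetic
    "orchestrator_answer": "A function that modifies another one.",  # from the Orchestrator
    "answer_embedding": [0.12, 0.83, 0.41],
    "orchestrator_embedding": [0.10, 0.80, 0.45],
}

# Distance between the sent answer and the Orchestrator's answer.
record["distance"] = cosine(record["answer_embedding"],
                            record["orchestrator_embedding"])

# BERTScore benchmark (currently disabled in the service due to the
# memory footprint of the underlying model).
precision, recall, f1 = score(
    cands=[record["orchestrator_answer"]],
    refs=[record["answer"]],
    lang="en",
)
print(f"distance={record['distance']:.4f}, F1={f1.mean().item():.4f}")
```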

Orchestrator

I cannot provide much context on this, as it is a C# service owned by another team. It is essentially a solution built on Semantic Kernel that enables the creation of multiple connectors, each with its own history. The Orchestrator is responsible for generating responses, context routing, fine-tuning, etc.

This part is causing internal friction due to ongoing discussions about how context routing and related features should be implemented.

The problem I am facing

Essentially, when one or multiple documents are submitted, whether PDF or CSV/TXT, the API returns the status set in the controller and starts running the background task. While the background task is running, the API cannot serve any other requests until it completes.
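
One plausible cause, sketched under assumptions: if the background task does CPU-bound work directly on the event loop (an async def task, or sync work holding the GIL), every other request waits until it finishes. Offloading the heavy part to a process pool keeps the loop responsive. All names below are illustrative, not the actual Connector code:

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

from fastapi import BackgroundTasks, FastAPI

app = FastAPI()
pool = ProcessPoolExecutor(max_workers=2)

def heavy_processing(payload: bytes) -> int:
    # CPU-bound stand-in for chunking, QA generation, and scoring.
    return sum(payload)

async def process_in_background(payload: bytes) -> None:
    loop = asyncio.get_running_loop()
    # The event loop stays free to serve requests while the pool works.
    result = await loop.run_in_executor(pool, heavy_processing, payload)
    print(f"background task done: {result}")

@app.post("/ingest")
async def ingest(background_tasks: BackgroundTasks) -> dict:
    background_tasks.add_task(process_in_background, b"\x01" * 10_000)
    return {"status": "accepted"}
```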

Possible emergency solutions attempted

Added more workers to Uvicorn.
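
For reference, this amounts to something like the following (the module path is an assumption):

```bash
uvicorn app.main:app --host 0.0.0.0 --port 8000 --workers 4
```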

Potential problems this may cause

We do not know how far we will need to scale, and a fixed worker count may become a problem in the future.

Use Gunicorn in the Pod

/bin/bash,-c,gunicorn -k geventwebsocket.gunicorn.workers.GeventWebSocketWorker -b 0.0.0.0:5000 --workers=5 chat.app:app

This was one of the solutions implemented in another part of the project that faced similar issues, but it is not clear how bringing in Gunicorn would definitively solve this problem; it might be more of a temporary fix if it works.
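
Note that the GeventWebSocketWorker in the command above is WSGI-oriented; for a FastAPI (ASGI) app, Gunicorn is usually paired with uvicorn's worker class instead. A sketch, with the module path assumed:

```bash
gunicorn -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000 --workers=5 app.main:app
```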

Libs and Stack

  • Python 3.9
  • FastAPI
  • scikit-learn for question/answer metrics
  • SciPy for metrics
  • BERTScore to define benchmark scores for documents

Where we are heading

In the upcoming sprints we will implement OCR, which will be responsible for reading documents that contain images so their content can be added to the learning set, with synthetic QA groundtruths inserted automatically to further stress-test the LLM.
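
As a rough illustration of what that OCR step could look like, here is a sketch using pdf2image and pytesseract; this toolchain is an assumption, not a confirmed choice:

```python
import pytesseract
from pdf2image import convert_from_path

def ocr_pdf(path: str) -> str:
    """Render each PDF page to an image and extract its text via Tesseract."""
    pages = convert_from_path(path)  # requires poppler installed
    return "\n".join(pytesseract.image_to_string(page) for page in pages)

# The extracted text would then feed the same chunking/QA pipeline.
text = ocr_pdf("scanned_doc.pdf")
print(text[:200])
```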
