
@python273
Last active April 19, 2024 11:05
Flask Streaming Langchain Example
import os
os.environ["OPENAI_API_KEY"] = ""

from flask import Flask, Response, request
import threading
import queue

from langchain.chat_models import ChatOpenAI
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.schema import AIMessage, HumanMessage, SystemMessage

app = Flask(__name__)


@app.route('/')
def index():
    # just for the example, html is included directly; move to a .html file
    return Response('''
<!DOCTYPE html>
<html>
<head><title>Flask Streaming Langchain Example</title></head>
<body>
    <form id="form">
        <input name="prompt" value="write a short koan story about seeing beyond"/>
        <input type="submit"/>
    </form>
    <div id="output"></div>
    <script>
        const formEl = document.getElementById('form');
        const outputEl = document.getElementById('output');

        let aborter = new AbortController();
        async function run() {
            aborter.abort();  // cancel previous request
            outputEl.innerText = '';
            aborter = new AbortController();
            const prompt = new FormData(formEl).get('prompt');
            try {
                const response = await fetch(
                    '/chain', {
                        signal: aborter.signal,
                        method: 'POST',
                        headers: {'Content-Type': 'application/json'},
                        body: JSON.stringify({prompt}),
                    }
                );
                const reader = response.body.getReader();
                const decoder = new TextDecoder();
                while (true) {
                    const { done, value } = await reader.read();
                    if (done) { break; }
                    const decoded = decoder.decode(value, {stream: true});
                    outputEl.innerText += decoded;
                }
            } catch (err) {
                console.error(err);
            }
        }
        run();  // run on initial prompt
        formEl.addEventListener('submit', function(event) {
            event.preventDefault();
            run();
        });
    </script>
</body>
</html>
''', mimetype='text/html')


class ThreadedGenerator:
    def __init__(self):
        self.queue = queue.Queue()

    def __iter__(self):
        return self

    def __next__(self):
        item = self.queue.get()
        if item is StopIteration:
            raise item
        return item

    def send(self, data):
        self.queue.put(data)

    def close(self):
        self.queue.put(StopIteration)


class ChainStreamHandler(StreamingStdOutCallbackHandler):
    def __init__(self, gen):
        super().__init__()
        self.gen = gen

    def on_llm_new_token(self, token: str, **kwargs):
        self.gen.send(token)


def llm_thread(g, prompt):
    try:
        chat = ChatOpenAI(
            verbose=True,
            streaming=True,
            callbacks=[ChainStreamHandler(g)],
            temperature=0.7,
        )
        chat([HumanMessage(content=prompt)])
    finally:
        g.close()


def chain(prompt):
    g = ThreadedGenerator()
    threading.Thread(target=llm_thread, args=(g, prompt)).start()
    return g


@app.route('/chain', methods=['POST'])
def _chain():
    return Response(chain(request.json['prompt']), mimetype='text/plain')


if __name__ == '__main__':
    app.run(threaded=True, debug=True)
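The ThreadedGenerator pattern above (a queue bridged into a blocking iterator, fed from a worker thread) works independently of LangChain and Flask. A minimal standalone sketch of the same idea, with a fake producer standing in for the LLM thread:

```python
import queue
import threading


class ThreadedGenerator:
    """Same pattern as the gist: a queue exposed as a blocking iterator."""
    def __init__(self):
        self.queue = queue.Queue()

    def __iter__(self):
        return self

    def __next__(self):
        item = self.queue.get()  # blocks until the producer sends something
        if item is StopIteration:
            raise item
        return item

    def send(self, data):
        self.queue.put(data)

    def close(self):
        self.queue.put(StopIteration)


def produce(g):
    # Stand-in for llm_thread: emit a few "tokens", then end the stream.
    for token in ["Hello", " ", "world"]:
        g.send(token)
    g.close()


def run_demo():
    g = ThreadedGenerator()
    threading.Thread(target=produce, args=(g,)).start()
    # Flask iterates the generator inside Response(); here we just join it.
    return "".join(g)
```

Because `queue.Queue.get()` blocks, the consumer naturally waits for each token, which is what lets Flask flush chunks to the client as they arrive.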
@cyberkenn

Great example of streaming with Flask, thanks! It didn't work for me as expected at first, but using a `for await` loop and the ReadableStream API to read chunks from the response body did the trick. If you think it's appropriate, feel free to update the code with these changes, or let me know if you'd prefer that I contribute via a pull request or some other way. Note: I don't actually speak Russian; I'm an English speaker getting back into programming, but I thought you might enjoy my GPT-4 translation :) (This comment was originally posted in Russian.)

<script>
var outputEl = document.getElementById('output');
fetch('/chain', {method: 'POST'}).then(async (response) => {
    const reader = response.body.getReader();
    const decoder = new TextDecoder();
    while (true) {
        const { done, value } = await reader.read();
        if (done) { break; }
        const decoded = decoder.decode(value, {stream: true});
        outputEl.innerText += decoded;
    }
}).catch((err) => console.error(err));
</script>

@python273
Author

python273 commented Apr 23, 2023

@cyberkenn Lol, the translation is not that natural sounding, with some phrases translated directly, making it sound like English in Russian 😃

Yeah, it works in Firefox with for await, but not in Chrome-like browsers. I'll update the example.

Also, I have some updated code in my Eimi ChatGPT UI that might be useful as a reference (not using LangChain there though, just fastapi + httpx for making API requests):
https://github.com/python273/eimi/blob/5fc4a1744191a5954955d77786e94bcaf8d6d5de/app/src/Session.svelte#L235-L240

@cyberkenn

Heh, thank you for indulging my attempt at Russian; I want to communicate well with people who don't speak English as a first language.

You are absolutely correct, I'm using Chrome! Thank you for updating it for others. I'll look more at the eimi code; it looks good for when I bring my prototype code up to production standards (fastapi, svelte, etc.).

@Whatzer

Whatzer commented May 5, 2023

.

@hh23485

hh23485 commented Jun 6, 2023

That's amazing, learned a lot from the code, thanks!

@VionaWang

Thanks! This is really helpful :) I'm wondering if the output of Response(chain("# A koan story about AGI\n\n"), mimetype='text/plain') can be put inside a webpage instead of being written out as bare plain text?

@python273
Author

@VionaWang not quite sure what you mean. You can send the prompt in the request, then you use request.json

JS code to send json:
https://github.com/python273/eimi/blob/5fc4a1744191a5954955d77786e94bcaf8d6d5de/app/src/Session.svelte#L207-L218

@VionaWang

> @VionaWang not quite sure what you mean. You can send the prompt in the request, then you use request.json
>
> JS code to send json: https://github.com/python273/eimi/blob/5fc4a1744191a5954955d77786e94bcaf8d6d5de/app/src/Session.svelte#L207-L218

Thanks for your reply!

What I mean is: I have an HTML template that specifies the place for the answer to go. But using Response(chain("# A koan story about AGI\n\n"), mimetype='text/plain') as in your code above gives me the text written onto a bare page instead. I'm wondering how to make the streaming response show up in a dedicated spot in a webpage rather than as raw text?

I'm new to frontend and all these web development stuff so any help would be appreciated. Thanks!

@python273
Author

@VionaWang You should look at the index function. You can put that HTML into a template; the output will be streamed to the element with id output (<div id="output"></div>), and you can place that element anywhere on the page.

@VionaWang

> @VionaWang You should look at index function. You can put this html to a template, the output will be streamed to element with id output (<div id="output"></div>), you can place this element anywhere on the page

Ahhh sounds good, thank you so much for your help and your prompt response!

@xerxes01

This is very helpful, thanks @python273! I also wanted to implement similar streaming using my local Hugging Face models in a LangChain pipeline; however, the LLM chain can't be instantiated every time in a thread (it takes ~10 s to load all the shards). Any idea on how to go about it?

@noreff

noreff commented Jul 27, 2023

Thank you for posting this!

One minor thing: I had to change to mimetype='text/event-stream'. Can I ask why you chose text/plain?

@python273
Author

@noreff The mimetype shouldn't really affect anything. Also, text/event-stream is not the correct mimetype here, as the data is plain text, not in the event-stream format. My only guess as to why it might have changed something is that there's a server in front of Flask that disables buffering when it detects the text/event-stream mimetype.
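If a reverse proxy in front of Flask is buffering the response, the usual fix is a response header rather than the mimetype; X-Accel-Buffering is an nginx convention for disabling per-response buffering. A small sketch (this header set is an assumption about a typical proxy setup, not part of the gist):

```python
def streaming_headers():
    """Headers commonly sent so proxies flush chunks through immediately."""
    return {
        "X-Accel-Buffering": "no",    # nginx: don't buffer this response
        "Cache-Control": "no-cache",  # don't cache a partial stream
    }
```

These could then be passed along as `Response(chain(prompt), mimetype='text/plain', headers=streaming_headers())`.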

@python273
Author

@xerxes01 I would probably make a separate process with a TCP server that keeps the model in memory and serves requests, then connect to it from Flask.

I did something similar for stable diffusion back when it was released. Script for reference, though it's not that good:
https://gist.github.com/python273/ae9d085ce9f2968b50c6ab90f2017076
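The separate-process idea can be sketched with the stdlib: a TCP server loads the model once at startup and answers prompts over a one-line protocol. Everything below (the line protocol, `load_model`, the echo "model") is a hypothetical stand-in, not code from the gist or the linked script:

```python
import socket
import socketserver
import threading


def load_model():
    # Hypothetical placeholder for the slow load (~10 s for real shards);
    # here it just returns a function that "generates" text.
    return lambda prompt: f"echo: {prompt}"


MODEL = None  # loaded once per process, reused across all requests


class PromptHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # One-line protocol: read a prompt line, write the generation back.
        prompt = self.rfile.readline().decode().strip()
        self.wfile.write((MODEL(prompt) + "\n").encode())


def serve(host="127.0.0.1", port=0):
    """Load the model once, then serve prompts over TCP in the background."""
    global MODEL
    MODEL = load_model()  # pay the load cost once, at startup
    server = socketserver.ThreadingTCPServer((host, port), PromptHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server  # Flask would connect to server.server_address per request


def ask(address, prompt):
    # Client side, e.g. called from a Flask request handler.
    with socket.create_connection(address) as sock:
        sock.sendall((prompt + "\n").encode())
        return sock.makefile().readline().strip()
```

A real version would also want to stream tokens back over the socket chunk by chunk rather than one line per response.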

@DmitryCape

CallbackManager was renamed to BaseCallbackManager:
from langchain.callbacks.base import BaseCallbackManager

@promversioning

promversioning commented Sep 13, 2023

Since the OpenAI library is deprecated, I have tried to replace it with ChatOpenAI without success, because it gives me these errors. Do you know how to help me?

from flask import Flask, Response
import threading
import queue

from langchain.chat_models import ChatOpenAI
from langchain.callbacks.base import BaseCallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

app = Flask(__name__)

@app.route('/')
def index():
    return Response('''<!DOCTYPE html>
<html>
<head><title>Flask Streaming Langchain Example</title></head>
<body>
    <div id="output"></div>
    <script>
const outputEl = document.getElementById('output');

(async function() {
    try {
        const controller = new AbortController();
        const signal = controller.signal;
        const timeout = 120000; // Set the timeout to 120 seconds

        setTimeout(() => controller.abort(), timeout);

        const response = await fetch('/chain', {method: 'POST', signal});
        const reader = response.body.getReader();
        const decoder = new TextDecoder();
        let buffer = '';

        while (true) {
            const { done, value } = await reader.read();
            if (done) { break; }

            const text = decoder.decode(value, {stream: true});
            outputEl.innerHTML += text;
        }
    } catch (err) {
        console.error(err);
    }
})();

    </script>
</body>
</html>''', mimetype='text/html')


class ThreadedGenerator:
    def __init__(self):
        self.queue = queue.Queue()

    def __iter__(self):
        return self

    def __next__(self):
        item = self.queue.get()
        if item is StopIteration: raise item
        return item

    def send(self, data):
        self.queue.put(data)

    def close(self):
        self.queue.put(StopIteration)

class ChainStreamHandler(StreamingStdOutCallbackHandler):
    def __init__(self, gen):
        super().__init__()
        self.gen = gen

    def on_llm_new_token(self, token: str, **kwargs):
        self.gen.send(token)

def llm_thread(g, prompt):
    try:
        llm = ChatOpenAI(
            model_name="gpt-4",
            verbose=True,
            streaming=True,

            callback_manager=BaseCallbackManager([ChainStreamHandler(g)]),
            temperature=0.7,
        )
        llm(prompt)
    finally:
        g.close()


def chain(prompt):
    g = ThreadedGenerator()
    threading.Thread(target=llm_thread, args=(g, prompt)).start()
    return g


@app.route('/chain', methods=['POST'])
def _chain():
    return Response(chain("Create a poem about the meaning of life \n\n"), mimetype='text/plain')

if __name__ == '__main__':
    app.run(threaded=True, debug=True)

the error can be found here

If someone can suggest a solution, I think it would be helpful for many developers who currently don't know how to do this. Thanks!

@python273
Author

Updated the example

@promversioning

thank you very much, it works great

@SaiFUllaH-KhaN1

That is really great code, thank you so much. If possible, could you update it so that a new user input doesn't replace the previous responses, and the output accumulates like a chat with the earlier responses?

Anyhow, great work and really appreciate you shared all this.
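The gist is stateless, so each submit replaces the output; keeping a chat requires the server (or client) to remember earlier turns. A minimal server-side sketch, assuming an in-memory store keyed by a session id (all names here are hypothetical; a real version would convert the pairs to HumanMessage/AIMessage objects before calling the model, and would need locking or a database for production):

```python
from collections import defaultdict

# session_id -> list of (role, content) turns; in-memory, single process only
HISTORY = defaultdict(list)


def build_messages(session_id, user_input):
    """Record the new user turn and return the whole conversation so far."""
    HISTORY[session_id].append(("human", user_input))
    return list(HISTORY[session_id])


def record_answer(session_id, answer):
    """Store the model's reply so the next turn includes it as context."""
    HISTORY[session_id].append(("ai", answer))
```

On the frontend, appending to `outputEl` instead of clearing it (`outputEl.innerText = ''` in `run()`) gives the accumulating display.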

@Houss3m

Houss3m commented Mar 8, 2024

What if we are using tools? How can we get streaming output for each tool being invoked?
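One approach (a sketch, not from the gist): LangChain callback handlers also receive on_tool_start/on_tool_end, so the same streaming handler can push markers into the queue the tokens flow through. Shown as a plain class for brevity; in practice it would subclass BaseCallbackHandler like ChainStreamHandler does, and the marker format here is made up:

```python
class ToolStreamHandler:
    """Forwards both LLM tokens and tool events into one stream (sketch)."""

    def __init__(self, gen):
        self.gen = gen  # e.g. the gist's ThreadedGenerator

    def on_llm_new_token(self, token, **kwargs):
        self.gen.send(token)

    def on_tool_start(self, serialized, input_str, **kwargs):
        # serialized carries tool metadata, including its name
        name = serialized.get("name", "tool")
        self.gen.send(f"\n[{name}: {input_str}]\n")

    def on_tool_end(self, output, **kwargs):
        self.gen.send(f"\n[tool done: {output}]\n")
```

The client can then treat lines wrapped in brackets as tool activity and render them differently from the streamed answer text.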

@YanSte

YanSte commented Apr 19, 2024

Hi all !

I wanted to share with you a custom stream response that I implemented in my FastAPI application recently.

I created this solution to manage streaming data.

You could use LangChain's stream/event APIs, but I'm doing special things with the handlers, which is why I need this.

Here are examples:

FastAPI

@router.get("/myExample")
async def mySpecialAPI(
    session_id: UUID,
    input="Hello",
) -> StreamResponse:
    # Note: don't await here; we need the coroutine object itself
    invoke = chain.ainvoke(..)
    callback = MyCallback(..)
    return StreamResponse(invoke, callback)

Custom Stream Response

from __future__ import annotations
import asyncio
import typing
from typing import Any, AsyncIterable, Coroutine
from fastapi.responses import StreamingResponse as FastApiStreamingResponse
from starlette.background import BackgroundTask

class StreamResponse(FastApiStreamingResponse):
    def __init__(
        self,
        invoke: Coroutine,
        callback: MyCustomAsyncIteratorCallbackHandler,
        status_code: int = 200,
        headers: typing.Mapping[str, str] | None = None,
        media_type: str | None = "text/event-stream",
        background: BackgroundTask | None = None,
    ) -> None:
        super().__init__(
            content=StreamResponse.send_message(callback, invoke),
            status_code=status_code,
            headers=headers,
            media_type=media_type,
            background=background,
        )

    @staticmethod
    async def send_message(
        callback: MyCustomAsyncIteratorCallbackHandler, invoke: Coroutine
    ) -> AsyncIterable[str]:
        asyncio.create_task(invoke)

        async for token in callback.aiter():
            yield token

My Custom Callbackhandler

from __future__ import annotations
import asyncio
from typing import Any, AsyncIterator, List, Optional

from langchain.callbacks.base import AsyncCallbackHandler
from langchain.schema import LLMResult

class MyCustomAsyncIteratorCallbackHandler(AsyncCallbackHandler):
    """Callback handler that returns an async iterator."""
    # Note: could hold a BaseModel instead of str
    queue: asyncio.Queue[Optional[str]]

    # Pass your params as you want
    def __init__(self) -> None:
        self.queue = asyncio.Queue()

    async def on_llm_new_token(
        self,
        token: str,
        tags: List[str] | None = None,
        **kwargs: Any,
    ) -> None:
         self.queue.put_nowait(token)

    async def on_llm_end(
        self,
        response: LLMResult,
        tags: List[str] | None = None,
        **kwargs: Any,
    ) -> None:
          self.queue.put_nowait(None)

    # Note: etc.; handle errors similarly

    async def aiter(self) -> AsyncIterator[str]:
        while True:
            token = await self.queue.get()

            if isinstance(token, str):
                yield token  # Note: or a BaseModel.model_dump_json() etc.

            elif token is None:
                self.queue.task_done()
                break

https://gist.github.com/YanSte/7be29bc93f21b010f64936fa334a185f
