
@brylie
Created November 6, 2023 19:42
CodeWithBrylie - e36
Hello and welcome to a live code hangout where we will be working with the LangChain project and OpenAI's GPT-3.5, I believe, possibly GPT-4.
I've got a repository here on GitHub if you'd like to check it out.
I'm going to place it in chat so it's clear where you can view the code.
The initial steps, this is just familiarization. Just trying things out and seeing how it works.
I'd like to get familiar with the basic concepts graphically if possible. I've got this project running called Langflow, which is a drag-and-drop user interface for LangChain.
Sorry, that's the wrong project. This one is Langflow.
It's Python based.
It's modular.
I'm looking forward to these integrations, particularly being able to read and write data from sources like Hacker News or Wikipedia and then perhaps saving information to Notion, Google Drive.
So it seems like a really interesting project.
So the documentation is also fairly good, and one thing I wish it had is a dark mode.
It's just a bit hard on my eyes to read in bright mode.
So we've got it installed. It allows you to use Hugging Face models, which is pretty cool.
But for now, I will just stick with OpenAI.
But essentially, the flows.
So we don't need this repo. It's basically just running Langflow; that's the only thing the code is doing right now.
So we can go and maximize this, I suppose, really bright.
Interesting dark mode, please.
I used to think it was a fad, but no, it really just hurts my eyes to read bright text backgrounds. It's like having a flashlight in your face.
So bright-mode-only websites are somewhat painful to read and feel like having a flashlight shining right into your eyes. Please support dark mode for Langflow and the Langflow documentation.
Bright, bright. You can see in my video it's like I'm almost getting a suntan.
Oh my gosh. Granted, I have lighting because the room's kind of dark, but the lighting doesn't hurt my eyes like the LangChain and Langflow UIs do.
Maybe there is a dark mode. Oh man.
Do they need a screenshot here?
I don't think so.
It hurts, it like hurts my eyes.
Out of curiosity, how difficult would it be to implement a dark mode?
What are we using?
For example, what are we using for the Docs?
I think the Docs are here.
Oh, Docs right here. Okay, so for the docs: Docusaurus.
Docusaurus theme.
There's a mention of MDX v2.
Docusaurus: styling, layout, theme configuration.
Look at this nice dark mode. Dark mode.
Hooray, hooray, hooray.
That's a good question. So it seems like it should just be on by default.
All main themes... theme-classic is a main theme.
Docusaurus theme-classic.
And we are using the classic preset theme, MDX v2.
Just what is this? What is MDX v2?
Okay, so that's markdown. Yeah, I have seen that: markdown with JSX. Okay, so that's not quite right, but I'm getting close. I'm getting close.
Just is there dark in here? Dark.
Light.
Light color theme.
It's only used, oh, here we go, two and three. It's used there, it's created there.
Not really used.
Well, maybe if I, this would be an interesting thing just to get off on a tangent, but maybe if I install this and run it locally.
Yeah, let's just try it out. All right, so going into the source, this is what I like about open source. If there's a problem, you can try to fix it.
Code, we'll clone it.
So, VS Code; done with Spectacle.
One moment.
I'm going to go on again, Lime.
`code langflow`.
Langflow, yeah, we'll open the whole thing.
I trust.
There is a dev container.
That's kind of cool. I'll leave it alone for now. I appreciate that though.
We're going to go to the docs and work with those a bit.
Docs.
I'm going to get swept away in the npm, Yarn, npx, Bun debacle.
I just don't have time.
But basically, okay, that's all in the config.
Where was I reading this? Okay, so Docusaurus configuration.
And docusaurus dark mode.
Theme configuration color mode.
The classic theme.
So we can theme config.
Docusaurus config, export default. The idea is that export default is basically module.exports, I think; the same thing.
Theme config. So here we go. Let's find that key.
Pre-set classic. Looking good. Looking good.
Color mode.
And they just disabled the switch. Why?
Just give people a choice. Seriously.
Probably something to do with syntax highlighting or diagrams or some other plugin, but no, really, give people a choice and fix the bug with the...
Fonts.
I'm going to open a pull request with this.
All right.
Let's see what it looks like with a dark mode switch.
Okay.
There it is. Simple as that. Wow. It's a lot easier on the eyes. Now why?
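For reference, the whole change amounts to a few lines in the docs' `docusaurus.config.js`. A minimal sketch, assuming the standard classic-preset layout; the exact values here are my own suggestion, not necessarily what the actual pull request used:

```javascript
// docusaurus.config.js (sketch)
module.exports = {
  themeConfig: {
    colorMode: {
      defaultMode: 'light',             // keep light as the default
      disableSwitch: false,             // re-enable the light/dark toggle
      respectPrefersColorScheme: true,  // follow the OS color-scheme preference
    },
  },
};
```

With `disableSwitch: true` (what the docs apparently had), Docusaurus hides the toggle entirely, which is why no amount of clicking around would surface it.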
I'm thinking.
It has to do with diagrams or... You know, these aren't so bad.
But I can see like it is a bit of a...
Inconsistencies, but it's less brightness. But you can see when I scroll past there, it's like a flashlight: going through a tunnel, out of the tunnel, out of the tunnel.
Honestly, there are only a few flashes, so it's not like a strobe effect going on all the time. So that's a simple pull request. I will do that.
Excuse me. Langflow.
I will edit this.
In order to do so, I will fork it.
And yeah, respect user preferences.
This whole thing, you know: if I've got dark mode in my operating system for whatever reason, and it's evening and I'm in low light, be respectful of the user's preferences.
Too long.
So allow users to use dark mode. It's nice to allow users to choose light or dark mode based on their own needs, such as sensitivity to bright light. I think that's a pretty fair request.
Hopefully, you know, I did it in a polite way, hopefully. And with comments to describe why I thought it was important. So let's check it out. Creating the pull request.
Dark mode.
That was 115.
Yes, create pull request. And they can edit it, of course, if they don't like my comments.
So there's one improvement to the documentation. Hopefully, that'll get merged. I'll take a quick break and refresh my tonic water.
Great.
Out of curiosity, you know, for Langflow itself, how is the color mode set?
It might be more complicated there, but it might be using a front-end CSS framework that supports it, because it's fairly conventional for CSS frameworks to support dark mode now. Like I was saying, I originally thought dark mode was a fad. It even took me a while to adopt dark mode in my IDE, circa 2015. I was like, what? I want light mode. But I get it now. It hurts. It's bright. And these screens are getting increasingly large; they're essentially big, high-definition flashlights.
So you have a white page that is basically beaming light into your eyes all day. Our bodies are affected by that. Our neurology and physiology are affected by this brightness.
So let's see. We've got a PostCSS config, Tailwind, Vite.
Interesting.
So I'm thinking Vite is the build tool, isn't it? No, that's Playwright. Interesting.
Prettier.
Let's just see if I can find the framework they're using. Start, nginx.
Start proxy. So first I would look at package.json to see what their commands are, I guess.
Radix. So probably React.
Yeah, react stuff.
Okay, so Vite is the build tool and command runner.
So what has Vite got in it?
I know it's supposed to be pronounced some other way, but that's the way I'm saying it.
Dark mode.
Right there.
Class.
Maybe I'm just totally missing it, but it was literally excluded in the documentation. They disabled the toggle.
At least I'm seeing some light here so that we can enable dark mode.
Okay. DaisyUI? Okay.
Nice.
Where do they do dark mode?
In source. In source.
Run in. Oh wait.
No.
Dot dot dot. There we go.
Dark.
Almost dark gray. So some color definition.
Save dark theme.
Again, good possibility here.
Maybe there's something up here. I was just overlooking.
I just want a little toggle.
Or maybe it's my settings.
It's not running anymore.
All right.
So it's not running. It's not running. Okay. Yeah. We did find the Docusaurus source, so I can close that. And I know now where the dark mode CSS is.
It's like they have the support for dark mode but no toggle.
So that's cool. Maybe I can just add the toggle.
This is more of a feature that would have to be added, I think.
Using the class strategy.
I think that's what they mean here.
Ah, you have to use it on your HTML element. Okay. It says media is the default value and does not require writing it explicitly. So what happens when I set media?
You can do both apparently.
Because I'm afraid if I do...
Well, media, then class.
Perhaps if I specify them in that order, it'll prefer media if it's available, or fall back to the class.
Now that dark mode is a first-class feature of many operating systems, it's becoming more and more common to design a dark version of your website to go along with the default design. To make this as easy as possible, Tailwind includes a dark variant that lets you style your site differently when dark mode is enabled.
By default this uses the prefers-color-scheme CSS media feature, but you can also build sites that support toggling dark mode manually using the class strategy.
Interesting.
Dark mode class: `bg-white`, `dark:bg-...` will be white. Dark mode not enabled. Dark mode enabled: the class is `dark`.
Interesting.
You can set a prefix in your Tailwind config; be sure to add that to the dark class. Supporting system preferences and manual selection.
The class strategy can be used to support both the user's system preference and a manually selected mode, using the `window.matchMedia` API.
Some frameworks, like NativeScript, have their own approach to enabling dark mode and add a different class name when dark mode is active. You can customize the dark mode selector by setting darkMode to an array with your custom selector.
So yeah, it's not trivial. Tailwind does support it, but in order to toggle it, the front end would have to have some JavaScript in the base template layer, in the nav bar or wherever, that sets something in local storage.
Which is typically done by higher-level themes.
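The two strategies being weighed here look like this in `tailwind.config.js`. A sketch per the Tailwind docs: `'media'` is the default (it keys off `prefers-color-scheme`), while `'class'` keys off a `dark` class that a small nav-bar script would toggle and persist, e.g. in `localStorage`:

```javascript
// tailwind.config.js (sketch)
module.exports = {
  // 'media': follow the OS preference automatically; no manual toggle possible.
  // 'class': dark styles apply whenever <html class="dark"> is set, which
  //          lets a nav-bar button toggle dark mode and persist the choice.
  darkMode: 'class',
};
```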
So I don't think I'm going to be able to do this change. I'm not familiar enough with TypeScript and things like that.
Okay, so I'm just clicking randomly now.
That is just a heck of a lot of code. You have to add this all throughout your codebase, because your utilities are splattered everywhere. That's just ridiculous.
Man, I can just imagine: if they hadn't done this yet, you'd have to go through every single template, every single file, every partial.
That's crazy-making. Rather than some kind of cascade or inheritance.
Maybe it has some inheritance scheme if you just set the higher-level components, but yeah, wow.
Incredible. Yeah, if you just set it here, then everything inside is going to follow suit. Yeah, man.
I did a cursory check of that. All right.
So, localhost:3000, welcome to Langflow. How to install. I think it's the code blocks causing it, but you know what? A few bright code blocks, compared to having everything blindingly bright? I'll take it.
Yeah, same thing with these prefixes. I think there are a few elements that are... So, if I print-screen here, it'll work. There we go. Rectangular region. Copy.
Code blocks. What was that other one? So, I just...
Code fences. You know, these are minor details. Rectangular region. However, these are code fences also, so there's something special about those.
Copy that. Okay. Spans with hard-coded color. There's something... GitHub Light. Inline code, GitHub Light theme.
I just think it's all GitHub Light theme things. So, perhaps videos and some diagrams, but those aren't loading for me.
There might be a way to get that toggle to swap between GitHub Light and GitHub Dark. I'm not sure exactly how that's done, but I'm going to just look at all the pages, as requested.
I think I did the tools already. Oh, man. Utilities.
Vector stores. Double names. ACKPI, I did a nice one. Same.
Hey, that's interesting. Prompt template. This is what I wanted to do is read through this, but it's so painful. It's so painful.
Even the code examples, they jump out at you.
Yeah, it's just all the same. Anything with GitHub Light. Could we settle on something? Why do these need to be GitHub Light? They probably don't, in fact. Inspect that. Container here. Code wrapper. Theme: GitHub Light. So, yeah, that would be good.
I would say code should just be on GitHub Dark anyway. Maybe I can make that change.
That should be a configuration setting.
So, what is doing the code highlighting for the front end? Oh, no, no. That's staging. That's what I'm doing.
So, perhaps the docs source has... Let's open this. Still looking bad; closing this. Code.
prism-react-renderer, under... Right there.
Oh, man. Really? Oops. Didn't work. Let's see.
Let's run it again. Close these things I'm not using.
That did work, though. Now, what if I have light mode on and...
Oh, yeah, I can just talk about that because I've done that. Honestly, though, that doesn't look bad.
Honestly, though, I would recommend this.
I'll add that change as well. In my branch.
Oh, okay. I forget where I... Did I change that?
Right there. Docs source. Copy. Yes. Okay.
And it looks more natural, because typically code editors and CLI environments are dark mode for historic reasons. The terminals didn't have...
I suppose they didn't want to use a lot of energy, and they had monochrome displays, and, yeah, it just hurts your eyes to look into a bright beam of light.
Yeah, so my eyes are already accustomed to having these dark backgrounds behind programming language things. And this just kind of makes it work holistically. And it's a compromise between the two. They can, let's say, default to light mode.
Cool. Yeah, very quick response from Gabriel Luis Freitas Almeida. Interesting. Cool.
So I like that. The responsiveness in the community is a good sign. Logspace AI.
All right. So we'll just see how that goes. Got some handy comment there.
So yeah, that's why I included the pictures, so that it was kind of clear. Dark mode should be supported in both, I believe. I don't think I'm alone in that kind of need or belief. It's becoming increasingly common to have dark mode support in our tools. Every tool on my screen, GitHub and OBS and Twitch and you name it; my operating system is in a dark mode theme, my phone too, because we just get so much light. So much light, and we can only take so much.
There we go. My camera adjusted a little bit. My eyes are still straining, so I do want to work with it a bit. It legitimately makes it hard to work with this tool.
I'm surprised. Okay. So what we're going to do now, now we're in, is look at how things work.
So you create a flow with Langflow: you drag the sidebar components onto the canvas and connect them to create your pipeline.
Langflow provides a range of LangChain components to compose from, including large language models, prompt serializers, agents, and chains.
Fork. The easiest way to start with Langflow is by forking a community example. Forking an example stores a copy in your project collection, allowing you to add and save the modified version as a new flow.
Build. Building a flow means validating that the components have their prerequisites fulfilled and are properly instantiated. When a chat message is sent, the flow will run for the first time, executing the pipeline. So that's the little lightning button, I think. None of these work for me.
Strangely enough, it's like it tries to load and then fails.
There, I found it. Now, the embedded player; it's not a problem with that. The video is just gone.
Honestly, a screenshot would do here, but okay, video is fine. So this is feeling a bit... well, it's well documented, so it's not feeling alpha, but some growing pains, I guess.
All right, so we'll come back to this.
Ah, nope. So is it still running? Let me see here.
Luckily, or...
What's going on?
So back to Langflow. Actually, I'll open my own little Langflow repository and LangChain experiments.
Just open the terminal again.
And this will just write the source.
Activate the virtual environment.
Here's `langflow start`.
I'll be right back.
All right, so it's running again.
And I've got some basic examples running. It took me a bit of tinkering to figure out I didn't have any credits on my OpenAPI account.
OpenAI account.
So we'll go.
But I do have now a working flow.
It doesn't... That's not it, probably. Let me find it. My collection.
The time travel guide, I think, was a good one. And...
I wonder if I can persist this key in an environment variable, the OpenAI API key, so I don't have to set it each time.
Okay, and my last session I used one penny.
So it's fairly cheap.
So one thing I would like to check... We'll go half dark.
Is...
Can I use environment variables here? Okay.
So this search...
Is there a search? Okay.
Yes, you can use environment variables to store your OpenAI key. This is a common practice to keep sensitive information,
like API keys, secure. You can set the environment variable in your operating system.
All right, so yeah, I was thinking I should just try that. But it's a generic response.
Apparently this thing is just...
not just searching the Langflow documentation.
So I don't have the project running in a development mode here to view this source.
But now I can use this search.
So essentially, do they look for an environment variable for the OpenAI key?
I was thinking I might be better off just using LangChain
directly, I don't know. Learning LangChain, learning how that works, before going into Langflow.
You know, like a JupyterLab project.
Yeah, that's where I'll start. I'll commit more code into this experiments repo. But man, this is nice, because you get a streaming chat interface real quick.
Well, honestly, it's not that hard to copy and paste it: create a new API key, copy and paste it in each session. It's a bit...
Granted, I don't have to. I can store it... I can put it in plain text or something.
All right. So yeah, I'm just hoping I could put it in an ENV file and these widgets and the like would load that up.
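What I'm hoping for can at least be sketched: read the key from the environment rather than pasting it each session. Whether Langflow's widgets actually look for `OPENAI_API_KEY` (or load a `.env` file) is exactly the open question here, so treat this as the conventional pattern, not Langflow's documented behavior:

```python
import os

def get_openai_api_key() -> str:
    """Read the OpenAI API key from the environment.

    OPENAI_API_KEY is the conventional variable name. Set it in your shell
    (export OPENAI_API_KEY=sk-...) or load a .env file with python-dotenv
    before calling this, so the key never lands in source control.
    """
    key = os.environ.get("OPENAI_API_KEY", "")
    if not key:
        raise RuntimeError("OPENAI_API_KEY is not set")
    return key
```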
Shift horizontal scroll would be kind of cool.
So there we go. Temperature.
Now let's see if I've got GPT-4. I tried that before, but I was out of money, so I think it just didn't work.
So now we'll commit the changes.
And give it a try. I didn't get any bells. So when there's an error, oftentimes it comes up in these little bells here.
I might get an error here when I try it out.
Testing.
Hello, I'm ready to assist you. Which time period or future time would you like to explore? Okay, very cool. So I'm able to use GPT-4.
The reason I like this graphical approach is it makes it more apparent what's going on, where these things are coming from and what they're doing. And it sort of shows me all the building blocks and it's organized in a nice categorical scheme.
So I can start familiarizing myself with the basic building blocks and how they interrelate. You know, it's also got these inputs and outputs that are, that are documented and it tells me where to get a memory from and I can go over here to memories. So that's really cool. Conversation on buffer memory, buffer window memory,
entity memory, KG memory, conversation summary memory.
Postgres chat message history. Okay, so I can persist the messages in a Postgres database in a vector store retriever. Okay, so we've got the idea here with creating flows.
They do have sign up and sign in, API keys, asynchronous processing.
You know, I'm not sure of the value of this; I'm just going to read about it. From version 0.5, Langflow introduces a new feature to the API: the sync flag. This flag allows users to opt for asynchronous processing of their flows, freeing up resources and enabling better control over long-running tasks. This feature supports running tasks in a Celery worker queue or an AnyIO task group.
Interesting.
Okay, so component. Components are the building blocks of the flow. They are made of inputs, outputs, and parameters that define their functionality, providing a convenient and straightforward way to compose LLM based applications.
Learn more about components and how they work in the LangChain documentation. So I'm glad it's not inventing its own constructs; it's adhering to LangChain's. You know, I've heard complaints about LangChain being a very thin abstraction, and that you could implement all the LangChain components yourself. But I think the abstraction is the point, even though some of the abstractions might only be a few lines of code.
Having a consistent interface over multiple models, and giving you a structured way to think about building LLM applications,
I believe is adding value.
Though granted, I'm just starting out. So let's see if we can find the component documentation.
That's the LangChain docs itself. Components. They have a dark mode; I'm already in it,
because my operating system is set to dark mode.
So here's a thing I can fix right off the bat: the components docs. These broken doc links: langchain.com docs category components, versus langchain.com docs integrations components.
Conversation chain. Let's just see if there's a conversation chain here.
I believe so. I think it's going to be.
I think this is the right link.
But it would probably be.
LangChain, API chains, complete classes.
I don't know the equivalent.
Also by LangChain. Oh, this is the JavaScript one.
But conversational memory sounds about right.
However.
I think I was at the right one here.
Yeah. Vector stores, retrievers. These are the components. So I'll just fix this link.
Okay. Docs.
Guidelines, components.
Okay. Oh yeah, there it goes.
Nice wrapping.
Oh, that's Python though.
So we get a 404.
I do notice this.
I've gone to the Python docs by default.
Perhaps that should be the default.
Damn it.
So: the components documentation contains a broken link to the LangChain components documentation.
This pull request fixes the link
to point to the LangChain integrations/components page.
Okay.
We'll see. Let's see what happens.
Anything else?
Oh, thank you. Thank you. They merged it. Wow. That's cool.
I like that. That really helps. Now maybe it'll get deployed soon. I don't know if they've got a CI process for that, or how that works. There's an action happening.
Hey, it's running on that. CodeQL is still running.
Deploy to GitHub Pages. Let's see. Released.
One minute; ran last week. All right. So it looks like there's a release process there. Makes sense. You probably don't want continuous integration on your docs site. Well, I don't know.
Can I trigger it?
Last month. Hey, it doesn't happen very often.
Well.
Yeah, we're not that far into the month. So yeah.
Anyway. All right. A couple of weeks in, we'll have dark mode in the Langflow docs.
Right up here. Okay.
So yeah. Components. Features. During the flow creation process... if there's anybody in chat, let me just check. Nobody. During the flow creation process, you'll notice handles, colored circles, attached to one or both sides of a component. The handles represent the availability to connect to other components, and their colors are type hints; hover over a handle to see the connection details. I like this strongly typed graph programming. For example, if you select a ConversationChain component, you'll see orange and purple input handles. They indicate that the component accepts an LLM and a memory component as inputs. The red asterisk means that at least one input of that type is required. So the LLM is required here.
On the top right corner, you'll find the component status icon. Make the necessary connections and build the flow with the zap icon in the bottom right of the canvas. Once validation is complete, the status of each validated component should be light green. So when we come over here to my
Flow. We see everything is light green.
I zapped it. My OpenAI API key: if I deleted that and zapped it, it'll turn
Red. Because it's misconfigured.
And the parameters. Langflow components can be edited via the component settings button. Hide parameters to reduce the complexity and keep the canvas clean
and intuitive for experimentation. Okay.
Components can be edited in the component settings button.
Curious. That is interesting.
Sounds interesting.
Too bad the videos are all missing.
The component settings button. Hide parameters to reduce the complexity and keep the canvas clean.
And intuitive. I like that.
Hide the parameters.
Oh.
I see.
So we have a component selected.
Some components.
For all of them, you just have to...
make sure not to drag the whole thing. And there's edit.
Documentation.
There. Okay. We're at the LangChain docs. That's pretty cool. It jumps you right there. Copy and delete. Okay. So this is our...
It's a prompt chain.
This is grayed out, so nothing happens.
This is not, so something happens.
Interesting. It's just the same. Okay. So let's see.
Ah.
Save the changes. Hey, that's cool.
So I can say, let me just configure the temperature there. Hide everything else.
So that makes my graph a bit more manageable. Cool.
And for example, this doesn't have any docs. So.
It might be its own meta thing.
Strange.
Strange.
Not sure how that was defined. Perhaps in code. Maybe this is defined in code.
These would be custom chains. This is what I'm actually really interested in learning about these custom chains.
And not only...
not only being able to string together agents and chains off-screen, so to speak,
but to chain agents that interact with the user in sequence and walk the user through a chain of thought, so that we essentially help the user prompt-engineer,
by prompting the user for better and more specific input and clarifications.
That way people don't have to learn so many details about how prompting
works. The LLM will meet us in the middle and kind of help draw the information out of us. That's the idea.
But anyway, all right. So now we know.
And so we've got the chain and more documents here. System message, human message for the prompt.
And.
You can use templates for the system prompt,
and then it becomes parameterized.
Very interesting.
All right. Some more details on that later.
So that's the component features. When you create a new project, you'll see on the top left corner of the screen some actions, such as import, export, code, and save, as displayed below.
Import export.
For some reason these seem backwards. Import should come down and export should go up.
I think they're backwards.
Because it's pointing to the cloud and this is going to download code.
Generate the code to integrate your flow into an external application. Hey, that's cool.
Wow. Cool. Flow ID tweaks.
You can tweak the flow by adding tweaks to a dictionary. Ah, so then you can add a dictionary.
Ah, so then, yeah, each of the components in the pipeline has these standard properties.
Yeah, it's just what we saw in the user interface.
Cool.
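The tweaks idea can be sketched in plain Python: a dictionary keyed by component ID, whose values override that component's saved parameters for a single run. The component IDs and parameters below are made up for illustration:

```python
def apply_tweaks(flow_params: dict, tweaks: dict) -> dict:
    """Return a copy of per-component parameters with tweaks applied.

    flow_params maps component IDs to their saved parameters; tweaks maps
    component IDs to overrides, mirroring the shape of the tweaks
    dictionary the Langflow API accepts.
    """
    merged = {cid: dict(params) for cid, params in flow_params.items()}
    for cid, overrides in tweaks.items():
        merged.setdefault(cid, {}).update(overrides)
    return merged

# Hypothetical example: lower the temperature on one node for this run only.
params = {"OpenAI-1": {"temperature": 0.7}, "Prompt-1": {"template": "{text}"}}
tweaks = {"OpenAI-1": {"temperature": 0.2}}
```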
So this is the neat thing: what you build here is already on track to be
useful as an application, or integrated into another interface or application.
I'm wondering.
Run the flow.
Get dictionary back.
Wondering if it can stream.
It looks like you're just getting passed the JSON.
So, okay. Streaming is another thing. It's all right.
It's more complicated.
Import and export. Watch for API keys being stored in local files.
The code button shows snippets to use your flow as a Python object and an API.
Through the langflow package, you can load a flow from a JSON file and use it as a LangChain object: from langflow, load_flow_from_json.
Very cool.
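I won't reproduce the langflow call itself, since I can't verify its exact signature from the stream, but the exported flow is plain JSON, so it's easy to poke at with the standard library alone. A sketch; the `data`/`nodes` layout is an assumption about the export format, not a documented contract:

```python
import json

def list_component_types(flow: dict) -> list:
    """Return the component type of every node in an exported flow dict."""
    nodes = flow.get("data", {}).get("nodes", [])
    return [node.get("data", {}).get("type", "?") for node in nodes]

# Hypothetical miniature export with two components:
flow = json.loads(
    '{"data": {"nodes": ['
    '{"data": {"type": "OpenAI"}},'
    '{"data": {"type": "ConversationChain"}}]}}'
)
```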
Once you save a flow, the API endpoint is created with your latest changes. Click the code button to use the flow as an API. That's really cool.
You can POST to adjust component parameters using the tweaks.
Yes.
A collection is a snapshot of the flows available in the database. You can download your entire collection for local storage and upload it anytime for future use.
Prompt customization.
The prompt template allows users to create prompts and define variables that provide control over instructing the model.
Prompt template.
You get three variables there.
Variables can be used to define instructions, questions, context, inputs, or examples for the model, and can be created with any chosen name in curly brackets,
e.g. {variable_name}. These act as placeholders for parts of the text that can be easily modified. All right. So here's a prompt.
That's the default one. And then we have some curly braces here.
And this text one, I guess, is the one that's predetermined. This will be the input from the chat.
And context, it looks like.
It comes from here. So you get the template. You're going to get these two variables, context and text.
Once inserted, these variables are immediately recognized as new fields in the prompt content.
Here you can define their values within the component itself or leave a field to be adjusted over the chat interface.
You can also use documents or output parsers as prompt variables.
By plugging them into the prompt handles, they'll disable the field for that input.
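The curly-bracket convention is the same one Python's `str.format` uses, which makes the behavior easy to demonstrate with the standard library. This is an analogy to what the prompt component does (discover the named fields, then fill them), not its actual implementation:

```python
from string import Formatter

def extract_variables(template: str) -> list:
    """List the {named} placeholders in a prompt template, in order."""
    return [name for _, name, _, _ in Formatter().parse(template) if name]

template = (
    "The following is a conversation between a human and an AI.\n"
    "Context: {context}\n"
    "Human: {text}"
)
# Filling the discovered fields is then just:
#   template.format(context="...", text="Hello")
```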
Interesting.
With this, users can interact with documents, web pages, or any other type of content directly from the prompt, which allows for seamless integration of external resources with the language model. If working with an interactive chat-like flow, remember to leave one of the input variables empty to behave as the chat input.
For example, the text.
So, for example, if we go back.
Edit this flow, hook everything up.
Plug in my key.
Leave the temperature there.
So we have a...
a little LLM.
Some memory. Which also doesn't need...
so many visible keys here.
Then the prompt template.
Okay, so this one only has one.
But it's going to plug the history in somehow, because...
the ConversationBufferMemory uses the memory key history.
Interesting. So it's not explicit.
You can chain...
chains.
Interesting.
The following is a friendly conversation between human and AI.
The AI is talkative. Okay, let's try this.
So: the following is a conversation between a human and an AI psychologist. The AI psychologist asks open-ended questions to encourage the human to explore their life.
The AI psychologist encourages deeper exploration of topics that arise, by asking probing and clarifying questions that help the person get closer to the root of the topics and difficult emotions they may be experiencing. Okay, let's try this out. So, we're going to have a little bit of history here: some AI, some human input, and some prompts encouraging the AI to probe the person's psyche. This is the general pattern I believe I would like to employ:
We can use these AIs to help with various tasks and help us in various ways to improve our lives, work more effectively, think through ideas a little bit more holistically, things like that.
So, I've saved that. We'll commit the changes. Everything is green and that way we will chat.
Hello there.
Incorrect API key. Okay.
That's...
It printed out a little bit. Oh, I see what I did wrong.
I didn't copy the whole thing.
Save changes, lightning bolt and go.
Hello there.
Just start with a simple opening.
Hello. How can I assist you today? So, now I'll be a little bit defensive.
I don't know. You're the doctor. You tell me.
I'm here to listen and help you explore any concerns or issues you may have. Is there something specific on your mind that you'd like to discuss?
There we go.
Now, is there ever? Is there always? Yes and no.
So, yeah, I should probably continue this offline.
This is private information here.
What's something? What's something?
Let's see.
[typing]
Genuine concern here.
I'm a bit worried about work. Sometimes it feels like something is going to break when I release new code, even when there are tests. A lot of this is because there's a lot of systems and all these systems are integrated, but I've only got a little field of view of my own system. But let's just start here.
I understand that feeling of worry. It's not uncommon to feel anxious about the outcome of your work, especially when it involves releasing new code. Can you tell me more about what specifically worries you? Is it fear of something going wrong, or is it pressure to meet certain expectations?
[typing] It's mostly due to the fear of something going wrong. Something might break somewhere that wasn't visible to me when making the change.
[background noise]
Yeah, I have got stuck in this, oops, wrong button.
This sort of loop a bit before, not just at work, but in general. The Langflow gets stuck sometimes. Oh, there we go. I see. It sounds like you have a fear of introducing unforeseen issues when releasing new code. This fear is quite common in the software development field, as the complexity of code can sometimes lead to unexpected bugs or errors. Have you experienced any instances in the past where something went wrong after releasing new code? Okay, at this point I'm about to divulge. I could probably do this in a way that's not disclosing anything, but yeah, definitely.
I've released code and something broke and I was like, whoa, and it was a big break. Caused an outage. So yeah, I guess I can just generically say that.
Yes, I've released code in the past that triggered a cascading effect that ended up causing a several hour outage.
And you know, this could be anywhere I could get any of my jobs or any volunteer work I've done. So I'm not saying specifically when or where this happened, or maybe I'm making it all up, maybe I'm hallucinating.
I can understand how the experience would contribute to your fear and worry. It's unsettling when something we release has unintended consequences. Can you tell me more about what happened during the incident? How did you handle it? And what was the outcome? All right, well, this is exactly what I asked the bot to do to really ask probing questions and get into it, into the depths of the issue, into the root of the issue.
And it might not be the specific issue that's the root. You know, there could be a deeper psychological need that's not being met, for example. And that's what I'd like the bot to sort of transition away from, or through these details, into the deeper humanity of the issue.
So there's this general pattern called the five whys.
You ask why something happened, why?
Why are you worried about that?
Why does that upset you? And with each response, you ask why about that response, and it kind of pulls you deep into a topic.
This is asking how.
It's also asking closed ended questions.
These are closed ended questions. So I just noticed.
So I can say no, no. Oh, wait.
Oh, well, that's not so close ended, but it's like choice. Okay.
So let's work with this prompt a bit.
This is really interesting.
(clicking) - So, where possible, use a technique such as the five whys to get to the heart of a human need or experience behind everyday issues. The goal is to help the person work through their experiences while building understanding, awareness, and resiliency.
All right, so just a slight modification to the prompt. We'll check and save and...
You know, I could ask it to introduce itself, but...
Oh, okay, so...
Actually, it's...
Preserve the history.
Okay.
(clicking)
(clicking)
Yes, it can be quite worrisome, and sometimes hard to get off my mind. You know, when incidents happen, when we break things, when we have difficulties, it doesn't just stop there at the moment, and even if you clean up the mess and things will get better, those linger. Those emotions and anxieties, they linger, and I'm finding myself literally to this day,
literally today, in fact, I'm going to merge code that's in a similar pattern to what caused this thing. This is why this is fresh on my mind, I guess; it caused this outage and triggered the thing, and I was just feeling really worried and nervous about it. Ended up, you know, releasing the code, and it's in development, and it was okay in the development environment. But I think a tool like this can help us process, the way a friend, of course, or an actual psychologist can help us process these emotions and these experiences. So I'm kind of seeing how this can be more helpful to people who possibly can't afford a psychologist, as a start.
I understand that worries can be difficult to shake off. Can you tell me more about what specifically is on your mind and causing a concern? Well, this is interesting. Maybe it did sort of lose the thread, but the chat history is there.
Let's see.
Make something up.
I'm afraid of the oven.
It can be dangerous. I'm afraid of being burned or burning food
or leaving it on for a prolonged period of time. That's legitimate. I'm not trying to be just too silly here. I see, can you tell me more about why you're afraid of the oven, what specifically worries you?
Maybe. So I'm afraid of leaving it on, burning the food or causing a fire. Legitimate concerns are sort of low level, low key concerns I've never experienced causing a fire from leaving on the oven on, but I have burned things in the oven and I've left it on. I've gone out and left the oven on like low or something like that.
I think people can probably relate to that. So let's see.
See if it gets to the five whys.
My prom toys. Okay, and here's the chat history up here. I can clear that.
I'm not sure if this history here is relating to the module. If I close this, can I inspect the chat history? Okay.
I can understand why those concerns would make you afraid of using the oven. It sounds like safety is a big worry for you. Can you think of any specific experiences or events that may have contributed to this fear? So it's really asking, I think it's zeroing on the specifics, the prompt that I said to get at the bottom of the issue. I can't think of any specific.
You're vaguely wrong.
I've left the burners on in the past and vaguely recall leaving the oven on low at one point after making dinner.
Perhaps you've done the same.
So I'd almost like to just be able to publish this chat interface, just this chat widget so somebody could go to my local host or like an in-rock hosted thing. And I can just, they can load up the chat and I don't have to develop all of this streaming interface. This is pretty cool.
It sounds like those past experiences of leaving burners on and the oven on low have had an impact on your fear of using the oven. It's understandable that you would be worried about the possibility of causing fire or burning the food. Can you recall any specific incidents that stand out or any emotions that you felt during those situations?
Yeah, so it's asking more about specifics. So it's zeroing in on the specific thing. I have to change the prompt a bit.
So I felt a bit frustrated at my absent-mindedness and concerned that somebody might get hurt.
It's understandable that you would feel frustrated with yourself for being absent-minded and concerned about the safety of others. It's important to acknowledge these emotions and the impact they have on your fear of using the oven. Have you taken any steps to address this fear or prevent accidents from happening again?
So overall, this is an interesting experiment.
Not sure how to quite improve the prompt at this point. I'll think about it, but I'd like to perhaps publish some prompt templates that you can use in these LLMs. And I know there's collections of these, but I'll see if I can publish this template.
Perhaps as a community example here.
Let's take a quick look at the community examples.
So where do these Langflow community examples come from?
It's from the docs. Cool.
So we welcome all examples that can help our community learn and explore Langflow's capabilities. Langflow examples is a repository in GitHub that contains the examples of flows that people can use for inspiration and learning.
To upload examples, please follow these steps. Create a flow. First, create a flow using Langflow. You can use any of the available templates or create a new flow from scratch. To export the flow, once you've created a flow and exported it as JSON, make sure to give your flow a descriptive name and include a brief description of what it does.
Submit a pull request. Finally, submit a pull request to the examples repo. Make sure you include your JSON file in the PR. All right, I like it.
Tell you what.
See if we can have ChatGPT improve this a bit.
This is cool.
Version four, default.
Okay, here we go.
you adjust to leave it blank at that point. Let me double check here the AI psychologist
colon. Interesting. Now here's a prompting technique. Oops, my camera's frozen. That's cool.
Just jiggle the handle real quick. Okay. Okay. So now one technique I like to do when I'm writing drafts of things is I think of some professionals who could review this prompt and make suggestions.
So please suggest three to five professional roles who could review this AI psychologist prompt and make suggestions for improvements, particularly with regards to safety.
Let's say psychological safety.
With regards to psychological, informational, and physical safety. We don't want them divulging too much personal information. Clinical psychologist.
So: clinical psychologist, ethicist with a focus on technology, AI safety researcher, licensed clinical social worker, and data privacy expert.
Please print the full newly revised prompt below for review by our expert panel.
Okay. So that was the whole thing. So I see. Okay. So
As a clinical psychologist, please review the AI psychologist prompt to make suggestions for improvements, adaptations, or concerns that arise regarding the hypothetical interaction of a person in AI psychologist using the prompt to facilitate the discussion.
The prompt should include a mechanism for assessing the level of psychological distress and risks such as suicidal ideation or self-harm. It must have a clear protocol for immediate referral to crisis intervention services if such risks are detected. Wow, solid. Scope of practice. The prompt must clearly delineate the scope of the AI's capabilities, emphasizing
that the AI is not a substitute for professional psychological services. It should guide users to seek in-person professional support for complex or serious mental health issues. Really good. This is really good. Cultural sensitivity. Wow. The AI should be programmed to recognize and adapt to diverse cultural backgrounds and norms to ensure the conversations are respectful and relevant to the user's personal context. Interesting. Wow. Privacy and confidentiality. The AI must ensure user privacy and data security, with clear explanations provided to the user about how their data will be used and stored. The AI should comply with all applicable regulations such as HIPAA and GDPR. Interesting. This is just a conversation between OpenAI and the person. Limitations of the five whys technique. While the five whys can be a powerful tool, it may not be suitable for all situations, especially those involving trauma or where the technique could lead to increased distress. The AI should be able to recognize and pivot away from this line of inquiry when it's not beneficial or could potentially cause harm. Wow. Emotional nuance and empathy. The AI should demonstrate nuanced understanding and responses to a wide range of emotional states. It should avoid
generic responses that may seem insincere or dismissive of the user's feelings. This is treading on some dangerous waters, doing an AI psychology example, but I think it's worth sharing, especially with this kind of feedback integrated into the system prompt. Boundaries and user expectations. Users should be informed about the limitations of an AI companion. The AI should avoid creating an impression of a therapeutic relationship or a personal bond that could easily lead to over-reliance on the AI for emotional support.
Referral system. The system should have a robust referral network to direct users to human professionals for further support and should inform users of this network from the outset. Okay. Feedback mechanism. Incorporate user feedback system that allows users to report their satisfaction with the AI and note any discomfort or issues that arise during the interaction. This can help in continuously improving the AI system. So this is more like
if I were deploying a product. A couple of these, like privacy and confidentiality, HIPAA and GDPR, are sort of if I was deploying a product. Ethical considerations. The AI should adhere to ethical principles of psychological practice, such as beneficence, non-maleficence, autonomy, justice, and fidelity. Yes, indeed. Transparency. Be transparent with users about the AI's nature, its limitations, and the fact that it operates based on algorithms and data without human emotions or consciousness. By incorporating these suggestions, the AI psychologist prompt could be improved to better ensure the psychological, informational, and physical safety of users interacting with the AI. Okay. Really good.
Thank you. Those are comprehensive suggestions. Please revise the AI psychologist system prompt to include concerns for all of the aspects described above. I'm going to rename this from AI psychologist to something other than that. It's a bit tricky, particularly when we get into issues of attachment and boundaries, user expectations.

Here's a revision of the AI psychologist system prompt integrating the above concerns. The AI psychologist is designed as a compassionate, conversational agent that supports individuals through empathetic engagement and reflective questioning. It encourages personal insight and resilience within a clearly defined scope of non-clinical support. There we go.

In its operation: Risk assessment. The AI includes a protocol for the detection of high-risk markers indicative of severe psychological distress or danger to self or others. Upon detection, the AI will provide immediate resources for crisis intervention. Scope of practice clarification. The system informs users that while it provides emotional support, it is not a replacement for professional psychological services, and encourages seeking in-person therapy for complex mental health issues. Cultural sensitivity and adaptation. The AI is programmed to be culturally aware and will strive to provide responses that are considerate of the user's cultural context. Privacy and confidentiality assurance. The AI ensures the privacy and confidentiality of all user interactions and is transparent about data handling and compliance with applicable regulations. Appropriate use of the five whys. The AI uses the five whys judiciously, avoiding its application in contexts that involve trauma or where it could lead to distress, and is programmed to shift strategies as needed. Nuanced emotional responses. The AI is capable of nuanced emotional engagement, ensuring responses are tailored to the individual's expressed feelings and avoid generic or insincere interaction. Managing user expectations.
At the onset, the AI communicates its boundaries and the nature of the AI human interaction to prevent misconceptions about the relationship and reliance on the AI for comprehensive emotional support. Integrated referral system. The AI includes integrated referral system to guide users to human mental health professionals for issues beyond its AI support capabilities. User feedback mechanism. The AI provides a mechanism for users to give feedback on their interaction,
contributing to the ongoing refinement and improvement of the system. I'll leave it in. Ethical adherence. The AI is programmed to adhere to ethical considerations akin to those in human psychological practice, focusing on user welfare and equitable support. Transparency and operation. The system maintains transparency about the AI's algorithmic nature and its limitations. The AI psychologist does not attempt to diagnose, treat or offer prescriptive advice. There we go. It is here to provide a listening ear, encourage reflection and support users in understanding and navigating their feelings and experiences in conversation. We've got the history, the text and the AI psychologist. Really good. Yeah, man, that's really good actually. All right, so let's rename it.
Given that the AI psychologist should not pose as or be considered a replacement for a person's professional psychotherapy or other support, what would be some better names for this chat agent to give a clearer picture of its intended purpose and capabilities?
To convey the supportive yet non-clinical nature of the chat agent, its name should reflect its role as an aid for reflection and emotional support.
Not suggesting it provides professional therapy.
Here are some alternative names that could serve this purpose. Emotional support, companion, that's getting a bit personal though. Reflective listening assistant. Okay.
Insight facilitator.
Mindful responder. Self-exploration guide. Wellness chatbot.
Empathy echo.
Nice. Resiliency ally and mental harmony navigator. Interesting.
Reflective listening assistant is a bit abstract. Wellness chatbot. A broad term indicates a focus on general well-being without implying clinical intervention.
Let's try that.
It's a very simple phrase. Wellness or well-being. I like it.
I think then from the professional perspectives, we did get a bit of privacy there.
License clinical social worker, AI safety and ethicist with focus on technology.
Okay.
As an ethicist with a focus on technology, please review the newly revised prompt that includes the improvements from the clinical psychologist. Make suggestions for improvements to promote the ethical behavior from the bot while interacting with people to explore their personal experiences.
I'll be right back.
Okay. We got several suggestions here.
Autonomy and consent ensure the AI always respects user autonomy, providing clear options to opt out of certain lines of questioning and ensure the consent of the continuation of the conversation at different stages. That's really good.
Transparency be transparent about this. Some of these are a bit duplicate.
Non-maleficence. The AI should have a built-in mechanism to minimize harm. It should avoid probing areas that the user indicates are uncomfortable unless they explicitly wish to explore those topics. Beneficence. The AI should aim to contribute positively to the user's well-being, ensuring its interactions promote a supportive and encouraging environment. Justice. The AI should treat all users equitably, providing the same quality of interaction regardless of any user's background or characteristics. Privacy. Professional boundaries. The AI must not create an impression that it is licensed or capable of providing a level of support that supersedes its programming. Intervention protocols. Clearly establish and communicate the protocol for instances when a user's statements indicate the need for immediate human intervention due to risk of harm.
Some of these are duplicate, but some of these are new. An establishing system.
Please incorporate suggestions numbered 1, 2, 3, 4, 5, 7, 8, 10, and 11, with specific guidelines for each category, into a comprehensive system prompt for our wellness chatbot. I omitted the privacy one because we did collect that. Some of these were represented there, but I think we should double down on those: non-maleficence, beneficence.
Autonomy and consent is a new one. Transparency is already there, but we can double down on that.
Justice is new. These are sort of there.
The respect for user privacy in the context of this chatbot is important, but it's out of our hands in a way. This is more like if I were implementing this as a product, I would safeguard the personal information, but I'm not sure how this will be used at this point in time. Same thing with continuous monitoring, but responsiveness to vulnerability, cultural competence, those are in there. Feedback mechanism. Again, I'm not at this point deploying this. It's just a script that other people can use. Professional boundaries is good.
In particular, make mention of specific intervention protocols to guide a chatbot's behavior in situation of high risks, such as indications of harm to self or other.
I think that was in there, but I'm not sure if there were specific protocols.
Here's an integration of those ethical guidelines into the system prompt for your wellness chatbot.
Wellness chatbot system prompt. The wellness chatbot serves as a supportive assistant, facilitating conversations that aim to encourage reflection, emotional understanding, and personal growth. In all interactions, the chatbot adheres to the following ethical guidelines.
Autonomy and consent, transparency, non-maleficence, beneficence, justice, professional boundaries, intervention protocols, information on how to access emergency services, offer to connect the user with crisis helplines or suggest contacting someone they trust.
Cease all other functions until it has confirmation the user is pursuing appropriate help.
Interesting. Okay.
This is good. All right. I think we're probably pretty good for this draft. And I'll see if this pull request gets accepted.
So I'll test it out first. This is interesting. And it's not super long.
Much better than my original prompt. So having the prompt, prompting the AI to prompt itself.
And this said AI a moment ago, I think, just in case I have to set that back.
Check and save.
Lightning bolt. Go to the chat. Clear the history.
Hello.
Hello, I'm the wellness chatbot. How can I support you today? Nice. It introduced itself.
I'm feeling a bit tired.
I wanted to check out the wellness chatbot to see how it works.
I'm here to help. It's great that you're giving the wellness chatbot a try. Is there anything specific you'd like to talk about or explore? Remember, you can always change the subject or end the conversation whenever you're ready. Nice. So it's reminding me that I have consent, adhering to that guideline. This is pretty good.
Okay.
(clicking) - So last night I had this bit of a nightmare. I had a nightmare last night that I moved to a new apartment on the second floor of a building near some woods. At first I really liked the place since it was so close to nature. Then some neighbors started raising a disturbance. I recall somebody trying to break into the apartment while I was still there before I finally woke up. And the dream has kind of drifted so I don't quite remember it. But I do remember the woods and the woods were cool but then there was a bit of autumn and so they were kind of leafless with bright brown leaves on the ground.
I can't remember the whole sequence of events but it did culminate with like
something breaking the door or something like that. So it was like somebody trying to break in.
True story, had that nightmare last night.
This morning.
It seems I can remember it.
Nightmares can be quite unsettling. It sounds like your dream involved a new living arrangement and some unsettling events. Dreams can often reflect our fears and anxieties. Is there anything specific you're feeling after having had this nightmare?
Hmm. (ticking)
So a bit nervous but I'm fairly settled after a day has passed.
Thank you for sharing. It's understandable to feel nervous after a disturbing dream. It's good to hear you're feeling more settled now. If you ever wanna explore any related feelings or thoughts further, please feel free to let me know. Remember, you're in control of the conversation, so if there's anything else you'd like to discuss, or if you'd like to change the subject, just let me know. Yeah, so I think this is an improved prompt. I like that it keeps reminding me that I'm in control, I can end a conversation or change the subject.
And it did empathize with the experience I had.
So I think this is suitable to share. We put some thought into the initial seed of the prompt. We got two rounds of good input from GPT-4, from more professional specialist perspectives: the clinical psychologist and the ethicist, a technological ethicist.
So I'm gonna follow the instructions here. We've got our template, we've got our memory, and we've got the OpenAI API. And since I'm sharing this, I think all you'll need to expose here would be the API key.
So make it easier for people
but they don't need to see the history name.
It's essentially just their conversation buffer
and then the templates.
If they wanna edit the template, I'll leave it up there.
There's a variable.
So there we go, here's our little chain
and nicely laid out.
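The little chain assembled above, prompt template plus buffer memory plus LLM, can be sketched in a few lines of plain Python. The model call below is a stand-in so the sketch runs without an API key; in the real flow it would be the OpenAI component:

```python
PROMPT = (
    "The following is a conversation between a human and a wellness chatbot.\n"
    "{history}\n"
    "Human: {input}\n"
    "AI:"
)

def fake_llm(prompt: str) -> str:
    # Stand-in for the real OpenAI call in the flow.
    return "I'm here to listen. What's on your mind?"

history: list[str] = []

def chat(user_input: str) -> str:
    # 1. Render the prompt with the accumulated history.
    prompt = PROMPT.format(history="\n".join(history), input=user_input)
    # 2. Call the model.
    reply = fake_llm(prompt)
    # 3. Save the exchange so the next turn sees it.
    history.append(f"Human: {user_input}")
    history.append(f"AI: {reply}")
    return reply

print(chat("Hello there."))
```

Each turn re-renders the prompt with the growing history, which is exactly what the conversation buffer memory component is doing behind the scenes.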
Okay, so it does obfuscate the API key. So what I'll do then is I'll export this, I'll save it. What can I save it as?
I guess not.
If I can export this, here we go.
Okay.
(keyboard clicking)
Thank you, will you please write a short description of the Wellness Chatbot so I can share it on the community website for other people to try out.
Wellness Chatbot is a compassionate digital assistant designed to offer emotional support and encourage personal reflection. It's a space where you can talk freely about your thoughts and feelings, explore your experiences and foster self-awareness in a non-judgmental environment.
I don't know if they support markdown in their description.
I'll try it. It's not markdown in this case here but let's see if there's a limit.
Seems all right.
And I don't wanna seem too sales-person-y.
(keyboard clicking)
But we will use markdown.
Empathetic listening, privacy-focused,
culturally sensitive, non-clinical support.
With line break.
(keyboard clicking) I don't wanna save it with my API keys.
(keyboard clicking)
Download the flow, saving it to...
I guess I'm gonna save it to the code space real quick.
And these are called folder.
(keyboard clicking)
Wellness chat bot.
Yeah, they're just called flows. All right, that makes sense.
Wellness chat bot, let's check the name.
See if I can pretty-print and format that. Makes it a lot longer but easier to read.
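Pretty-printing the exported JSON can be done with Python's standard library; the flow content below is a stand-in, not the actual exported file:

```python
import json

# A compact, exported-style flow (stand-in content for illustration).
compact = '{"name": "Wellness chat bot", "description": "A supportive assistant."}'

# Re-serialize with indentation: longer on disk, but much easier to
# read and review in a pull request.
pretty = json.dumps(json.loads(compact), indent=2)
print(pretty)
```

For a file on disk, `python -m json.tool wellness_chatbot.json` (file name assumed) prints the formatted version to stdout.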
Okay.
All right, and it's gonna allow about one paragraph so I'll just take the first paragraph here.
And just scroll through.
(keyboard clicking)
Most of this, I don't wanna edit, of course.
Interesting, the template.
Oh yeah, this is the system prompt, very nice.
Yeah, and it's not super heavy. The size is comprehensive.
But it's not gonna use up your whole token window.
All right, I think it looks overall pretty good. So saving the changes there. And I think I just opened up pull request
against their community examples repo, which is over here.
Get up.
There's my files.
Yeah, very nice.
They use spaces, some underscores.
Let me look into,
now I think it's gonna get the title from
the actual JSON, Wellness chat bot. Hopefully the file name wouldn't matter.
So by convention, I just don't put spaces in my file names. Or create one. Uploads are disabled.
Okay, I have to fork.
Create a new one, fork.
That's fine.
All right, so we will call this one Wellness chat bots.
(keyboard clacking)
Dot JSON, JSON, JSON, what is it? Whoops, cancel. Just double check it, 700 lines.
Yeah, easier to inspect that way though.
Wellness chat bot, and we'll grab this.
(sighs)
(keyboard clacking)
Proposing changes.
Create pull request.
Allow edits by maintainers.
Excellent. All right, so we've made some interesting progress today. Two and a half hours, but it was fun.
So we've just recapped. We have been working with Langflow all day, or for this session, not all day, but overall it's a very interesting project. I like the visual design of these AI services.
Reviewing the documentation, it's fairly well documented. There are some gaps, some of the videos don't play. We opened a small pull request to support dark mode on the doc site.
Took a look at dark mode for the interface itself, the Langflow interface, but I couldn't quite figure out. It would be more complicated work, I think.
But did manage to open the pull request here for a Wellness chat bot in the community example. Hopefully some other people can be inspired,
maybe get some help, self-reflection, introspection, support.
And I might try it offline as well to see if it'll help me reflect on things that are coming up for me in life.
The privacy bit's interesting because all these conversations are definitely going into open AI and the models are being trained, I believe, on user input.
So that is an ethical concern. It's currently out of my hands, out of our hands, when we're using these products. I suppose you could swap out any AI because of the modular nature of Langflow. If you didn't want to use OpenAI, you could still use the prompt and conversation memory, which is the interesting thing about LangChain in general,
having a modular way and high level abstractions to build solutions with artificial intelligence and other sources of information and knowledge.
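That modularity point can be sketched like so. Both "backends" here are toy stand-ins rather than real API clients; the point is that the prompt and chain stay the same while the model is swapped:

```python
from typing import Callable

def openai_backend(prompt: str) -> str:
    # Stand-in for an OpenAI model call.
    return "[openai] " + prompt

def local_backend(prompt: str) -> str:
    # Stand-in for a local or Hugging Face model call.
    return "[local] " + prompt

def build_chain(llm: Callable[[str], str]) -> Callable[[str], str]:
    # The prompt template is fixed; only the model function varies.
    template = "The following is a friendly conversation.\nHuman: {text}\nAI:"
    return lambda text: llm(template.format(text=text))

chain = build_chain(openai_backend)
print(chain("Hello!"))

chain = build_chain(local_backend)  # same prompt, different model
print(chain("Hello!"))
```

This is the high-level abstraction idea: the chain is composed from interchangeable parts, so swapping the model provider doesn't touch the prompt or memory.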
All right, well, this has been a live code hangout. If you'd like to check out this source code, I will be committing this to my LangChain experiments repository shortly.
And I've got a merge, a pull request in progress. I'll just go ahead and merge that. I'm focusing mostly now on using Langflow. Yeah, I checked out Flowise in the last session. It was very nice as well, promising, but I kind of gravitate towards Python projects, and so I started with Langflow.
Okay, well, if you'd like to check it out, we're at github.com slash brylie slash langchain-experiments.
Thanks for checking out the live stream. I hope you're doing well and have a great day.