{
"cells": [
{
"cell_type": "code",
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"The autoreload extension is already loaded. To reload it, use:\n",
" %reload_ext autoreload\n"
]
}
],
"source": [
"%load_ext autoreload\n",
"%autoreload 2"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2024-03-22T22:16:01.096684Z",
"start_time": "2024-03-22T22:16:01.085342Z"
}
},
"execution_count": 5
},
{
"cell_type": "markdown",
"source": [
"# Summary Comparison Analysis\n",
"Does it matter where the documents are placed in the document? Let's find out."
],
"metadata": {
"pycharm": {
"name": "#%% md\n"
}
}
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"pycharm": {
"name": "#%%\n"
},
"ExecuteTime": {
"end_time": "2024-03-22T22:16:01.925259Z",
"start_time": "2024-03-22T22:16:01.412604Z"
}
},
"outputs": [],
"source": [
"import os, sys\n",
"from dotenv import load_dotenv\n",
"from langchain.prompts.chat import ChatPromptTemplate\n",
"from langchain.chains import LLMChain\n",
"\n",
"load_dotenv()\n",
"\n",
"PROJECT_PATH = os.environ.get(\"PROJECT_PATH\")\n",
"sys.path.append(PROJECT_PATH)\n",
"\n",
"from utils.models import llm_map"
]
},
{
"cell_type": "code",
"outputs": [],
"source": [
"BASIC_PROMPT = \"\"\"\n",
"SUMMARY 1\n",
"==========\n",
"{paper_1}\n",
"\n",
"\n",
"SUMMARY 2\n",
"==========\n",
"{paper_2}\n",
"\n",
"QUESTION\n",
"==========\n",
"Which of these 2 summaries is better? Which has better flow, presents the ideas in a more organized manner, is comprehensive and more enjoyable to read?\"\"\"\n",
"\n",
"MINIMAL_PROMPT = \" Reply with either 'Summary 1' or 'Summary 2' to indicate your preference, and nothing else.\"\n",
"\n",
"COT_PROMPT = \" First analyze both summaries without comparing them and then provide a response. Do not mention which one is better until the end (but do mention which one you prefer).\" "
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2024-03-22T22:16:01.936866Z",
"start_time": "2024-03-22T22:16:01.926199Z"
}
},
"execution_count": 7
},
{
"cell_type": "code",
"outputs": [],
"source": [
"def run_comparison(summary1, summary2, model, cot=\"raw\"):\n",
" \"\"\"Ask LLM to choose which summary is better.\"\"\"\n",
" if cot == \"raw\":\n",
" prompt = BASIC_PROMPT\n",
" elif cot == \"minimal\":\n",
" prompt = BASIC_PROMPT + MINIMAL_PROMPT\n",
" elif cot == \"cot\":\n",
" prompt = BASIC_PROMPT + COT_PROMPT\n",
" else:\n",
" raise ValueError(\"Invalid COT value.\")\n",
" title_rephrase_prompt = ChatPromptTemplate.from_messages([(\"user\", prompt)])\n",
" chain = LLMChain(llm=llm_map[model], prompt=title_rephrase_prompt)\n",
" phrase = chain.invoke(dict(paper_1=summary1, paper_2=summary2, max_tokens=1000))[\"text\"]\n",
" return phrase"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2024-03-22T22:16:01.948670Z",
"start_time": "2024-03-22T22:16:01.937741Z"
}
},
"execution_count": 8
},
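{
"cell_type": "markdown",
"source": [
"The next cell is a minimal sketch of how the order-swapped judge calls can be tallied, assuming the `run_comparison` helper above and the 'Summary 1' / 'Summary 2' reply format of the minimal prompt. The hypothetical `positional_bias_check` helper runs the comparison in both orders, maps each reply back to the underlying summary, and counts wins; a judge with no positional bias should favor the same summary in both orders."
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"outputs": [],
"source": [
"def positional_bias_check(summary_a, summary_b, model, label_a=\"A\", label_b=\"B\", trials=1):\n",
"    \"\"\"Tally which underlying summary wins when the presentation order is swapped.\"\"\"\n",
"    wins = {label_a: 0, label_b: 0}\n",
"    for _ in range(trials):\n",
"        ## Order 1: summary_a occupies the 'Summary 1' slot.\n",
"        res = run_comparison(summary_a, summary_b, model, cot=\"minimal\")\n",
"        wins[label_a if \"summary 1\" in res.lower() else label_b] += 1\n",
"        ## Order 2: summary_b occupies the 'Summary 1' slot.\n",
"        res = run_comparison(summary_b, summary_a, model, cot=\"minimal\")\n",
"        wins[label_b if \"summary 1\" in res.lower() else label_a] += 1\n",
"    return wins\n",
"\n",
"## Example usage (labels are illustrative):\n",
"## positional_bias_check(gpt4_summary, claude_sonnet_summary, \"claude-opus\", \"gpt-4\", \"claude-sonnet\")"
],
"metadata": {
"collapsed": false
},
"execution_count": null
},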
{
"cell_type": "code",
"outputs": [],
"source": [
"gpt4_summary = \"\"\"\n",
"# Evolutionary Optimization of Model Merging Recipes\n",
"\n",
"## Evolutionary Optimization in Model Merging\n",
"\n",
"The advent of evolutionary algorithms has revolutionized the process of model merging, offering a pathway to automate the creation of powerful foundation models. This innovative approach eliminates the need for extensive additional training data or compute resources, by automatically discovering effective combinations of diverse open-source models. The significance of this evolutionary approach lies in its ability to explore alternative, efficient methods for foundation model development, thereby saving computational resources. By evaluating candidate models without the necessity of training, these algorithms pave the way for a more resource-efficient model development process. This paper introduces a unified framework for evolutionary optimization of model merging recipes, which ensures superior performance of merged models compared to their individual counterparts. The proposed method focuses on automatically discovering the most effective ways to combine different models, creating new foundation models with enhanced capabilities.\n",
"\n",
"## Methodology and Techniques\n",
"\n",
"The methodology employed in this research operates across both parameter space and data flow space, allowing for a comprehensive optimization that extends beyond the mere weights of individual models. By leveraging evolutionary algorithms, the merging of foundation models is facilitated, navigating through the complexities of both parameter space and data flow space. This approach dissects the merging process into two distinct, orthogonal configuration spaces and integrates them into a cohesive framework. Parameter space merging aims to integrate the weights of multiple foundational models, leveraging task vectors analysis and TIES-Merging with DARE for layer-wise integration. On the other hand, data flow space merging optimizes the data flow between layers through layer permutations in the neural network architecture. An integrated strategy that combines both parameter space and data flow space merging methods is proposed, significantly enhancing model performance. The CMA-ES algorithm is employed to optimize the merging configuration parameters, such as sparsification and weight mixing at each layer, further refining the merging process.\n",
"\n",
"## Cross-Domain Model Merging and Performance\n",
"\n",
"The capability of cross-domain merging opens up new possibilities, such as the creation of a Japanese Language Large Model (LLM) with Math reasoning capabilities, which has achieved state-of-the-art performance on various Japanese LLM benchmarks. This surpasses the capabilities of models with significantly more parameters, showcasing the efficiency and generalizability of the approach. EvoLM, the framework discussed, can merge models from different domains, such as non-English language and Math, or non-English language and Vision, potentially exceeding the capabilities achievable through conventional human design strategies. The generated models, including a Japanese LLM with Math reasoning capability and a Japanese Vision Language Model (VLM), achieve state-of-the-art performance on various benchmarks without explicit optimization for those tasks. This demonstrates the high efficiency and surprising generalizability of the approach, with a 7B parameter LLM outperforming some previous 70B parameter Japanese LLMs on benchmark datasets.\n",
"\n",
"## Practical Applications and Open-Sourced Contributions\n",
"\n",
"The practical applicability of this approach is demonstrated through a culturally-aware Japanese VLM, which effectively describes Japanese culture-specific content. The open-sourcing of EvoLLM-JP and EvoVLM-JP, two state-of-the-art Japanese foundation models, enables further research and development in the field. This framework can be applied to a wide range of tasks, including machine translation, text classification, and question answering, across various domains such as LLMs, Vision, and image diffusion models. By leveraging diverse existing models, institutions and governments can develop prototype models quickly and efficiently, significantly reducing resource requirements.\n",
"\n",
"## Challenges, Limitations, and Future Directions\n",
"\n",
"Despite the promising prospects of model merging for LLM development, the current reliance on human intuition and domain knowledge presents a limitation, underscoring the need for a more principled approach. Inherited limitations from source models, lack of instruction fine-tuning, and the potential for factually flawed outputs are among the challenges faced. The goal moving forward is to derive a more principled approach to model merging, leading to models that generalize well across various tasks. Future work includes exploring the use of evolution to search for candidate source models and producing swarms of diverse foundation models. By optimizing for tasks outside the original leaderboard domain, this method takes an orthogonal approach, potentially leading to better generalization across a wider array of tasks.\"\"\""
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2024-03-22T22:16:02.089886Z",
"start_time": "2024-03-22T22:16:02.078146Z"
}
},
"execution_count": 9
},
{
"cell_type": "code",
"outputs": [],
"source": [
"claude_sonnet_summary = \"\"\"\n",
"# Evolutionary Optimization of Model Merging Recipes\n",
"\n",
"## Evolutionary Model Merging: A Novel Approach to Foundation Model Development\n",
"\n",
"Evolutionary algorithms have been applied to automate the creation of powerful foundation models through model merging. Model merging, a promising approach for Large Language Model (LLM) development, currently relies heavily on human intuition and domain knowledge, limiting its potential. The proposed evolutionary approach aims to eliminate this limitation by automatically discovering effective combinations of diverse open-source models, eliminating the need for extensive additional training data or compute resources.\n",
"\n",
"The Evolutionary Model Merge (EvoLM) method operates in both parameter space and data flow space, allowing for optimization beyond just the weights of individual models. This innovative approach automatically discovers optimal combinations of diverse open-source models to create new foundation models, addressing parameter interference issues through the evolutionary optimization of model merging recipes.\n",
"\n",
"## Cross-Domain Merging and State-of-the-Art Performance\n",
"\n",
"One of the key advantages of EvoLM is its ability to perform cross-domain merging, generating models that combine capabilities from different domains. For instance, a Japanese LLM with Math reasoning capabilities has been created using this approach. This Japanese Math LLM achieved state-of-the-art performance on various Japanese LLM benchmarks, surpassing models with significantly more parameters.\n",
"\n",
"Additionally, a culturally-aware Japanese Visual Language Model (VLM) has been developed, demonstrating the effectiveness of the approach in describing Japanese culture-specific content. This VLM achieved top results on a domestically-sourced dataset of Japanese image-description pairs, showcasing its ability to handle Japanese culture-specific content.\n",
"\n",
"## Open-Source Contributions and Challenging Conventional Paradigms\n",
"\n",
"This work contributes new state-of-the-art models to the open-source community and introduces a new paradigm for automated model composition. EvoLLM-JP and EvoVLM-JP, two state-of-the-art Japanese foundation models, have been open-sourced, enabling further research and development in the field.\n",
"\n",
"The evolutionary approach paves the way for exploring alternative, efficient approaches to foundation model development. EvoLM's evolutionary-based method demonstrates that competitive models can be produced without relying on gradient-based training, potentially paving the way for exploring alternative, more efficient approaches to foundation model development.\n",
"\n",
"## Existing Model Merging Techniques and Challenges\n",
"\n",
"Linear weight interpolation has been shown to work well for merging image generation models, but its performance is limited when applied to language models. Recent methods for merging language models include Task Arithmetic, TIES-Merging, and DARE. The open-source toolkit Mergekit provides popular recipes for merging language models, including linear and spherical interpolation, Task Arithmetic, TIES-Merging, and DARE.\n",
"\n",
"However, the community's cycle of fine-tuning and merging models has highlighted the need for improved techniques to address the challenges of model merging, such as parameter interference and performance degradation.\n",
"\n",
"## Evolutionary Optimization in Parameter and Data Flow Spaces\n",
"\n",
"The proposed approach optimizes model merging in both parameter and data flow spaces. By operating in the parameter space, the method can address issues related to parameter interference, which can occur when merging models with different architectures or training objectives. Optimization in the data flow space allows for the exploration of alternative architectures and data processing pipelines, potentially leading to more efficient and effective models.\n",
"\n",
"This dual-space optimization approach is a key innovation that sets EvoLM apart from existing model merging techniques, which often focus solely on parameter-level optimization.\n",
"\n",
"## Experimental Results and Practical Applications\n",
"\n",
"The experimental results presented in the paper demonstrate the effectiveness of the proposed approach. The generated models, such as the Japanese LLM with Math reasoning capability and the Japanese VLM, achieved state-of-the-art performance on various benchmarks without explicit optimization for those tasks.\n",
"\n",
"Notably, the 7B parameter LLM outperformed some previous 70B parameter Japanese LLMs on benchmark datasets, demonstrating its high efficiency and generalization capability. These results highlight the practical applications of the proposed method in creating efficient and effective foundation models for various domains and tasks.\n",
"\n",
"## Future Directions and Potential Impact\n",
"\n",
"The paper discusses several future directions and the potential impact of the proposed approach. One area of exploration is the application of EvoLM to other domains and modalities, such as vision, speech, and multimodal tasks. Additionally, the authors suggest investigating the potential for transfer learning and few-shot learning with the generated models.\"\"\""
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2024-03-22T22:16:02.587718Z",
"start_time": "2024-03-22T22:16:02.575932Z"
}
},
"execution_count": 10
},
{
"cell_type": "markdown",
"source": [
"## Proof of concept"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Summary 2\n"
]
}
],
"source": [
"## Test without any CoT.\n",
"res = run_comparison(gpt4_summary, claude_sonnet_summary, \"claude-opus\", cot=\"minimal\")\n",
"print(res)"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2024-03-22T22:16:08.181679Z",
"start_time": "2024-03-22T22:16:04.544538Z"
}
},
"execution_count": 11
},
{
"cell_type": "code",
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Summary 2\n"
]
}
],
"source": [
"## Reverse.\n",
"res = run_comparison(claude_sonnet_summary, gpt4_summary, \"claude-opus\", cot=\"minimal\")\n",
"print(res)"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2024-03-22T22:16:13.691416Z",
"start_time": "2024-03-22T22:16:08.183002Z"
}
},
"execution_count": 12
},
{
"cell_type": "code",
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"After analyzing both summaries, I find that Summary 2 is better in terms of flow, organization, comprehensiveness, and readability.\n",
"\n",
"1. Flow and organization: Summary 2 presents the ideas in a more logical and organized manner. It starts with an introduction to the novel approach of evolutionary model merging, then discusses the key advantages, such as cross-domain merging and state-of-the-art performance. It then moves on to the open-source contributions, existing model merging techniques and challenges, the proposed evolutionary optimization approach, experimental results, and future directions. This structure allows for a smooth flow of information and makes it easier for the reader to follow the main points.\n",
"\n",
"2. Comprehensiveness: Summary 2 covers all the essential aspects of the research paper, including the motivation behind the work, the proposed methodology, experimental results, practical applications, and future directions. It also provides more context by discussing existing model merging techniques and challenges, which helps the reader better understand the significance of the proposed approach.\n",
"\n",
"3. Readability and enjoyment: Summary 2 is more engaging and enjoyable to read due to its clear structure, coherent paragraphs, and the use of subheadings to guide the reader through the main points. The language is concise and easy to understand, making it accessible to a wider audience.\n",
"\n",
"In contrast, while Summary 1 does cover the main points of the research paper, it lacks the clear organization and flow that Summary 2 exhibits. The information is presented in a more disjointed manner, making it harder for the reader to follow the main arguments and appreciate the significance of the work.\n",
"\n",
"In conclusion, Summary 2 is the better choice, as it presents the ideas in a more organized, comprehensive, and engaging manner, making it more enjoyable and informative for the reader.\n"
]
}
],
"source": [
"## Now test with default verbosity (i.e.: CoT after decision).\n",
"res = run_comparison(gpt4_summary, claude_sonnet_summary, \"claude-opus\", cot=\"raw\")\n",
"print(res)"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2024-03-22T22:16:37.408257Z",
"start_time": "2024-03-22T22:16:13.692945Z"
}
},
"execution_count": 13
},
{
"cell_type": "code",
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"After analyzing both summaries, I find that Summary 2 is better in terms of flow, organization, comprehensiveness, and readability.\n",
"\n",
"1. Flow and organization: Summary 2 presents the ideas in a more logical and organized manner. It starts with an introduction to the evolutionary optimization approach in model merging, then delves into the methodology and techniques, followed by the cross-domain merging and performance, practical applications, and finally, the challenges and future directions. This structure allows for a smooth flow of information and makes it easier for the reader to follow the main points.\n",
"\n",
"2. Comprehensiveness: Summary 2 provides a more comprehensive overview of the research, covering all the key aspects of the evolutionary optimization approach. It discusses the methodology in greater detail, including the parameter space and data flow space merging techniques, as well as the CMA-ES algorithm used for optimization. It also highlights the practical applications and open-source contributions more effectively.\n",
"\n",
"3. Readability and enjoyment: Summary 2 is more engaging and enjoyable to read. It uses a more conversational tone and includes relevant examples to illustrate the key points, such as the Japanese LLM with Math reasoning capabilities and the culturally-aware Japanese VLM. The use of subheadings also makes the summary more visually appealing and easier to navigate.\n",
"\n",
"In contrast, while Summary 1 covers the main points, it lacks the same level of organization and flow. The information is presented in a more disjointed manner, making it harder for the reader to follow the main arguments. Additionally, Summary 1 does not provide as much detail on the methodology and practical applications, which are crucial aspects of the research.\n",
"\n",
"In conclusion, Summary 2 is the better choice, as it presents the ideas in a more organized, comprehensive, and engaging manner, making it more enjoyable and informative for the reader.\n"
]
}
],
"source": [
"## Reverse order.\n",
"res = run_comparison(claude_sonnet_summary, gpt4_summary, \"claude-opus\", cot=\"raw\")\n",
"print(res)"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2024-03-22T22:17:01.701995Z",
"start_time": "2024-03-22T22:16:37.409961Z"
}
},
"execution_count": 14
},
{
"cell_type": "code",
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Summary 1 Analysis:\n",
"Summary 1 provides a comprehensive overview of the evolutionary optimization approach for model merging. It begins by introducing the concept and its significance in automating the creation of powerful foundation models. The summary then delves into the methodology and techniques employed, explaining the operation across parameter space and data flow space. It also discusses the use of the CMA-ES algorithm for optimizing merging configuration parameters.\n",
"\n",
"The summary then highlights the cross-domain merging capabilities and the impressive performance achieved by the generated models, such as the Japanese LLM with Math reasoning capabilities and the Japanese VLM. It also mentions the practical applications and the open-sourcing of EvoLLM-JP and EvoVLM-JP, enabling further research and development.\n",
"\n",
"Lastly, the summary addresses the challenges, limitations, and future directions of the approach, including the reliance on human intuition and domain knowledge, inherited limitations from source models, and the potential for factually flawed outputs. It concludes by discussing the goal of deriving a more principled approach to model merging and exploring the use of evolution to search for candidate source models.\n",
"\n",
"Summary 2 Analysis:\n",
"Summary 2 presents a well-structured overview of the evolutionary optimization approach for model merging. It begins by introducing the concept of evolutionary model merging and its advantages over the current approach that relies heavily on human intuition and domain knowledge. The summary then explains the EvoLM method's operation in both parameter space and data flow space, highlighting its ability to automatically discover optimal combinations of diverse open-source models.\n",
"\n",
"The summary then discusses the cross-domain merging capabilities and the state-of-the-art performance achieved by the generated models, such as the Japanese LLM with Math reasoning capabilities and the culturally-aware Japanese VLM. It also mentions the open-source contributions and the new paradigm introduced by this work.\n",
"\n",
"The summary then provides an overview of existing model merging techniques and challenges, such as parameter interference and performance degradation. It explains the dual-space optimization approach of EvoLM and how it sets it apart from existing techniques.\n",
"\n",
"Lastly, the summary presents the experimental results and practical applications of the proposed approach, demonstrating its effectiveness in creating efficient and effective foundation models for various domains and tasks. It concludes by discussing future directions and the potential impact of the approach, including its application to other domains and modalities and the potential for transfer learning and few-shot learning.\n",
"\n",
"Comparison and Preference:\n",
"Both summaries provide a comprehensive overview of the evolutionary optimization approach for model merging. However, Summary 2 has a better flow and presents the ideas in a more organized manner. It begins with an introduction to the concept and its advantages, then discusses the methodology, cross-domain merging capabilities, open-source contributions, existing techniques and challenges, the dual-space optimization approach, experimental results, and future directions.\n",
"\n",
"Summary 2 also provides a more enjoyable reading experience, as it presents the information in a clear and concise manner, making it easier for the reader to follow and understand the key points. Additionally, it provides a more comprehensive overview of the existing techniques and challenges, which helps the reader better understand the significance of the proposed approach.\n",
"\n",
"In conclusion, while both summaries are informative and comprehensive, Summary 2 is the preferred choice due to its better organization, flow, and readability.\n"
]
}
],
"source": [
"## Now test with CoT.\n",
"res = run_comparison(gpt4_summary, claude_sonnet_summary, \"claude-opus\", cot=\"cot\")\n",
"print(res)"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2024-03-22T22:17:35.304309Z",
"start_time": "2024-03-22T22:17:01.703439Z"
}
},
"execution_count": 15
},
{
"cell_type": "code",
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Summary 1 Analysis:\n",
"Summary 1 provides a comprehensive overview of the evolutionary optimization approach for model merging. It begins by introducing the concept of evolutionary model merging and its advantages over traditional methods. The summary then discusses the cross-domain merging capabilities and the state-of-the-art performance achieved by the generated models. It also highlights the open-source contributions made by the authors and the potential impact of challenging conventional paradigms in foundation model development.\n",
"\n",
"The summary then delves into the existing model merging techniques and challenges, providing context for the proposed approach. It explains the evolutionary optimization process in both parameter and data flow spaces, emphasizing the key innovations of the method. The experimental results and practical applications are discussed, demonstrating the effectiveness of the approach. Finally, the summary touches upon future directions and the potential impact of the research.\n",
"\n",
"Overall, Summary 1 presents the ideas in a well-organized manner, starting with an introduction, followed by the methodology, results, and future directions. The flow of information is logical and easy to follow, making it an enjoyable read.\n",
"\n",
"Summary 2 Analysis:\n",
"Summary 2 also provides a comprehensive overview of the evolutionary optimization approach for model merging. It begins by discussing the significance of the evolutionary approach in automating the creation of powerful foundation models and its potential for resource-efficient model development. The summary then delves into the methodology and techniques employed, explaining the operation across parameter space and data flow space, and the use of the CMA-ES algorithm for optimization.\n",
"\n",
"The cross-domain model merging capabilities and the impressive performance of the generated models are discussed, highlighting the efficiency and generalizability of the approach. The summary also touches upon the practical applications and open-sourced contributions of the research.\n",
"\n",
"Finally, the summary addresses the challenges, limitations, and future directions of the approach, discussing the need for a more principled approach to model merging and the potential for exploring the use of evolution in candidate source model selection and the production of diverse foundation models.\n",
"\n",
"Summary 2 presents the ideas in a slightly different order compared to Summary 1, but still maintains a logical flow of information. The writing style is more technical in some parts, which may appeal to a more specialized audience.\n",
"\n",
"Comparison and Preference:\n",
"Both summaries are comprehensive and well-written, presenting the key ideas and findings of the research. However, I prefer Summary 1 for its better flow and more organized presentation of the ideas. Summary 1 follows a more traditional structure, starting with an introduction, followed by the methodology, results, and future directions, making it easier for readers to follow the progression of the research.\n",
"\n",
"Summary 1 also has a more engaging writing style, making it more enjoyable to read for a broader audience. While Summary 2 provides a more technical perspective in some parts, which may be appealing to a specialized audience, Summary 1 strikes a better balance between technical detail and readability.\n",
"\n",
"In conclusion, while both summaries are of high quality, I prefer Summary 1 for its better flow, more organized presentation of ideas, and more engaging writing style.\n"
]
}
],
"source": [
"## Reverse order.\n",
"res = run_comparison(claude_sonnet_summary, gpt4_summary, \"claude-opus\", cot=\"cot\")\n",
"print(res)"
],
"metadata": {
"collapsed": false,
"ExecuteTime": {
"end_time": "2024-03-22T22:18:15.442945Z",
"start_time": "2024-03-22T22:17:35.304963Z"
}
},
"execution_count": 16
}
],
"metadata": {
"kernelspec": {
"display_name": "langchain",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.3"
},
"orig_nbformat": 4
},
"nbformat": 4,
"nbformat_minor": 2
}