- conversationId: acfa33fb-d353-4ca0-8fb2-23867ea4514c
- endpoint: openAI
- title: Python Text Token Counting
- exportAt: 19:29:58 GMT-0400 (Eastern Daylight Time)
- endpoint: openAI
- presetId: null
- model: gpt-4
1 Conclave Guildmage (GRN) 162
4 Sunpetal Grove (XLN) 257
2 Leonin Warleader (M19) 23
2 Divine Visitation (GRN) 10
11 Plains (RIX) 192
8 Forest (RIX) 196
2 Talons of Wildwood (M19) 202
3 Ajani's Welcome (M19) 6
2 Impassioned Orator (RNA) 12
3 Ixalan's Binding (XLN) 17

1 Conclave Guildmage (GRN) 162
4 Sunpetal Grove (XLN) 257
2 Leonin Warleader (M19) 23
2 Divine Visitation (GRN) 10
11 Plains (RIX) 192
8 Forest (RIX) 196
3 Ajani's Welcome (M19) 6
4 Impassioned Orator (RNA) 12
3 Ixalan's Binding (XLN) 17
2 Healer's Hawk (GRN) 14

4 Ghitu Lavarunner (DAR) 127
4 Wizard's Lightning (DAR) 152
4 Fanatical Firebrand (RIX) 101
3 Lightning Strike (XLN) 149
21 Mountain (RIX) 195
4 Shock (M19) 156
4 Viashino Pyromancer (M19) 166
3 Goblin Chainwhirler (DAR) 129
1 Risk Factor (GRN) 113
4 Skewer the Critics (RNA) 115

3 Ajani's Pridemate (M19) 5
10 Plains (M19) 261
1 Isolated Chapel (DAR) 241
2 Legion Lieutenant (RIX) 163
10 Swamp (M19) 269
2 Skymarch Bloodletter (XLN) 124
2 Inspiring Cleric (XLN) 16
3 Call to the Feast (XLN) 219
2 Epicure of Blood (M19) 95
1 Herald of Faith (M19) 13
// This code can be executed in Selenium via a JavaScript executor. It changes the
// source of an image that uses a blob: URL to a data: URL instead.
// You will need to supply the ID of the image tag, or change the code to look it up some other way.
var image = document.getElementById(YOUR_IMAGE_ID); // YOUR_IMAGE_ID needs to be changed / passed in
var blobUrl = image.src;
var xhr = new XMLHttpRequest();
xhr.responseType = 'blob';
xhr.onload = function () {
  // Read the fetched blob back out as a base64 data: URL and swap it in.
  var reader = new FileReader();
  reader.onloadend = function () { image.src = reader.result; };
  reader.readAsDataURL(xhr.response);
};
xhr.open('GET', blobUrl);
xhr.send();
In the expanding universe of machine learning, the task of accurately answering questions based on a corpus of proprietary documents presents an exciting yet challenging frontier. At the intersection of natural language processing and information retrieval, the quest for efficient and accurate "Q&A over documents" systems is a pursuit that drives many developers and data scientists.
While large language models (LLMs) such as GPT have greatly advanced the field, there are still hurdles to overcome. One such challenge is identifying and retrieving the most relevant documents based on user queries. User questions can be tricky; they're often not well-formed and can cause our neatly designed systems to stumble.
In this blog post, we'll first delve into the intricacies of this challenge and then explain a simple yet innovative solution that leverages the new function calling capabilities baked into the chat completion API for GPT. This approach aims to streamline the retrieval of relevant documents.
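The core of the approach can be sketched as follows: advertise a search function to the model via a function schema, then dispatch the model's structured function call to your retrieval backend. This is a minimal sketch under assumptions; `search_documents`, its tiny in-memory corpus, and the mocked model response are all illustrative stand-ins, not part of any real API.

```python
import json

# Hypothetical search backend; a real system would query a vector store.
def search_documents(query: str) -> list[str]:
    corpus = {"token counting": "Use tiktoken to count tokens."}
    return [v for k, v in corpus.items() if k in query.lower()]

# Function schema advertised to the chat completion API so the model emits
# a well-formed search query instead of passing along the raw user question.
SEARCH_FUNCTION = {
    "name": "search_documents",
    "description": "Search the proprietary document corpus.",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "A concise search query"}
        },
        "required": ["query"],
    },
}

def dispatch(function_call: dict) -> list[str]:
    # The model returns the function name plus JSON-encoded arguments.
    if function_call["name"] == "search_documents":
        args = json.loads(function_call["arguments"])
        return search_documents(args["query"])
    raise ValueError(f"Unknown function: {function_call['name']}")

# Simulated model output, shaped like the function_call field of a chat
# completion response; in practice this comes back from the API.
mock_call = {"name": "search_documents", "arguments": '{"query": "token counting"}'}
print(dispatch(mock_call))  # ['Use tiktoken to count tokens.']
```

The key benefit is that the model, not your code, reformulates a messy user question into a clean query string, which is then validated against the schema before it ever touches the retrieval layer.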
I recently did some more exploring with a local LLM tool that imports your documents into a vector store. Given the promising initial results with a handful of docs, I wanted to see how it handled more and different data. I decided to copy over the text files containing Expanse trivia questions and answers that I use as a regression suite to test my own "Q&A over documents" process. I wanted to see what types of questions it could answer from that content...
The tool used double newlines as its segmentation boundary. That strategy works well for many types of content, but for this content it was a terrible choice, as the text in the files is formatted with numbered questions followed by their answers, like this:
1. Long winded question with establishing context
Since prompt engineering isn't a thing, it should be no problem to reproduce either of them while giving the model no information about the content aside from the title of the video and who made it.
Post a gist link in the comments...
The video 'Life in 2323 A.D.' by Isaac Arthur presents a future panorama about technological advancements and lifestyle adaptations three centuries from now, using several fictional characters to emphasize the changing elements of daily life. In the future pictured, sophisticated technologies such as self-maintaining infrastructures and life extension technologies are subtly integrated into daily life. The characters, including Amy, who lives in a technologically advanced, eco-friendly suburban setting, and Becky, a cybernetically augmented great grandmother residing in a self-sufficient arcology, illustrate the far-reaching influence of technology.
Other characters, like Cameron and Duncan, opt for a techno-primitive lifestyle, choosing external devices over implants. The video predicts an Earth population between 100 billion and a trillion, sustained by highly automated, climate-controlled greenhouses and an Orbital Ring enabling cheap, quick access to orbit.