@jasonjmcghee
Created May 28, 2023 04:48
Incredibly simple way to cache the output of a function - useful with unreliable LLM APIs
import os
from typing import Callable


def cached_fn(dir_name: str, file_name: str, fn: Callable[[], str], verbose: bool = True) -> str:
    """
    Cache the output of a function in a file.

    If the file already exists, read and return its contents instead of
    running the function. Otherwise, run the function and write its output
    to `{dir_name}/{file_name}.txt`.

    Example:
    >>> script = cached_fn(
    ...     dir_name,
    ...     "script",
    ...     lambda: chat_with_ai(
    ...         generate_script_prompt_system,
    ...         generate_script_prompt.format(title=topic, content=content_to_answer),
    ...         model="gpt-4"
    ...     )
    ... )

    Another example:
    >>> one_and_one = cached_fn(
    ...     dir_name,
    ...     "add_one_and_one",
    ...     lambda: str(1 + 1)
    ... )
    """
    output_filename = f"{dir_name}/{file_name}.txt"
    if os.path.isfile(output_filename):
        # Cache hit: reuse the previously saved output.
        with open(output_filename) as f:
            output = f.read()
    else:
        # Cache miss: run the function, then persist its output so
        # later runs can skip the (potentially expensive) call.
        os.makedirs(dir_name, exist_ok=True)
        output = fn()
        with open(output_filename, 'w') as f:
            f.write(output)
    if verbose:
        print("================================================================")
        print(output)
    return output
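
A minimal usage sketch of the pattern the gist describes: wrapping an unreliable call so that retries after a crash never re-pay for work that already succeeded. `flaky_api_call` and the "cache" directory name are hypothetical stand-ins, not part of the gist.

import random

def flaky_api_call() -> str:
    # Hypothetical stand-in for a slow, occasionally failing API
    # (e.g. an LLM request that sometimes times out).
    if random.random() < 0.5:
        raise RuntimeError("transient API error")
    return "generated text"

# The first successful run writes cache/answer.txt; every later run
# (including after a crash partway through a pipeline) just reads the file.
answer = cached_fn("cache", "answer", flaky_api_call)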