This page explains how to create itls using itllib. An itl is a program that runs in-the-loop. itllib is built around the concept of a loop: a collection of pub-sub channels through which itls interact with one another.
If you prefer to learn by running demos, check out these examples:
- (put demos here)
To use itllib, install two packages first:

```shell
pip install --upgrade git+https://github.com/ThatOneAI/itllib
pip install --upgrade git+https://github.com/ThatOneAI/itlmon
```

The first, itllib, is a Python 3 library for creating itls. The second, itlmon, is a debugging tool for interacting with itls.
With those installed, let's allocate a loop:

```shell
loopname="demo-$(head -c16 /dev/urandom | xxd -p)"
python3 -m itllib create-loop demo-loop --endpoint 'streams.thatone.ai' --name "$loopname"
python3 -m itllib config add-stream pings --loop "demo-loop"
python3 -m itllib config add-stream pongs --loop "demo-loop"
```
The `create-loop` command creates a `secrets/` folder that specifies which endpoint to use for the loop. The `config` commands create a `config.yaml` file that maps semantic stream names to their underlying endpoints.
Let's test out the config. Open two terminals and run the following command in each:

```shell
python3 -m itlmon --config ./config.yaml --secrets ./secrets
```

You should see three channels in the side pane: system, pings, and pongs.
From one terminal, send a message in the "pings" channel. You should see the exact same message in the other terminal's "pings" channel.
If that works, then you're good to go.
Let's create an itl so we're not just talking between terminals.
Paste this into a file `paddle_itl.py`:
```python
import asyncio

from itllib import Itl

itl = Itl()
itl.apply_config("./config.yaml", "./secrets")
itl.start()


@itl.ondata("pings")
def paddle(message):
    itl.stream_send("pongs", f"pong {message}")


# The itl runs asynchronously. Let's prevent the program from stopping.
while True:
    asyncio.run(asyncio.sleep(999))
```
And run `paddle_itl.py`:

```shell
python3 paddle_itl.py
```
Now if you send a message to the "pings" channel in any of your itlmon terminal UIs, you should see a message pop up in the "pongs" channel. That's your itl reading your ping and sending the pong.
These messages are passed over websockets. You can see the URLs for these websockets by running the following command:

```shell
python3 -m itllib config dump-endpoints
```
You can use these websocket endpoints to send and receive itl messages in any language and client. Note that itllib expects JSON-formatted messages, and it will silently reject improperly formatted messages. Right now, our itl only accepts string messages, but we can make it more generic.
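The JSON requirement is easy to get wrong from a raw client: even a plain string has to be serialized before sending. A quick standalone check of what counts as valid (plain Python, no itllib required; `is_valid_json` is a helper invented here, not part of itllib):

```python
import json

# A JSON-encoded string includes the quotes; the bare Python string does not.
ok = json.dumps("ping from a raw client")
print(ok)  # "ping from a raw client"


def is_valid_json(raw: str) -> bool:
    """Return True if raw parses as JSON -- roughly what the broker would accept."""
    try:
        json.loads(raw)
        return True
    except json.JSONDecodeError:
        return False


print(is_valid_json(ok))                        # True
print(is_valid_json("ping from a raw client"))  # False: would be silently dropped
```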
```python
@itl.ondata("pings")
def paddle(*args, **kwargs):
    itl.stream_send("pongs", f"pong: {args} {kwargs}")
```

```python
@itl.ondata("pings")
async def paddle(*args, **kwargs):
    itl.stream_send("pongs", "async methods work too")
```
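Why does `*args, **kwargs` make the handler generic? Python's argument unpacking maps naturally onto JSON arrays and objects. Here's a standalone illustration of that mapping (this shows the unpacking mechanics, not itllib's actual dispatch code):

```python
import json


def paddle(*args, **kwargs):
    return f"pong: {args} {kwargs}"


# A JSON object can be passed as keyword arguments...
payload = json.loads('{"sender": "alice", "text": "hi"}')
print(paddle(**payload))  # pong: () {'sender': 'alice', 'text': 'hi'}

# ...while a JSON array maps onto positional arguments.
items = json.loads('[1, 2, 3]')
print(paddle(*items))  # pong: (1, 2, 3) {}
```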
There's no access control right now, so if you want to share some endpoints with others, it's recommended to:
- Make sure your loop names include long random strings so people can't stumble onto them without having a link.
- Make sure to create at least two loops with different random strings: one for your publicly-exposed streams, and one for your private ones.
Once we set up access control, this will be less of an issue.
This should be enough for most functionality. If you want to see more advanced uses of streams, check out the itllib documentation.
Configs enable one program to configure another. They're modeled on Kubernetes. The `config.yaml` file and `secrets/` folder contain some example configuration files. You can work with custom ones too.
First, let's create the necessary resources:

```shell
dbname="demo-$(head -c16 /dev/urandom | xxd -p)"
python3 -m itllib create-database demo-db --endpoint clusters.thatone.ai --name "$dbname" --notifier "https://events.thatone.ai/clusters"
python3 -m itllib config add-stream "cluster-updates" --loop "demo-loop"
python3 -m itllib config add-cluster "main-cluster" --database "demo-db" --stream "cluster-updates"
```
The `create-database` command adds a database endpoint to your `secrets/` folder. This database will store all configuration files. The `add-stream` command creates a new stream that will receive update notifications whenever a config file in our to-be-created cluster changes. After that, `add-cluster` creates the actual cluster.
You can think of a database as a repository of configuration files. A cluster organizes the configs in a database. It's similar to the difference between a disk and a filesystem.
Let's test it out with itlmon first. Create a file `person-1.yaml` with these contents:

```yaml
apiVersion: example.thatone.ai/v1
kind: Person
metadata:
  name: person-1
spec:
  name: "Alice"
  greeting: "Wassup"
```
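To preview what an itl will do with this file: the `spec` block is what ends up as the resource's fields. Here's a self-contained sketch of that mapping, using a plain dataclass in place of the pydantic model so it runs without itllib (the dict below is what a YAML loader would produce from `person-1.yaml`):

```python
from dataclasses import dataclass

# The parsed config, as a dict.
config = {
    "apiVersion": "example.thatone.ai/v1",
    "kind": "Person",
    "metadata": {"name": "person-1"},
    "spec": {"name": "Alice", "greeting": "Wassup"},
}


@dataclass
class Person:
    name: str
    greeting: str


# The spec fields populate the model directly.
person = Person(**config["spec"])
print(f"{person.greeting} {person.name}!")  # Wassup Alice!
```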
If you already have itlmon running, you'll need to restart it to pick up the `config.yaml` and `secrets` changes:

```shell
# In itlmon, if it's running:
/quit
# In the shell:
python3 -m itlmon --config ./config.yaml --secrets ./secrets
# In itlmon once it starts:
/cluster main-cluster apply person-1.yaml
```
You should see a message in the "cluster-updates" stream indicating that the operation has been queued. If you see the message, you're good to go.
Now let's create an itl that stores and prints config files. Call it `greeter_itl.py`:
```python
import asyncio

from itllib import Itl, SyncedResources
from pydantic import BaseModel

itl = Itl()
itl.apply_config("./config.yaml", "./secrets")
itl.start()

# Store the config files here
resources = SyncedResources()


# Any pydantic model works here, as long as it can be converted to JSON
@resources.register(itl, "main-cluster", "example.thatone.ai", "v1", "Person")
class PersonResource(BaseModel):
    name: str
    greeting: str


@itl.ondata("pings")
async def greet(*args, **kwargs):
    # Ignore the ping data and just greet everyone
    for person in resources.values():
        itl.stream_send("pongs", f"{person.greeting} {person.name}!")


# The itl runs asynchronously. Let's prevent the program from stopping.
while True:
    asyncio.run(asyncio.sleep(999))
```
And run it:

```shell
python3 greeter_itl.py
```
When it starts, it should see the `person-1` config you applied through itlmon. You'll see a second "put" message in the "cluster-updates" channel in itlmon, indicating that the config was successfully applied. Now if you send a message in the "pings" channel, you should see a greeting in "pongs".
You can apply or delete any number of configuration files using the `/cluster` command. The `resources` variable in `greeter_itl.py` will be kept up-to-date with the changes. Run `/cluster help` for details, or check out the itllib documentation.
You'll often need to perform some logic when modifying resources. Here's a skeleton of how to handle these cases:
```python
from itllib import ResourceController

# ...


@resources.register(itl, "main-cluster", "example.thatone.ai", "v1", "ComplexResource")
class ComplexResource(ResourceController):
    async def create_resource(self, config):
        # Parse the config and build the object it describes
        return myCustomObject

    async def update_resource(self, resource, config):
        # resource is the current object, built from the old config;
        # config is the new config
        return updatedObject

    async def delete_resource(self, resource):
        # Clean up the object here
        return
```
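To make the three hooks concrete, here's a self-contained sketch that fills them in for a made-up counter resource and drives the lifecycle by hand, the way the cluster machinery would. The class and config shapes are invented for illustration, and it runs without itllib:

```python
import asyncio


class CounterController:
    """Stand-in for a ResourceController managing a simple counter object."""

    async def create_resource(self, config):
        # Parse the config and build the object it describes
        return {"name": config["name"], "count": config.get("start", 0)}

    async def update_resource(self, resource, config):
        # resource is the current object; config is the new config
        resource["count"] += config.get("increment", 0)
        return resource

    async def delete_resource(self, resource):
        # Release anything the object holds
        return None


# Drive the lifecycle manually: create, update, delete.
ctrl = CounterController()
counter = asyncio.run(ctrl.create_resource({"name": "demo", "start": 5}))
counter = asyncio.run(ctrl.update_resource(counter, {"increment": 3}))
print(counter)  # {'name': 'demo', 'count': 8}
asyncio.run(ctrl.delete_resource(counter))
```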
`SyncedResources` and `ResourceController` work together to abstract the underlying resource management APIs in itllib. If you need even more flexibility, you can use the underlying itllib APIs directly. Check out the itllib documentation for more details.
That covers the basic functionality provided by itllib. If you want to see a more complex demo that uses these APIs, take a look at assistants_itl, which provides dynamically reconfigurable assistants that can collaborate with one another.