- Make a container.
- Start the container elsewhere.
- Boom, it works. Lazily.
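A minimal sketch of the steps above, assuming docker and a running ipfs daemon; the image name `myapp` is hypothetical:

```shell
# On the build machine: make a container image and add it to ipfs.
docker build -t myapp .
HASH=$(docker save myapp | ipfs add -Q)   # -Q prints only the final hash

# Elsewhere (given the hash): start the container straight out of ipfs.
# Blocks are fetched as `ipfs cat` streams them -- no registry push needed.
ipfs cat "$HASH" | docker load
docker run myapp
```

Getting `$HASH` to the second machine is left out here; that's exactly what tags (below) are for.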
Where did the publish step go? We didn't need it. :) But you can do it for tags, as those are useful:
- Make a container.
- Tag it.
- Start a container elsewhere.
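One way to get git-style tags is to publish the image's hash under your node's IPNS name. A sketch, assuming a running ipfs daemon; `myapp` and the peer ID are placeholders:

```shell
# Tag: point your node's IPNS name at the image's hash.
HASH=$(docker save myapp | ipfs add -Q)
ipfs name publish "$HASH"

# Elsewhere: resolve the tag and start a container from it.
IMG=$(ipfs name resolve <publisher-peer-id>)   # -> /ipfs/<hash>
ipfs cat "$IMG" | docker load
docker run myapp
```

Re-publishing under the same name updates the tag, so consumers always resolve to the latest image.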
- Make a container.
- Start the containers with swarm or fleet.
- (They share the download with each other!)
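A sketch of what each cluster member runs (a swarm or fleet unit would template this); the image name and hash are hypothetical:

```shell
# Each machine runs its own ipfs node. Blocks are fetched from whichever
# peer already has them, so replicas on the same fast LAN pull from each
# other and the image crosses the slow uplink only once.
ipfs cat QmImageHash | docker load   # QmImageHash is a placeholder
docker run myapp                     # "myapp" is hypothetical
```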
- Start a container.
- Write to it.
- Shut it down.
- Elsewhere, start the container.
- Boom, your changes are there.
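The write/ship cycle above might look like this, assuming docker and a running ipfs daemon (all names hypothetical):

```shell
# Start a container, write to it, shut it down.
docker run --name worker myapp sh -c 'echo "some state" > /out.txt'

# Commit the writes and add the resulting image to ipfs.
docker commit worker myapp:changed
NEWHASH=$(docker save myapp:changed | ipfs add -Q)

# Elsewhere: load and run it. Content addressing means blocks a peer
# already has (the unchanged base layers) don't need to be fetched again.
ipfs cat "$NEWHASH" | docker load
docker run myapp:changed
```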
- Start astralboot.
- Boot the vm.
- Mount ipfs.
- Boot from the mount.
- `ipfs get` the vm.
[ Deploying Infrastructure with IPFS ]
In deployments, we often see lots of machines needing to fetch the same data with as little latency and bandwidth consumption as possible.
This can happen in many ways:
- new updates trigger many machines to fetch images all at once
- new replicas come up over time
- large datasets need to be moved around for computation
- or even fetching images + containers to your dev machines
In all these scenarios we typically see network topologies that operate mostly in a small, very fast local network, and a slower pipe out to the internet. This can be many machines in the same datacenter retrieving something from S3 or the broader internet, or even developers fetching images into their dev machines at their office's ISP.
There have been many systems developed for this, perhaps most notably Twitter's Murder, a specialized version of BitTorrent for machine deployment.
IPFS is a version-controlled, peer-to-peer file system. So it can be used like Twitter's Murder, but it also has versioning, tagging, and signature verification, much like git. That makes it even better for organizing and authenticating the data. It also supports end-to-end encryption, so it's well suited to deployments across the wide internet.
Let's check it out! I'm going to walk you through a few of the things you can do with ipfs.
Let's start with simple VMs. Suppose we have a small Linux hypervisor with ipfs installed.
Let's fetch a vm: `ipfs cat <vm-hash> > myvm.img`.