This Gist is a quick writeup for devs using the beta or master build. We'll put a more complete writeup in the Beaker site docs when 0.8 has its final release. Feel free to open issues for discussion.
We've done some work on the DatArchive API to make it easier to use. Prior to 0.8, Dats had a "staging area" folder which you had to commit() to publish. In 0.8, Beaker automatically syncs that folder, so the staging-area methods (diff(), commit(), and revert()) have been deprecated. There are also some new methods, and a few changes to how events work.
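To see what that means in practice, here's a minimal sketch (datUrl is a placeholder, and it assumes the app owns the archive): a write is published as soon as the promise resolves, with no staging step.

var archive = new DatArchive(datUrl) // datUrl: placeholder URL for an archive the user owns
await archive.writeFile('/hello.txt', 'hello world', 'utf8')
// in 0.7 you would now call archive.commit(); in 0.8 the change is already live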
Here's a full reference:
// archive loading:
var archive = new DatArchive(url) // load the given dat
var archive = await DatArchive.load(url) // same as above but lets you await the loading
var archive = await DatArchive.selectArchive({ // open a modal to pick an archive from the library
title: String // title of the modal
buttonLabel: String // label on the 'ok' button
filters: {
isOwner: Boolean // only show owned (if true) or unowned (if false)
}
})
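// example (a sketch with made-up strings): ask the user to pick one of their own dats
var myArchive = await DatArchive.selectArchive({
  title: 'Select a site to edit',
  buttonLabel: 'Edit',
  filters: {isOwner: true}
})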
// archive creation:
var archive = await DatArchive.create({ // create a new dat
title: String // title of the archive
description: String // description of the archive
links: String // see https://github.com/datprotocol/dat.json#links
prompt: Boolean // if true, show the configuration modal; if false, just ask permission
})
var archive = await DatArchive.fork(url, { // duplicate a dat
title: String // title of the archive
description: String // description of the archive
links: String // see https://github.com/datprotocol/dat.json#links
prompt: Boolean // if true, show the configuration modal; if false, just ask permission
})
// archive meta:
await archive.getInfo({timeout:}) /* => {
url: String // the url of the dat
isOwner: Boolean // is this dat owned and writable by the user
version: Number // what is the latest (known) revision number of the dat?
peers: Number // how many peers are there right now?
mtime: Number // the timestamp of the last update received
size: Number // the downloaded size of the archive (bytes)
title: String // title of the archive
description: String // description of the archive
links: String // see https://github.com/datprotocol/dat.json#links
} */
await archive.configure({
title: String // title of the archive
description: String // description of the archive
links: String // see https://github.com/datprotocol/dat.json#links
web_root: String // the subfolder to be shown when the dat is navigated to (default '/')
fallback_page: String // on 404, serve this page (default false)
})
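// example (sketch; the values are hypothetical): serve a build folder and a custom 404 page
await archive.configure({
  title: 'My Site',
  web_root: '/public',
  fallback_page: '/404.html'
})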
await archive.history({start:, end:, reverse:, timeout:}) /* => [
{
path: String // file changed
version: Number // version # of the change
type: 'put' or 'del' // what kind of change
},
...
] */
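// example (sketch): log every change made to the dat, newest first
var changes = await archive.history({reverse: true})
for (var change of changes) {
  console.log(change.version, change.type, change.path)
}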
// read data:
await archive.stat(path, {timeout:}) /* file information => {
size: Number // size of the file (bytes)
offset: Number // offset into the content feed (blocks)
blocks: Number // number of blocks on the content feed
downloaded: Number // number of blocks downloaded
mtime: Number // modified timestamp
isFile(): Boolean
isDirectory(): Boolean
}*/
await archive.readFile(path, {encoding:, timeout:})
await archive.readdir(path, {
recursive: Boolean // if true, will recurse into subfolders
stat: Boolean // if true, will output an array of {name: String, stat: StatObject}
timeout: Number
})
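// example (sketch): walk the whole archive and print file sizes
var entries = await archive.readdir('/', {recursive: true, stat: true})
for (var entry of entries) {
  console.log(entry.name, entry.stat.isFile() ? entry.stat.size + ' bytes' : '(directory)')
}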
// write data:
await archive.writeFile(path, data, encoding)
await archive.mkdir(path)
await archive.unlink(path)
await archive.rmdir(path, {recursive:})
await archive.copy(orgPath, dstPath)
await archive.rename(orgPath, dstPath)
// utilities:
var key = await DatArchive.resolveName(url) // do a DNS lookup
await archive.download(path, {timeout:})
// events:
var emitter = archive.watch(pattern) // watch for file changes
// events...
// - 'invalidated' the file has changed but not yet downloaded
// - 'changed' the file has changed and been downloaded
archive.addEventListener(event, fn)
archive.removeEventListener(event, fn)
// events...
// - 'network-changed' the peer-count changed
// - 'download' data was downloaded
// - 'upload' data was uploaded
// - 'sync' all known blocks downloaded for one of the dat's feeds
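Putting the events to work, here's a rough sketch of a page that re-renders whenever content arrives. Note the assumptions: datUrl and renderPost() are placeholders, the watch() emitter is assumed to expose addEventListener the same way the archive does, and the shape of the 'changed' event object (e.path) isn't specified above, so double-check it against the final docs.

var archive = await DatArchive.load(datUrl)
var info = await archive.getInfo()
document.title = info.title

// re-render a post whenever its file has changed and finished downloading
var watcher = archive.watch('/posts/*.md')
watcher.addEventListener('changed', async (e) => {
  var text = await archive.readFile(e.path, {encoding: 'utf8'})
  renderPost(e.path, text)
})

// keep a peer counter fresh
archive.addEventListener('network-changed', async () => {
  var {peers} = await archive.getInfo()
  console.log(peers + ' peers connected')
})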
In 0.8, we've introduced "Lab APIs." Discussion here. These are Web APIs which are either intended to be temporary, or which are being field-tested before becoming official APIs with long-term support.
All Lab APIs will be deprecated at some point. That means that apps using Lab APIs will someday break and need to be updated. Deprecation may happen gradually (e.g. because a permanent version of the API is being rolled out) or suddenly (e.g. because a security issue was found). You have been warned!!
To use a Lab API, your dat.json has to include an opt-in statement which looks like this:
{
"experimental": {
"apis": ["the-api-id"]
}
}
The library Lab API (experimental.library) gives you the power to read and manage the user's Dat Library. Discussion here. The methods and events are:
// per-use API:
await experimental.library.requestAdd(url, {duration})
await experimental.library.requestRemove(url)
// full management API:
await experimental.library.add(url, {duration}) // duration is optional, number of minutes to keep saved
await experimental.library.remove(url)
await experimental.library.get(url) // returns the settings for the archive
await experimental.library.list({
inMemory: true/false, // only give dats that are actively swarming
isSaved: true/false, // only give dats that are saved (exclude the trash) or not (trash only)
isNetworked: true/false, // only give dats that are networked (exclude offline) or not (exclude online)
isOwner: true/false // only give dats that the user owns or not
})
// events:
// 'added' - archive added
// 'removed' - archive removed
// 'folder-synced' - archive synced to its folder
// 'updated' - archive metadata recently changed
// 'network-changed' - archive peer count recently changed
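As a rough sketch of the full management API (assuming the library object exposes addEventListener for the events above, and with renderLibrary() standing in for whatever your app does with the results):

// list the dats the user owns and has saved, and re-render when the library changes
var owned = await experimental.library.list({isSaved: true, isOwner: true})
renderLibrary(owned)

experimental.library.addEventListener('added', async () => {
  renderLibrary(await experimental.library.list({isSaved: true, isOwner: true}))
})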
To use this API, include the following opt-in in your dat.json:
{
"experimental": {"apis": ["library"]}
}
Adding & removing dats affects whether they will be seeded (and saved) by the user. All dats are seeded temporarily on access, but Beaker only downloads the files that are read, and it doesn't keep the dats online for long. When a dat is added to the library, it is downloaded in its entirety and then seeded for other people to access.
There are two ways to use this API. The first is the "per-use" methods, requestAdd() and requestRemove(). These are preferable from the user's point of view because they prompt on every call, giving the user a chance to review the choice. The second is the "full management" methods, which give the app total control over the library. Only use those if you absolutely must, because they are very powerful.
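For example, a reader app that just wants to pin the site the user is currently viewing would stick to the per-use method. This sketch assumes archiveUrl is a placeholder, that a declined prompt rejects the promise, and the duration value is only an illustration:

// ask the user to keep this dat seeded for about a day (duration is in minutes)
try {
  await experimental.library.requestAdd(archiveUrl, {duration: 60 * 24})
  console.log('Added to your library')
} catch (err) {
  console.log('Not added:', err)
}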