@joehand
Created January 7, 2019 02:25
#dat IRC history since 2017 (maybe with gaps?)
This file has been truncated, but you can view the full file.
{"channel":"#dat"}
{"from":"millette","message":"it's the little things https://github.com/mafintosh/hypercored/pull/1","timestamp":1499808770460}
{"from":"pfrazee","message":"nice","timestamp":1499808804116}
{"from":"pfrazee","message":"yoshuawuyts: is there some kind of special trick to listening for radio change events with yo?","timestamp":1499811016382}
{"from":"pfrazee","message":"nm I got something","timestamp":1499811316568}
{"from":"pfrazee","message":"mafintosh: I'm adding a \"Delete downloaded files\" tool to beaker. I use the default storage system in hyperdrive. Do I need to use `archive.content.clear()` ? If yes or no, do I need to do anything else?","timestamp":1499812789398}
{"from":"mafintosh","message":"pfrazee: i think blahah was adding a per file clear","timestamp":1499812817996}
{"from":"pfrazee","message":"mafintosh: do I need that if I'm clearing all files?","timestamp":1499812841880}
{"from":"mafintosh","message":"pfrazee: but yea you can do archive.content.clear(0, archive.content.length, [cb]) to clear all","timestamp":1499812843832}
{"from":"pfrazee","message":"mafintosh: yeah and that'll nuke the actual content, not just the hypercore references to the content, right?","timestamp":1499812869837}
{"from":"mafintosh","message":"pfrazee: both","timestamp":1499812882947}
{"from":"pfrazee","message":"mafintosh: yeah cool. I think you'll be happy to have this feature, I think you requested it earlier","timestamp":1499812901378}
{"from":"mafintosh","message":"ya, super cool","timestamp":1499813304057}
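The clear-all call shape mafintosh describes above can be sketched. A stub object stands in for `archive.content` (a hypercore feed) so the snippet runs without the hyperdrive dependency; only the `clear(start, end, cb)` call pattern is taken from the discussion.

```javascript
// Sketch of the clear-all pattern from the discussion above.
// A stub feed stands in for archive.content (a hypercore feed) so this
// runs standalone; the call shape clear(0, content.length, cb) is the
// one mafintosh describes for dropping all downloaded data.
const feed = {
  length: 3,
  blocks: ['a', 'b', 'c'],
  // hypercore-style clear(start, end, cb): drop local block data
  clear (start, end, cb) {
    for (let i = start; i < end; i++) this.blocks[i] = null
    process.nextTick(cb, null)
  }
}

function clearAllContent (content, cb) {
  // "archive.content.clear(0, archive.content.length, [cb]) to clear all"
  content.clear(0, content.length, cb)
}

clearAllContent(feed, err => {
  if (err) throw err
  console.log(feed.blocks) // all local block data dropped
})
```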
{"from":"pfrazee","message":"mafintosh: I'm testing this thing and... well, when I delete, I see 50MB go away. Then when I redownload, it goes back to 50MB","timestamp":1499813808582}
{"from":"pfrazee","message":"mafintosh: seems right, except that 50MB downloads in like 2 seconds!","timestamp":1499813819922}
{"from":"pfrazee","message":"that seems too fast to believe","timestamp":1499813862386}
{"from":"mafintosh","message":"pfrazee: ha","timestamp":1499813881777}
{"from":"mafintosh","message":"pfrazee: sure its completely downloaded?","timestamp":1499813903138}
{"from":"pfrazee","message":"mafintosh: would the osx du lie about that?","timestamp":1499813926587}
{"from":"mafintosh","message":"pfrazee: if it fetches the last piece first, the file will appear to be 50mb","timestamp":1499813955397}
{"from":"pfrazee","message":"mafintosh: right right, ok I'll check on that","timestamp":1499813968675}
{"from":"yoshuawuyts","message":"pfrazee: fyi, this should be merged to yo https://github.com/maxogden/yo-yo/pull/71","timestamp":1499818221988}
{"from":"yoshuawuyts","message":"pfrazee: input stuff might otherwise not work as you expect - amongst other things","timestamp":1499818248868}
{"from":"pfrazee","message":"yoshuawuyts: ok cool","timestamp":1499818286640}
{"from":"jondashkyle","message":"Hmm, is sync broken on the most recent version of Dat Desktop? Have a folder with a writeable Dat Archive. Open the app, make a change to a file, and it doesn’t sync up. When I run `sync` with the cli on the folder it syncs no problem. If I make a change to a file, then open Dat Desktop, it pushes fine, but no additional changes get synced to the Archive.","timestamp":1499839730027}
{"from":"mafintosh","message":"yoshuawuyts: o/ reminds me. can we do a maintenance release of desktop? bunch of bug fixes upstream","timestamp":1499845269368}
{"from":"yoshuawuyts","message":"mafintosh: scribbling it down; shouldn't take too long","timestamp":1499851621298}
{"from":"dat-gitter","message":"(sdockray) mafintosh: somehow i am getting that `Error: addMembership ENOBUFS` error in multicast-dns again when i run dat doctor on one particular machine(?!)","timestamp":1499864534430}
{"from":"mafintosh","message":"@sdockray do you have a full stack trace?","timestamp":1499864554910}
{"from":"dat-gitter","message":"(sdockray) just this: https://pastebin.com/A0APunw6","timestamp":1499864623398}
{"from":"dat-gitter","message":"(sdockray) this time i uninstalled dat and made sure there was no old version of dat running and then reinstalled, so its on 13.7.0","timestamp":1499864667424}
{"from":"dat-gitter","message":"(sdockray) also @e-e-e and i were just noticing incredibly slow speeds in app we're writing (between 10-100kb/s) and i then tested same dats (all small ones with 50-100 files and 20-50MB) directly with dat cli on a couple diff machines and very slow on all of them. do you have a test dat that i could use as a sanity check to try and clone?","timestamp":1499864950747}
{"from":"mafintosh","message":"@sdockray yes i do, one sec","timestamp":1499868481830}
{"from":"mafintosh","message":"@sdockray e6b46dad39f3a60ae4c25a304b38e70580c4ebee2bd06771208e028cb044e014","timestamp":1499868549295}
{"from":"mafintosh","message":"@sdockray ah that crash is a doctor bug i can see :)","timestamp":1499868712475}
{"from":"mafintosh","message":"@sdockray i get 0.5-1mb on that dat linked (capping my current network)","timestamp":1499868776186}
{"from":"mafintosh","message":"https://github.com/joehand/dat-doctor/pull/9","timestamp":1499868939383}
{"from":"mafintosh","message":"to fix that crash above","timestamp":1499868945755}
{"from":"mafintosh","message":"@sdockray oh finally, if you dm me the dat in question maybe i take a look :)","timestamp":1499869093326}
{"from":"garbados","message":"hello! i'm reading through dat's documentation but having trouble understanding something. if i develop an application and share it as a dat archive (ex: for use with beaker) but that application has a vulnerability, how do i distribute the patch?","timestamp":1499878805797}
{"from":"creationix","message":"garbados: dats push updates","timestamp":1499878904657}
{"from":"creationix","message":"just patch the file and sync your source dat, anyone listening for updates will get it","timestamp":1499878918141}
{"from":"creationix","message":"unlike bittorrent, dats are mutable by the writer","timestamp":1499878928920}
{"from":"creationix","message":"so if it's a web app, beaker will automatically get the latest version. I think there is even a JS api to be notified of changes","timestamp":1499878957375}
{"from":"garbados","message":"if i understand right, the dat link contains a public key, where \"the writer\" has the corresponding private key which allows them to make changes?","timestamp":1499879045711}
{"from":"pfrazee","message":"garbados: correct","timestamp":1499879168616}
{"from":"garbados","message":"pfrazee, creationix: that's really neat, thanks for clarifying","timestamp":1499879185974}
{"from":"pfrazee","message":"garbados: sure thing. Ask away with any other questions, that's what I'm here for","timestamp":1499879207895}
{"from":"garbados","message":"yay! thank you :D","timestamp":1499879222898}
{"from":"caiogondim_","message":"Good afternoon! I'm having fun with hypercore/drive in the browser and was wondering if https://github.com/mafintosh/hyperdrive-www is an up to date example I could follow...","timestamp":1499881044626}
{"from":"pfrazee","message":"caiogondim_: are you using beaker, or another browser?","timestamp":1499881097719}
{"from":"caiogondim_","message":"I use beaker, but want to make it to production :) I'm actually trying to plug hyperdrive and HLS.js","timestamp":1499881127502}
{"from":"pfrazee","message":"caiogondim_: :) so is this project for beaker, I suppose I should ask","timestamp":1499881155176}
{"from":"pfrazee","message":"I just need to know which stack you're targeting","timestamp":1499881169024}
{"from":"caiogondim_","message":"it's not. Chrome, Firefox and new Safari for desktop as starter","timestamp":1499881188779}
{"from":"pfrazee","message":"ok. mafintosh what's the status on non-beaker dat APIs?","timestamp":1499881204422}
{"from":"mafintosh","message":"pfrazee: what do you mean?","timestamp":1499881863149}
{"from":"pfrazee","message":"mafintosh: do you have a good example of using dat in chrome right now? (Asking for caiogondim_)","timestamp":1499881910675}
{"from":"mafintosh","message":"ah i have a gist","timestamp":1499882037473}
{"from":"jhand","message":"caiogondim_: there are some updated docs on using it in browser here: https://docs.datproject.org/browser#under-the-hood","timestamp":1499882039029}
{"from":"mafintosh","message":"Also that","timestamp":1499882046197}
{"from":"mafintosh","message":"Hehe","timestamp":1499882048152}
{"from":"caiogondim_","message":"Thanks for that :)","timestamp":1499882057778}
{"from":"jhand","message":"I have a websocket example I've been playing with I can put up too. To avoid using webrtc","timestamp":1499882136449}
{"from":"caiogondim_","message":"Any gains on performance? Would love to see it...","timestamp":1499882186225}
{"from":"jhand","message":"caiogondim_: we did find connecting via webrtc a bit slow. But its mainly a difference of if you want to use the p2p network via webrtc or connect directly to a ws server.","timestamp":1499882252723}
{"from":"mafintosh","message":"also hypercored supports websockets","timestamp":1499882809767}
{"from":"jondashkyle","message":"quick lil idea, but it'd be awesome if you could use `dat` to target a sub directory, similar to what you can do with browserify","timestamp":1499883326209}
{"from":"jondashkyle","message":"i'm using `dat` in npm scripts, and i build output to a `dist` folder, and have a little dat archive in there to sync that. (also would be cool to have a dat push which automatically cancels the stream after completion)","timestamp":1499883377927}
{"from":"jondashkyle","message":"just got my site running on dat! http://jon-kyle.com","timestamp":1499883396261}
{"from":"jondashkyle","message":"prob barking up some familiar trees with these thoughts, but never hurts to have a lil bump","timestamp":1499883449664}
{"from":"yoshuawuyts","message":"jondashkyle: rad!","timestamp":1499883477659}
{"from":"jhand","message":"jondashkyle: yea, dat push has been on my list for awhile... I should get to it. https://github.com/datproject/dat/issues/648","timestamp":1499883709032}
{"from":"jhand","message":"jondashkyle: not sure I understand the sub directory part.","timestamp":1499883728521}
{"from":"jhand","message":"what does your npm script look like right now?","timestamp":1499883744249}
{"from":"jondashkyle","message":"oh cool, would be so handy with push!","timestamp":1499883757049}
{"from":"jondashkyle","message":"yesterday i tried adding something like this:","timestamp":1499883814720}
{"from":"jondashkyle","message":"https://www.irccloud.com/pastebin/lwl5YTxZ/","timestamp":1499883836574}
{"from":"jondashkyle","message":"but funnily enough it actually syncs the entire directory that package.json is contained in, ha!","timestamp":1499883857022}
{"from":"jondashkyle","message":"so for now i'm manually navigating to the `dist` directory after build, and running dat sync.","timestamp":1499883876860}
{"from":"jhand","message":"jondashkyle: oh you can pass dir to any dat command `dat sync <dir>`","timestamp":1499883909693}
{"from":"jhand","message":"but thats weird what you posted didn't sync as expected","timestamp":1499883934421}
{"from":"jondashkyle","message":"so funny, thought i tried `sync <dir>`!","timestamp":1499884348613}
{"from":"jondashkyle","message":"and yeah, i was probably doing something incorrect, but i suppose it might be helpful to hear from people who have a tendency to get confused about what they’re finding, haha!","timestamp":1499884390926}
{"from":"jhand","message":"ya definitely! means we need to improve docs or ui =)","timestamp":1499884415741}
{"from":"jhand","message":"the dir part is kind of hidden at the top when you do `dat help`","timestamp":1499884463363}
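jondashkyle's setup can be wired up the way jhand suggests, by passing the directory to the command. A package.json sketch (the build tool and script names here are illustrative, only `dat sync <dir>` is from the discussion):

```json
{
  "scripts": {
    "build": "your-build-tool --out dist",
    "deploy": "npm run build && dat sync dist"
  }
}
```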
{"from":"ogd","message":"https://www.npmjs.com/package/handle-protocol cc blahah","timestamp":1499885115666}
{"from":"ogd","message":"blahah: now im gonna try to harvest all dois using the LIST_HANDLE operation","timestamp":1499885141634}
{"from":"ogd","message":"mafintosh: you should check out the handle protocol spec, its pretty weird https://tools.ietf.org/html/rfc3652","timestamp":1499885208679}
{"from":"ogd","message":"mafintosh: basically dns from another dimension","timestamp":1499885216985}
{"from":"dat-gitter","message":"(lukeburns) mafintosh: one-line PR for a typo in sodium-javascript: https://github.com/sodium-friends/sodium-javascript/pull/7","timestamp":1499887906256}
{"from":"blahah","message":"ogd nice","timestamp":1499888158049}
{"from":"mafintosh","message":"@lukeburns we should add a test for that in sodium-test","timestamp":1499888276826}
{"from":"dat-gitter","message":"(lukeburns) mafintosh: ya still trouble with that change","timestamp":1499888659439}
{"from":"dat-gitter","message":"(lukeburns) mafintosh: PR with test for crypto_sign_open: https://github.com/sodium-friends/sodium-test/pull/1","timestamp":1499889810646}
{"from":"dat-gitter","message":"(lukeburns) i'll add one for detached sign/verify as well","timestamp":1499890393347}
{"from":"mafintosh","message":"sweet","timestamp":1499890596857}
{"from":"mafintosh","message":"@lukeburns i'll get it merged when i'm back on my laptop","timestamp":1499890627992}
{"from":"bret","message":"mafintosh: should I write a webasm ID3 tag parser / setter ? http://www.gigamonkeys.com/book/practical-an-id3-parser.html","timestamp":1499891218894}
{"from":"mafintosh","message":"bret: why do you wanna do it in wasm? perf?","timestamp":1499891275057}
{"from":"bret","message":"because the example is lisp... but thats why i'm asking if there would be any benefit to it","timestamp":1499891302350}
{"from":"mafintosh","message":"ahhh","timestamp":1499891502947}
{"from":"mafintosh","message":"their lisp has a lot more helpers tho","timestamp":1499891513246}
{"from":"mafintosh","message":"so prob easier in js still :)","timestamp":1499891527362}
{"from":"bret","message":"k","timestamp":1499891557545}
{"from":"pfrazee","message":"web_root and fallback_page landed in beaker master","timestamp":1499892397690}
{"from":"dat-gitter","message":"(lukeburns) mafintosh: made some corrections to detached sign/verify and added tests","timestamp":1499892418552}
{"from":"pfrazee","message":"we need to get in touch with zeit about getting the hyper* modules to work with now","timestamp":1499902546795}
{"from":"dat-gitter","message":"(sdockray) mafintosh: the test dat you gave me is fast. i see that mine is slow if i am doing `HOME -> (clone) -> REMOTE` but it is fast if i do `HOME <- (clone) <- REMOTE` (on the same dat key)","timestamp":1499907467691}
{"from":"dat-gitter","message":"(sdockray) should i blame it on australia?","timestamp":1499907481821}
{"from":"mafintosh","message":"@sdockray i'll get back to you tmw with some flags you should try it with","timestamp":1499907518847}
{"from":"dat-gitter","message":"(sdockray) ok thanks!","timestamp":1499907533974}
{"from":"mafintosh","message":"@sdockray try running dat with --maxRequests=200 in your test","timestamp":1499953782611}
{"from":"mafintosh","message":"@lukeburns around?","timestamp":1499957027267}
{"from":"dat-gitter","message":"(lukeburns) mafintosh: yep thx for the tweaks","timestamp":1499957823478}
{"from":"mafintosh","message":"pretty nasty bugs","timestamp":1499957897924}
{"from":"mafintosh","message":"good with better tests now!","timestamp":1499957902733}
{"from":"dat-gitter","message":"(lukeburns) :thumbsup:","timestamp":1499957944605}
{"from":"mafintosh","message":"@lukeburns ahhhh we still use tweetnacl directly inside hypercore in browser mode","timestamp":1499958118323}
{"from":"mafintosh","message":"was wondering why we hadnt noticed this","timestamp":1499958127433}
{"from":"mafintosh","message":"note to self to move to sodium-universal for all crypto!","timestamp":1499958138786}
{"from":"dat-gitter","message":"(lukeburns) mafintosh: are you wanting to modify sodium-signatures or replace it with sodium-universal in hypercore?","timestamp":1499958443164}
{"from":"mafintosh","message":"@lukeburns was just thinking about that","timestamp":1499958470449}
{"from":"mafintosh","message":"@lukeburns prob just move to sodium-universal in hypercore. we use it already","timestamp":1499958488628}
{"from":"mafintosh","message":"@lukeburns rename hash.js in lib to crypto.js and add some signature helpers","timestamp":1499958501774}
{"from":"mafintosh","message":"@lukeburns also, i might write a simple high level crypto lib for this on top of sodium-universal that deprecates sodium-signatures","timestamp":1499958536580}
{"from":"mafintosh","message":"that hypercore then will use","timestamp":1499958546610}
{"from":"dat-gitter","message":"(lukeburns) mafintosh: those sound like two different approaches","timestamp":1499958819105}
{"from":"mafintosh","message":"ya","timestamp":1499958848299}
{"from":"dat-gitter","message":"(lukeburns) mafintosh: so a lib with a simpler api than sodium-universal that does everything you need in hypercore?","timestamp":1499958915482}
{"from":"mafintosh","message":"ya, that or just use sodium-universal directly for signing in hypercore","timestamp":1499958949138}
{"from":"dat-gitter","message":"(lukeburns) mafintosh: gotcha","timestamp":1499958960996}
{"from":"mafintosh","message":"@lukeburns, for now, https://github.com/mafintosh/sodium-signatures/commit/7b68f4baa361b30da37781ae14dc7895ccd8d004","timestamp":1499959131006}
{"from":"dat-gitter","message":"(lukeburns) nice!","timestamp":1499959164911}
{"from":"dat-gitter","message":"(lukeburns) mafintosh: i'd be happy to jump into hypercore to remove the sodium-signatures dep / make the hash -> crypto changes if that'd be helpful","timestamp":1499959570622}
{"from":"dat-gitter","message":"(lukeburns) mafintosh: or can just hold off if you're not decided on that approach","timestamp":1499959595181}
{"from":"mafintosh","message":"@lukeburns that'd be great :) less deps","timestamp":1499959669601}
{"from":"mafintosh","message":"@lukeburns if crypto.js was the only place we required sodium stuff that'd be a good goal","timestamp":1499959691340}
{"from":"dat-gitter","message":"(lukeburns) cool, on it!","timestamp":1499959704012}
{"from":"mafintosh","message":"then we can also modularise crypto.js","timestamp":1499959782956}
{"from":"mafintosh","message":"later on","timestamp":1499959787645}
{"from":"dat-gitter","message":"(lukeburns) mafintosh: https://github.com/mafintosh/hypercore/pull/106","timestamp":1499961970702}
{"from":"pfrazee","message":"mafintosh: I haven't used the FS as a primary store before, but it's occurring to me that, without locking, the apps in hashbase are going to clobber each other's data","timestamp":1499964363242}
{"from":"pfrazee","message":"for instance, I need to be able to do read/update/write","timestamp":1499965160631}
{"from":"pfrazee","message":"anybody in the channel have experience with that? kind of a nightmare question in the browser since bad code could just acquire and hold locks willy nilly","timestamp":1499965222555}
{"from":"pfrazee","message":"creationix: this seems like something you might have experience with","timestamp":1499965250895}
{"from":"mafintosh","message":"pfrazee: dont get the question","timestamp":1499965588520}
{"from":"pfrazee","message":"mafintosh: ok let's say I've got profile.json, and I want to update a field. I have to do read('profile.json'), update the field, then write('profile.json'). That's not atomic, so if I had another thread or process running that did a write('profile.json'), I get non-deterministic output","timestamp":1499965647197}
{"from":"pfrazee","message":"so I need a way to do atomic read/write. I should look into atomic FS operations before I look at locks","timestamp":1499965707092}
{"from":"mafintosh","message":"pfrazee: use a mutex","timestamp":1499965768520}
{"from":"mafintosh","message":"see mutexify","timestamp":1499965775084}
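The mutexify pattern mafintosh points to serializes each whole read/update/write section so writers cannot interleave. A dependency-free sketch of the same idea, with an in-memory object standing in for profile.json:

```javascript
// Dependency-free sketch of the mutexify pattern mafintosh points to:
// acquire the lock, do the whole read -> update -> write, then release.
// An in-memory object stands in for profile.json on disk.
function mutexify () {
  let locked = false
  const queue = []
  function lock (fn) {
    if (locked) return queue.push(fn)
    locked = true
    fn(function release () {
      locked = false
      if (queue.length) lock(queue.shift())
    })
  }
  return lock
}

const lock = mutexify()
const store = { 'profile.json': JSON.stringify({ n: 0 }) }

// pfrazee's scenario: two writers both doing read -> update -> write.
// With the lock held across the whole section, increments cannot interleave.
function incrementN (done) {
  lock(release => {
    const json = JSON.parse(store['profile.json']) // read
    json.n++                                       // update
    store['profile.json'] = JSON.stringify(json)   // write
    release()
    done()
  })
}

incrementN(() => {})
incrementN(() => console.log(store['profile.json'])) // {"n":2}
```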
{"from":"pfrazee","message":"mafintosh: yeah, I'd have to expose the mutex as a web api, which means I need to think through security implications","timestamp":1499965868346}
{"from":"mafintosh","message":"In the web?","timestamp":1499965897779}
{"from":"mafintosh","message":"Is this for the dat api?","timestamp":1499965910858}
{"from":"pfrazee","message":"yeah","timestamp":1499965941278}
{"from":"pfrazee","message":"my bad, I said \"the apps in hashbase\" earlier and I mean \"in beaker\"","timestamp":1499965959568}
{"from":"TheLink","message":"pfrazee: the \"beaker browser\" menu on macos could use a \"Preferences\" and a \"Check for updates ...\" entry","timestamp":1499965977432}
{"from":"pfrazee","message":"TheLink: it exists in the settings page","timestamp":1499966001157}
{"from":"TheLink","message":"yes, I know","timestamp":1499966013251}
{"from":"pfrazee","message":"TheLink: you think we should add to the dropdown menu too?","timestamp":1499966016586}
{"from":"TheLink","message":"felt 99% of mac apps have it there :)","timestamp":1499966039122}
{"from":"TheLink","message":"not sure if that's a reason ;)","timestamp":1499966046555}
{"from":"blahah","message":"pfrazee tbh I'd use a higher level abstraction for such a store","timestamp":1499966065147}
{"from":"pfrazee","message":"TheLink: oh I gotcha, so in the mac menu","timestamp":1499966079749}
{"from":"blahah","message":"toiletdb or lowdb for example","timestamp":1499966084605}
{"from":"TheLink","message":"yeah","timestamp":1499966084891}
{"from":"pfrazee","message":"like, the bar across the top of the screen","timestamp":1499966085177}
{"from":"TheLink","message":"yes","timestamp":1499966090786}
{"from":"pfrazee","message":"TheLink: yeah ok, I'll make a todo","timestamp":1499966091644}
{"from":"pfrazee","message":"blahah: I think we may not be able to avoid the problem that way","timestamp":1499966127478}
{"from":"pfrazee","message":"if the dat api doesn't have a way to do atomic read/update/write transactions, and no way for a single process to become a 'server' to all possible writers *to* a dat archive, then there's no way to protect against data loss","timestamp":1499966212327}
{"from":"pfrazee","message":"multiple tabs need to safely write to a dat archive at the same time","timestamp":1499966230294}
{"from":"pfrazee","message":"I'll look into what others have done. There's probably a way we could do compare-and-swap","timestamp":1499966326915}
{"from":"blahah","message":"eh sorry missed the threaded part","timestamp":1499966341188}
{"from":"pfrazee","message":"blahah: no worries","timestamp":1499966367423}
{"from":"blahah","message":"previous convo with mafintosh suggested SLEEP was threadsafe","timestamp":1499966368716}
{"from":"blahah","message":"so a pure hyperdrive as the fs abstraction should work no?","timestamp":1499966394334}
{"from":"pfrazee","message":"well there's no concern that the hyperdrive's internal state will get corrupted","timestamp":1499966413617}
{"from":"mafintosh","message":"pfrazee: ah okay","timestamp":1499966425300}
{"from":"mafintosh","message":"pfrazee: you wanna add a compare and swap api then","timestamp":1499966438619}
{"from":"pfrazee","message":"mafintosh: yeah that's what I'm thinking","timestamp":1499966453824}
{"from":"pfrazee","message":"mafintosh: ironically that would be much easier if I wasnt using staging","timestamp":1499966463786}
{"from":"pfrazee","message":"mafintosh: because then I could just use the hypercore log seq","timestamp":1499966471370}
{"from":"mafintosh","message":"pfrazee: you could do it on the content","timestamp":1499966497187}
{"from":"mafintosh","message":"like write if existing file buffer is equal to this","timestamp":1499966519440}
{"from":"pfrazee","message":"mafintosh: that'd be pretty costly, no?","timestamp":1499966533142}
{"from":"pfrazee","message":"I'll figure it out","timestamp":1499966667701}
{"from":"blahah","message":"pfrazee ok actually read the thread this time 😄 this problem is pretty common in bioinformatics... basic classes of solutions include thread safe indexes, mutexes, atomic changes, and a single io gatekeeper thread","timestamp":1499967335965}
{"from":"blahah","message":"I'd probably go with a storage abstraction that lives in its own thread","timestamp":1499967430000}
{"from":"pfrazee","message":"blahah: yeah, is it common because DBs arent often used?","timestamp":1499967439251}
{"from":"pfrazee","message":"we basically have that with beaker (storage in its own thread) so the problem is just transactional atomicity","timestamp":1499967456819}
{"from":"blahah","message":"yeah usually custom indexes optimised for specific applications","timestamp":1499967467023}
{"from":"pfrazee","message":"yeah","timestamp":1499967472639}
{"from":"blahah","message":"wait so you already have a single disk io thread? how does it receive instructions?","timestamp":1499967559221}
{"from":"pfrazee","message":"well we just run hyperdrive inside the main electron thread","timestamp":1499967656393}
{"from":"pfrazee","message":"and then the apps run in webview processes and send IPC commands to the main thread","timestamp":1499967675122}
{"from":"pfrazee","message":"though eventually I'll want to move hyperdrive into its own thread","timestamp":1499967696060}
{"from":"blahah","message":"Where's the nondeterministic behaviour coming from?","timestamp":1499967757381}
{"from":"pfrazee","message":"scenario is, two tabs that both want to update the same file","timestamp":1499967806780}
{"from":"pfrazee","message":"the full operation is read(), change, write()","timestamp":1499967826684}
{"from":"blahah","message":"so, hyperdrive wrapper with a queue?","timestamp":1499967830411}
{"from":"pfrazee","message":"right which means some kind of transactional semantics","timestamp":1499967846674}
{"from":"pfrazee","message":"compare-and-swap seems better to me because it avoids the potential that a hostile or badly-written app could tie up the queue/lock","timestamp":1499967906655}
{"from":"blahah","message":"if it's always read change write that's one transaction though? just queue that operation?","timestamp":1499967911045}
{"from":"pfrazee","message":"blahah: sure but you may need to run app logic within the transaction","timestamp":1499967931129}
{"from":"pfrazee","message":"for instance, how would the dat api encode this: `var json = parse(await read(...)); if (json.n < 100) { json.n++ } await(write(..., stringify(json)))`","timestamp":1499968020214}
{"from":"pfrazee","message":"if we want a queue to handle it, we either need locking semantics, or we need to be able to send the entire transaction to the managing thread","timestamp":1499968092798}
{"from":"pfrazee","message":"the latter is infeasible","timestamp":1499968129753}
{"from":"pfrazee","message":"unless we wanted to let apps send JS to be evalled within the main thread, which would not be a good idea","timestamp":1499968159243}
{"from":"pfrazee","message":"as tantalizing as it might be","timestamp":1499968169594}
{"from":"pfrazee","message":"(at least to me. I always liked the idea of being able to send arbitrary JS to act like a query inside a db engine)","timestamp":1499968199682}
{"from":"blahah","message":"pfrazee so, it seemsik","timestamp":1499968662983}
{"from":"blahah","message":"gah, phone keyboard","timestamp":1499968671623}
{"from":"pfrazee","message":"lol","timestamp":1499968728166}
{"from":"blahah","message":"it seems like a single file on the fs as the target for multithread io is causing friction","timestamp":1499968738256}
{"from":"blahah","message":"abstractions with transactions basically exist for this, so unless there's a strong constraint forcing the use of json files, why not use them?","timestamp":1499968836672}
{"from":"blahah","message":"also it seems that arbitrary code execution during transactions is the potential failure case? so either - writing to abstractions in memory that are periodically just dumped to fs, or restrictions on what can happen in a transaction","timestamp":1499968955363}
{"from":"blahah","message":"also sorry if I'm trivialising it - I could well be failing to grasp important things","timestamp":1499969044927}
{"from":"pfrazee","message":"blahah: (2 secs had to take a call)","timestamp":1499969060913}
{"from":"pfrazee","message":"blahah: ok I'm back! no I just need to do some research, I think. I'm sure this has been solved well before","timestamp":1499973842837}
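The compare-and-swap alternative pfrazee lands on can be sketched without locks: a write carries the version it was based on and is rejected if the file has moved on, so the caller re-reads and retries. All names here are illustrative, not an actual dat/beaker API.

```javascript
// Sketch of the compare-and-swap idea from the thread above: instead
// of holding a lock, a write is rejected when it was based on a stale
// version, and the caller re-reads and retries. Illustrative API only.
const files = new Map([['profile.json', { version: 0, data: '{"n":0}' }]])

function read (path) {
  const f = files.get(path)
  return { version: f.version, data: f.data }
}

// Succeeds only if baseVersion matches the current version
function casWrite (path, baseVersion, data) {
  const f = files.get(path)
  if (f.version !== baseVersion) return false
  files.set(path, { version: f.version + 1, data })
  return true
}

// pfrazee's example transaction, retried until the swap succeeds
function incrementN (path) {
  for (;;) {
    const { version, data } = read(path)
    const json = JSON.parse(data)
    if (json.n < 100) json.n++
    if (casWrite(path, version, JSON.stringify(json))) return
  }
}

incrementN('profile.json')
// A write based on the old version 0 is now rejected
console.log(casWrite('profile.json', 0, '{"n":999}')) // false
console.log(read('profile.json').data) // {"n":1}
```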
{"from":"ogd","message":"blahah: ive pulled down about 10 million (50gb) of metadata from share, script is still running","timestamp":1499974704386}
{"from":"pfrazee","message":"hey motherboard wrote an article about beaker and dat! https://twitter.com/corintxt/status/885582873683779585","timestamp":1499974754381}
{"from":"ogd","message":"haha whoa","timestamp":1499974765564}
{"from":"pfrazee","message":"haha yeah that pic. Our browser may make your laptop explode","timestamp":1499974787720}
{"from":"ogd","message":"burnthemall","timestamp":1499974836174}
{"from":"karissa","message":"lol","timestamp":1499975436433}
{"from":"mafintosh","message":"pfrazee: cool!!","timestamp":1499976156842}
{"from":"blahah","message":"nice ogd","timestamp":1499977259437}
{"from":"karissa","message":"ogd: nice","timestamp":1499977809727}
{"from":"karissa","message":"soundcloud is going to shut down, who is working on the p2p replacement?","timestamp":1499977834770}
{"from":"karissa","message":"btw here's a tool to download a soundcloud song https://github.com/diracdeltas/sounddrop","timestamp":1499977844238}
{"from":"pfrazee","message":"karissa: good question","timestamp":1499977885324}
{"from":"mafintosh","message":"youtube-dl also supports soundcloud","timestamp":1499980849263}
{"from":"dat-gitter","message":"(benrogmans) e27.co, a popular blog in Asia published my article about the Distributed Web: https://e27.co/ #promotingdistributedweb","timestamp":1499980943734}
{"from":"pfrazee","message":"@benrogmans nice!","timestamp":1499981039680}
{"from":"jhand","message":"karissa: https://twitter.com/0x00A/status/885534247137812480","timestamp":1499981280510}
{"from":"mafintosh","message":"any android users tried running node in https://termux.com ?","timestamp":1499981636264}
{"from":"mafintosh","message":"heard that should work","timestamp":1499981643025}
{"from":"jhand","message":"whoa I'll try when I get home","timestamp":1499981794312}
{"from":"karissa","message":"woahh","timestamp":1499981833094}
{"from":"TheLink","message":"pfrazee: would it make sense to make the swarm debugger a tab in the sidebar which gets opened by default when clicking on the \"xx peers\" button in the address bar?","timestamp":1499982041696}
{"from":"pfrazee","message":"TheLink: do you find that debugging the swarm is a pretty common activity?","timestamp":1499982068601}
{"from":"TheLink","message":"pfrazee: in a torrent app the peers/seeds listings are normally pretty easily accessible","timestamp":1499982127494}
{"from":"TheLink","message":"of course beaker is different","timestamp":1499982136707}
{"from":"mafintosh","message":"jhand: if that works i'm buying an android asap haha","timestamp":1499982137924}
{"from":"pfrazee","message":"TheLink: right, and clicking on the peer count, it would make sense for it to show the peers","timestamp":1499982139623}
{"from":"TheLink","message":"pfrazee: yes imho","timestamp":1499982149389}
{"from":"pfrazee","message":"TheLink: that does make sense, I'm just not sure it's super useful","timestamp":1499982179400}
{"from":"mafintosh","message":"jhand: https://medium.freecodecamp.org/building-a-node-js-application-on-android-part-1-termux-vim-and-node-js-dfa90c28958f","timestamp":1499982209379}
{"from":"TheLink","message":"pfrazee: only a little useful if combined with a ip2c service ;)","timestamp":1499982218300}
{"from":"TheLink","message":"pfrazee: otherwise the peer count is the most essential information which is already prominently there","timestamp":1499982262519}
{"from":"pfrazee","message":"TheLink: we could throw in mandatory authentication to dat so that we can get a list of usernames and addresses and home phone numbers","timestamp":1499982276133}
{"from":"TheLink","message":"lol","timestamp":1499982290150}
{"from":"TheLink","message":"for small information like websites it's probably not so much relevant where the sources are located (except for latency)","timestamp":1499982380544}
{"from":"TheLink","message":"more relevant for big data sharing","timestamp":1499982425154}
{"from":"pfrazee","message":"oh yeah, that'd make sense","timestamp":1499982489376}
{"from":"substack","message":"is anyone backing up soundcloud right now? https://techcrunch.com/2017/07/12/soundshroud/","timestamp":1499982652357}
{"from":"substack","message":"it looks like it might fold soon and pull a geocities","timestamp":1499982663009}
{"from":"jondashkyle","message":"was a brief discussion about that up above","timestamp":1499982675400}
{"from":"substack","message":"tanks, upscrolling","timestamp":1499982697536}
{"from":"jondashkyle","message":"and yeah, makes me think vine. sad to think about all the sub-cultures sacrificed b/c of platform greed","timestamp":1499982711431}
{"from":"karissa","message":"if it really is a geocities repeat then some company like google should buy soundcloud and then shut it down 10 years later","timestamp":1499982772905}
{"from":"mafintosh","message":"soundcloud throttles download speeds to x2 the bitrate tho i think","timestamp":1499982773235}
{"from":"mafintosh","message":"so i wonder if a scraper is even viable","timestamp":1499982790484}
{"from":"jondashkyle","message":"would it be possible to create a page where you could enter a soundcloud URL on https://, and it would scrape that into a dat:// archive. this archive could be populated with an index.html, with small audio players, and brief instructions on how to customize the page an republish with beaker?","timestamp":1499982796478}
{"from":"mafintosh","message":"jondashkyle: yea thats def doable","timestamp":1499982818517}
{"from":"jondashkyle","message":"would be so good as a quick 24 hour project to sort of ride the wave of publicity surrounding the possible shut down","timestamp":1499982845023}
{"from":"jondashkyle","message":"would love to help with the design and front-end of this","timestamp":1499982869312}
{"from":"karissa","message":"sounds fun","timestamp":1499982905929}
{"from":"pfrazee","message":"jondashkyle: I need to stay focused on my current project or I'd volunteer","timestamp":1499982918174}
{"from":"jondashkyle","message":"totally! yeah, if others could assist with some of the dat:// specific implementations, i could dive into this heavily over the next two days","timestamp":1499982947219}
{"from":"jondashkyle","message":"could probably have a few friends with large soundcloud followings help promote. https://soundcloud.com/anenon and https://soundcloud.com/hollyherndon come to mind","timestamp":1499983001207}
{"from":"mafintosh","message":"jondashkyle: youtube-dl to download all the content, the generate an index.html page with the listing","timestamp":1499983009874}
{"from":"jondashkyle","message":"along with a few labels like https://soundcloud.com/ghostly","timestamp":1499983020605}
{"from":"karissa","message":"someone did this, too, specifically for soundcloud in the browser: https://github.com/diracdeltas/SoundDrop/blob/master/js/main.js","timestamp":1499983061956}
{"from":"karissa","message":"jondashkyle: i wonder if you could make some quick mockups that go over the ux","timestamp":1499983085605}
{"from":"jondashkyle","message":"absolutely!","timestamp":1499983132933}
{"from":"mafintosh","message":"i can write a gist thats makes a shitty index.html page of it in 1h","timestamp":1499983139888}
{"from":"mafintosh","message":"with the content in a dat","timestamp":1499983152324}
{"from":"jondashkyle","message":"yeah, if we could do something like, youtube-dl into a Dat Archive > use the dat js API to create a small JSON of what has been grabbed, that would be great","timestamp":1499983181531}
{"from":"karissa","message":"so do we also want a feed of dats that people can tail","timestamp":1499983184907}
{"from":"pfrazee","message":"man I bet this would be pretty fast to code","timestamp":1499983193280}
{"from":"mafintosh","message":"let me just bang out that gist real quick","timestamp":1499983204155}
{"from":"jondashkyle","message":"i have been working with using choo to generate a static html page over the past two days (just updated my site with it: http://jon-kyle.com)","timestamp":1499983212588}
{"from":"jondashkyle","message":"but yeah, having that, and then it provides you with a URL to your dat archive and a link to download Beaker would be such a great example of how rapidly you can spin up a project with the API","timestamp":1499983273898}
{"from":"jondashkyle","message":"maybe then i can quietly move http://2pac.com to dat hahaha","timestamp":1499983353901}
{"from":"jondashkyle","message":"@mafintosh sounds awesome","timestamp":1499983538062}
{"from":"mafintosh","message":"jondashkyle: you want a simple json file with all the downloaded content in it?","timestamp":1499984715620}
{"from":"jondashkyle","message":"yeah! the metadata for the tracks.","timestamp":1499984732876}
{"from":"mafintosh","message":"i'm just gonna add the filename for now :)","timestamp":1499984785428}
{"from":"jondashkyle","message":"thats cool! might be able to use the soundcloud api to grab some of that stuff, too","timestamp":1499984811064}
{"from":"mafintosh","message":"yea","timestamp":1499984831886}
{"from":"jondashkyle","message":"worked on this a few years ago w/ it: https://github.com/jondashkyle/slashFavorites","timestamp":1499984882616}
{"from":"jondashkyle","message":"actually ended up in soundclouds promo kit they sent to labels lol","timestamp":1499984893307}
{"from":"jondashkyle","message":"trying to get their internal dev teams to use the api (album releases, etc)","timestamp":1499984927513}
{"from":"pfrazee","message":"(unrelated to sound-dat-cloud-stagram, I just threw this together to show someone: https://gist.github.com/pfrazee/3e3033e08feef961e0e4cc53ddb84534 . Gif of it in action: https://video.twimg.com/tweet_video/DEpgUINUAAAlSEM.mp4)","timestamp":1499985014220}
{"from":"jondashkyle","message":"ha! meta-awesome.","timestamp":1499985276816}
{"from":"jondashkyle","message":"lil repo, going to put the front-end in here: https://github.com/jondashkyle/soundcloud-archiver","timestamp":1499985291024}
{"from":"pfrazee","message":"cool. jondashkyle if you find yourself blocked w/o a coder, ping me","timestamp":1499985462729}
{"from":"jondashkyle","message":"will do!","timestamp":1499985474246}
{"from":"mafintosh","message":"almost done","timestamp":1499985769969}
{"from":"jondashkyle","message":"damn!","timestamp":1499985785464}
{"from":"jondashkyle","message":"i'm still trying to think about a basic typographic direction","timestamp":1499985799575}
{"from":"jondashkyle","message":"zzz design zzz","timestamp":1499985807864}
{"from":"mafintosh","message":"dat://b59c00cba7c21fa9fd9777a91193618247fc33cee300f07e82fc4ce3a0feb0d9","timestamp":1499986290050}
{"from":"mafintosh","message":"jondashkyle: o/","timestamp":1499986386313}
{"from":"pfrazee","message":"mafintosh: todd terje is th eman","timestamp":1499986442366}
{"from":"pfrazee","message":"mafintosh: every time a video or song successfully plays over dat... man i gets me excited","timestamp":1499986519316}
{"from":"jondashkyle","message":"whoa! awesome","timestamp":1499986575260}
{"from":"mafintosh","message":"https://github.com/mafintosh/soundcloud-to-dat","timestamp":1499986820698}
{"from":"mafintosh","message":"jondashkyle: whats your github? then i'll add you to o/","timestamp":1499986831478}
{"from":"jondashkyle","message":"jondashkyle","timestamp":1499986839205}
{"from":"mafintosh","message":"you can update index.html to update the index.html that ships with the dat","timestamp":1499986852685}
{"from":"mafintosh","message":"jondashkyle: same npm user?","timestamp":1499986880283}
{"from":"jondashkyle","message":"aye!","timestamp":1499986895900}
{"from":"mafintosh","message":"should be added to all of it","timestamp":1499986910711}
{"from":"mafintosh","message":"substack: https://github.com/mafintosh/soundcloud-to-dat/","timestamp":1499987104536}
{"from":"mafintosh","message":"substack: do soundcloud-to-dat https://soundcloud.com/substack some-folder","timestamp":1499987121836}
{"from":"mafintosh","message":"to offline your music","timestamp":1499987126894}
{"from":"substack","message":":D","timestamp":1499987149439}
{"from":"substack","message":"I'll archive my stuff later today when I go into down to upload some 10m-NE2 tiles","timestamp":1499987182649}
{"from":"mafintosh","message":"substack: i'm offlines yours now","timestamp":1499987258802}
{"from":"mafintosh","message":"jondashkyle: should i tweet it out now or wait for you to do some cool styles?","timestamp":1499987299395}
{"from":"jondashkyle","message":"oh! if you want to tweet out the module, definitely go for it","timestamp":1499987323708}
{"from":"jondashkyle","message":"i'm going to have some dinner real fast, and then think about how to make a page which has an input on it. when you enter a soundcloud URL, it'll use soundcloud-to-dat","timestamp":1499987351925}
{"from":"jondashkyle","message":"and then i might send this along to some friends who have prominent soundcloud followings / are involved in music to see if they want to help share","timestamp":1499987389280}
{"from":"jondashkyle","message":"but yeah, the module seems to be immediately useful for people who have an application for it","timestamp":1499987407650}
{"from":"mafintosh","message":"fun fact, soundcloud-to-dat can actually offline anything youtube-dl can","timestamp":1499987967421}
{"from":"mafintosh","message":"just realised that","timestamp":1499987973880}
{"from":"mafintosh","message":"updated my toddterje dat to dat://527ca2011069cc5d7d2cf23056d9ed4c7a15bb0d0f57749badaae5d58285c52f","timestamp":1499988054287}
{"from":"mafintosh","message":"pfrazee: why cant i repo from the beaker url bar? is that a bug","timestamp":1499988231038}
{"from":"pfrazee","message":"mafintosh: repo?","timestamp":1499988246255}
{"from":"mafintosh","message":"the dat linked above","timestamp":1499988262896}
{"from":"pfrazee","message":"mafintosh: btw you know \"inspector norse\" by todd, right? funkiest song ever","timestamp":1499988278915}
{"from":"mafintosh","message":"ya off course","timestamp":1499988290655}
{"from":"pfrazee","message":"just had to be sure, you didnt include it in your dat!!","timestamp":1499988302877}
{"from":"pfrazee","message":"ok what do you mean by repo?","timestamp":1499988309022}
{"from":"mafintosh","message":"ah gah not repo, i mean copy lol","timestamp":1499988411821}
{"from":"mafintosh","message":"sometimes i time a word that sounds like the word i wanna type","timestamp":1499988430645}
{"from":"mafintosh","message":"pfrazee: oh in general i cannot copy from beaker","timestamp":1499988471322}
{"from":"mafintosh","message":"hmmm","timestamp":1499988472342}
{"from":"pfrazee","message":"mafintosh: oh weird. You're on linux?","timestamp":1499988486948}
{"from":"mafintosh","message":"mac","timestamp":1499988493094}
{"from":"mafintosh","message":"0.7.3 beaker","timestamp":1499988497137}
{"from":"pfrazee","message":"hmm. Anything atypical about your keyboard layout? Also, what happens if you use the context menu, or window menu, to copy?","timestamp":1499988526305}
{"from":"pfrazee","message":"\"atypical\" as in, not qwerty english","timestamp":1499988546489}
{"from":"mafintosh","message":"nope","timestamp":1499988553174}
{"from":"mafintosh","message":"and none it seems to work","timestamp":1499988557239}
{"from":"pfrazee","message":"thaaaat's fucky. Try restarting the browser","timestamp":1499988569424}
{"from":"mafintosh","message":"tried, no luck","timestamp":1499988623635}
{"from":"mafintosh","message":"also cannot paste","timestamp":1499988630594}
{"from":"pfrazee","message":"what the heck. I've never run into that","timestamp":1499988673023}
{"from":"pfrazee","message":"can you copy/paste textareas in a page?","timestamp":1499988679708}
{"from":"mafintosh","message":"let me try","timestamp":1499988718995}
{"from":"mafintosh","message":"pfrazee: i wanted to copy this link for you dat://527ca2011069cc5d7d2cf23056d9ed4c7a15bb0d0f57749badaae5d58285c52f/music/2/0/2/2/2/0/2/1/TODD%20TERJE%20feat%20DET%20GYLNE%20TRIANGEL%20-%20Maskindans-308870609.mp3","timestamp":1499988730064}
{"from":"mafintosh","message":"pfrazee: textareas doesnt work as well","timestamp":1499988775163}
{"from":"pfrazee","message":"mafintosh: what version of osx/","timestamp":1499988820484}
{"from":"pfrazee","message":"(listening)","timestamp":1499988830857}
{"from":"mafintosh","message":"latest","timestamp":1499988832734}
{"from":"pfrazee","message":"bleh that is super weird. Not even right click works on textareas in a page?","timestamp":1499988857406}
{"from":"pfrazee","message":"mafintosh: here's a mix for you https://www.youtube.com/playlist?list=PLBND3AXbdG42caNolTcuZ33pZ2pC1FUa8","timestamp":1499988886091}
{"from":"mafintosh","message":"pfrazee: no nothing works","timestamp":1499988897594}
{"from":"pfrazee","message":"mafintosh: nice bass in that track. Sweet 90s vibe","timestamp":1499988899995}
{"from":"mafintosh","message":"sounds like a electron bug to me","timestamp":1499988907543}
{"from":"pfrazee","message":"mafintosh: ok I'll file an issue and see if anybody's experienced that in electron","timestamp":1499988909234}
{"from":"pfrazee","message":"yeah","timestamp":1499988911312}
{"from":"jondashkyle","message":"hmm you guys think soundcloud might get pissed for scraping and downloading?","timestamp":1499988925418}
{"from":"karissa","message":"if they get pissed then we're doing something right ™","timestamp":1499988948516}
{"from":"jondashkyle","message":"was thinking about putting this page on a domain, just to sort of make it a little weightier","timestamp":1499988955003}
{"from":"karissa","message":"jondashkyle: yeah makes sense","timestamp":1499988981115}
{"from":"jondashkyle","message":"if they send a cease and desist that'd look pretty shitty imo, wonder if they would","timestamp":1499989006945}
{"from":"jondashkyle","message":"thinking about calling it “Sound Salvage” haha","timestamp":1499989072951}
{"from":"yoshuawuyts","message":"jondashkyle: ideally you could run this from a VPN, randomize ip regularly and switch up user agents - fixed IP might just get flagged and banned","timestamp":1499989093223}
{"from":"jondashkyle","message":"ahhh yeah... hmm...","timestamp":1499989107813}
{"from":"jondashkyle","message":"dunno if i have the skillz for that one hahaha","timestamp":1499989192128}
{"from":"pfrazee","message":"yoshuawuyts: haha yeah dang man, that's intense","timestamp":1499989219651}
{"from":"mafintosh","message":"deal with it if they contact you","timestamp":1499989223974}
{"from":"mafintosh","message":"which they most likely wont","timestamp":1499989230363}
{"from":"karissa","message":"yeah","timestamp":1499989248246}
{"from":"jondashkyle","message":"yeah, i'm just assuming this is going to end up on noisy, maybe a little pitchfork mention, etc…","timestamp":1499989250696}
{"from":"jondashkyle","message":"(which it prob won't but no reason to assume not!)","timestamp":1499989274277}
{"from":"pfrazee","message":"yeah that could happen","timestamp":1499989274520}
{"from":"pfrazee","message":"jondashkyle: maybe somebody here can tell you better, but I think you might be able to avoid some legal risk (RE copyright) if you have the UI say, \"only use this on your own music\"","timestamp":1499989375256}
{"from":"jondashkyle","message":"totally.","timestamp":1499989414948}
{"from":"jondashkyle","message":"yeah that's a crucial distinction worth mentioning","timestamp":1499989428215}
{"from":"mafintosh","message":"pfrazee: dat://527ca2011069cc5d7d2cf23056d9ed4c7a15bb0d0f57749badaae5d58285c52f/music/0/3/2/2/1/1/1/0/LINDSTRØM%20&%20TODD%20TERJE%20-%20Lanzarote-74428294.mp3","timestamp":1499989437055}
{"from":"yoshuawuyts","message":"Oh PS bret, you reading along?","timestamp":1499989450133}
{"from":"bret","message":"No was driving","timestamp":1499989473749}
{"from":"mafintosh","message":"so american","timestamp":1499989495209}
{"from":"yoshuawuyts","message":"bret: did hyperamp end up getting dat support? People are backing up SoundCloud over dat ✨","timestamp":1499989496442}
{"from":"pfrazee","message":"hahahaha","timestamp":1499989497907}
{"from":"bret","message":"Not yet :(","timestamp":1499989508655}
{"from":"bret","message":"Reinventing front end still","timestamp":1499989519703}
{"from":"bret","message":"Sound cloud is shutting down 4 real?","timestamp":1499989543310}
{"from":"pfrazee","message":"bret: screenshot of hyperamp looks very hot","timestamp":1499989549323}
{"from":"pfrazee","message":"that's what it's looking like","timestamp":1499989561993}
{"from":"yoshuawuyts","message":"bret: seems increasingly likely","timestamp":1499989564710}
{"from":"pfrazee","message":"mafintosh: nice I dig that synth","timestamp":1499989565883}
{"from":"mafintosh","message":"i need to add a \"play all\" thing","timestamp":1499989592614}
{"from":"karissa","message":"hm my beaker browser crashed on that link","timestamp":1499989607722}
{"from":"bret","message":"Have some really good ideas after talking with feross. Going to build a data model around his last fm module I think","timestamp":1499989610580}
{"from":"bret","message":"That was used for play.cash","timestamp":1499989624453}
{"from":"mafintosh","message":"karissa: ya mine to just now","timestamp":1499989625674}
{"from":"bret","message":"That's so wacky. They host audio files","timestamp":1499989681454}
{"from":"bret","message":"How hard could it be!","timestamp":1499989687884}
{"from":"jondashkyle","message":"you know whats cooler than a million mp3s? A BILLION MP3S","timestamp":1499989717192}
{"from":"karissa","message":"apparently its expensive to give a bunch of bandwidth away for free","timestamp":1499989717644}
{"from":"yoshuawuyts","message":"bret: <kubernetes joke>","timestamp":1499989733284}
{"from":"bret","message":"Just use unlisted YouTube videos as your backend","timestamp":1499989777504}
{"from":"bret","message":"🐼😜","timestamp":1499989784985}
{"from":"mafintosh","message":"pfrazee: dat://dc47e16484cd7938f8a46a8764b974021a0a6c9bcd4e690b4a930627a819ea36","timestamp":1499989792156}
{"from":"mafintosh","message":"pfrazee: beaker crashes when trying to play the mp3 in there","timestamp":1499989807844}
{"from":"mafintosh","message":"substack's soundcloud","timestamp":1499989815886}
{"from":"bret","message":"mafintosh: someone needs to build a YouTube and flicker dat store module","timestamp":1499989816495}
{"from":"yoshuawuyts","message":"bret: lmao","timestamp":1499989821982}
{"from":"pfrazee","message":"karissa: mafintosh: sadly that's a known upstream bug in electron","timestamp":1499989829810}
{"from":"pfrazee","message":"let me find the issue #","timestamp":1499989836594}
{"from":"mafintosh","message":"ah ok","timestamp":1499989838973}
{"from":"pfrazee","message":"https://github.com/electron/electron/issues/9342","timestamp":1499989875021}
{"from":"jondashkyle","message":"ahh bummer","timestamp":1499989939331}
{"from":"pfrazee","message":"yeah I forgot about that until it crashed for me too","timestamp":1499989968736}
{"from":"pfrazee","message":"they had a fix that they reverted. I couldn't get an answer as to why they reverted it, but the fix PR might be a good starting point","timestamp":1499990011306}
{"from":"bret","message":"Well super exciting. Wish we had a better solution in place today","timestamp":1499990028510}
{"from":"bret","message":"One idea we had was youtube-dl backed dat drives that would offline SoundCloud artist feeds","timestamp":1499990075306}
{"from":"sethvincent","message":"yay i just grabbed my soundcloud things and put the dat on hashbase dat://cc5ef634173bee87acbfcb51fe8cbf8853e3ba84940f2da5bcb82fb364321e08/","timestamp":1499990163261}
{"from":"pfrazee","message":"sethvincent: nice","timestamp":1499990196025}
{"from":"bret","message":"pfrazee: ty it's very much old iTunes inspired","timestamp":1499990307095}
{"from":"pfrazee","message":"bret: yeah I dig it","timestamp":1499990325858}
{"from":"mafintosh","message":"bret: what makes you think it doesn't already work with youtube? :D :D :D","timestamp":1499990895202}
{"from":"mafintosh","message":"bret, pfrazee: dat://4ac7e8c10a143743292af92828f63200159399db4bb5fe90b03f6acd8286c204","timestamp":1499990932291}
{"from":"mafintosh","message":"that is pfrazee's youtube playlist from above","timestamp":1499990943637}
{"from":"mafintosh","message":"ah wrong link","timestamp":1499991000756}
{"from":"pfrazee","message":"mafintosh: oh sweet, link me the real dat","timestamp":1499991052400}
{"from":"bret","message":"Oh cool! 😎","timestamp":1499991078296}
{"from":"ogd","message":"blahah: quick stats on the first 2 million dois ive resolved (i do about a million an hour w/ my new resolver, unoptimized)","timestamp":1499991089389}
{"from":"ogd","message":"https://www.irccloud.com/pastebin/L26pm2E1/","timestamp":1499991099234}
{"from":"ogd","message":"interestingly ive had 0 not found responses (equivalent of 404)","timestamp":1499991137659}
{"from":"ogd","message":"sethvincent: im streaming it, cool!!!","timestamp":1499991175549}
{"from":"mafintosh","message":"pfrazee, bret: dat://c09196c26ea1036d875dc5501ed40f94bf1391be72493d0bf3a529bc52ee4b0b","timestamp":1499991234954}
{"from":"jondashkyle","message":"@mafintosh, with soundcloud-to-dat, i'm gonna end up saving all of these mp3s to my server, yeah?","timestamp":1499991251267}
{"from":"mafintosh","message":"importing as we speak","timestamp":1499991256666}
{"from":"mafintosh","message":"jondashkyle: yup","timestamp":1499991265482}
{"from":"yoshuawuyts","message":"Oh also https://mobile.twitter.com/bcrypt/status/885564146082693120","timestamp":1499991277335}
{"from":"jondashkyle","message":"hmm i wonder if there is a way to clean it up, like after 5 minutes remove the dir from the server","timestamp":1499991281450}
{"from":"yoshuawuyts","message":"Think this a good take too https://mobile.twitter.com/bcrypt/status/885565992323039233","timestamp":1499991309680}
{"from":"pfrazee","message":"jondashkyle: this may end up being more legal trouble than its worth","timestamp":1499991399680}
{"from":"jondashkyle","message":"lol yeah","timestamp":1499991405116}
{"from":"mafintosh","message":"you could package it up in an electron app","timestamp":1499991445994}
{"from":"mafintosh","message":"then users run it","timestamp":1499991450337}
{"from":"jondashkyle","message":"yeah, sort of a big ask of people. my thought is that this is a useful tool, but also a way of introducing more people to dat://, too.","timestamp":1499991477453}
{"from":"jondashkyle","message":"using it immediately in browser the best move","timestamp":1499991500734}
{"from":"jondashkyle","message":"i suppose what i can do is just monitor it, and if there are hundreds of mp3s getting downloaded to the server, put a quick hold on it","timestamp":1499991521058}
{"from":"pfrazee","message":"jondashkyle: you're right about the usability but I dont think it's worth the risk","timestamp":1499991563606}
{"from":"jondashkyle","message":"yeah… i wonder if we say the page will be live for only X number of days, and go offline after that… so in this way it's a temporary thing, and after it’s gone replace the page with a small note saying what the tool was, what it was for, and a link to the module on npm","timestamp":1499991669215}
{"from":"pfrazee","message":"jondashkyle: the PR could be really awesome but, if that's the goal then I think a better solution is just to make a solid p2p soundcloud replacement and have it ready for when they finally shut down","timestamp":1499991697195}
{"from":"jondashkyle","message":"totally! yeah i think that could be a great project too. there have been a few takes on it, and projects like https://resonate.is/ are doing great things","timestamp":1499991744115}
{"from":"jondashkyle","message":"thinking of this page tapping into the discussion around soundcloud for introducing people to dat, as well as being a usable tool. if it weren’t for the legalities it’d be a no brainer to figure it out","timestamp":1499991849079}
{"from":"pfrazee","message":"I agree","timestamp":1499991951582}
{"from":"pfrazee","message":"jondashkyle: maybe we can find a way to make it work from a dat:// app inside beaker","timestamp":1499993372007}
{"from":"jondashkyle","message":"@pfrazee how could that maybe work?","timestamp":1499993509520}
{"from":"pfrazee","message":"jondashkyle: we'd just need a way to pull the data from soundcloud. Could pitch it as an \"export my account\" tool","timestamp":1499993537428}
{"from":"jondashkyle","message":"exactly, that's how i'm thinking of it","timestamp":1499993699014}
{"from":"jondashkyle","message":"but yeah, can also see how it could be helpful to have a dat with an expiration. gives you enough time to fork it. useful for a tool like this where you are creating dats, but don’t want to archive them.","timestamp":1499993772158}
{"from":"jondashkyle","message":"gives someone 15 min to fork/clone","timestamp":1499993798129}
{"from":"ogd","message":"i do think a 'delete after N replications' feature would be nice","timestamp":1499993844485}
{"from":"pfrazee","message":"jondashkyle: if we can do it on the client side, and tell people it's explicitly for exporting their own data, then I think you'll probably be ok legally","timestamp":1499993846738}
{"from":"pfrazee","message":"you just dont want to end up accruing a bunch of copyrighted material, or help other people do it","timestamp":1499993886971}
{"from":"jondashkyle","message":"agreed","timestamp":1499993911108}
{"from":"pfrazee","message":"(IANAL)","timestamp":1499993912378}
{"from":"ogd","message":"blahah: i think a dat based stream processing job queue kinda thing would be really nice. for creating pipelines made up of shell scripts and running them whenever input data changes","timestamp":1499994024091}
{"from":"substack","message":"that would be fantastic for peermaps","timestamp":1499994115257}
{"from":"substack","message":"and all of the vector tile tools I'm building","timestamp":1499994125367}
{"from":"ogd","message":"i have lots of data archiving processes i wanna keep running continuously and im about ready to spend some time automating it","timestamp":1499994157590}
{"from":"jhand","message":"mafintosh: downloading your music to my phone https://usercontent.irccloud-cdn.com/file/Ee5kGC75/Screenshot_20170713-184928.png","timestamp":1499997025208}
{"from":"jhand","message":"Sharing works too","timestamp":1499997045010}
{"from":"mafintosh","message":"wow","timestamp":1499997130915}
{"from":"mafintosh","message":"jhand: just ordered a google pixel haha","timestamp":1499997144523}
{"from":"mafintosh","message":"Not even joking","timestamp":1499997150652}
{"from":"jhand","message":"lol","timestamp":1499997152029}
{"from":"jhand","message":"pretty easy to install dat too!","timestamp":1499997180010}
{"from":"jhand","message":"mafintosh: omg i can view the files in chrome with dat sync --http","timestamp":1499997294229}
{"from":"jhand","message":"https://usercontent.irccloud-cdn.com/file/oJsPdzNw/Screenshot_20170713-185501.png","timestamp":1499997315477}
{"from":"mafintosh","message":"jhand: can you sync in background?","timestamp":1499997323647}
{"from":"jhand","message":"mafintosh: ya keeps running","timestamp":1499997359448}
{"from":"mafintosh","message":"Whooooa","timestamp":1499997374723}
{"from":"jhand","message":"so i can sync and look at http site","timestamp":1499997386457}
{"from":"jhand","message":"this is amazing","timestamp":1499997389661}
{"from":"mafintosh","message":"jhand: so i'm guessing this means we could package it up in an app?","timestamp":1499997420563}
{"from":"jhand","message":"mafintosh: ya I guess so? haha","timestamp":1499997463536}
{"from":"mafintosh","message":"omg","timestamp":1499997542841}
{"from":"mafintosh","message":"jhand: why is no one talking about this? this is amazing","timestamp":1499997563218}
{"from":"jhand","message":"https://usercontent.irccloud-cdn.com/file/cuVOImjk/Screenshot_20170713-190103.png","timestamp":1499997683537}
{"from":"jhand","message":"check the url =)","timestamp":1499997688923}
{"from":"jhand","message":"ya im exicted","timestamp":1499997692935}
{"from":"mafintosh","message":"whoa","timestamp":1499997729607}
{"from":"jhand","message":"mafintosh: what is a cooler live demo we can do?","timestamp":1499997734485}
{"from":"jhand","message":"mafintosh: its so easy!","timestamp":1499997739021}
{"from":"mafintosh","message":"https://github.com/dominictarr/androidify","timestamp":1499997739317}
{"from":"mafintosh","message":"jhand: live tv","timestamp":1499997748439}
{"from":"mafintosh","message":"i'm doing that first thing when i get my pixel","timestamp":1499997780209}
{"from":"jhand","message":"mafintosh: ya seems like anything we can load over dat and stream to chrome","timestamp":1499997781479}
{"from":"jhand","message":"haha awesome","timestamp":1499997784903}
{"from":"mafintosh","message":"jhand: hey run doctor on the phone and see if utp works","timestamp":1499997958679}
{"from":"jhand","message":"mafintosh: no it couldn't build it","timestamp":1499997999011}
{"from":"mafintosh","message":"jhand: what about sodium?","timestamp":1499998019282}
{"from":"jhand","message":"Tried a few times but kept missing packages","timestamp":1499998024317}
{"from":"jhand","message":"Nope","timestamp":1499998028658}
{"from":"jhand","message":"Let me see the last error","timestamp":1499998040780}
{"from":"mafintosh","message":"jhand: i'm guessing the pixel is 64bit?","timestamp":1499998058168}
{"from":"jhand","message":"https://usercontent.irccloud-cdn.com/file/isXxoc9b/Screenshot_20170713-190749.png","timestamp":1499998094933}
{"from":"jhand","message":"mafintosh: may just need to install more packages","timestamp":1499998134283}
{"from":"mafintosh","message":"jhand: you need build-essential and autoconf","timestamp":1499998170244}
{"from":"mafintosh","message":"i want my pixel *now*","timestamp":1499998178215}
{"from":"jhand","message":"mafintosh: ya pixel is 64. I'm on 5x which I assume is too","timestamp":1499998200749}
{"from":"mafintosh","message":"jhand: my arm prebuilts are 32 bit. if i build for 64 bit it should just work","timestamp":1499998217192}
{"from":"mafintosh","message":"the pi is 32 bit","timestamp":1499998229139}
{"from":"jhand","message":"https://github.com/termux/termux-packages/issues/232","timestamp":1499998296349}
{"from":"mafintosh","message":"Ah cool","timestamp":1499998328440}
{"from":"mafintosh","message":"I'll just prebuild them once I get it","timestamp":1499998343139}
{"from":"mafintosh","message":"jhand: which node version you running on it?","timestamp":1499998358605}
{"from":"mafintosh","message":"8+?","timestamp":1499998363498}
{"from":"jhand","message":"6","timestamp":1499998375883}
{"from":"jhand","message":"That was the package default","timestamp":1499998383741}
{"from":"mafintosh","message":"jhand: if you can upgrade somehow to 8 it can use the wasm builds","timestamp":1499998441072}
{"from":"mafintosh","message":"more perf","timestamp":1499998446709}
{"from":"mafintosh","message":"Although it seemed fast in your screenshot","timestamp":1499998460259}
{"from":"jhand","message":"https://github.com/termux/termux-packages/issues/1080","timestamp":1499998467081}
{"from":"jhand","message":"mafintosh: ya that wasn't hosted locally either.","timestamp":1499998494726}
{"from":"mafintosh","message":"pretty cool that termux is open source","timestamp":1499998562386}
{"from":"karissa","message":"wow amazing","timestamp":1500000220633}
{"from":"jondashkyle","message":"whoooa this is sick","timestamp":1500000916510}
{"from":"karissa","message":"https://github.com/mmckegg/ferment","timestamp":1500001184606}
{"from":"karissa","message":"distributed soundcloud mashup of ssb webtorrent and electron","timestamp":1500001215241}
{"from":"jondashkyle","message":"ah yea! was trying to find that earlier.","timestamp":1500003654571}
{"from":"substack","message":"pfrazee: giving hashbase a spin!","timestamp":1500006715453}
{"from":"pfrazee","message":"substack: awesome lmk if you have any problems","timestamp":1500006735984}
{"from":"substack","message":"also, if I make a dat repo with an index.html in the root dir, will that work in beaker?","timestamp":1500006770422}
{"from":"pfrazee","message":"yep","timestamp":1500006790858}
{"from":"substack","message":"nice","timestamp":1500006793239}
{"from":"substack","message":"pfrazee: https://hashbase.io/new-archive isn't working, add archive button is greyed out","timestamp":1500007751878}
{"from":"substack","message":"SyntaxError: missing ; before statement[Learn More]new-archive.js:30:8","timestamp":1500007760479}
{"from":"substack","message":"this is in firefox","timestamp":1500007764150}
{"from":"pfrazee","message":"substack: 2 secs I'll debug now","timestamp":1500007777802}
{"from":"substack","message":"works in chrome though, button is green","timestamp":1500007899317}
{"from":"pfrazee","message":"substack: I think it's the async/await code (I forget sometimes when I need to hold off on new lang features). I'll push a fix","timestamp":1500007938567}
{"from":"substack","message":"upload progress is stuck at 0%, no connections","timestamp":1500008031463}
{"from":"substack","message":"can people reach my dat? dat://db9c54fd4775da34109c9afd366cac5d3dff26c6a3902fc9c9c454193b543cbb","timestamp":1500008037742}
{"from":"pfrazee","message":"substack: I cant","timestamp":1500008214961}
{"from":"substack","message":"weird","timestamp":1500008247406}
{"from":"substack","message":"I was able to seed the same data over ipfs, but it had some trouble getting through the NAT at first","timestamp":1500008278198}
{"from":"pfrazee","message":"hmm, dat's up to date right?","timestamp":1500008288841}
{"from":"substack","message":"yes I just installed the latest version","timestamp":1500008312553}
{"from":"pfrazee","message":"hm. Usually you can reach hashbase because that shouldnt require a hole punch. Maf can probably help you when he's up","timestamp":1500008351587}
{"from":"substack","message":"oh there it goes!","timestamp":1500008381593}
{"from":"substack","message":"after I restarted it","timestamp":1500008385558}
{"from":"pfrazee","message":"ah good","timestamp":1500008385758}
{"from":"substack","message":"cool it's at 50% https://hashbase.io/substack/ne2srw-tiles","timestamp":1500008482088}
{"from":"pfrazee","message":"good deal. You may hit the 100mb cap, I can bump you up. How big is it?","timestamp":1500008573802}
{"from":"substack","message":"18M","timestamp":1500008601127}
{"from":"pfrazee","message":"oh ok :)","timestamp":1500008623347}
{"from":"substack","message":"do you have any plans to offer more than 10G for paid plans?","timestamp":1500008634573}
{"from":"substack","message":"I would like to use hashbase to keep peermaps and osm-derived vector tiles seeded, probably in the neighborhood of ~40G each, ~80G total","timestamp":1500008693298}
{"from":"pfrazee","message":"we could probably do a custom thing for that","timestamp":1500008720661}
{"from":"substack","message":"I think there could be a lot of companies doing things with big files that would use hashbase if they were also using tools that integrated with dat","timestamp":1500008785052}
{"from":"substack","message":"or maybe even not, the desktop tools are pretty good for dat too","timestamp":1500008794274}
{"from":"pfrazee","message":"yeah I think so too. We'll need to make a few changes to scale up","timestamp":1500008814370}
{"from":"pfrazee","message":"probably will need to switch to S3 or the google cloud equivalent, we're just using the VM disk atm and that stops scaling","timestamp":1500008845768}
{"from":"substack","message":"I am going to try to drum up some consulting work for these kinds of clients, so I will try to sell them on hashbase because it makes a lot of sense","timestamp":1500008892831}
{"from":"substack","message":"particularly if hashbase could also seed to dats in a browser using websockets at least","timestamp":1500008909800}
{"from":"substack","message":"I don't know if the default `dat share` also listens on websockets or anything","timestamp":1500008934347}
{"from":"pfrazee","message":"substack: sounds good. The websocket seems feasible though we do have soft bandwidth caps because that's the real high dollar cost. (We're still figuring out all of this)","timestamp":1500008999699}
{"from":"substack","message":"you should talk to kyledrake about this stuff if you can","timestamp":1500009024332}
{"from":"substack","message":"he did a ton of research about bandwidth costs putting neocities together","timestamp":1500009045187}
{"from":"pfrazee","message":"yeah I'll get in touch, we met at a conference a few months back","timestamp":1500009080681}
{"from":"substack","message":"also, a hashbase org plan with adjustable per-user and group quotas","timestamp":1500009184604}
{"from":"substack","message":"if you can get enough initial interest that is!","timestamp":1500009208796}
{"from":"pfrazee","message":"substack: I'll look into whether we can feasibly handle the 80gb with the current arch. Even if we have the disk allocated, we haven't tested a dataset that large so I dont know whatll happen","timestamp":1500009217728}
{"from":"pfrazee","message":"substack: yeah I'm all about it","timestamp":1500009232178}
{"from":"substack","message":"it would be separate datasets, each ~40G","timestamp":1500009239245}
{"from":"substack","message":"or so","timestamp":1500009243959}
{"from":"pfrazee","message":"ok cool","timestamp":1500009245369}
{"from":"pfrazee","message":"heh I might just spin up a couple separate VMs and run hypercored for you","timestamp":1500009263089}
{"from":"pfrazee","message":"\"hashbase\"","timestamp":1500009266216}
{"from":"substack","message":"I can add hashbase to the \"supported by\" list on peermaps.org as well","timestamp":1500009326264}
{"from":"substack","message":"there should be a lot more info on that page once I get these webgl formats figured out!","timestamp":1500009341769}
{"from":"substack","message":"but I'm making slow but steady progress on that front","timestamp":1500009354377}
{"from":"pfrazee","message":"nice, that'll be very cool","timestamp":1500009428100}
{"from":"substack","message":"progress bar is still at 99%, not sure why https://hashbase.io/substack/ne2srw-tiles","timestamp":1500009547996}
{"from":"pfrazee","message":"bug in our progress calculation, dont worry about it","timestamp":1500009612650}
{"from":"substack","message":"I made another one, only 3.4M https://hashbase.io/substack/cities1000-lon-lat-elev","timestamp":1500010320503}
{"from":"substack","message":"uploaded right away without problems, I think it was a network hiccup here","timestamp":1500010336155}
{"from":"substack","message":"ok all data uploaded to ipfs, dat+hashbase, and neocities!","timestamp":1500010471091}
{"from":"substack","message":"now I can update the mixmap example","timestamp":1500010488070}
{"from":"substack","message":"oh right and I *also* need to add the html demos to ipfs and dat","timestamp":1500010522687}
{"from":"substack","message":"I also need to download some more NE raster datasets to upload and cut tiles later...","timestamp":1500010607982}
{"from":"substack","message":"for more demos","timestamp":1500010612981}
{"from":"substack","message":"I'll probably bike into town again tomorrow to do uploads","timestamp":1500010634332}
{"from":"substack","message":"pfrazee: in beaker, how do I link to assets hosted in a different dat?","timestamp":1500010895660}
{"from":"pfrazee","message":"substack: should work just fine to reference them by the full url. dat://{key}/path","timestamp":1500010932737}
{"from":"substack","message":"pfrazee: will that work in an xmlhttprequest?","timestamp":1500011011150}
{"from":"pfrazee","message":"substack: yeah","timestamp":1500011018975}
{"from":"substack","message":"A+","timestamp":1500011021358}
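The exchange above confirms that assets in a different dat can be referenced by their full `dat://{key}/path` URL, including from XMLHttpRequest/fetch inside Beaker. A minimal sketch of building such a URL (the helper name `datUrl` is made up for illustration; the fetch call in the comment only works inside Beaker):

```javascript
// Build a full dat URL for a cross-archive asset reference.
// `key` is the 64-char hex archive key; `path` is the file path inside it.
function datUrl (key, path) {
  return 'dat://' + key + '/' + path.replace(/^\//, '')
}

// Inside Beaker this URL works in fetch/XMLHttpRequest like any http URL:
//   fetch(datUrl(key, 'tiles/0/0/0.png')).then(res => res.arrayBuffer())
```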
{"from":"substack","message":"pfrazee: kyledrake was just active on #ipfs btw","timestamp":1500011503819}
{"from":"substack","message":"yessssss https://substack.neocities.org/mixmap/demos/ne2swr-cities.html","timestamp":1500012292474}
{"from":"substack","message":"POW dat://81e8ab9b6944e5263ff517be5e9c002446a8a881eff74c1df9ad3fbd6d875da2","timestamp":1500012803932}
{"from":"substack","message":"^ open in beaker browser","timestamp":1500012810772}
{"from":"karissa","message":"nice","timestamp":1500012931149}
{"from":"dat-gitter","message":"(rjsteinert) Hi folks. I had fun playing with beaker and dat tonight with @dwblair. We created a google docs like experience where he could see me editing in beaker browser (contenteditable) over in his beaker browser. Very cool! We did have some issues we plan on screen capturing at a later date. If you're interested in playing around with it, here's the issue with related code example https://github.com/beakerbrowser/beaker/iss","timestamp":1500013849594}
{"from":"dat-gitter","message":"(rjsteinert) Also, I'm interested in ideas y'all have had around merging dat archives. In other words, let's say there is a fork of my archive and I want to pull in changes from that fork because the forkee messaged me telling me about their awesome suggested changes. Any plans in the works for something like a `dat merge` command? We can definitely accomplish this with third party tooling but might be cool to have a \"pull request\" ","timestamp":1500013998187}
{"from":"dat-gitter","message":"(rjsteinert) @substack I get `Error: (regl) webgl not supported, try upgrading your browser or graphics drivers ` in Beaker Browser. Should I be compiling Beaker and running on HEAD?","timestamp":1500014634877}
{"from":"substack","message":"not sure, it worked on my machine™","timestamp":1500014678436}
{"from":"substack","message":"rjsteinert: does https://substack.neocities.org/mixmap/demos/ne2swr-cities.html work in your regular browser?","timestamp":1500014708058}
{"from":"substack","message":"could be your card is blacklisted or some such","timestamp":1500014714535}
{"from":"dat-gitter","message":"(rjsteinert) @substack Same issue in Version 59.0.3071.115 (Official Build) (64-bit), macOS Sierra 10.12.5. This is a dual boot MacBook Air, could hop into Ubuntu to try.","timestamp":1500015289811}
{"from":"dat-gitter","message":"(rjsteinert) * ^ Chrome Version 59.0.3071.115 (Official Build) (64-bit)","timestamp":1500015315156}
{"from":"dat-gitter","message":"(rjsteinert) @substack Works on my Nexus 5!","timestamp":1500015547459}
{"from":"dat-gitter","message":"(rjsteinert) (in Chrome Version 59.0.3071.125)","timestamp":1500015593585}
{"from":"mafintosh","message":"ogd: generic scraper for dat could be named science-to-dat","timestamp":1500019467820}
{"from":"barnie","message":"have just created what I think is an interesting discussion piece on dat future: https://github.com/datproject/dat/issues/824","timestamp":1500020038378}
{"from":"blahah","message":"mafintosh: what do you mean scraper?","timestamp":1500024995446}
{"from":"mafintosh","message":"blahah: a program that fetches data from certain science sites","timestamp":1500025071754}
{"from":"mafintosh","message":"like youtube-dl but for science sites","timestamp":1500025082339}
{"from":"blahah","message":"ah","timestamp":1500025084740}
{"from":"blahah","message":"to me a scraper is when there's no API or interface for downloading, and you have to parse it out of a webpage","timestamp":1500025117105}
{"from":"blahah","message":"I made https://github.com/ContentMine/getpapers a long time ago","timestamp":1500025169591}
{"from":"blahah","message":"works with a bunch of science APIs","timestamp":1500025179933}
{"from":"blahah","message":"but abstractions are all wrong","timestamp":1500025262576}
{"from":"blahah","message":"some ways to pipe things to a dat generically would be useful","timestamp":1500025338229}
{"from":"blahah","message":"piping json entries for example, with a key that defines the id","timestamp":1500025353694}
{"from":"blahah","message":"that would cover basically all scholarly metadata sources output","timestamp":1500025366730}
{"from":"blahah","message":"then you want probably an oai-pmh generic update streaming thing, and solr/elastic ones","timestamp":1500025403917}
{"from":"blahah","message":"then to do like 90% of scientific sources it's just thing wrappers","timestamp":1500025435322}
{"from":"blahah","message":"*thin","timestamp":1500025437586}
{"from":"blahah","message":"substack: re dat piping of data: https://gist.github.com/blahah/a987d15c38fb0985785f4ab619250c69","timestamp":1500025805644}
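blahah's idea above is piping JSON entries into a dat generically, "with a key that defines the id". A dat-free sketch of that shape, assuming newline-delimited JSON input and an `idKey` field name chosen by the caller (both assumptions for illustration; a real importer would stream and write each record into an archive):

```javascript
// Minimal sketch of "pipe JSON entries, key defines the id": turn a
// newline-delimited JSON stream into (name, contents) pairs that a
// generic importer could then write into an archive. No dat/hyperdrive
// dependency; the `idKey` parameter is the field that names each entry.
function entriesToFiles (ndjson, idKey) {
  return ndjson
    .split('\n')
    .filter(line => line.trim() !== '')
    .map(line => JSON.parse(line))
    .map(entry => ({
      name: String(entry[idKey]) + '.json',
      contents: JSON.stringify(entry)
    }))
}
```

A streaming version would split stdin on newlines and call something like `archive.writeFile(name, contents)` per entry, which is roughly what a generic scholarly-metadata importer would do.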
{"from":"barnie","message":"blahah mafintosh: would love your opinion on https://github.com/datproject/dat/issues/824 if you have a bit of time somewhere","timestamp":1500025868603}
{"from":"blahah","message":"barnie: replied :)","timestamp":1500026449682}
{"from":"barnie","message":"blahah: thanks!","timestamp":1500026538119}
{"from":"barnie","message":"you are currently about 4 active developers, am I right?","timestamp":1500026581087}
{"from":"barnie","message":"isn't that a bit light?","timestamp":1500026588614}
{"from":"barnie","message":"for such groundbreaking tech?","timestamp":1500026607766}
{"from":"barnie","message":"or should dat always be small, targeted to science community?","timestamp":1500026631441}
{"from":"blahah","message":"barnie: I think probably about 50 people developing things with dat right now","timestamp":1500026864214}
{"from":"blahah","message":"at a conservative estimate","timestamp":1500026870474}
{"from":"blahah","message":"but distributed around lots of projects and repos, so not easy to measure","timestamp":1500026903083}
{"from":"barnie","message":"yes, there you have it.","timestamp":1500026973498}
{"from":"barnie","message":"distributed, dispersed","timestamp":1500026979511}
{"from":"blahah","message":"but that's OK, it's how it should be","timestamp":1500026986774}
{"from":"barnie","message":"but looking at main dat project commits 4 core devs","timestamp":1500027012580}
{"from":"blahah","message":"we don't all need to know each other - the community and the technology reflect the philosophy","timestamp":1500027016313}
{"from":"blahah","message":"a small number of committers is a sign of stable repo with small scope imo","timestamp":1500027048493}
{"from":"blahah","message":" in this case at least - there could be other factors in mnay situations","timestamp":1500027090507}
{"from":"barnie","message":"its not so much about knowing, but finding resources, being productive, etc.","timestamp":1500027101112}
{"from":"blahah","message":"barnie I think a lot of people's contributing to dat things will be at the higher level, beaker browser, sciencefair etc","timestamp":1500027117684}
{"from":"blahah","message":"and ideally higher level than that","timestamp":1500027125267}
{"from":"barnie","message":"so many similar projects have failed using this philosophy","timestamp":1500027130731}
{"from":"blahah","message":"well, yes, but the same is true for the alternative - most projects of all kinds fail","timestamp":1500027154200}
{"from":"barnie","message":"especially in this new field","timestamp":1500027154857}
{"from":"barnie","message":"true","timestamp":1500027163561}
{"from":"blahah","message":"I agree that making it easier to understand or find things is good","timestamp":1500027187575}
{"from":"barnie","message":"with such small set of core developers it is hard to develop stable base for all the complexity that is yet to be tackled","timestamp":1500027249158}
{"from":"blahah","message":"but it also has to grow with the project - I've seen a lot of projects fail because they attracted users/contributors faster than they could scale the technology or (human) support system","timestamp":1500027249804}
{"from":"barnie","message":"yes, that is true of react-native, I think, having tried to setup my first projects with it","timestamp":1500027282845}
{"from":"blahah","message":"barnie: I think that's the opposite of the case - the stability comes from having small modules that do one thing well and need very little maintenance","timestamp":1500027285490}
{"from":"blahah","message":"(referring to the complexity/stability issue)","timestamp":1500027302463}
{"from":"barnie","message":"but are you all 4 full-time available, or is dat a side-project","timestamp":1500027323719}
{"from":"blahah","message":"I don't work for dat, it's not even a side project","timestamp":1500027346343}
{"from":"blahah","message":"but I do make a project that depends on it","timestamp":1500027354823}
{"from":"barnie","message":"any job assignment can take you away for 2 years and the project may die","timestamp":1500027361577}
{"from":"blahah","message":"there is a dat core team of full-time people","timestamp":1500027366101}
{"from":"barnie","message":"ok, that's good","timestamp":1500027381800}
{"from":"blahah","message":"I am funded 50% of my time to work on the thing that depends on dat","timestamp":1500027387530}
{"from":"blahah","message":"and we are all working on longer term stable funding for dat and related projects","timestamp":1500027402522}
{"from":"blahah","message":"but until that is in place, trying to scale rapidly will more likely lead to failure than prevent it","timestamp":1500027447184}
{"from":"barnie","message":"btw, I am not criticizing, just worried for your future ;)","timestamp":1500027453932}
{"from":"blahah","message":"no problem, it's useful to discuss","timestamp":1500027466003}
{"from":"blahah","message":"for sciencefair, part of our sustainability plan is to provide a stable % of our funding for projects we depend on like dat","timestamp":1500027503539}
{"from":"barnie","message":"if, say, IPFS would quickly mature and dat users walk away to it, would you mind?","timestamp":1500027541548}
{"from":"blahah","message":"they aren't really the same thing, so it doesn't matter to me","timestamp":1500027566443}
{"from":"blahah","message":"I hope IPFS succeeds","timestamp":1500027572241}
{"from":"barnie","message":"me too, but IPFS is just an example, losing user base is the danger","timestamp":1500027599320}
{"from":"blahah","message":"well, the user base is likely to not even know about dat really","timestamp":1500027613493}
{"from":"blahah","message":"dat is a layer with a developer base, and will have users","timestamp":1500027626826}
{"from":"blahah","message":"but then a layer of stuff built on dat will have users that don't care whether it's dat or whatever underneath, but the features enabled by it will matter to them","timestamp":1500027673383}
{"from":"blahah","message":"so they can't just switch to IPFS unless someone builds the same thing on IPFS","timestamp":1500027715974}
{"from":"barnie","message":"yeah, that's my issue.","timestamp":1500027750006}
{"from":"barnie","message":"this leaves dat as entirely steered by the applications that grow on top of it","timestamp":1500027768013}
{"from":"barnie","message":"no knowledge upfront on the direction it 'll go","timestamp":1500027787941}
{"from":"blahah","message":"guided by the needs of the users, yes","timestamp":1500027791754}
{"from":"blahah","message":"that's necessarily true","timestamp":1500027797906}
{"from":"blahah","message":"but the stability again comes from the modular ecosystem","timestamp":1500027840082}
{"from":"blahah","message":"for example, sciencefair actually depends on hypercore and some other things, not dat directly","timestamp":1500027857329}
{"from":"barnie","message":"user guidance is good, but if one wants to start a big initiative, like the social platform I mentioned..","timestamp":1500027874551}
{"from":"blahah","message":"even if dat itself went in a different direction, the small modules that make it up would not - the new direction would come with developing and switching to new modules","timestamp":1500027892115}
{"from":"barnie","message":"..its good to know more of its vision and future direction (generally)","timestamp":1500027892611}
{"from":"blahah","message":"so people depending on those things could carry on","timestamp":1500027916098}
{"from":"barnie","message":"yea","timestamp":1500027936563}
{"from":"blahah","message":"I would say that right now, the entire ecosystem including all p2p/distributed projects other than dat is not stable enough to guarantee you that features you depend on will be included in the project in a few years","timestamp":1500027975874}
{"from":"blahah","message":"that's just because it's nascent and exploratory by nature","timestamp":1500027989282}
{"from":"barnie","message":"that's exactly the unique opportunity dat now has","timestamp":1500027997973}
{"from":"blahah","message":"as it matures, the things that survive and don't will become clear naturally","timestamp":1500028007833}
{"from":"barnie","message":"too slow, I am afraid","timestamp":1500028019463}
{"from":"blahah","message":"it's much more right now to do with whether we can convince funders to provide stability","timestamp":1500028025652}
{"from":"barnie","message":"you'll end in the dustbin","timestamp":1500028027046}
{"from":"blahah","message":"well, that's the gamble we are all taking - we believe these things will work and succeed if we do them right, and are trying to make that happen","timestamp":1500028072143}
{"from":"blahah","message":"slow is good","timestamp":1500028091241}
{"from":"barnie","message":"hmm.","timestamp":1500028111479}
{"from":"barnie","message":"take replikativ for example","timestamp":1500028118736}
{"from":"blahah","message":"moving faster than the resources can support leads to failure imo","timestamp":1500028127092}
{"from":"barnie","message":"i find it fantastic what they mention in their site","timestamp":1500028130266}
{"from":"barnie","message":"but the latest project activity is 9 days old","timestamp":1500028142024}
{"from":"barnie","message":"nothing much to find with googling","timestamp":1500028151692}
{"from":"barnie","message":"i would not decide to integrate that in my solution","timestamp":1500028164722}
{"from":"blahah","message":"I agree, dat is like the opposite of replikativ","timestamp":1500028177763}
{"from":"barnie","message":"there are a lot of successful github projects having much more activity. Dat could be like that too","timestamp":1500028206773}
{"from":"blahah","message":"stable growth over time, spawning many stable, well-scoped modules as it goes, not outgrowing its resources or over-stating its claim","timestamp":1500028213465}
{"from":"barnie","message":"I totally agree with that tendency, but one does not rule out the other","timestamp":1500028247102}
{"from":"blahah","message":"github activity is not a measure of success :) it's a function of popularity, how big the codebase in a single repo is, and how buggy the thing is","timestamp":1500028264047}
{"from":"barnie","message":"no, i mean active + successful github projects. they exist","timestamp":1500028286444}
{"from":"blahah","message":"generally the most active things are hype-driven projects with huge corporate backing","timestamp":1500028294213}
{"from":"barnie","message":"like blockchain, ha ha I know what you mean","timestamp":1500028314424}
{"from":"blahah","message":"e.g. the facebook stuff like React, yarn, jest","timestamp":1500028317912}
{"from":"barnie","message":"yep","timestamp":1500028323487}
{"from":"blahah","message":"and blockchain","timestamp":1500028324722}
{"from":"blahah","message":"yup","timestamp":1500028326102}
{"from":"blahah","message":"that's unhealthy in every way","timestamp":1500028342642}
{"from":"barnie","message":"I have open issues with Jest :(","timestamp":1500028345356}
{"from":"blahah","message":"React will be obsolete in no time, same with jest and yarn","timestamp":1500028352653}
{"from":"blahah","message":"their success is not the kind dat would want","timestamp":1500028372196}
{"from":"barnie","message":"well, in general whole js community is going too fast now","timestamp":1500028378594}
{"from":"barnie","message":"agreed","timestamp":1500028388610}
{"from":"blahah","message":"carefully building out things that solve the problems incrementally and stably is the way","timestamp":1500028398362}
{"from":"blahah","message":"and avoiding commerical interests gaining control","timestamp":1500028438264}
{"from":"barnie","message":"that I fully, totally agree with","timestamp":1500028460369}
{"from":"barnie","message":"but that is also why many dat-like activities die","timestamp":1500028481613}
{"from":"barnie","message":"they don't have a good model/strategy to keep the community alive in the long run","timestamp":1500028499467}
{"from":"barnie","message":"blahah: do you mind if I copy the thread to the github issue?","timestamp":1500028606741}
{"from":"barnie","message":"btw, the whole commercial influence on the internet should be a motivating driver to develop dat faster","timestamp":1500028851591}
{"from":"barnie","message":"internet freedoms are rapidly encroached upon by private interests","timestamp":1500028866558}
{"from":"barnie","message":"a good decentralized application model would directly compete with the large IT moguls' products","timestamp":1500028922008}
{"from":"barnie","message":"governments in general also do not stimulate growth of decentralized solutions","timestamp":1500028974061}
{"from":"barnie","message":"bitcoin/blockchain doesn't make it easier for dat to flourish.","timestamp":1500029030849}
{"from":"blahah","message":"sorry just dealing with some admin","timestamp":1500029171762}
{"from":"blahah","message":"pls do","timestamp":1500029173960}
{"from":"blahah","message":"ehem, please do copy the discussion to the issue","timestamp":1500029195494}
{"from":"barnie","message":"thx, will do","timestamp":1500029207372}
{"from":"blahah","message":"and I agree with all the above, but it can't happen p","timestamp":1500029209856}
{"from":"blahah","message":"... faster than the resources allow","timestamp":1500029221848}
{"from":"blahah","message":"(phone keyboard)","timestamp":1500029235286}
{"from":"barnie","message":"blahah: thanks for your thoughts. much appreciated!","timestamp":1500029700567}
{"from":"blahah","message":"barnie: sorry I dropped out during the convo - had urgent staff/admin stuff to sort out","timestamp":1500033812813}
{"from":"blahah","message":"but in general happy to discus this any time - it's important to get right","timestamp":1500033828897}
{"from":"barnie","message":"can't skip that stuf ;)","timestamp":1500033835879}
{"from":"blahah","message":"I am currently working on the sustainability plan for sciencefair and will share it when ready for comments","timestamp":1500033848846}
{"from":"barnie","message":"cool, I'm interested","timestamp":1500033866679}
{"from":"blahah","message":"if you wanted to take a look at that it would be most appreciated","timestamp":1500033867247}
{"from":"barnie","message":"the social platform I have in mind also has sustainability first and foremost as its aim","timestamp":1500033905431}
{"from":"barnie","message":"but it would require loads of effort","timestamp":1500033926672}
{"from":"barnie","message":"too big for me alone :)","timestamp":1500033937700}
{"from":"barnie","message":"that's why I'm gauging and prodding","timestamp":1500033959685}
{"from":"blahah","message":"indeed, same for sciencefair","timestamp":1500034316206}
{"from":"blahah","message":"hence writing a sustainability plan, which I think is probably not normal for a nascent open source thing","timestamp":1500034335710}
{"from":"barnie","message":"blahah: sorry, now I was away. Where will it be available when finished?","timestamp":1500042939049}
{"from":"barnie","message":"if it's on the github project...i'm watching it.","timestamp":1500043026463}
{"from":"blahah","message":"barnie: https://github.com/sciencefair-land/strategy","timestamp":1500044495482}
{"from":"barnie","message":":thumbsup:","timestamp":1500044567986}
{"from":"blahah","message":"work in progress, but should be able to get a chunk done this weekend","timestamp":1500044660075}
{"from":"barnie","message":"do you mind if I just fire away my first observations here?","timestamp":1500044679518}
{"from":"barnie","message":"as I read?","timestamp":1500044690373}
{"from":"barnie","message":"you mention 'You might like to look at the ScienceFair app.'","timestamp":1500044774056}
{"from":"barnie","message":"I would include at least one screenshot to the 'Clean, modern interface' to your main site.","timestamp":1500044820067}
{"from":"barnie","message":"much easier to quickly get a good first impression","timestamp":1500044853339}
{"from":"barnie","message":"without installing","timestamp":1500044862468}
{"from":"blahah","message":"barnie: observations welcome but better in #sciencefair if poss?","timestamp":1500044942659}
{"from":"barnie","message":"sure!","timestamp":1500044956000}
{"from":"blahah","message":"(good points though)","timestamp":1500044956389}
{"from":"blahah","message":"thanks :)","timestamp":1500044962706}
{"from":"dat-gitter","message":"(lukeburns) mafintosh: would allowing blind replication require changes to hypercore-protocol?","timestamp":1500048352338}
{"from":"dat-gitter","message":"(lukeburns) i.e. replicating without the public key","timestamp":1500048450844}
{"from":"mafintosh","message":"lukeburns i'd need to think about it","timestamp":1500049269616}
{"from":"mafintosh","message":"storing unverified content doesnt sound good to me tho","timestamp":1500049290263}
{"from":"dat-gitter","message":"(lukeburns) mafintosh: one use case would be to eliminate the need to trust hosted replication services","timestamp":1500049517654}
{"from":"mafintosh","message":"lukeburns i think having a verified but at-rest encrypted mode would be optimal for that","timestamp":1500049594151}
{"from":"mafintosh","message":"so the replication service cant read the data but still verify it","timestamp":1500049623584}
{"from":"dat-gitter","message":"(lukeburns) mafintosh: hm","timestamp":1500049674177}
{"from":"dat-gitter","message":"(lukeburns) mafintosh: that would require passing around an extra key for decryption. blind replication would allow for zero friction for both publishers and consumers. it doesn't compromise anything since the blind replicator is just an intermediary and readers still verify content. if for some reason a bad actor asks a host to blind replicate a bad dat, then the bad actor will be paying for a service that doesn't do them any goo","timestamp":1500050261825}
{"from":"mafintosh","message":"lukeburns no we could make it work with the same keypair","timestamp":1500050298928}
{"from":"dat-gitter","message":"(lukeburns) oh?","timestamp":1500050311802}
{"from":"mafintosh","message":"derive a blind one from the original","timestamp":1500050314463}
{"from":"mafintosh","message":"that you share with blind replicators","timestamp":1500050325860}
{"from":"mafintosh","message":"and share the original with other peers","timestamp":1500050368168}
{"from":"dat-gitter","message":"(lukeburns) i'm not 100% clear on the idea. even if you derived a key, peers would have to know if they're downloading from a blind replicator and do more work to read data, no?","timestamp":1500050505557}
{"from":"dat-gitter","message":"(lukeburns) mafintosh: so say i have a hypercore feed with keypair a, A. are you suggesting i derive a keypair a', A', add another layer of encryption / signatures and share A' with an untrusted host? then i guess you could have peers listen for A', which they can also derive, and then double decrypt / verify... is this what you mean?","timestamp":1500051479364}
{"from":"mafintosh","message":"lukeburns something like that ya","timestamp":1500051544814}
{"from":"mafintosh","message":"half baked","timestamp":1500051551360}
{"from":"dat-gitter","message":"(lukeburns) mafintosh: is this much better than the other approach? a blind host could very well be hosting a bad feed without knowing it (i.e. the first layer is verified, but the second layer is bad)","timestamp":1500051634567}
{"from":"jondashkyle","message":"mafintosh: here is where i’m at right now: http://soundcloud.jon-kyle.com/","timestamp":1500054649274}
{"from":"jondashkyle","message":"need to add validation to the form, and going to work on honing in the language","timestamp":1500054665033}
{"from":"jondashkyle","message":"using a small diy toiletdb to store the archive key and a timestamp, so i can run a little cleanup script and get rid of anything older than a few hours, which should prevent having a zillion mp3s on the server","timestamp":1500054711795}
{"from":"jondashkyle","message":"still considering where to put it, if it should be it’s own domain, etc","timestamp":1500054737158}
{"from":"dat-gitter","message":"(lukeburns) mafintosh: final thought: implementing unverified replication wouldn't make it easier for people to create bad feeds -- it would just make it possible to replicate them; someone would still need to write up a bad implementation of the protocol first that would allow them to create / replicate bad feeds, at which point they already have the tools to invade a swarm with bad peers. to get a \"good peer\" to blind replicate ","timestamp":1500055249053}
{"from":"dat-gitter","message":"e.g.), but in the end, it doesn't even matter","timestamp":1500055249161}
{"from":"audy","message":"How does version control work with dat? Does it work with the GUI?","timestamp":1500055817663}
{"from":"karissa","message":"audy: good q. there's a 'dat log' command you can see the history of a dat","timestamp":1500056053061}
{"from":"karissa","message":"audy: also check this for a bare bones gui. we want to implement a history view for the desktop app too but just haven't yet https://docs.datproject.org/http#built-in-versioning","timestamp":1500056078625}
{"from":"audy","message":"karissa can I check out older versions of files or go back in time like git?","timestamp":1500056081878}
{"from":"audy","message":"ah, I see","timestamp":1500056098154}
{"from":"pfrazee","message":"audy: Beaker browser (for better or worse) currently stores all old versions of files in the dats you own, so you can go back in time","timestamp":1500056541504}
{"from":"audy","message":"pfrazee thanks. I'll look into that","timestamp":1500056572980}
{"from":"audy","message":"I'm looking for an easy way to do version control for datasets (excel spreadsheets, mostly) within my org","timestamp":1500056578471}
{"from":"audy","message":"since everyone is constantly overwriting each others' edits via Box","timestamp":1500056596492}
{"from":"karissa","message":"audy: yeah, that's an awesome use case..","timestamp":1500056929915}
{"from":"karissa","message":"audy: would be happy to chat with you on the phone sometime to delve into it a bit. we're trying to support that use case a lot","timestamp":1500056946992}
{"from":"audy","message":"karissa sure that'd be great. I'll msg you my contact info","timestamp":1500058264111}
{"from":"jondashkyle","message":"hey everyone it’s live! http://soundsalvage.jon-kyle.com/","timestamp":1500060880813}
{"from":"jondashkyle","message":"i just put it on my name so hopefully it looks a little more personal and not like some sort of service trying to steal users","timestamp":1500060901163}
{"from":"dat-gitter","message":"(lukeburns) jondashkyle: \"Error during WebSocket handshake: Unexpected response code: 502\"","timestamp":1500060975384}
{"from":"dat-gitter","message":"(lukeburns) looks great!","timestamp":1500060986248}
{"from":"jondashkyle","message":"oh hmm, what url did you enter?","timestamp":1500060993400}
{"from":"dat-gitter","message":"(lukeburns) https://soundcloud.com/lukeburns","timestamp":1500061013276}
{"from":"yoshuawuyts","message":"jondashkyle: neat!","timestamp":1500061028496}
{"from":"jondashkyle","message":"strange, not seeing that here luke!","timestamp":1500061058040}
{"from":"dat-gitter","message":"(lukeburns) hmmmm","timestamp":1500061131645}
{"from":"dat-gitter","message":"(lukeburns) am at a coffee shop...what is code 502","timestamp":1500061148557}
{"from":"jondashkyle","message":"“error”, haha! it might be an issue with connecting to websockets?","timestamp":1500061271378}
{"from":"jondashkyle","message":"maybe the port i'm trying to use is blocked… i'm sort of new to the serverside of things.","timestamp":1500061296482}
{"from":"jondashkyle","message":"yoshuawuyts: it’s using choo v6 for SSR :)","timestamp":1500061304748}
{"from":"yoshuawuyts","message":"jondashkyle: \\o/","timestamp":1500061326342}
{"from":"jondashkyle","message":"pfrazee: hmm looks like some issues w/ playing embedded mp3s in hashbase: https://soundsalvage-jkm.hashbase.io/","timestamp":1500061329545}
{"from":"jondashkyle","message":"works a treat in beaker though!","timestamp":1500061333749}
{"from":"dat-gitter","message":"(lukeburns) that link works for me!","timestamp":1500061390781}
{"from":"dat-gitter","message":"(lukeburns) mp3s work great","timestamp":1500061397285}
{"from":"yoshuawuyts","message":"jondashkyle: lil bit of feedback; I find the font a lil hard to read on mobile -, looking stellar on desktop tho","timestamp":1500061415570}
{"from":"jondashkyle","message":"lol yeah, prob a little rough on mobile","timestamp":1500061430042}
{"from":"jondashkyle","message":"not sure how useful it'd be on there anyway, though","timestamp":1500061438090}
{"from":"dat-gitter","message":"(lukeburns) v into All Hallows and Boulevards","timestamp":1500061568822}
{"from":"dat-gitter","message":"(lukeburns) :)","timestamp":1500061569872}
{"from":"jondashkyle","message":"lol i had a really stupid bug in there, it should be fixed now! (no cache breaking so mash the hard refresh)","timestamp":1500061904794}
{"from":"jondashkyle","message":"thanks luke!","timestamp":1500062032932}
{"from":"dat-gitter","message":"(lukeburns) ooh ok so supposedly my music is now on dat://5a1f0558b8caf33fc6fe17231d911bfe022d98a2a0b318f54bf03aa01714c593","timestamp":1500062204677}
{"from":"dat-gitter","message":"(lukeburns) i'm timing out on beaker","timestamp":1500062216279}
{"from":"dat-gitter","message":"(lukeburns) can you see it?","timestamp":1500062218576}
{"from":"pfrazee","message":"jondashkyle: nice!!","timestamp":1500062370355}
{"from":"pfrazee","message":"jondashkyle: that https hashbase link is working for me, what browser are you using?","timestamp":1500062382145}
{"from":"pfrazee","message":"@lukeburns that opened up for me","timestamp":1500062386413}
{"from":"jondashkyle","message":"yeah i can see it luke!","timestamp":1500062391620}
{"from":"dat-gitter","message":"(lukeburns) hmmmmmmm","timestamp":1500062399778}
{"from":"dat-gitter","message":"(lukeburns) curious","timestamp":1500062406149}
{"from":"jondashkyle","message":"pfrazee: just in chrome","timestamp":1500062413404}
{"from":"pfrazee","message":"@lukeburns the music isnt playing though! :(","timestamp":1500062421622}
{"from":"dat-gitter","message":"(lukeburns) eek!","timestamp":1500062425124}
{"from":"pfrazee","message":"jondashkyle: humm","timestamp":1500062426609}
{"from":"pfrazee","message":"@lukeburns is that a dat that soundsalvage created?","timestamp":1500062441454}
{"from":"jondashkyle","message":"the page shows up fine, it just looks like hashbase doesn't pull in the mp3s, maybe.","timestamp":1500062443677}
{"from":"dat-gitter","message":"(lukeburns) ya","timestamp":1500062444596}
{"from":"pfrazee","message":"ok let me debug a bit","timestamp":1500062453989}
{"from":"jondashkyle","message":"word! and yeah, playing perfectly in beaker itself, lukeburns","timestamp":1500062468738}
{"from":"dat-gitter","message":"(lukeburns) i don't see any peers","timestamp":1500062561149}
{"from":"dat-gitter","message":"(lukeburns) jondashkyle: what is your dat key?","timestamp":1500062577560}
{"from":"jondashkyle","message":"dat://524d4005d0d8bbb41d021b87295c7064a78573e5be960a5645ac8235fcf39e40/","timestamp":1500062592838}
{"from":"dat-gitter","message":"(lukeburns) ok that's also timing out for me","timestamp":1500062671532}
{"from":"dat-gitter","message":"(lukeburns) is it likely traffic is being blocked?","timestamp":1500062788300}
{"from":"dat-gitter","message":"(lukeburns) @ coffeeshop","timestamp":1500062793377}
{"from":"pfrazee","message":"@lukeburns that's super possible, yeah","timestamp":1500062816414}
{"from":"barnie","message":"jondashkyle: great app!!","timestamp":1500062837574}
{"from":"pfrazee","message":"jondashkyle: that app is awesome. We've got this crash bug in beaker right now that I'm hitting a lot with this","timestamp":1500062852791}
{"from":"jondashkyle","message":"pfrazee: yeah i get it when i skip around tracks a bit","timestamp":1500062867648}
{"from":"jondashkyle","message":"if i hit it and just let it play it’s alright. electron jank.","timestamp":1500062878673}
{"from":"jondashkyle","message":"barnie: thanks!","timestamp":1500062887835}
{"from":"pfrazee","message":"bedeho: let me see if I can poke some people I know in electron to solve it","timestamp":1500062899959}
{"from":"pfrazee","message":"mafintosh: I dont suppose you're super comfortable messing with C++ in electron, are you? My C++ sucks right now and electron is a real doozie to jump into","timestamp":1500062927338}
{"from":"barnie","message":"can I try it with someone else's url (not on soundcloud) but I don't want to mess with others' personal data","timestamp":1500062929445}
{"from":"pfrazee","message":"barnie: dat://ad2d88529999f4f6921c893e99e2e5b155fcf47c3871ea03164e42fc0612a15f","timestamp":1500062953771}
{"from":"pfrazee","message":"^ that's my brother's","timestamp":1500062960974}
{"from":"barnie","message":"cool, thx!","timestamp":1500062965493}
{"from":"pfrazee","message":"oh did you want his soundcloud url?","timestamp":1500062972343}
{"from":"barnie","message":"that bad? ;)","timestamp":1500062989147}
{"from":"pfrazee","message":"https://soundcloud.com/kickupdust he took down a lot of his stuff so there's just one quirky track :)","timestamp":1500062990485}
{"from":"jondashkyle","message":"barnie: if you want a real hack this can also be used to download youtube videos, lol","timestamp":1500063140830}
{"from":"jondashkyle","message":"it’s using youtube-dl to nab the tracks","timestamp":1500063147794}
{"from":"barnie","message":"ha ha, cool!","timestamp":1500063166303}
{"from":"barnie","message":"it worked first time, but now is hanging","timestamp":1500063215899}
{"from":"barnie","message":"should I empty cache or something?","timestamp":1500063223825}
{"from":"barnie","message":"firefox","timestamp":1500063243131}
{"from":"jondashkyle","message":"hmm, unsure. i mean, it’s almost certainly going to break b/c i have it on a very limited vhost","timestamp":1500063262741}
{"from":"jondashkyle","message":"https://usercontent.irccloud-cdn.com/file/ZJxtPFED/Screen%20Shot%202017-07-14%20at%204.14.01%20PM.png","timestamp":1500063276547}
{"from":"jondashkyle","message":"but yeah cache shouldn't be an issue","timestamp":1500063306970}
{"from":"jondashkyle","message":"i think it crashed, so just restarted. should be ok… i don’t really know what i’m doing with this, very diy.","timestamp":1500063412498}
{"from":"barnie","message":"works on chrome now","timestamp":1500063414341}
{"from":"barnie","message":"great stuff","timestamp":1500063421519}
{"from":"jondashkyle","message":"thanks!","timestamp":1500063424582}
{"from":"jondashkyle","message":"is there a way to force pm2 to restart if it detects a crash? i know forever does that, but from what i was reading pm2 is better suited for production env","timestamp":1500063471209}
{"from":"pfrazee","message":"jondashkyle: I think it does that automatically","timestamp":1500063708311}
{"from":"jondashkyle","message":"i see, thanks!","timestamp":1500063722935}
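[editor's note] For reference, pm2 restarts a crashed process automatically by default; a minimal invocation might look like this (the app name and tuning flags are illustrative, not the actual soundsalvage setup):

```shell
# pm2 daemonizes the app and restarts it automatically on crash
pm2 start server.js --name soundsalvage

# tail the process's stdout/stderr to catch the crash reason
pm2 logs soundsalvage

# optional tuning: cap restart attempts and delay between them (ms)
pm2 start server.js --max-restarts 10 --restart-delay 5000
```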
{"from":"jondashkyle","message":"keep maxing out the space on my droplet :(","timestamp":1500064268899}
{"from":"pfrazee","message":"jondashkyle: oh no!","timestamp":1500066414369}
{"from":"jondashkyle","message":"yeah hitting 20gig every few minutes","timestamp":1500066428085}
{"from":"pfrazee","message":"no way","timestamp":1500066466809}
{"from":"pfrazee","message":"what's the current total?","timestamp":1500066474110}
{"from":"pfrazee","message":"if that thing goes viral you're going to be totally boned","timestamp":1500066483200}
{"from":"pfrazee","message":"jondashkyle: lmk if things get out of hand, we can find a way to rearchitect it","timestamp":1500067926947}
{"from":"jondashkyle","message":"i think it's ok for the moment. i keep a log of what times archives were created, and writing a script to go through and clean them out after 15 minutes","timestamp":1500067968702}
{"from":"jondashkyle","message":"if someone hasn't cloned it by then, i don't think they will","timestamp":1500067976697}
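[editor's note] The cleanup jondashkyle describes — logging creation times and pruning anything older than 15 minutes — could be sketched as a pure filter like this (the entry shape and function name are assumptions, not the actual soundsalvage code):

```javascript
// Hypothetical sketch: given a log of { key, createdAt } entries, return
// the archive keys past their 15-minute window, ready to be deleted.
const MAX_AGE_MS = 15 * 60 * 1000

function expiredKeys (entries, now) {
  return entries
    .filter(e => now - e.createdAt > MAX_AGE_MS)
    .map(e => e.key)
}

const log = [
  { key: 'aaa', createdAt: 0 },
  { key: 'bbb', createdAt: 20 * 60 * 1000 }
]
console.log(expiredKeys(log, 21 * 60 * 1000)) // [ 'aaa' ]
```

A cron-style interval would call this against the log and `rm -rf` the matching archive directories on the droplet.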
{"from":"jondashkyle","message":"if it gets picked up in a substantial way will let you know","timestamp":1500068022867}
{"from":"pfrazee","message":"yeah","timestamp":1500068023203}
{"from":"ogd","message":"using dat to transfer a dat from one machine to another :D","timestamp":1500068597563}
{"from":"ogd","message":"jsut have to remember to backup content.secretkey and also transfer after clone","timestamp":1500068609111}
{"from":"ogd","message":"having a backup strategy for keys built in would be great","timestamp":1500068619575}
{"from":"pfrazee","message":"ogd: agree, I need that in beaker too","timestamp":1500069241267}
{"from":"ogd","message":"transferring ownership of a dat is too hard right now...we should make it easier. you have to copy ~/.dat/secret_key/<discovery-key> but theres no way to print out the discovery key atm. you also have to flip a bit in .dat/metadata.ogd","timestamp":1500071005436}
{"from":"louisc","message":"jondashkyle: super cool. are you finding that data is actually getting re-seeded? or is it all just ending up on the one server?","timestamp":1500071037500}
{"from":"jondashkyle","message":"louisc: good question. i'm saving a log of archive keys as they're added. wonder if there would be an easy way of going through those and seeing which are still being seeded","timestamp":1500071080277}
{"from":"louisc","message":"jondashkyle: the mention of vine on your webpage made me die inside for a moment. Not archiving all of my likes on vine was the worst mistake I ever made","timestamp":1500071775875}
{"from":"jondashkyle","message":"i think the repetition of high-profile shutterings like this, where it isn't a slow decline but a very usable, active community that gets shuttered because of funding issues, will hopefully drive people to explore alternatives more","timestamp":1500071847076}
{"from":"jondashkyle","message":"e.g. i have my data, and give access to others to use it","timestamp":1500071920780}
{"from":"jondashkyle","message":"(all the familiar stuff)","timestamp":1500071930157}
{"from":"jondashkyle","message":"but yeah, something i should mention on the page is that the music will still be around, you know? it’s all the likes and connections which are lost","timestamp":1500071954516}
{"from":"jondashkyle","message":"the *community*","timestamp":1500071961770}
{"from":"jondashkyle","message":"so much of tech forgets that","timestamp":1500071982871}
{"from":"louisc","message":"jondashkyle: for sure. coming up with really usable ways of doing p2p on mobile I think will be the tipping point","timestamp":1500071993234}
{"from":"jondashkyle","message":"yep","timestamp":1500072052514}
{"from":"louisc","message":"jondashkyle: for sure. tech makes up a small part of it. there's larger topics around curation, aggregation, governance and community that I think are more important to the future of where decentralised solutions can go. The tech makes it possible, but the non tech gets it over the line","timestamp":1500072074090}
{"from":"louisc","message":"jondashkyle: although I'm not a fan of cdm.link, the thought piece there on Soundcloud shuttering hit the spot for me","timestamp":1500072110277}
{"from":"jondashkyle","message":"yeah, i mean, it's entering the zeitgeist more w/ the news cycle talking about election hacking, etc…","timestamp":1500072197019}
{"from":"jondashkyle","message":"have been thinking that data awareness is going to grow culturally in a similar way to how organic foods during the 80s grew as a market","timestamp":1500072230558}
{"from":"jondashkyle","message":"i think people are trying to scale too quickly sometimes, everyone is stepping on everyone else’s toes","timestamp":1500072260441}
{"from":"jondashkyle","message":"trying to nurture real smaller communities is the way to get the ball rolling","timestamp":1500072273530}
{"from":"jondashkyle","message":"i mean, sort of adhering to the aesthetics of a “vernacular web” on this page (sort of aggressive typesetting, pseudo-brutalist whatever) contextualized it differently for a certain group of people who connect w/ that understanding of design","timestamp":1500072357630}
{"from":"jondashkyle","message":"that's why the page is sort of ugly looking, and doesn't look like just another tech demo page","timestamp":1500072380955}
{"from":"louisc","message":"jondashkyle: those decisions translate well","timestamp":1500072403760}
{"from":"jondashkyle","message":"designers are gonna have to understand it to help push those conversations in the agencies or studios doing client work before it makes it to the average user","timestamp":1500072425112}
{"from":"jondashkyle","message":"(i mean, this is just a small part of the pie)","timestamp":1500072431710}
{"from":"louisc","message":"jondashkyle: I'm not a designer myself, but I'm reading more and more things lately that seem to point in this direction. designers and developers are cottoning onto this conscious decision making","timestamp":1500072491641}
{"from":"louisc","message":"I've even worked in places recently where there were active decisions \"not to look like silicon valley\"","timestamp":1500072522679}
{"from":"louisc","message":"actively moving the user out of that headspace","timestamp":1500072537528}
{"from":"louisc","message":"jondashkyle: vine was kind of an interesting example. although I think \"mainstream vine\" kind of grew into one of those things we all identify as cringe tech, there were a lot of really interesting underground pockets to it, that you sensed could grow out into their own communities themselves. I noticed for example a lot of beat makers and labels operating in california were using it to push their work","timestamp":1500072668848}
{"from":"louisc","message":"building interesting folios of visual work that co-incided with their physical output","timestamp":1500072683009}
{"from":"jondashkyle","message":"totally","timestamp":1500072692343}
{"from":"jondashkyle","message":"yeah, i mean the role of the underground is crucial to these products being successful. the people using vine were coming to watch 1% of the videos, the most popular ones.","timestamp":1500072746163}
{"from":"jondashkyle","message":"but the people producing that 1% weren’t superstars, you know? they just made good stuff. attention as currency, etc…","timestamp":1500072778614}
{"from":"jondashkyle","message":"louisc: have to run but lets pick it back up soon","timestamp":1500072800587}
{"from":"louisc","message":"jondashkyle: ✌️","timestamp":1500072819769}
{"from":"dat-gitter","message":"(e-e-e) A friend just shared this with me, and I figure it might be of interest to some people lurking here - Coconut south east asia digital rights camp - http://mailchi.mp/engagemedia/were-hiring-video-for-change-program-manager-3220401?e=5ade6c9f18","timestamp":1500075347267}
{"from":"karissa","message":"jondashkyle: looks awesome! great job","timestamp":1500080548939}
{"from":"karissa","message":"jondashkyle: tried to get mine but it looks like the dat went dormant, maybe i didn't grab it in time","timestamp":1500083301538}
{"from":"jondashkyle","message":"karissa: there is a 15 minute window after the dat has been created when you can clone, after that the dat is cleared from server","timestamp":1500095622334}
{"from":"barnie","message":"hi all! added 2 cts to 'Discussion: Positioning, Vision and future direction of the Dat Project' (https://github.com/datproject/dat/issues/824)","timestamp":1500112749695}
{"from":"barnie","message":"interesting?","timestamp":1500112754730}
{"from":"barnie","message":"join? love feedback","timestamp":1500112831654}
{"from":"mafintosh","message":"barnie: seems like a good quality thread! will dig in later today","timestamp":1500112915532}
{"from":"mafintosh","message":"thanks for helping out :)","timestamp":1500112919631}
{"from":"barnie","message":"no problem. i enjoy it","timestamp":1500112945405}
{"from":"barnie","message":"mafintosh: btw, i've been working on test-driving hypercore on mobile with react-native, like we talked about earlier","timestamp":1500114172503}
{"from":"barnie","message":"prototype should have unit tests ported and working for all transitive dat dependencies","timestamp":1500114217945}
{"from":"barnie","message":"but i'm still learning technology, thus slow","timestamp":1500114241458}
{"from":"barnie","message":"and find myself in techstack-setup-hell","timestamp":1500114256172}
{"from":"barnie","message":"want to debug through node_modules to find how shims should be modified / adapted","timestamp":1500114289212}
{"from":"barnie","message":"but debugging proved to be a huge PITA untill now. Was trying to use Jest","timestamp":1500114322167}
{"from":"barnie","message":"now going to switch to mocha, jasmine and/or karma","timestamp":1500114349892}
{"from":"substack","message":"why do you need to do this?","timestamp":1500114498510}
{"from":"barnie","message":"jest was running tests directly against my code, not the transpiled code","timestamp":1500114559636}
{"from":"barnie","message":"thus ignoring the shims and succeeding where it shouldn't","timestamp":1500114579001}
{"from":"barnie","message":"maybe it could work, but i didn't get good feedback from jest team","timestamp":1500114608586}
{"from":"barnie","message":"https://github.com/facebook/jest/issues/4028","timestamp":1500114631468}
{"from":"barnie","message":"there were other issues with node v8.. debugging cost me 2,5 days till now","timestamp":1500114730159}
{"from":"barnie","message":"https://stackoverflow.com/questions/45056952/debugging-jest-unit-tests-with-breakpoints-in-vs-code-with-react-native","timestamp":1500114769293}
{"from":"louisc","message":"can a hyperdrive archive's history be modified? that is, if an archive has 100 changes, could change number 50 be removed? I was thinking about this while reading a blog post about beaker's ability to look at old versions of a website directly from the URL (eg. dat://beakerbrowser.com+100/)","timestamp":1500121018957}
{"from":"dat-gitter","message":"(e-e-e) louisc: I cant imagine this being possible because of the append only nature and how data is verified. I could be wrong.","timestamp":1500121313185}
{"from":"louisc","message":"e-e-e I suspected this would be the case, the issue of verification. I thought I remembered mafintosh saying on Twitter at some point that append-only logs could be modified, but I might have misinterpreted that.","timestamp":1500121482583}
{"from":"dat-gitter","message":"(e-e-e) Perhaps they could be totally rewritten, but then how would a remote dat resolve those changes?","timestamp":1500121698444}
{"from":"barnie","message":"louisc: i am not that familiar with dat inner workings yet, so I may sound stupid, but couldn't you add an additional change to the log that reverses nr 50? Or do you want to physically remove e.g. sensitive data?","timestamp":1500122778722}
{"from":"barnie","message":"similar to event sourcing","timestamp":1500122791831}
{"from":"louisc","message":"barnie: an example scenario might be that you accidentally publish sensitive data to your beaker website (or dat archive), and you want to remove references to that sensitive data. with beaker, users can navigate back to any revision of your website, so is there a way modifying the archive history?","timestamp":1500123525713}
{"from":"barnie","message":"louisc: ah, then my suggestion won't work","timestamp":1500126551578}
{"from":"mafintosh","message":"louisc: you can safely remove old entries","timestamp":1500133787783}
{"from":"mafintosh","message":"just not overwrite them","timestamp":1500133794766}
{"from":"mafintosh","message":"we actually have an api for it as well, .clear(start, [end])","timestamp":1500133824877}
{"from":"pfrazee","message":"running eslint for the first time on beaker","timestamp":1500135559631}
{"from":"pfrazee","message":"pray for me","timestamp":1500135577523}
{"from":"barnie","message":"pfrazee: amen","timestamp":1500135732986}
{"from":"pfrazee","message":"🙏","timestamp":1500135899674}
{"from":"barnie","message":"did it help?","timestamp":1500136123064}
{"from":"pfrazee","message":"which god did you pray to","timestamp":1500136130998}
{"from":"barnie","message":"ha ha, the rational one","timestamp":1500136155370}
{"from":"pfrazee","message":"oh shoot I meant to pray to the god of being lazy and not having to fix all this code","timestamp":1500136174186}
{"from":"barnie","message":"lol !!","timestamp":1500136188319}
{"from":"barnie","message":"should've prayed to most of them, ensuring you don't miss out. sorry.","timestamp":1500136260758}
{"from":"pfrazee","message":"whew, done. only one bug found!","timestamp":1500142915267}
{"from":"pfrazee","message":"https://github.com/beakerbrowser/beaker/pull/605","timestamp":1500142995962}
{"from":"barnie","message":"nice :) prayers were heard after all","timestamp":1500145085578}
{"from":"dat-gitter","message":"(whilo) hey","timestamp":1500145550954}
{"from":"dat-gitter","message":"(whilo) I am one of the core devs of replikativ and its initial designer","timestamp":1500145585202}
{"from":"dat-gitter","message":"(whilo) I think barnie discussed some important points on IRC with blahblah here: https://github.com/datproject/dat/issues/824","timestamp":1500145636895}
{"from":"barnie","message":"and I'll be here for additional discussion if you prefer","timestamp":1500145692711}
{"from":"dat-gitter","message":"(whilo) I would like to join the discussion of how to move forward. I just come from a lightkone workshop in Portugal about edge computing, the follow-up project of syncfree in EU horizon 2020 project.","timestamp":1500145695043}
{"from":"dat-gitter","message":"(whilo) barnie, what would you like to see concretely?","timestamp":1500145745228}
{"from":"barnie","message":"well, on a very general level I would like to see successful technology frameworks for decentralized computing","timestamp":1500145796555}
{"from":"barnie","message":"with regards to dat I would like it to be positioned more broadly so that obvious, but slightly out-of-scope use cases get the chance to come to fruition","timestamp":1500145855602}
{"from":"dat-gitter","message":"(whilo) ok, do you have a particular application that you find interesting?","timestamp":1500145896663}
{"from":"barnie","message":"i see dat as a potential decentralized messaging + application framework.","timestamp":1500145897367}
{"from":"barnie","message":"its positioning is too file-oriented imho","timestamp":1500145911171}
{"from":"dat-gitter","message":"(whilo) distributed computing is subject to many constraints and tradeoffs, different people have different needs","timestamp":1500145916577}
{"from":"barnie","message":"yes, the application is a new kind of social network","timestamp":1500145934256}
{"from":"dat-gitter","message":"(whilo) yes, i don't think a filesystem view will cut it, but then this is what developers want. that is why s3 is the way it is","timestamp":1500145949139}
{"from":"dat-gitter","message":"(whilo) very nice","timestamp":1500145956965}
{"from":"dat-gitter","message":"(whilo) yes, i think social networks are a good example","timestamp":1500145965410}
{"from":"barnie","message":"with mobile devices (far off where dat currently is, but not impossible it seems)","timestamp":1500145968693}
{"from":"dat-gitter","message":"(whilo) what kind of computation cannot be done in a federated way for you? e.g. as in diaspora or mastodon","timestamp":1500145989278}
{"from":"barnie","message":"with hypercore you can basically exchange any data stream","timestamp":1500145997529}
{"from":"barnie","message":"might as well be messages","timestamp":1500146002553}
{"from":"dat-gitter","message":"(whilo) ok. i don't know about the internals of hypercore","timestamp":1500146018046}
{"from":"dat-gitter","message":"(whilo) i am interested in a scalable p2p pub-sub layer that is deployable in the browser","timestamp":1500146041774}
{"from":"barnie","message":"then you can create more complex apps using message collaboration (Martin Fowler)","timestamp":1500146048493}
{"from":"dat-gitter","message":"(whilo) (and on other runtimes)","timestamp":1500146050158}
{"from":"barnie","message":"well, I was thinking of javascript as well, that runs in the browser and - via react native - on mobile as well","timestamp":1500146088827}
{"from":"dat-gitter","message":"(whilo) ok. the problem is that you have to pin down a data-model and describe how concurrency relates to it","timestamp":1500146097813}
{"from":"dat-gitter","message":"(whilo) messaging is too low-level and just a primitive","timestamp":1500146106830}
{"from":"barnie","message":"yes.","timestamp":1500146115462}
{"from":"barnie","message":"on the mobile device you would basically have a rich application (all layers present)","timestamp":1500146137802}
{"from":"dat-gitter","message":"(whilo) replikativ's pub-sub layer runs in the browser and in react native already, but to really scale we need to have a better overlay","timestamp":1500146141134}
{"from":"dat-gitter","message":"(whilo) i would like to have that factored as a library","timestamp":1500146163733}
{"from":"barnie","message":"you have the hypercore layer as lowest level, then an event sourcing layer and maybe cqrs to do domain modeling","timestamp":1500146173179}
{"from":"barnie","message":"and on top modularized ui","timestamp":1500146192041}
{"from":"barnie","message":"whilo, are you from the replikativ team?","timestamp":1500146236654}
{"from":"dat-gitter","message":"(whilo) i am the main dev so to speak","timestamp":1500146249043}
{"from":"dat-gitter","message":"(whilo) and its designer","timestamp":1500146257712}
{"from":"barnie","message":"cool. it's a great thing you've got going","timestamp":1500146269003}
{"from":"dat-gitter","message":"(whilo) we are a team now, though and i don't own it in any way","timestamp":1500146280383}
{"from":"barnie","message":"you've probably also read my concerns regarding your size and approach","timestamp":1500146289319}
{"from":"dat-gitter","message":"(whilo) i am very aware of it","timestamp":1500146299146}
{"from":"barnie","message":"i'd love to see you succeed","timestamp":1500146316530}
{"from":"dat-gitter","message":"(whilo) i think ipfs is best positioned to succeed right now, although i don't like its approach. it does not focus on state management, but on a read-only filesystem mostly and has weak write semantics","timestamp":1500146340903}
{"from":"dat-gitter","message":"(whilo) i also think it will not scale very well, depending on the application, but then these issues can be solved","timestamp":1500146357292}
{"from":"dat-gitter","message":"(whilo) thanks!","timestamp":1500146365062}
{"from":"barnie","message":"i tried it a bit and it broke frequently","timestamp":1500146367911}
{"from":"dat-gitter","message":"(whilo) oh, ok","timestamp":1500146378295}
{"from":"dat-gitter","message":"(whilo) what have you tried?","timestamp":1500146382358}
{"from":"barnie","message":"did run on tablet and mobile for a while though","timestamp":1500146383929}
{"from":"barnie","message":"IPFS","timestamp":1500146387110}
{"from":"dat-gitter","message":"(whilo) i mean what specific workload","timestamp":1500146398111}
{"from":"dat-gitter","message":"(whilo) writing a lot, for instance","timestamp":1500146407519}
{"from":"barnie","message":"no, just the demo, didn't go deep","timestamp":1500146413602}
{"from":"dat-gitter","message":"(whilo) i just used it like bittorrent a few times","timestamp":1500146420588}
{"from":"dat-gitter","message":"(whilo) oh, ok","timestamp":1500146424796}
{"from":"dat-gitter","message":"(whilo) the ipfs guys are well connected and financed. juan benet is an investor in a new hedge-fund and their filecoin ICO was for accredited investors only","timestamp":1500146493756}
{"from":"dat-gitter","message":"(whilo) my biggest problem right now is to have a positive economic feedback","timestamp":1500146513076}
{"from":"barnie","message":"i recently investigated blockchain and don't trust it","timestamp":1500146517685}
{"from":"barnie","message":"the ico's are bad as well","timestamp":1500146531910}
{"from":"dat-gitter","message":"(whilo) it does not matter if they get tons of money","timestamp":1500146532825}
{"from":"dat-gitter","message":"(whilo) yes, they are totally bad. but filecoin is not stupid","timestamp":1500146546278}
{"from":"dat-gitter","message":"(whilo) and if they have the money they can still fix stuff","timestamp":1500146563616}
{"from":"barnie","message":"i probably could be a millionaire right now, but would have thrown my ethics and values in the dustbin","timestamp":1500146566009}
{"from":"dat-gitter","message":"(whilo) i am not talking about getting richt personally","timestamp":1500146582473}
{"from":"dat-gitter","message":"(whilo) rich","timestamp":1500146585136}
{"from":"barnie","message":"no i know, its not you, but the whole thing around blockchain :)","timestamp":1500146601407}
{"from":"dat-gitter","message":"(whilo) i just think that an open replication system for data and shared applications need to have some positive economic feedback","timestamp":1500146606672}
{"from":"dat-gitter","message":"(whilo) the blockchain people get one thing right","timestamp":1500146623062}
{"from":"dat-gitter","message":"(whilo) they capture the imagination of what can be done with distributed databases","timestamp":1500146640887}
{"from":"barnie","message":"yes, but would you like to be sponsored by people that are unwittingly in a ponzi scheme..","timestamp":1500146653861}
{"from":"dat-gitter","message":"(whilo) actually they are a very boring and obscure area of research, but they managed to make them really popular by opening up infrastructure","timestamp":1500146670514}
{"from":"barnie","message":"i liked bigchaindb as well when i first saw it. Don't know much about filecoin specifically","timestamp":1500146690954}
{"from":"dat-gitter","message":"(whilo) i would like to have a road-map to success that supports people working on the open infrastructure","timestamp":1500146714015}
{"from":"barnie","message":"hmm yes, that would be good","timestamp":1500146733454}
{"from":"dat-gitter","message":"(whilo) i don't care about the speculators. actually they could also do something good by being stupidly greedy and financing good projects, so i am ambiguous there. i don't think that the blockchain projects will really work in the real world though","timestamp":1500146777313}
{"from":"dat-gitter","message":"(whilo) so at the moment i don't think they really help solve the problem","timestamp":1500146790873}
{"from":"barnie","message":"i fully agree","timestamp":1500146805611}
{"from":"dat-gitter","message":"(whilo) i will meet the people behind https://decodeproject.eu/ in two weeks and give a talk about replikativ there","timestamp":1500146833127}
{"from":"dat-gitter","message":"(whilo) they also explored blockchain ideas without going into the money business, just viewing it as a shared distributed system","timestamp":1500146863278}
{"from":"barnie","message":"even though the technology and project are maybe untrustworthy, there are some nice ideas around conceptually","timestamp":1500146867230}
{"from":"barnie","message":"that's for sure","timestamp":1500146909788}
{"from":"dat-gitter","message":"(whilo) yes, some have for instance voting mechanisms to let the community steer the project","timestamp":1500146911608}
{"from":"dat-gitter","message":"(whilo) i really like that","timestamp":1500146914168}
{"from":"pfrazee","message":"(dont let me interrupt this conversation, just wanted to drop in a link related to some beaker & dat API semantics https://github.com/beakerbrowser/beaker/issues/606)","timestamp":1500146918977}
{"from":"barnie","message":"I liked that eGovernment effort, however weird, or far-reaching the idea. Forgot the name","timestamp":1500146965106}
{"from":"dat-gitter","message":"(whilo) but concretely, i would like to join forces with the dat-project","timestamp":1500146968382}
{"from":"pfrazee","message":"I can offer a few thoughts on all this","timestamp":1500146986266}
{"from":"dat-gitter","message":"(whilo) @pfrazee, great :)","timestamp":1500146997636}
{"from":"pfrazee","message":"barnie: dat's main focus is files but actually the way I like to describe dat is, \"it's a set of network data structures, with a core tech that's based on bittorrent\"","timestamp":1500147033311}
{"from":"barnie","message":"i am between an ended startup and considering new career options, after a small sabbatical. I really enjoy helping out currently and time","timestamp":1500147039019}
{"from":"pfrazee","message":"right now dat has the append-only feed and the files archive. It'll add a keyvalue store soon too","timestamp":1500147048013}
{"from":"dat-gitter","message":"(whilo) pfrazee, are you a dev?","timestamp":1500147073829}
{"from":"pfrazee","message":"@whilo I work on https://beakerbrowser.com, so I'm one of the bigger consumers of the dat tech","timestamp":1500147092272}
{"from":"barnie","message":"yes the main focus is clear. I was just explaining to whilo that hypercore in fact doesn't care. It streams data","timestamp":1500147092730}
{"from":"pfrazee","message":"barnie: right yeah","timestamp":1500147102347}
{"from":"pfrazee","message":"so the files data-structure / API can be used for messaging in a pull based system","timestamp":1500147111077}
{"from":"pfrazee","message":"eg, you could treat a folder like an outbox","timestamp":1500147131403}
{"from":"barnie","message":"so it may stream both events/messages as well as files. actually that's how i would use it","timestamp":1500147136200}
{"from":"pfrazee","message":"yeah","timestamp":1500147147311}
{"from":"barnie","message":"btw i mentioned wrong term before. its 'event collaboration' described by Fowler I was refering to.]","timestamp":1500147182546}
{"from":"dat-gitter","message":"(whilo) I will be back in 10 mins, sorry.","timestamp":1500147184947}
{"from":"pfrazee","message":"we've been (trying to be) thinking pretty carefully about the entire stack needed, and dat's 3 primitives make a pretty solid core. I think you need a few more things on top of it","timestamp":1500147216965}
{"from":"pfrazee","message":"key distribution for identity, which can be built on top of the dat file archives (using an archive to represent a user)","timestamp":1500147232208}
{"from":"barnie","message":"yes, certainly","timestamp":1500147238137}
{"from":"barnie","message":"i described some earlier...see above","timestamp":1500147253497}
{"from":"pfrazee","message":"signalling and proxies to create synchronous channels (probably w/webrtc)","timestamp":1500147253589}
{"from":"barnie","message":"you mean to get into a mobile device with dat?","timestamp":1500147291367}
{"from":"pfrazee","message":"and data aggregation","timestamp":1500147291667}
{"from":"barnie","message":"yes for that i was thinking of event sourcing (very simple impl)","timestamp":1500147313494}
{"from":"pfrazee","message":"aggregation meaning, the ability to query across a number of archives and their files for published information, along the lines of \"give me all the events published by people I follow\"","timestamp":1500147339001}
{"from":"barnie","message":"ah yes, coordination + management of feeds, dats etc. Yes","timestamp":1500147371599}
{"from":"pfrazee","message":"barnie: mobile will eventually happen, yeah","timestamp":1500147383255}
{"from":"barnie","message":"that's in a logic layer above hypercore layer","timestamp":1500147392232}
{"from":"barnie","message":"and the event sources","timestamp":1500147401099}
{"from":"pfrazee","message":"yeah","timestamp":1500147404725}
{"from":"barnie","message":"i hope so","timestamp":1500147418178}
{"from":"barnie","message":"i tried to setup some project already","timestamp":1500147429780}
{"from":"jhand","message":"Got dat working on android via termux this week :)","timestamp":1500147434543}
{"from":"barnie","message":"landed in js-stack-setup-hell","timestamp":1500147440065}
{"from":"pfrazee","message":"jhand: yeah I saw that, awesome","timestamp":1500147442773}
{"from":"jhand","message":"Haven't really had time to play with it yet but worked great in both directions","timestamp":1500147471443}
{"from":"barnie","message":"problem with termux is, you need to install 2 apps as a user.","timestamp":1500147478124}
{"from":"jhand","message":"Right","timestamp":1500147490306}
{"from":"barnie","message":"to really use it you should integrate the code in your own app","timestamp":1500147494372}
{"from":"barnie","message":"i like what termux does","timestamp":1500147511173}
{"from":"barnie","message":"i was trying to ReactNativify and shim the node objects, seeing as a js rewrite would be way too much for me","timestamp":1500147580505}
{"from":"barnie","message":"one thing that is nice in the social network i envision, is there are mostly only FoaF interactions/transactions, which lessens eventual consistency issues in cases where you don't want a big data replication biz going on under the hoods.","timestamp":1500147980783}
{"from":"dat-gitter","message":"(whilo) @pfrazee, nice i have done something similar to beakerbrowser 5 years ago, it motivated me to study CS and start working in this direction :)","timestamp":1500148320061}
{"from":"pfrazee","message":"@whilo cool! what was the project?","timestamp":1500148336131}
{"from":"pfrazee","message":"@whilo what you're doing with replikativ makes sense. Just so you see what's going on with Dat, we haven't focused as much on record types. The key-value store will be the first CRDT, and that will be a simple multi-value register (conflicts are kept). So despite the similar domain, there's not a direct overlap","timestamp":1500148352793}
{"from":"pfrazee","message":"browsing over your docs it looks like you're building a set of CRDTs","timestamp":1500148377286}
{"from":"dat-gitter","message":"(whilo) @pfrazee https://video.golem.de/audio-video/4179/magnet-uris-unter-kde.html","timestamp":1500148430208}
{"from":"pfrazee","message":"@whilo oh very cool, yeah that's right in line with us","timestamp":1500148473547}
{"from":"pfrazee","message":"@whilo how are things going with replikativ so far?","timestamp":1500148548310}
{"from":"dat-gitter","message":"(whilo) they are going fine in general, but i would like to leverage some dat/bittorrent/kademelia like dht infrastructure for immutable data","timestamp":1500148605961}
{"from":"pfrazee","message":"yeah","timestamp":1500148623451}
{"from":"dat-gitter","message":"(whilo) i am also thinking a lot about a good pub-sub layer. i have a design for a gossip based protocol","timestamp":1500148646864}
{"from":"pfrazee","message":"hypercore is dat's append-only log module, https://github.com/mafintosh/hypercore","timestamp":1500148660532}
{"from":"pfrazee","message":"uses a DHT to lookup, signs all entries, addressed by a pubkey","timestamp":1500148682369}
{"from":"dat-gitter","message":"(whilo) i have integrated ipfs in our filesync demo: https://github.com/replikativ/filesync-replikativ","timestamp":1500148684310}
{"from":"dat-gitter","message":"(whilo) but it is not in process, i need to run a separate thing","timestamp":1500148695045}
{"from":"pfrazee","message":"some neat attributes to it https://beakerbrowser.com/2017/06/19/cryptographically-secure-change-feeds.html","timestamp":1500148702764}
{"from":"dat-gitter","message":"(whilo) i would like to have it deployable within my stack","timestamp":1500148704688}
{"from":"pfrazee","message":"yeah if you're in a node stack, then dat can embed just fine","timestamp":1500148722267}
{"from":"pfrazee","message":"if not then youre out of luck","timestamp":1500148752743}
{"from":"dat-gitter","message":"(whilo) i see. well replikativ runs on node as well, we have a multi-runtime Clojure(Script) codebase","timestamp":1500148779314}
{"from":"pfrazee","message":"yeah that's cool","timestamp":1500148787381}
{"from":"dat-gitter","message":"(whilo) https://www.npmjs.com/package/replikativ","timestamp":1500148793955}
{"from":"dat-gitter","message":"(whilo) I am really happy with Clojure, although it is not as popular. I still can go into all the popular environments and integrate nicely","timestamp":1500148835871}
{"from":"dat-gitter","message":"(whilo) Something that IPFS or pure js solutions cannot do as easily","timestamp":1500148850243}
{"from":"barnie","message":"pfrazee: on your github issue: I think in an eventual consistent environment you should not try to implement ACID behaviour to tackle optimistic concurrency issues.","timestamp":1500148927694}
{"from":"dat-gitter","message":"(whilo) But anyway, I am more thinking about how to define a long term shared infrastructure vision and pooling resources than about pure tech decisions.","timestamp":1500148935361}
{"from":"dat-gitter","message":"(whilo) @pfrazee, why can't you deploy beaker browser in pure js?","timestamp":1500148984662}
{"from":"pfrazee","message":"barnie: I'm inclined to agree, but it depends on whether it's worthwhile to have device-local transactions despite the lack of network transactions","timestamp":1500149019108}
{"from":"pfrazee","message":"@whilo we alter the browser's behavior in ways that browsers dont normally allow","timestamp":1500149034806}
{"from":"pfrazee","message":"even with extensions","timestamp":1500149041514}
{"from":"barnie","message":"pfrazee: you could maybe benefit from event sourcing as well. runs also on you average run-of-the-mill ATM machine","timestamp":1500149106771}
{"from":"dat-gitter","message":"(whilo) @pfrazee, i stopped back then the project with the magnet-url, because (besides being buggy and limited to KDE) i knew that main stream browser vendors would never support a p2p io protocol","timestamp":1500149261353}
{"from":"pfrazee","message":"you figure?","timestamp":1500149275068}
{"from":"dat-gitter","message":"(whilo) but i think you could nowadays built something like it in pure js and deploy it in the browser","timestamp":1500149277988}
{"from":"barnie","message":"with webrtc et al","timestamp":1500149319840}
{"from":"dat-gitter","message":"(whilo) you won't get the magnet protocol handler, but you could use the host part of the URL and encode the rest in the fragment after #","timestamp":1500149342516}
{"from":"dat-gitter","message":"(whilo) yes, there is already bittorrent for the browser built on top of webrtc","timestamp":1500149367000}
{"from":"barnie","message":"cool!","timestamp":1500149378814}
{"from":"dat-gitter","message":"(whilo) replikativ only focuses on distributed write coordination, cooperating with a solid project for scalable reading, like bittorrent in the browser would help it enormously.","timestamp":1500149414735}
{"from":"pfrazee","message":"yeah, webtorrent. Cool project","timestamp":1500149442055}
{"from":"pfrazee","message":"@whilo we'll see how successful beaker is. I think if we can demonstrate the value, it could end up standardized","timestamp":1500149477680}
{"from":"dat-gitter","message":"(whilo) pfrazee you think a p2p data exchange layer could become part of w3c?","timestamp":1500149525836}
{"from":"dat-gitter","message":"(whilo) I have a really hard time imagining it","timestamp":1500149533311}
{"from":"dat-gitter","message":"(whilo) it frustrated me a lot to realize it","timestamp":1500149547338}
{"from":"pfrazee","message":"@whilo for sure. p2p solves a huge problem with data silos because any user can self publish an unlimited set of domains","timestamp":1500149584197}
{"from":"dat-gitter","message":"(whilo) i think it has to be pure js at least in the beginning. important for webtorrent would be a way to have webtorrent running in the background and not in each page","timestamp":1500149593077}
{"from":"pfrazee","message":"it scales well and performs well, and I think it will simplify certain kinds of application dev. I really think itll be a no brainer when the stack has matured","timestamp":1500149638554}
{"from":"substack","message":"a bunch of languages will compile to webasm in the future","timestamp":1500149896450}
{"from":"substack","message":"webasm is the new jvm, but not as crap","timestamp":1500149910887}
{"from":"pfrazee","message":"yeah totally","timestamp":1500149985595}
{"from":"dat-gitter","message":"(whilo) why is the jvm crap? it is used for the biggest infrastructure stacks in the backend.","timestamp":1500150307174}
{"from":"dat-gitter","message":"(whilo) Clojure integrates with the language semantics of the host, that is a tighter integration than just being able to emit bytecode","timestamp":1500150358579}
{"from":"barnie","message":"agree, the jvm is not crap, some language impls on top of it maybe","timestamp":1500150634410}
{"from":"blahah","message":"well - crap is a relative term","timestamp":1500151968582}
{"from":"blahah","message":"except the noun","timestamp":1500151974411}
{"from":"blahah","message":"it's big, meaning the (network/storage) resource constraints are relatively high","timestamp":1500152046454}
{"from":"jondashkyle","message":"lol @ except the noun","timestamp":1500152069680}
{"from":"pfrazee","message":"plus I heard the jvm doesnt even run javascript","timestamp":1500152078349}
{"from":"blahah","message":"it's slow to start, constraining the ways it can be abstracted into systems","timestamp":1500152081866}
{"from":"blahah","message":"and it feels super icky because corporate java directory structures, classes and patterns are associated with it","timestamp":1500152138746}
{"from":"blahah","message":"pfrazee: case closed","timestamp":1500152157573}
{"from":"pfrazee","message":"blahah: I have a talent for getting to the heart of the issue","timestamp":1500152180693}
{"from":"blahah","message":"jvm do u even node?","timestamp":1500152183301}
{"from":"blahah","message":"pfrazee: fathming you mean?","timestamp":1500152202450}
{"from":"blahah","message":";)","timestamp":1500152206884}
{"from":"pfrazee","message":"blahah: haha yes quite","timestamp":1500152209938}
{"from":"pfrazee","message":"+pfrazee -jvm","timestamp":1500152219528}
{"from":"blahah","message":":D","timestamp":1500152230156}
{"from":"barnie","message":"pfrazee: jvm does run javascript","timestamp":1500152738659}
{"from":"pfrazee","message":"barnie: you need to use a js engine right?","timestamp":1500152757259}
{"from":"pfrazee","message":"eg v8 + jvm","timestamp":1500152777110}
{"from":"barnie","message":"no, was just refering to what was mentioned earlier","timestamp":1500152803120}
{"from":"barnie","message":"i focus on android + ios via RN","timestamp":1500152824295}
{"from":"barnie","message":"but jvm has nashorn, ringojs and probably others","timestamp":1500152855606}
{"from":"barnie","message":"both multi-threaded, whereas node is not","timestamp":1500152916209}
{"from":"pfrazee","message":"barnie: no I mean, do those projects use a js engine? chakra or v8 or something?","timestamp":1500152960228}
{"from":"pfrazee","message":"or do those projects run the JS on the jvm directly?","timestamp":1500152975588}
{"from":"barnie","message":"both are a js engine of their own i believe","timestamp":1500152988156}
{"from":"barnie","message":"ever heard about vert.x? its polyglot programming","timestamp":1500153022923}
{"from":"dat-gitter","message":"(whilo) they are 100% standard compliant implementations. nashorn has almost v8 performance with tight java integration and multithreading","timestamp":1500153027894}
{"from":"barnie","message":"uses js on jvm","timestamp":1500153029943}
{"from":"barnie","message":"combine with java and any other language in a single app","timestamp":1500153048656}
{"from":"dat-gitter","message":"(whilo) the problem are the native libraries of node.js","timestamp":1500153049486}
{"from":"pfrazee","message":"oh that's interesting, I didnt know that","timestamp":1500153057068}
{"from":"barnie","message":"write modules in your prefered language","timestamp":1500153057947}
{"from":"barnie","message":"quite cool","timestamp":1500153060879}
{"from":"dat-gitter","message":"(whilo) like with any other classical runtime that drops to C libraries for performance","timestamp":1500153068973}
{"from":"dat-gitter","message":"(whilo) the jvm ecosystem is far more self-contained than node.js","timestamp":1500153080320}
{"from":"barnie","message":"its part of eclipse foundation","timestamp":1500153082818}
{"from":"dat-gitter","message":"(whilo) on the jvm it is bad practice to use native libraries for portability","timestamp":1500153104089}
{"from":"barnie","message":"yes","timestamp":1500153124993}
{"from":"barnie","message":"the jvm is also blazingly fast in itself","timestamp":1500153150783}
{"from":"dat-gitter","message":"(whilo) this is why its ecosystem is different (similar to the clr). it is a problem with these traditional runtimes (Python, R, ruby and now node.js) that the native library loading is painful to port between runtimes","timestamp":1500153170319}
{"from":"dat-gitter","message":"(whilo) other multilanguage runtimes would already be common, like the jvm is","timestamp":1500153188846}
{"from":"dat-gitter","message":"(whilo) the jvm is not slower than node to startup. hello world takes <100 ms to run on my computer","timestamp":1500153213908}
{"from":"dat-gitter","message":"(whilo) the enterprise stacks are bloated, it is true","timestamp":1500153225802}
{"from":"dat-gitter","message":"(whilo) you can also run the jvm with a really small heap (32 MiB) for instance on a raspberry pi","timestamp":1500153248939}
{"from":"dat-gitter","message":"(whilo) the jvm has a lot of historical baggage, it is true","timestamp":1500153264151}
{"from":"dat-gitter","message":"(whilo) but it never broke backwards compatibility","timestamp":1500153271440}
{"from":"barnie","message":"so true, and such an enormous pain in js world","timestamp":1500153292098}
{"from":"dat-gitter","message":"(whilo) look at Python, Ruby or now the node.js ecosystem of how much fraction there is because people constantly change APIs","timestamp":1500153297082}
{"from":"barnie","message":"now at least","timestamp":1500153298484}
{"from":"dat-gitter","message":"(whilo) i can give you a jar file from 1997 and you can use it in your project today without any problems","timestamp":1500153346790}
{"from":"dat-gitter","message":"(whilo) i actually like node.js for its async first APIs and its appeal to a lean core system","timestamp":1500153370642}
{"from":"barnie","message":"yes, huge benefit, people often dont see that","timestamp":1500153373311}
{"from":"barnie","message":"but the async is simulated","timestamp":1500153388156}
{"from":"dat-gitter","message":"(whilo) but it was frustrating for me to find out that even many core js APIs differ between browser and node.js","timestamp":1500153395695}
{"from":"barnie","message":"that's also cool about vert.x ... scales easily on all cores","timestamp":1500153417564}
{"from":"dat-gitter","message":"(whilo) i also think that go-lang is ahead of the async approaches in js by a large margin","timestamp":1500153425996}
{"from":"barnie","message":"just type the number you want","timestamp":1500153427094}
{"from":"dat-gitter","message":"(whilo) i use go abstractions in clojure and all of replikativ is built on top of it. pub-sub on async channels","timestamp":1500153462886}
{"from":"dat-gitter","message":"(whilo) i think otherwise i could have never pulled of replikativ","timestamp":1500153474648}
{"from":"barnie","message":"whilo: the problem with clojure is its barrier to entry, for the rest it is really beautiful","timestamp":1500153505376}
{"from":"barnie","message":"i think it will never become a really big language in terms of users","timestamp":1500153541234}
{"from":"dat-gitter","message":"(whilo) i agree. i would like to move a bit more mainstream, if i wouldn't lose all the properties. but the lispy nature should very much appeal to js people","timestamp":1500153547449}
{"from":"dat-gitter","message":"(whilo) modern js is often fairly functional","timestamp":1500153556936}
{"from":"barnie","message":"functional designs are now everywhere","timestamp":1500153570847}
{"from":"dat-gitter","message":"(whilo) i agree, but it doesn't need to. since it is hosted and all primitives are js primitives, we can export a high-quality js interface","timestamp":1500153602579}
{"from":"dat-gitter","message":"(whilo) i think for distributed databases you want to have as strong language semantics as possible","timestamp":1500153630652}
{"from":"barnie","message":"http://vertx.io/","timestamp":1500153653885}
{"from":"dat-gitter","message":"(whilo) that is why many of the solid distributed dbs are written with erlang for example","timestamp":1500153656779}
{"from":"dat-gitter","message":"(whilo) i haven't used vertx yet","timestamp":1500153683196}
{"from":"barnie","message":"really cool, and simple","timestamp":1500153692837}
{"from":"barnie","message":"well intuitive","timestamp":1500153702366}
{"from":"barnie","message":"async apps can be hard to grasp","timestamp":1500153717910}
{"from":"barnie","message":"whilo: what are good distributed db candidates","timestamp":1500153747537}
{"from":"dat-gitter","message":"(whilo) riak, couchdb, rabbitmq and i know a lot of distributed systems research working in erlang, e.g. in https://syncfree.github.io/antidote/","timestamp":1500153874994}
{"from":"dat-gitter","message":"(whilo) in fact most people at the conference i were at were working in erlang","timestamp":1500153898225}
{"from":"dat-gitter","message":"(whilo) i like the ideas behind erlang, i have for instance ported its error handling to clojure (or at least similar error handling): https://github.com/replikativ/superv.async","timestamp":1500153943077}
{"from":"blahah","message":"interesting thanks whilo and barnie - did not know a bunch of that","timestamp":1500153971632}
{"from":"barnie","message":"yeah, i know riak, read nice things on couchdb (and pouchdb), used similar mq to rabbitmq. thanks.","timestamp":1500154072330}
{"from":"blahah","message":"never experienced fast jvm startup - possibly because in the fields the tools I've been using the tendency is to import massive libraries of generic stuff","timestamp":1500154109627}
{"from":"barnie","message":"antidote looks good","timestamp":1500154128311}
{"from":"barnie","message":"blahah: did you read the vert.x front page? size of it's core below 1 mb","timestamp":1500154180233}
{"from":"dat-gitter","message":"(whilo) for js there is also https://github.com/albertlinde/Legion/ from the syncfree group","timestamp":1500154192802}
{"from":"dat-gitter","message":"(whilo) replikativ is also semi-officially part of it","timestamp":1500154201012}
{"from":"barnie","message":"you know about old stuffz. there is a new modern jvm based trend","timestamp":1500154208855}
{"from":"dat-gitter","message":"(whilo) historically the jvm, similar to js, was also slow","timestamp":1500154229609}
{"from":"dat-gitter","message":"(whilo) and gc was a problem etc. but you don't have to argue about that with js people thankfully :)","timestamp":1500154249820}
{"from":"dat-gitter","message":"(whilo) javascript engines are amazing and it is a pleasure to work with clojurescript on js clients","timestamp":1500154268987}
{"from":"dat-gitter","message":"(whilo) react and react native are amazing in a functional pipeline","timestamp":1500154282183}
{"from":"blahah","message":"vert.x looks v interesting","timestamp":1500154284028}
{"from":"barnie","message":"i've worked with it. rocks.","timestamp":1500154313842}
{"from":"dat-gitter","message":"(whilo) we use figwheel for hot-code reloading https://www.youtube.com/watch?v=LW8v6Cr9BcM","timestamp":1500154369790}
{"from":"blahah","message":"i haven't done mobile dev since js floated to the top of my stack but react is just like bsod for my brain - maybe react-native would make it clear why","timestamp":1500154387771}
{"from":"barnie","message":"what is bsod (i'm dutch)","timestamp":1500154435622}
{"from":"blahah","message":"hmm i wonder if some of the v cool but v clunky tools for text mining in java could be made modern (fast startup, concise)","timestamp":1500154452088}
{"from":"blahah","message":"blue screen of death","timestamp":1500154461020}
{"from":"blahah","message":":)","timestamp":1500154464297}
{"from":"barnie","message":"ha ha.","timestamp":1500154478502}
{"from":"barnie","message":"but i don't know about that","timestamp":1500154485084}
{"from":"barnie","message":"rather like react, but not well-known with it yet","timestamp":1500154501374}
{"from":"blahah","message":"https://en.m.wikipedia.org/wiki/Blue_Screen_of_Death","timestamp":1500154513007}
{"from":"blahah","message":"i don't know why but everything about it feels like the wrong decision to me","timestamp":1500154534512}
{"from":"barnie","message":"yes, i know that one. started as a vb, then c# developer","timestamp":1500154546005}
{"from":"barnie","message":"can you give an example?","timestamp":1500154576941}
{"from":"barnie","message":"the deviation from w3c standards (ie the inlining of html rather than compliant html pages)","timestamp":1500154612977}
{"from":"blahah","message":"virtual dom, jsx, monorepos","timestamp":1500154622047}
{"from":"barnie","message":"so the whole bunch :) don't know about monorepos","timestamp":1500154649213}
{"from":"barnie","message":"but virtual dom doesn't sound bad to me. the opposite","timestamp":1500154664978}
{"from":"barnie","message":"its an innovate idea","timestamp":1500154676795}
{"from":"barnie","message":"and i've worked with real dom's","timestamp":1500154684689}
{"from":"barnie","message":"problematic","timestamp":1500154691255}
{"from":"barnie","message":"slow","timestamp":1500154696722}
{"from":"barnie","message":"large sites are using it","timestamp":1500154709821}
{"from":"blahah","message":"modern real dom is fast","timestamp":1500154710402}
{"from":"barnie","message":"that may be true...RN might walk too far in front of the parade","timestamp":1500154732386}
{"from":"barnie","message":"but w3c and browser vendors need to catch up also","timestamp":1500154749702}
{"from":"barnie","message":"w3c has lost value to me. I used to watch it every day, not anymore","timestamp":1500154774167}
{"from":"barnie","message":"sporadically","timestamp":1500154786999}
{"from":"blahah","message":"https://github.com/patrick-steele-idem/morphdom#isnt-the-dom-slow","timestamp":1500154787224}
{"from":"barnie","message":"nice, some valid points there","timestamp":1500154920693}
{"from":"barnie","message":"but still react native solves the issue of dual codebases (in a better way than cordova)","timestamp":1500154962469}
{"from":"barnie","message":"blahah: i'm calling it quits here, its about midnight. was nice talking to you again.","timestamp":1500155040804}
{"from":"blahah","message":"barnie same here, and same here :)","timestamp":1500155133094}
{"from":"barnie","message":":)","timestamp":1500155151222}
{"from":"dat-gitter","message":"(whilo) same here :)","timestamp":1500155168440}
{"from":"blahah","message":"whilo +1 :)","timestamp":1500155353475}
{"from":"jondashkyle","message":"also pretty cool, sound salvager has grabbed 2475 tracks so far","timestamp":1500155501166}
{"from":"jondashkyle","message":"🎉","timestamp":1500155514988}
{"from":"louisc","message":"mafintosh: .clear() looks good! if the author of the archive clears some data, is that change also synced to everyone else in the swarm?","timestamp":1500156092658}
{"from":"pfrazee","message":"jondashkyle: nice","timestamp":1500156173145}
{"from":"pfrazee","message":"louisc: no it just deletes the local cache","timestamp":1500156189041}
{"from":"pfrazee","message":"louisc: if youre the author and you have a peer/peers you trust to keep the data, you could clearly locally and redownload from the peers later","timestamp":1500156219482}
{"from":"pfrazee","message":"which IMO is pretty magical","timestamp":1500156237835}
{"from":"louisc","message":"pfrazee: the use case I was thinking was if you have a beaker website and accidentally publish some sensitive information which is then streamed out to peers, you can revise the website but peers of course can look back through revisions","timestamp":1500156328034}
{"from":"pfrazee","message":"louisc: yeah clearing locally would at least stop peers from getting the data again from you","timestamp":1500156371758}
{"from":"pfrazee","message":"but obviously there's no perfect fix to that situation","timestamp":1500156402512}
{"from":"louisc","message":"pfrazee: if I cleared a revision from my beaker site, which say, 5 peers had already downloaded, and then a 6th peer comes along, what happens there?","timestamp":1500156467088}
{"from":"pfrazee","message":"louisc: the other 5 peers will helpfully serve it to peer 6","timestamp":1500156486176}
{"from":"pfrazee","message":"louisc: that said, beaker doesnt download history unless it's requeted","timestamp":1500156521222}
{"from":"pfrazee","message":"requested*","timestamp":1500156523087}
{"from":"pfrazee","message":"so if you very quickly deleted a sensitive file, there's a *chance* that nobody would ever download it, and if we tracked that kind of info we could actually know if it got distributed","timestamp":1500156557873}
{"from":"louisc","message":"pfrazee: yep makes sense. so the summary is that if at least one peer has received that revision, it can continue to be distributed despite the original author clearing that from their own copy.","timestamp":1500156610060}
{"from":"pfrazee","message":"louisc: correct","timestamp":1500156643391}
{"from":"louisc","message":"pfrazee: thank you for clarifying ☺️","timestamp":1500156661138}
{"from":"pfrazee","message":"louisc: sure thing","timestamp":1500156666607}
{"from":"bret","message":"mafintosh is node-modules.com down?","timestamp":1500165230034}
{"from":"bret","message":"I'm dead!","timestamp":1500165238538}
{"from":"bret","message":"wa7son: do you have access to that too?","timestamp":1500165400671}
{"from":"bret","message":"otherwise i have to use the default npm search... oh brother","timestamp":1500165417434}
{"from":"bret","message":"juliangruber: whats the story with juliangruber-shallow-equal ?","timestamp":1500165721473}
{"from":"mafintosh","message":"bret: ah yea, need to fix it","timestamp":1500188789740}
{"from":"barnie","message":"hi all! i just updated the discussion thread with some technical considerations..","timestamp":1500203065777}
{"from":"barnie","message":"please have a peek at https://github.com/datproject/dat/issues/824","timestamp":1500203074778}
{"from":"wa7son","message":"bret: yes, but I can't even SSH into the box","timestamp":1500203199477}
{"from":"wa7son","message":"I'm thinking maybe the droplet is down, cc mafintosh","timestamp":1500203328610}
{"from":"dat-gitter","message":"(whilo) @barnie, stupid question. you said that clojure will never become a mainstream language, and i agree, but i would still like to know why you think this is a problem and where you think the problem is.","timestamp":1500204044223}
{"from":"dat-gitter","message":"(whilo) i have arrived at clojure coming from mainstream languages and i have used them afterwards and for me it is still a very big difference to develop in it because of its emphasis on productivity, not just some l33t behaviour of nerdy programmers. i am very keen on community, so i have thought a lot if i should move to something more mainstream just to appeal to more people. but it brings a felt factor of 5-10x in productivit","timestamp":1500204289941}
{"from":"dat-gitter","message":"projects it is even more i guess) and it made programming fun again for me. so how do you think should i think about it?","timestamp":1500204290032}
{"from":"barnie","message":"i'll explain my reonons, but I understand you want to keep programming in it, because of fun, beauty and power.","timestamp":1500204423536}
{"from":"barnie","message":"you could ping an ex-colleague of mine that gives regular clojure meetups in amsterdam: skuro on github","timestamp":1500204460412}
{"from":"barnie","message":"now, you can compare the problem when you look at scala.","timestamp":1500204482410}
{"from":"barnie","message":"i've done the scala functional programming course and was really delighted with all the cool language concepts in it","timestamp":1500204518385}
{"from":"barnie","message":"very steep learning curve, but looked worth it","timestamp":1500204535907}
{"from":"barnie","message":"then i started using akka, for a cqrs app","timestamp":1500204554277}
{"from":"barnie","message":"it was young, written in scala, and still buggy in places","timestamp":1500204584239}
{"from":"barnie","message":"so i had to debug..","timestamp":1500204596701}
{"from":"barnie","message":"the code i saw there, written by academic experts, was really really really hard to understand for me","timestamp":1500204625522}
{"from":"barnie","message":"mostly magic one liners","timestamp":1500204631378}
{"from":"barnie","message":"therefore - and some other things - i decided to abondon scala with pain in my heart","timestamp":1500204677404}
{"from":"barnie","message":"too academical, too many features, too overwhelming","timestamp":1500204696994}
{"from":"barnie","message":"clojure is not entirely like that, i know","timestamp":1500204714517}
{"from":"barnie","message":"but due to its learning curve, apparent complexity, the community will not grow that fast so that you either can keep the barrier to entry low, or keep up with latest tech trends","timestamp":1500204816285}
{"from":"barnie","message":"but i think there is a place for clojure to exist for a long time","timestamp":1500204839717}
{"from":"barnie","message":"it depends of what you want with your app what you choose","timestamp":1500204854131}
{"from":"barnie","message":"do you want to support a large ecosystem with many developers...then maybe go for some other language, or better... go polyglot","timestamp":1500204893104}
{"from":"barnie","message":"i'm writing a post on that topic now on https://github.com/datproject/dat/issues/824 don't know if i have time to finish it today","timestamp":1500204927378}
{"from":"barnie","message":"whilo: btw i refered to you in https://github.com/beakerbrowser/beaker/issues/606","timestamp":1500205254864}
{"from":"dat-gitter","message":"(whilo) @barnie i will be give a presentation about replikativ (i talked to carlo about it) at the end of july","timestamp":1500208678857}
{"from":"barnie","message":"oh nice. maybe you had mentioned it before. sorry","timestamp":1500208731406}
{"from":"dat-gitter","message":"(whilo) clojure is really different to that. that is why i decided strongly against scala and it is what most people complain about. there are too many ways to do the same thing and you can be \"too\" smart about it. clojure is about simplicity (logical minimalism)","timestamp":1500208756060}
{"from":"barnie","message":"i'd heard about this from carlo and others","timestamp":1500208793529}
{"from":"dat-gitter","message":"(whilo) @barnie i have difficulty to follow all your discussions ^^","timestamp":1500208803892}
{"from":"barnie","message":"they fell in love with those things","timestamp":1500208805266}
{"from":"barnie","message":"please explain, i'll try to clarify","timestamp":1500208831940}
{"from":"dat-gitter","message":"(whilo) yes, it is more like python in that regard. although python is just easy, clojure really tries to break down the underlying complexity. and complexity cannot be abstracted away after the fact","timestamp":1500208954331}
{"from":"dat-gitter","message":"(whilo) this is an important distinction https://www.infoq.com/presentations/Simple-Made-Easy","timestamp":1500209029510}
{"from":"barnie","message":"i understand all your arguments, guess all i'm saying is, if you want a large developer base for your framework you'd have to facilitate for that","timestamp":1500209050333}
{"from":"barnie","message":"that can be either with or without clojure depending how you organize it","timestamp":1500209070400}
{"from":"barnie","message":"i could see plenty of configurations where you use clojure for what its particular good at, and use other languages if they are a better fit","timestamp":1500209169673}
{"from":"barnie","message":"but i am not the export, you and carlo are :D","timestamp":1500209201516}
{"from":"barnie","message":"expert","timestamp":1500209208972}
{"from":"barnie","message":"guru's maybe even ha ha :)","timestamp":1500209232087}
{"from":"barnie","message":"probably","timestamp":1500209240536}
{"from":"barnie","message":"for sure","timestamp":1500209290145}
{"from":"dat-gitter","message":"(whilo) i don't want to be a guru, it feels monstrous. i want to form a community. if leadership is necessary, fine, but i want to keep it real.","timestamp":1500210328177}
{"from":"dat-gitter","message":"(whilo) to come back from our detour. on what points would be collaboration with the dat project possible?","timestamp":1500210348857}
{"from":"dat-gitter","message":"(whilo) i would like to start with some practical low-risk effort where we could benefit from each other's work","timestamp":1500210367711}
{"from":"dat-gitter","message":"(whilo) like sharing libraries or building a joint application","timestamp":1500210398587}
{"from":"dat-gitter","message":"(whilo) i think we share similar visions, so drafting too much of a big picture might risk that we are lost in discussion","timestamp":1500210428846}
{"from":"dat-gitter","message":"(whilo) i am willing to discuss any questions ofcourse though","timestamp":1500210465037}
{"from":"barnie","message":"for the record: i am not on the dat team. bumped into it only 1,5 wk ago","timestamp":1500210865418}
{"from":"barnie","message":"and good spirit not willing to be a guru. i like that, thanks","timestamp":1500210887811}
{"from":"barnie","message":"i would also like to be involved in community cultivation","timestamp":1500210938340}
{"from":"barnie","message":"but also need to get some smoke coming from the chimney, which is now cold ;)","timestamp":1500210970099}
{"from":"dat-gitter","message":"(whilo) ok. i agree about economics. but it is more difficult to cooperate there at first","timestamp":1500211027271}
{"from":"dat-gitter","message":"(whilo) who is on the dat team?","timestamp":1500211032619}
{"from":"barnie","message":"you checked that discussion thread, right?","timestamp":1500211088983}
{"from":"dat-gitter","message":"(whilo) Yes, but it is already quite a lot and I am not sure what it is really about.","timestamp":1500211139914}
{"from":"dat-gitter","message":"(whilo) I haven't read it completely.","timestamp":1500211148158}
{"from":"dat-gitter","message":"(whilo) I have tons of stuff to do...","timestamp":1500211155841}
{"from":"barnie","message":"its about repositioning for success","timestamp":1500211167729}
{"from":"barnie","message":"and applies as well to replikativ i would imagine","timestamp":1500211181015}
{"from":"dat-gitter","message":"(whilo) i think success will come by building successful applications. people will not primarily come through googling the projects","timestamp":1500211218717}
{"from":"barnie","message":"that's exactly what i discuss there. i made a number of updates","timestamp":1500211248893}
{"from":"dat-gitter","message":"(whilo) ok, cool","timestamp":1500211260382}
{"from":"barnie","message":"the first message was clickbait","timestamp":1500211274453}
{"from":"barnie","message":"but hey! I've got to go. speak to you later!","timestamp":1500211322385}
{"from":"dat-gitter","message":"(whilo) cu :)","timestamp":1500211390443}
{"from":"dat-gitter","message":"(whilo) i have to catch up with work","timestamp":1500211397233}
{"from":"blahah","message":"barnie I think maybe you are underestimating the size / sucess of the dat community","timestamp":1500213864970}
{"from":"blahah","message":"oh he's gone","timestamp":1500213871974}
{"from":"barnie","message":"In the train now..","timestamp":1500214335572}
{"from":"barnie","message":"But points are still just as valid","timestamp":1500214404192}
{"from":"barnie","message":"I underestimated initially yes","timestamp":1500214404220}
{"from":"barnie","message":"Funny... gitter has a different message sequence","timestamp":1500214864850}
{"from":"barnie","message":"Eventual consistency at work","timestamp":1500214880141}
{"from":"barnie","message":"Only not consitent :)","timestamp":1500214918890}
{"from":"karissa","message":"Barnie thanks for your input, i think that it's good to talk about these things. User facing work isn't good work unless it's in a constant state of change and reflection, hopefully fast enough to keep pace with a rapidly changing world :)","timestamp":1500216978987}
{"from":"karissa","message":"barnie the site as it is now was designed before beaker approached 1.0 and other apps started being built on dat. I think now I'd approach it with the suite of compatible apps and the logos of organizations currently using dat","timestamp":1500217158083}
{"from":"karissa","message":"barnie I also would love to have a sweet demo that visualizes the p2p network similar to webtorrent's demo","timestamp":1500217309857}
{"from":"karissa","message":"We have most of that implemented already, just design and placement and attention to the front page is needed","timestamp":1500217342311}
{"from":"karissa","message":"barnie I'm going to read through that issue thoroughly and take folks' suggestions to come up with a new front page concept. :)","timestamp":1500217411264}
{"from":"TheLink","message":"I think it was a good idea to separate between datproject.org as the portal for everyone including less technical inclined people and datprotocol.com for the technical specs","timestamp":1500217645411}
{"from":"TheLink","message":"I think datprotocol.com could need some more information and datproject.org a hint towards datprotocol.com for the purely technical interested","timestamp":1500217721742}
{"from":"TheLink","message":"at a prominent place","timestamp":1500217739009}
{"from":"pfrazee","message":"https://twitter.com/pfrazee/status/886639935356379136","timestamp":1500227308652}
{"from":"pfrazee","message":"more demos for the demo god","timestamp":1500227320481}
{"from":"barnie","message":"karissa and TheLink: thanks","timestamp":1500228057608}
{"from":"barnie","message":"karissa and TheLink: thanks","timestamp":1500228073257}
{"from":"barnie","message":"TheLink and why have .com? w3c.com, nonprofit.com ... doesnt fit","timestamp":1500228148950}
{"from":"jondashkyle","message":"pfrazee: whoa this is awesome!","timestamp":1500229248319}
{"from":"jondashkyle","message":"what is the dat:// url to check it out?","timestamp":1500229258870}
{"from":"pfrazee","message":"jondashkyle: dat://wysiwywiki-pfrazee.hashbase.io/","timestamp":1500229274773}
{"from":"jondashkyle","message":"pfrazee: so cool. is there a way with the web api to check to see if you are the owner of the archive. for instance, to show only the fork button if you do not own the dat, and if you do, replace it with edit.","timestamp":1500229324261}
{"from":"pfrazee","message":"jondashkyle: it does just that!","timestamp":1500229341120}
{"from":"jondashkyle","message":"pfrazee: do i need a new build of beaker for that to be working correctly? clicking fork opens the modal, but then it immediately closes","timestamp":1500229487452}
{"from":"pfrazee","message":"jondashkyle: are you on the https version?","timestamp":1500229641251}
{"from":"jondashkyle","message":"pfrazee: nah the dat://","timestamp":1500229669693}
{"from":"pfrazee","message":"anything show up in the console?","timestamp":1500229679097}
{"from":"pfrazee","message":"oh no I just reproduced","timestamp":1500229689875}
{"from":"jondashkyle","message":"asking the crucial questions ;) haha, should've checked.","timestamp":1500229693184}
{"from":"pfrazee","message":"ok 1 sec let me check into this","timestamp":1500229695515}
{"from":"jondashkyle","message":"yeah there is an error.","timestamp":1500229697625}
{"from":"pfrazee","message":"ok yeah fork() doesnt support dns it looks like","timestamp":1500229712303}
{"from":"jondashkyle","message":"https://www.irccloud.com/pastebin/qm7Xg5r3/","timestamp":1500229713213}
{"from":"pfrazee","message":"adding to my bugs list and I'll fix my app, 1 sec","timestamp":1500229723206}
{"from":"jondashkyle","message":"cool! this seems great for showing an example/demo site with the “cms”. we did a similar thing for cargo/persona when looking at an example: https://styles.persona.co/","timestamp":1500229847045}
{"from":"jondashkyle","message":"if you click on a thumbnail, you’ll see a preview of the design, and you can click “start with this template”, which in the case of this would be forking the archive!","timestamp":1500229874060}
{"from":"pfrazee","message":"jondashkyle: ok fixed","timestamp":1500229875078}
{"from":"pfrazee","message":"jondashkyle: oh nice","timestamp":1500229905946}
{"from":"jondashkyle","message":"pfrazee: is that up there now (on the dat://hashbase url)","timestamp":1500229983483}
{"from":"pfrazee","message":"jondashkyle: yep should be","timestamp":1500229993543}
{"from":"jondashkyle","message":"looks like the old version for me still","timestamp":1500230018227}
{"from":"jondashkyle","message":"when i check history i see last revision is #76","timestamp":1500230039428}
{"from":"pfrazee","message":"jondashkyle: try again","timestamp":1500230104411}
{"from":"jondashkyle","message":"wow yeah, this is sick","timestamp":1500230177794}
{"from":"jondashkyle","message":"pfrazee: also, just to expand on that idea about conditional edit/fork depending upon archive ownership, i see both the option to edit and fork regardless of who owns the archive right now","timestamp":1500230239080}
{"from":"pfrazee","message":"jondashkyle: isnt the edit button disabled for you?","timestamp":1500230258244}
{"from":"pfrazee","message":"I guess it'd be smarter to just hide the button entirely","timestamp":1500230264371}
{"from":"jondashkyle","message":"it’s disabled but it still appears the same visually","timestamp":1500230277914}
{"from":"jondashkyle","message":"so clicking it feels broken","timestamp":1500230283314}
{"from":"pfrazee","message":"ok changing 1 sec","timestamp":1500230320091}
{"from":"jondashkyle","message":"but i know this was a quick demo more about the forking functionality, not nitpicking at all! i was just wondering if you could check ownership w/ the web api to do that sort of thing","timestamp":1500230327914}
{"from":"jondashkyle","message":"lol sometimes hard to convey what intention was w/ feedback via text","timestamp":1500230354508}
{"from":"pfrazee","message":"jondashkyle: no totally I want this to be a good beaker demo","timestamp":1500230406046}
{"from":"jondashkyle","message":"cool! yeah, i think that would be a great usability fix","timestamp":1500230431825}
{"from":"jondashkyle","message":"getting to preview your changes before committing them is such a solid feature","timestamp":1500230464350}
{"from":"jondashkyle","message":"the history / version control stuff is such a win.","timestamp":1500230479506}
{"from":"jondashkyle","message":"that was one of the biggest requests with cargo… someone would mess up their content, but if you had closed the session, you were SOL","timestamp":1500230500896}
{"from":"jondashkyle","message":"and to roll our own versioning system would've been possible but super expensive, esp when dealing w/ asset distribution across CDNs","timestamp":1500230533050}
{"from":"jondashkyle","message":"one of the most useful unique affordances of the protocol, like we were discussing","timestamp":1500230573987}
{"from":"jondashkyle","message":"i think it's also something which could help to distinguish what is diff about the p2p web from something like publishing on facebook","timestamp":1500230628539}
{"from":"jondashkyle","message":"b/c you own the data, you can manage it and go back in time however you’d like","timestamp":1500230642571}
{"from":"pfrazee","message":"jondashkyle: ok published, try it again","timestamp":1500230811404}
{"from":"jondashkyle","message":"yeah! that’s great. it's cool to leave fork around too, incase you want to create another page starting with that one.","timestamp":1500230878349}
{"from":"pfrazee","message":"yeah exactly, ok reading what you just said","timestamp":1500231061402}
{"from":"pfrazee","message":"yeah a history UI in there would be solid as heck","timestamp":1500231093008}
{"from":"pfrazee","message":"I want to get the static hash-addressed dats in beaker soon too","timestamp":1500231107628}
{"from":"jondashkyle","message":"pfrazee: cool!","timestamp":1500231156789}
{"from":"pfrazee","message":"because hash-addressed dats have the ownerlessness aspect, and also the no-risk-of-change guarantee, so a wiki app like this <script> embed it","timestamp":1500231159904}
{"from":"jondashkyle","message":"“so a wiki app like this <script> embed it” <- what does this mean?","timestamp":1500231203999}
{"from":"pfrazee","message":"haha it means I should go back to grammar school","timestamp":1500231213905}
{"from":"jondashkyle","message":"lol","timestamp":1500231230252}
{"from":"pfrazee","message":"so like, lets say I take all the js and css in this app and bundle them into two files, wiki.css and wiki.js","timestamp":1500231231653}
{"from":"pfrazee","message":"put the in a static dat","timestamp":1500231238403}
{"from":"pfrazee","message":"put them*","timestamp":1500231241529}
{"from":"pfrazee","message":"then somebody can create a new dat, and in index.html do `<script src=\"dat://{that-static-app}/wiki.js\"></script><link rel=\"stylesheet\" href=\"dat://{that-static-app}/wiki.css\">`","timestamp":1500231278890}
{"from":"pfrazee","message":"and boom it's a new wysiwywiki","timestamp":1500231316032}
{"from":"jondashkyle","message":"ah yes!","timestamp":1500231468888}
{"from":"pfrazee","message":"the static dat is particularly cool for something like that because it just becomes \"the wysiwywiki code\" not \"paul's wyiswywiki code\"","timestamp":1500231476867}
{"from":"pfrazee","message":"just, you know, infrastructure out in the world","timestamp":1500231486350}
{"from":"jondashkyle","message":"i see!! yes","timestamp":1500231504859}
{"from":"jondashkyle","message":"that would be perfect for these sort of “starter kit” archives","timestamp":1500231516085}
{"from":"pfrazee","message":"I think we need another scheme for static dats","timestamp":1500231534756}
{"from":"pfrazee","message":"something like dat+snapshot:// or maybe dat+blake2b://","timestamp":1500231545865}
{"from":"jondashkyle","message":"it’s interesting to think about this, and consider how popular things like the react cli took off","timestamp":1500231548656}
{"from":"jondashkyle","message":"and made it accessible to a group of people who might’ve gotten stuck in config before.","timestamp":1500231562225}
{"from":"jondashkyle","message":"there is a lot of room to improve on what they did with that, but thinking about the static dat as a way of saying, “here, take this and make it your’s” is great","timestamp":1500231594084}
{"from":"barnie","message":"strange, I was again typing from train, the comments do not exist on gitter","timestamp":1500231601474}
{"from":"pfrazee","message":"yeah!","timestamp":1500231601502}
{"from":"pfrazee","message":"barnie: I havent seen any comments from you","timestamp":1500231622013}
{"from":"jondashkyle","message":"i think he was talking about what we’ve been discussing","timestamp":1500231646363}
{"from":"barnie","message":"well, wasn't that important, was reacting to people addressing me when i was away..","timestamp":1500231653028}
{"from":"jondashkyle","message":"cool","timestamp":1500231660483}
{"from":"jondashkyle","message":"pfrazee: here is a good example of a site some friends just published which could totally be started from your wiki example, but just needs some design love and interface to get going: http://untitled-mfg.com/","timestamp":1500231683360}
{"from":"jondashkyle","message":"this type of way of understanding sites is very in vogue atm. “web vernacular”, or “brutalist”. these are all sort of painful metaphors, but they represent things very similar to the sorts of pages beaker is good at making","timestamp":1500231731515}
{"from":"jondashkyle","message":"for example: http://brutalistwebsites.com/","timestamp":1500231739570}
{"from":"jondashkyle","message":"(worth reading some of the interviews, too!)","timestamp":1500231776991}
{"from":"pfrazee","message":"jondashkyle: yeah I've noticed that. I think my site is sort of in that vein https://pfrazee.hashbase.io/","timestamp":1500231800418}
{"from":"jondashkyle","message":"absolutely!","timestamp":1500231812015}
{"from":"jondashkyle","message":"i think it’s worth noting, that a lot of the pages on that site, are personal pages of people in the design world who are not really “technology” driven people","timestamp":1500231856213}
{"from":"barnie","message":"nice paper pfrazee","timestamp":1500231864423}
{"from":"jondashkyle","message":"but they are making pages like that as a rejection of things like squarespace, the usability of facebook or medium, etc…","timestamp":1500231899507}
{"from":"pfrazee","message":"barnie: thanks!","timestamp":1500232039127}
{"from":"pfrazee","message":"jondashkyle: right that makes sense","timestamp":1500232053025}
{"from":"pfrazee","message":"kind of a neo web 1.0","timestamp":1500232061824}
{"from":"barnie","message":"did you read the more technical addition to the discussion thread on positioning i wrote today?","timestamp":1500232065501}
{"from":"barnie","message":"wonder if I am on the mark there","timestamp":1500232078956}
{"from":"pfrazee","message":"barnie: link me?","timestamp":1500232083253}
{"from":"barnie","message":"https://github.com/datproject/dat/issues/824","timestamp":1500232095636}
{"from":"jondashkyle","message":"barnie: i dropped a quick note in there!","timestamp":1500232095727}
{"from":"barnie","message":"last comment of mine","timestamp":1500232104409}
{"from":"barnie","message":"johndashkyle: you have a different name on github?","timestamp":1500232160586}
{"from":"pfrazee","message":"barnie: ok cool reading","timestamp":1500232166534}
{"from":"barnie","message":"thx","timestamp":1500232212247}
{"from":"barnie","message":"would fit well with neo web also it think","timestamp":1500232279750}
{"from":"jondashkyle","message":"barnie: nah, i'm jondashkyle there, too.","timestamp":1500232293557}
{"from":"barnie","message":"i don't see you","timestamp":1500232337073}
{"from":"barnie","message":"on #824 i mean","timestamp":1500232362127}
{"from":"jondashkyle","message":"https://github.com/datproject/dat/issues/824#issuecomment-315629413","timestamp":1500232378060}
{"from":"barnie","message":"ah, sorry, the page was not in sync yet. i'll read now","timestamp":1500232406712}
{"from":"barnie","message":"interesting video! saved for later.","timestamp":1500232802713}
{"from":"jondashkyle","message":"pfrazee: oh yeah! i wanted to ask, is it possible to point DNS to hashbase?","timestamp":1500233444665}
{"from":"jondashkyle","message":"for adding a domain. i’m procrastinating on a client project, and would love to put it on dat:// as a way of getting it online, but i don’t want to take the time to set up a server and config everything as i did for my personal site","timestamp":1500233496348}
{"from":"pfrazee","message":"jondashkyle: not yet but you can do this https://gist.github.com/pfrazee/8eb85d7bb33efb52c7d6d1a2e639d979","timestamp":1500233891326}
{"from":"pfrazee","message":"jondashkyle: but I think we will probably try to do that soon because that's a pretty good feature to have","timestamp":1500233917921}
{"from":"jondashkyle","message":"yeah! i do that with my personal site now!","timestamp":1500233925350}
{"from":"pfrazee","message":"sweet","timestamp":1500233954431}
{"from":"jondashkyle","message":"yeah, i would put a few client projects on hashbase and upgrade them (build it into the budget) if i could point a domain","timestamp":1500233958800}
{"from":"pfrazee","message":"shouldnt be too hard thanks to lets encrypt","timestamp":1500234014278}
{"from":"jondashkyle","message":"yeah, although i was having some problems w/ configuring greenkeeper when trying to use dathttpd. it was 100% me goofing up the config.","timestamp":1500234063419}
{"from":"pfrazee","message":"I need to fix up dathttpd's letsencrypt usage","timestamp":1500234162987}
{"from":"pfrazee","message":"I've got some more correct code I can stick in there","timestamp":1500234169082}
{"from":"jondashkyle","message":"doesn't sound like you have enough to do","timestamp":1500234196040}
{"from":"jondashkyle","message":";)","timestamp":1500234216367}
{"from":"pfrazee","message":"haha yah Im slackin","timestamp":1500234266829}
{"from":"jondashkyle","message":"obviously if there are small things which need some attention and it’s in my wheelhouse feel free to mention","timestamp":1500234342684}
{"from":"jondashkyle","message":"also, i generally am not interested in numbers, but it is interesting looking at the analytics for that soundcloud generator tweet. 50k impressions w/ a 10% click-through rate, which is remarkably high","timestamp":1500234470453}
{"from":"jondashkyle","message":"(the click through rate, i mean)","timestamp":1500234532656}
{"from":"pfrazee","message":"yeah that's on the tweet?","timestamp":1500235364113}
{"from":"jondashkyle","message":"yep!","timestamp":1500235590025}
{"from":"pfrazee","message":"that is quite good, yeah","timestamp":1500235731553}
{"from":"jondashkyle","message":"pfrazee: looking at the wiki example some more, it’s currently base64 encoding images and inlining them into the html, yeah?","timestamp":1500236105112}
{"from":"jondashkyle","message":"prob be pretty easy to get it to write those files to the dat","timestamp":1500236112131}
{"from":"pfrazee","message":"yeah it would be, I should do that","timestamp":1500236140489}
{"from":"jondashkyle","message":"i also really like that you used jquery for this. i get so caught up sometimes with implementation details that are more rooted in my own preferences that i forget about what people will understand","timestamp":1500236213359}
{"from":"jondashkyle","message":"definitely wanting to promote things like choo as healthy alternatives to things like react, so the stuff i'm doing will likely focus on that, but there isn’t any reason not to have all sorts of variations on this to make it relatable to different people","timestamp":1500236297772}
{"from":"jondashkyle","message":"e.g., you could see a series of tutorials, using some key words like “distributed web with jquery!” as titles, to make the ideas accessible","timestamp":1500236334160}
{"from":"jondashkyle","message":"and maybe even parlay that into introducing someone to choo!","timestamp":1500236375081}
{"from":"cblgh","message":"mafintosh: does hyperdb have an on-change event or similar?","timestamp":1500237809915}
{"from":"cblgh","message":"i find myself wanting to be notified on new values for keys, would enable a lot of fun stuff","timestamp":1500237826374}
{"from":"pfrazee","message":"jondashkyle: yeah actually I just chose that because the wysiwyg needed it, and for something so small it's like, yeah jquery cool","timestamp":1500237829252}
{"from":"cblgh","message":"mafintosh: plannin to implement it with polling & setInterval atm, but figured i would ping you with that question&request","timestamp":1500237869975}
{"from":"jondashkyle","message":"yeah, totally!","timestamp":1500237870568}
{"from":"pfrazee","message":"https://twitter.com/pfrazee/status/886730741165551619 more demos for the demo god","timestamp":1500249083549}
{"from":"bret","message":"really neat!","timestamp":1500249714472}
{"from":"pfrazee","message":"thanks!","timestamp":1500249778237}
{"from":"pfrazee","message":"we're going to get all these demos cleaned up and then put them on the FP of hashbase","timestamp":1500249795922}
{"from":"pfrazee","message":"along with MOAR DOCS","timestamp":1500249824233}
{"from":"substack","message":"I am going to have some more mapping demos sooooon","timestamp":1500250947301}
{"from":"bret","message":"woop","timestamp":1500251335359}
{"from":"pfrazee","message":"sweeet","timestamp":1500252643522}
{"from":"jondashkyle","message":"substack: awesome !","timestamp":1500262622931}
{"from":"TheLink","message":"https://news.ycombinator.com/item?id=14786509","timestamp":1500284609889}
{"from":"karissa","message":"lol their site is down","timestamp":1500284706492}
{"from":"TheLink","message":"yes, that's why I linked to hn","timestamp":1500284723242}
{"from":"TheLink","message":"a little irony to sweeten monday mornings","timestamp":1500284755658}
{"from":"barnie","message":"urgent help needed! my dat stream is growing infinitely. all buffers overflow!! need pushback protocol initiation quick: https://github.com/datproject/dat/issues/824#issuecomment-315740501","timestamp":1500294440847}
{"from":"mafintosh","message":"hehe","timestamp":1500300511490}
{"from":"jhand","message":"ogd: interesting https://blog.datadryad.org/2017/07/17/exciting-new-partnership-for-long-term-preservation-of-dryad-data/amp/. Ever hear of easy?","timestamp":1500304875913}
{"from":"blahah","message":"jhand it's a mostly german thing I think","timestamp":1500305886807}
{"from":"jondashkyle","message":"haha! luckyme records just retweeted our soundcloud archiving tool: https://twitter.com/LuckyMe","timestamp":1500306350012}
{"from":"jondashkyle","message":"(they are a rather huge independent electronic music label)","timestamp":1500306367396}
{"from":"jondashkyle","message":"nice seeing an endorsement from the music industry","timestamp":1500306414586}
{"from":"TheLink","message":"would it be possible to create an webextension that works in firefox and chrome which could handle dat urls and show dats?","timestamp":1500306465815}
{"from":"barnie","message":"that rocks jondashkyle!","timestamp":1500306795533}
{"from":"pfrazee","message":"jondashkyle: dang!","timestamp":1500306804631}
{"from":"jondashkyle","message":"yeah! haha a lot of kanye’s producers release their solo records on that label.","timestamp":1500306816091}
{"from":"pfrazee","message":"TheLink: if so I've been wasting a lot of time","timestamp":1500306819429}
{"from":"jondashkyle","message":"also we got a nice little retweet from one of the more influential writers in the art section of the new york times: https://twitter.com/gregorg","timestamp":1500306905172}
{"from":"jondashkyle","message":"don't know greg personally but his writing is some of my favorite","timestamp":1500306915493}
{"from":"TheLink","message":"beaker's scope goes far beyond of what I had in mind","timestamp":1500306920184}
{"from":"TheLink","message":"I was thinking of sth that does dat via webrtc or sth like that","timestamp":1500306981281}
{"from":"TheLink","message":"and registers for dat urls","timestamp":1500306991921}
{"from":"TheLink","message":"not sure if that makes sense","timestamp":1500307034354}
{"from":"TheLink","message":"read-only capabilities of course","timestamp":1500307091838}
{"from":"TheLink","message":"and make dat urls automagically clickable","timestamp":1500307128564}
{"from":"pfrazee","message":"TheLink: yeah unfortunately you can't register dat:// urls in extensions","timestamp":1500307819234}
{"from":"TheLink","message":"ok","timestamp":1500307838922}
{"from":"jhand","message":"mafintosh: peerflix to vlc on termux works!","timestamp":1500308119230}
{"from":"jhand","message":"Seeking is pretty quick","timestamp":1500308179307}
{"from":"ogd","message":"jondashkyle: nice coverage!","timestamp":1500309326543}
{"from":"ogd","message":"slash exposure","timestamp":1500309333384}
{"from":"jondashkyle","message":"ogd: thanks! yeah, it's nicely organic. certainly feeling some energy and looking to funnel that into iterating on some more immediately useful tools for people outside the immediate developer community but still hungry for the ideas!","timestamp":1500309453276}
{"from":"dat-gitter","message":"(benrogmans) @jondashkyle nice work :)","timestamp":1500310338782}
{"from":"dat-gitter","message":"(benrogmans) I've used @mafintosh's download tool and built my own dat site for my Soundcloud profile:","timestamp":1500310375402}
{"from":"dat-gitter","message":"(benrogmans) dat://4723e7ecb38e24429930323d80d6169a940612079e0cc296b8400cf2f0310514/","timestamp":1500310375539}
{"from":"dat-gitter","message":"(benrogmans) Feel free to steal the front-end ;-)","timestamp":1500310436525}
{"from":"jondashkyle","message":"pfrazee: hmm i get this funny error when going to that link benrogmans sent:","timestamp":1500310558835}
{"from":"jondashkyle","message":"https://usercontent.irccloud-cdn.com/file/52g2CQ6B/Screen%20Shot%202017-07-17%20at%2012.55.37%20PM.png","timestamp":1500310567206}
{"from":"pfrazee","message":"jondashkyle: eh that's been happening sporadically. Try restarting beaker :(","timestamp":1500310592669}
{"from":"pfrazee","message":"@benrogmans Im having trouble finding you on the network, are you online? You might want to push to hashbase.io","timestamp":1500310643361}
{"from":"dat-gitter","message":"(benrogmans) I'm online, I did publish to Hashbase but its 900mb","timestamp":1500310669943}
{"from":"dat-gitter","message":"(benrogmans) dat://music-mrjack.hashbase.io/","timestamp":1500310708870}
{"from":"pfrazee","message":"@benrogmans that's interesting, did it finish uploading?","timestamp":1500310823918}
{"from":"pfrazee","message":"@benrogmans you should be over quota","timestamp":1500310891036}
{"from":"dat-gitter","message":"(benrogmans) https://www.dropbox.com/s/29mgdinlfv1clnw/Screenshot%202017-07-17%2019.01.27.png?dl=0","timestamp":1500310898317}
{"from":"dat-gitter","message":"(benrogmans) That explains","timestamp":1500310924385}
{"from":"pfrazee","message":"yeah","timestamp":1500310949836}
{"from":"dat-gitter","message":"(benrogmans) In my account section:","timestamp":1500310992585}
{"from":"dat-gitter","message":"(benrogmans) Storage","timestamp":1500310992677}
{"from":"dat-gitter","message":"(benrogmans) 818.9MB/100MB","timestamp":1500310992704}
{"from":"pfrazee","message":"yeah, I'm going to check but hopefully that's not actual bytes used, just what *would* be used based on the archives you tried to upload","timestamp":1500311044095}
{"from":"dat-gitter","message":"(benrogmans) But still I am online with dat://4723e7ecb38e24429930323d80d6169a940612079e0cc296b8400cf2f0310514","timestamp":1500311052381}
{"from":"dat-gitter","message":"(benrogmans) hmm ok","timestamp":1500311069564}
{"from":"pfrazee","message":"yeah I'm not able to connect to you for some reason","timestamp":1500311086537}
{"from":"dat-gitter","message":"(benrogmans) I've had that problem with a friend last week","timestamp":1500311117114}
{"from":"pfrazee","message":"some networks are too hostile for dat, still","timestamp":1500311140394}
{"from":"dat-gitter","message":"(benrogmans) We were not able to share sites or dat files. Maybe it's my internet service provider or so","timestamp":1500311149719}
{"from":"dat-gitter","message":"(benrogmans) hm yeah","timestamp":1500311156779}
{"from":"pfrazee","message":"yeah, or a bug in the protocol. mafintosh do you have a guide around for the different test and doctor CLIs?","timestamp":1500311174082}
{"from":"dat-gitter","message":"(benrogmans) But after an hour or so, some files suddenly did come through","timestamp":1500311178805}
{"from":"barnie","message":"help! github says 'kernel panick. stream too long, attach processors'. what should I do? https://github.com/datproject/dat/issues/824#issuecomment-315740501","timestamp":1500313335047}
{"from":"cblgh","message":"implemented a messaging system in my lil dat mud!","timestamp":1500322535745}
{"from":"noffle","message":"cblgh: woah, dat mud!? that sounds rad -- link?","timestamp":1500324230051}
{"from":"cblgh","message":"noffle: still in a private repo for the moment, i'll make it public & link in here soonish!","timestamp":1500324462990}
{"from":"cblgh","message":"it's very simple but kinda neat :3","timestamp":1500324477300}
{"from":"noffle","message":"yes please! realtime p2p gaming is such an exciting frontier","timestamp":1500324481603}
{"from":"noffle","message":"how are you handling shared game state? e.g. an unknown # of players trying to pick up the same sword?","timestamp":1500324508139}
{"from":"cblgh","message":"honestly just designing around that atm, only verbs so far are messaging, walking & describing","timestamp":1500325006041}
{"from":"cblgh","message":"more of a practical exploration of dat for me than remaking nethack","timestamp":1500325032789}
{"from":"cblgh","message":"without a blockchain or similar consensus specific items like that aren't trivial to solve","timestamp":1500325109418}
{"from":"cblgh","message":"what you could instead do is have people visit a sword tree and pluck one down","timestamp":1500325126592}
{"from":"cblgh","message":"thus getting a sword and not having an issue with getting the only sword","timestamp":1500325145077}
{"from":"cblgh","message":"but yeah, extremely light on the mechanics atm","timestamp":1500325159848}
{"from":"noffle","message":"nice!","timestamp":1500325193675}
{"from":"yoshuawuyts","message":"mafintosh: substack pfrazee https://mobile.twitter.com/jaffathecake/status/886978481455853568","timestamp":1500327184582}
{"from":"yoshuawuyts","message":"Browser block device proposal","timestamp":1500327204032}
{"from":"yoshuawuyts","message":"And probably more people here - worth taking a look if that would work","timestamp":1500327246005}
{"from":"yoshuawuyts","message":"Oh no, it's using Promise streams - might still be worth validating the API provides at least what you want from a block device in the browser, regardless of the API","timestamp":1500327708896}
{"from":"substack","message":"yeah the streams feel unnecessary","timestamp":1500328120557}
{"from":"substack","message":"read the spec","timestamp":1500328124270}
{"from":"ralphtheninja","message":"-n","timestamp":1500329739456}
{"from":"ralphtheninja","message":"ups","timestamp":1500329741429}
{"from":"mafintosh","message":"creationix: i'll try add more to the impl guide today","timestamp":1500359329407}
{"from":"cblgh","message":"hmm i want to use either hypedrive or dat-node in conjunction with hasbase.io","timestamp":1500382678718}
{"from":"cblgh","message":"can't get it to work though","timestamp":1500382684076}
{"from":"cblgh","message":"hyperdrive i'm not certain how i am to seed the archive, and with dat-node i import files & join the network and give the dat addr to hashbase","timestamp":1500382743544}
{"from":"cblgh","message":"but hashbase is just stuck at 0% so that doesn't appear to work either","timestamp":1500382759014}
{"from":"cblgh","message":"ahh fucked my client, brb","timestamp":1500382781412}
{"from":"cblgh","message":"i basically just want to make a json file available via a client, and have it be mirrored by hashbase","timestamp":1500382973026}
{"from":"Thad","message":"I'd like to help with getting the Explore views of Datproject to include JSON-LD scripts for structured data. It looks like the list of datasets is here https://github.com/datproject/datproject.org/blob/master/client/js/components/list.js ??","timestamp":1500385845652}
{"from":"karissa","message":"Thad: yes that's right","timestamp":1500389065183}
{"from":"barnie","message":"i have added an executive summary to my large input on vision and future of Dat project: https://github.com/datproject/dat/issues/824#issuecomment-316083350","timestamp":1500389113983}
{"from":"Thad","message":"@karissa: whats difference between dattitle and datname ?","timestamp":1500389247715}
{"from":"karissa","message":"Thad: good question. Heated debate. name is like a package name, which has some restrictions. Title is free form string","timestamp":1500389406115}
{"from":"Thad","message":"@karissa: OK, sounds like then that Title is what Schema.org uses as name. And your name is a package name, so that might be mapped to the \"distribution\" name ?","timestamp":1500389572960}
{"from":"Thad","message":"@karissa: a distribution is a property on a particular Dataset... and the distribution property points to a DataDownload that has properties like the contentURL","timestamp":1500389669047}
{"from":"Thad","message":"@karissa: I'll throw this into a gist first later today.... lol...I think that will be easier for all of us to absorb.","timestamp":1500389744860}
{"from":"Thad","message":"@karissa: I'm one of the volunteer experts with Schema.org....and also contributor to OpenRefine.","timestamp":1500389789449}
{"from":"karissa","message":"Nice!","timestamp":1500389844340}
{"from":"karissa","message":"Awesome to meet you","timestamp":1500389848036}
{"from":"karissa","message":"Check out https://github.com/datprotocol/dat.json","timestamp":1500389874515}
{"from":"karissa","message":"Thad: PRs welcome :) we also have a \"keyword\" field","timestamp":1500389907870}
{"from":"karissa","message":"Array of strings","timestamp":1500389916828}
{"from":"Thad","message":"@karissa: yeah, the keyword can map to a category...or map to a tag ? which is it more like ?","timestamp":1500390083034}
{"from":"Thad","message":"@karissa: or how is it mostly used ?","timestamp":1500390097897}
{"from":"cblgh","message":"pfrazee: any idea how to get hyperdrive / dat-node working well with hashbase.io?","timestamp":1500393869465}
{"from":"pfrazee","message":"cblgh: can you be more specific? I'm in the middle of debugging an upload bug with hashbase right now, so uploads are failing for everybody","timestamp":1500393936050}
{"from":"cblgh","message":"pfrazee: oh shit, maybe that's why","timestamp":1500393974489}
{"from":"cblgh","message":"but basically i wanted to host a json-file using dat-node, followed the basic example in the repo wrt files & joining the network and gave hashbase the resulting key","timestamp":1500394029351}
{"from":"cblgh","message":"it was stuck at 0% though, and being kinda new i wasn't sure if i was missing something at any step","timestamp":1500394077585}
{"from":"pfrazee","message":"cblgh: nah we're just bugging. It *might* work now if you want to give it a shot","timestamp":1500394148573}
{"from":"cblgh","message":"i'll give it another spin!","timestamp":1500394191980}
{"from":"cblgh","message":"WOAHHH","timestamp":1500394223575}
{"from":"cblgh","message":"it worked!","timestamp":1500394225627}
{"from":"cblgh","message":"bad timing on my part with sudden burst of inspiration, lol","timestamp":1500394280968}
{"from":"pfrazee","message":"haha yeah :D","timestamp":1500394365287}
{"from":"cblgh","message":"pfrazee: any plans on managing hashbase archives from the terminal?","timestamp":1500394491848}
{"from":"cblgh","message":"would be really cool to be able to register an account & upload an archive from the same app","timestamp":1500394520521}
{"from":"pfrazee","message":"cblgh: yeah that would be cool. jhand has been putting 'registry' tools into dat, I bet we could get conformant with that","timestamp":1500394548018}
{"from":"cblgh","message":"ohh interesting","timestamp":1500394574500}
{"from":"pfrazee","message":"yeah then you could just `dat publish`","timestamp":1500394584877}
{"from":"cblgh","message":"that would be ideal","timestamp":1500394817434}
{"from":"pfrazee","message":"yeah","timestamp":1500394971275}
{"from":"pfrazee","message":"mafintosh: ping","timestamp":1500395308630}
{"from":"mafintosh","message":"pfrazee: pong","timestamp":1500395332792}
{"from":"pfrazee","message":"mafintosh: sup sup. Any idea why calling stat() on an archive file would hang indefinitely? I'm having trouble isolating the conditions that would cause that","timestamp":1500395359719}
{"from":"pfrazee","message":"if nothing comes to mind, dont worry about it","timestamp":1500395400150}
{"from":"mafintosh","message":"pfrazee: no sounds like a bug","timestamp":1500395421981}
{"from":"pfrazee","message":"mafintosh: ok I'll dig in a bit","timestamp":1500395428942}
{"from":"cblgh","message":"pfrazee: oh yeah i think i saw that on your twitter","timestamp":1500397498696}
{"from":"cblgh","message":"v nice","timestamp":1500397503974}
{"from":"pfrazee","message":"cblgh: yeah ✨","timestamp":1500397518981}
{"from":"Thad","message":"@karissa: ah ok, that's what I thought.... a bag of words for classifying and categorizing a dataset. Curious, do you want to support classifying a LIST of datasets or grouping them by some keywords versus another LIST of datasets ? Schema.org could support that.","timestamp":1500401578029}
{"from":"yoshuawuyts","message":"noffle: you should check in with mafintosh on how hyperDB is progressing - think some of its properties might be a great fit for a P2P text editor","timestamp":1500406299262}
{"from":"karissa","message":"Thad: sounds interesting, that would be useful definitely. I'm thinking of organizations, or datasets from the same source","timestamp":1500408684119}
{"from":"Thad","message":"@karissa: yeap, that's the idea... OK... I'll get hacking on some of it tonight when I get home from the \"day job\" :)","timestamp":1500410080732}
{"from":"jhand","message":"mafintosh: made a little websockets think for hypercore-arhciver: https://github.com/joehand/hypercore-archiver-ws. Think I should just add it as an option directly to hypercore-archiver?","timestamp":1500410145359}
{"from":"noffle","message":"yoshuawuyts: yeah! I've been thinking the same thing; seems like a good fit","timestamp":1500410500257}
{"from":"ogd","message":"mafintosh: hey whats a 1 line description of what hyperdb does? lol","timestamp":1500411077505}
{"from":"TheLink","message":"jhand: is there a known issue with dat-cli progress indicator not getting to 100%? (rounding problem?)","timestamp":1500411234036}
{"from":"jhand","message":"TheLink: not that I know of. For downloads?","timestamp":1500411264198}
{"from":"TheLink","message":"jhand: yes, I just cloned a dat","timestamp":1500411293603}
{"from":"jhand","message":"TheLink: can you give the link?","timestamp":1500411329463}
{"from":"TheLink","message":"jhand: https://github.com/beakerbrowser/beaker/issues/608 the one linked there","timestamp":1500411338244}
{"from":"jhand","message":"may be related to why there is a missing file =)","timestamp":1500411374869}
{"from":"jhand","message":"TheLink: hmm mine finished. did you run something besides dat clone to a fresh location?","timestamp":1500411435564}
{"from":"TheLink","message":"no","timestamp":1500411452245}
{"from":"TheLink","message":"perhaps I didn't wait long enough","timestamp":1500411485902}
{"from":"jhand","message":"TheLink: what happens if you run dat sync on what you have donwloaded","timestamp":1500411619063}
{"from":"TheLink","message":"I already deleted it","timestamp":1500411637860}
{"from":"TheLink","message":"now I had a different issue on cloning though","timestamp":1500411670070}
{"from":"TheLink","message":"progress bar going to 100% quickly and then going back and forth","timestamp":1500411696501}
{"from":"TheLink","message":"in the end it cloned successfully though","timestamp":1500411711616}
{"from":"cblgh","message":"ogd: distributed key-value store","timestamp":1500412040652}
{"from":"cblgh","message":"optionally adding: built ontop of hypercore feeds","timestamp":1500412060087}
{"from":"cblgh","message":"i'm no mafintosh but that's what im feeling after 2 weeks of playing with it :3","timestamp":1500412081670}
{"from":"ogd","message":"cblgh: i was looking for a bit more detail than that, like what exactly it does on top of hypercore feeds","timestamp":1500412092074}
{"from":"cblgh","message":"in that case i haven't the sliiightest","timestamp":1500412120820}
{"from":"cblgh","message":"so i too would love a one-liner about that lol","timestamp":1500412176877}
{"from":"noffle","message":"ogd: I think it just presents a view atop a set of hypercores' kv insert/del entries, not unlike hyperlog+hyperkv","timestamp":1500412220615}
{"from":"noffle","message":"the code is pretty readable","timestamp":1500412228491}
{"from":"jhand","message":"TheLink: ya the other issue is https://github.com/datproject/dat/issues/815","timestamp":1500412363918}
{"from":"TheLink","message":"yeah","timestamp":1500412390320}
{"from":"TheLink","message":"jhand: https://datstore.beingalink.de/progress.mov that's what it looks like","timestamp":1500413543309}
{"from":"substack","message":"https://github.com/jakearchibald/byte-storage/issues/4#issuecomment-316209511","timestamp":1500414859937}
{"from":"pfrazee","message":"I've been following the byte-storage conversation but Im not going to comment; dont have a strong opinion on it (except that Im generally in favor)","timestamp":1500417063882}
{"from":"yoshuawuyts","message":"pfrazee: would be cool to get a good API tho; would be sad if it ended up as IndexedDB v2","timestamp":1500417332157}
{"from":"pfrazee","message":"yoshuawuyts: for sure. I dont have an opinion because I've never worked with byte-level storage or needed to","timestamp":1500417414161}
{"from":"pfrazee","message":"so my opinion would just be noise","timestamp":1500417428409}
{"from":"yoshuawuyts","message":"pfrazee: ya agree, important not to overwhelm them","timestamp":1500417543255}
{"from":"pfrazee","message":"ya","timestamp":1500417554937}
{"from":"yoshuawuyts","message":"pfrazee: good to follow along tho - we all use dat and it would help us (:","timestamp":1500417593213}
{"from":"pfrazee","message":"ya","timestamp":1500417633698}
{"from":"pfrazee","message":"mafintosh: ping","timestamp":1500423847278}
{"from":"cblgh","message":"what is dat.land supposed to be?","timestamp":1500452514180}
{"from":"ungoldman","message":"cblgh: i think it was meant to be a sort of hub for discovering public dats, but i believe development stalled on it once the dat folks started focusing on dat desktop","timestamp":1500452656784}
{"from":"ungoldman","message":"there are some traces here https://github.com/datproject/dat.land-roadmap and here https://github.com/yoshuawuyts/dat-land-dashboard","timestamp":1500452691306}
{"from":"yoshuawuyts","message":"ungoldman: yep, rebranded to datproject.org - and you're right; we got the band together and made a good effort to ship dat-desktop","timestamp":1500452935607}
{"from":"ungoldman","message":"it looks great! :clap","timestamp":1500452977683}
{"from":"ungoldman","message":"👏 👏","timestamp":1500452983787}
{"from":"ungoldman","message":"emojis are hard","timestamp":1500452988496}
{"from":"yoshuawuyts","message":"ungoldman: haha","timestamp":1500453006971}
{"from":"yoshuawuyts","message":"ungoldman: re:analytics - started work on a hyperDB backed analytics service https://github.com/shipharbor/analytics-service","timestamp":1500453023525}
{"from":"ungoldman","message":"ah interesting!","timestamp":1500453053035}
{"from":"ungoldman","message":"was thinking it would be easy to log some basic analytics as a little middleware layer in something like merry","timestamp":1500453122434}
{"from":"ungoldman","message":"i'm generally opposed to using google analytics in my own small projects and tend to have all trackers blocked when surfing","timestamp":1500453168381}
{"from":"ungoldman","message":"but it would be nice to collect some basic analytics on backend services","timestamp":1500453181124}
{"from":"yoshuawuyts","message":"ungoldman: yeah, agree!","timestamp":1500453217208}
{"from":"ungoldman","message":"not sure what a reasonable balance is for frontend analytics -- it's certainly useful for making informed decisions about development but conflicts a bit with my own desire to avoid tracking people's actions without their explicit consent","timestamp":1500453242046}
{"from":"yoshuawuyts","message":"ungoldman: also like it for metrics on client side - but generally to get rough insights - how many people complete a sign up flow? How many people visited a certain page, etc.","timestamp":1500453313208}
{"from":"cblgh","message":"ahh thanks ungoldman, saw it mentioned here & there and found your repo yoshuawuyts","timestamp":1500453372151}
{"from":"yoshuawuyts","message":"ungoldman: it's IP logging and stuff that seems like the wrong thing to do","timestamp":1500453416133}
{"from":"ungoldman","message":"yeah i guess if it's explicitly anonymous by design it's alright","timestamp":1500453443290}
{"from":"cblgh","message":"yoshuawuyts: wow the analytics code is so tiny","timestamp":1500453510975}
{"from":"yoshuawuyts","message":"cblgh: would add separate, dedicated services that parse the feed and create cool results","timestamp":1500453583831}
{"from":"ungoldman","message":"might want to randomize `/tmp/analytics.db` a bit in case more than one is running?","timestamp":1500453585811}
{"from":"ungoldman","message":"or is that meant to be overridden eventually?","timestamp":1500453800182}
{"from":"cblgh","message":"yoshuawuyts: yeah, it was meant as a compliment!","timestamp":1500454332081}
{"from":"cblgh","message":"merry looks rad too","timestamp":1500454335912}
{"from":"ungoldman","message":"lol, cowboy bebop https://datproject.org/johnanonalong/cowboybbp","timestamp":1500454748232}
{"from":"cblgh","message":"niiiiiice","timestamp":1500455198917}
{"from":"cblgh","message":"how does the dataset explorer work, sniffing the dht traffic?","timestamp":1500455352677}
{"from":"ungoldman","message":"not sure, i think that's a good guess though cblgh","timestamp":1500455945480}
{"from":"ungoldman","message":"https://github.com/datproject/datproject.org/blob/master/server/router.js#L92","timestamp":1500456087526}
{"from":"barnie","message":"mafintosh: are you online?","timestamp":1500456286921}
{"from":"mafintosh","message":"barnie: hi!","timestamp":1500456853002}
{"from":"barnie","message":"hi! I will create a gh issue on using dat + hypercore in a message-based design along the lines of this: https://github.com/datproject/dat/issues/824#issuecomment-315601713","timestamp":1500456897641}
{"from":"barnie","message":"i am very curious to get feedback on the possibilities","timestamp":1500456929574}
{"from":"mafintosh","message":"barnie: hypercore is already message based","timestamp":1500457223931}
{"from":"barnie","message":"are you refering to hypercore-protocol?","timestamp":1500457247512}
{"from":"mafintosh","message":"ya","timestamp":1500457284612}
{"from":"barnie","message":"i consider that a wire format","timestamp":1500457295757}
{"from":"barnie","message":"i am talking about a messaging layer on top","timestamp":1500457303462}
{"from":"mafintosh","message":"and and hypercore itself is basically persistent pubsub","timestamp":1500457312287}
{"from":"barnie","message":"on in which you have application and domain concepts","timestamp":1500457318579}
{"from":"barnie","message":"i am thinking of Event collaboration (between peers) and Event sourcing, CQRS, DDD (in rich client apps)","timestamp":1500457382981}
{"from":"barnie","message":"thus ideally suited for a social network that can grow organically in feature set","timestamp":1500457450844}
{"from":"yoshuawuyts","message":"cblgh: centralized registry","timestamp":1500457464014}
{"from":"mafintosh","message":"barnie: whats stopping you from doing that now?","timestamp":1500457487477}
{"from":"mafintosh","message":"its perfect for event sourcing and friends atm","timestamp":1500457509115}
{"from":"mafintosh","message":"and the random access aspect makes it v powerful for real time apps","timestamp":1500457544202}
{"from":"barnie","message":"ya i noticed that. its the reason I chose dat above ssb","timestamp":1500457552889}
{"from":"barnie","message":"but what makes me unhappy is that my message-based route would fork from your own direction","timestamp":1500457589494}
{"from":"barnie","message":"while you could easily incorporate it with a slight repositioning of technology","timestamp":1500457609047}
{"from":"barnie","message":"as i described","timestamp":1500457616934}
{"from":"mafintosh","message":"I'm pretty happy with the stack as it is. You can write custom messages over the protocol stream if you prefer. I do that in a couple of applications","timestamp":1500457780061}
{"from":"mafintosh","message":"If you can explain in a bit fewer words what lacking for your application I might understand it better","timestamp":1500457826794}
{"from":"mafintosh","message":"The thread is too long for me to dig in.","timestamp":1500457844104}
{"from":"barnie","message":"Thats why I pointed you to that single comment. Its not that long. The benefits described should be of interest to you","timestamp":1500457899894}
{"from":"barnie","message":"i also created an executive summary: https://github.com/datproject/dat/issues/824#issuecomment-316083350","timestamp":1500457937434}
{"from":"karissa","message":"cblgh: ungoldman no sniffing, you use the cli tool and manually create an account and publish it there. Clearly some messaging is missing here","timestamp":1500458056365}
{"from":"mafintosh","message":"barnie: i dont see the benefit of your approach compared to random access logs.","timestamp":1500458116745}
{"from":"mafintosh","message":"you can easily write a secure messaging system of top of logs","timestamp":1500458144076}
{"from":"barnie","message":"wouldn't having a dat defined messaging layer make it much easier to develop decentralized apps?","timestamp":1500458196049}
{"from":"barnie","message":"now I'll redo the work you've already done","timestamp":1500458216432}
{"from":"yoshuawuyts","message":"barnie: different apps need different schemas; hyperlog just provides the primitives for you to build upon","timestamp":1500458247646}
{"from":"mafintosh","message":"hypercore","timestamp":1500458253580}
{"from":"yoshuawuyts","message":"*hypercore","timestamp":1500458258120}
{"from":"barnie","message":"yes, the - too - primitives","timestamp":1500458274013}
{"from":"barnie","message":"we'll create forks","timestamp":1500458281081}
{"from":"barnie","message":"not of hypercore","timestamp":1500458298216}
{"from":"barnie","message":"but things on top","timestamp":1500458302999}
{"from":"mafintosh","message":"barnie: hypercore is just a distributed message log","timestamp":1500458359036}
{"from":"mafintosh","message":"with guaranteed ordering","timestamp":1500458383986}
{"from":"barnie","message":"i know. i'm not talking only of hypercore, though, but dat ecosystem as a whole","timestamp":1500458386308}
{"from":"barnie","message":"hypercore is fine as it is right now in the whole event-based discussion","timestamp":1500458429471}
{"from":"mafintosh","message":"then i'm not completely sure what you are missing :)","timestamp":1500458443425}
{"from":"mafintosh","message":"dat is a tiny layer on top of hypercore","timestamp":1500458456091}
{"from":"barnie","message":"when you go from hypercore to hyperdrive there is a missed opportunity","timestamp":1500458457485}
{"from":"mafintosh","message":"All replication is still hypercore based","timestamp":1500458472301}
{"from":"barnie","message":"you go from raw streams directly to file chunks (if i am correct)","timestamp":1500458475665}
{"from":"barnie","message":"while you could have a layer in between having file-chunk messages","timestamp":1500458497955}
{"from":"barnie","message":"or any file-related data type you define","timestamp":1500458524218}
{"from":"mafintosh","message":"yea that sounds cool","timestamp":1500458543029}
{"from":"mafintosh","message":"you can just write that module :)","timestamp":1500458556406}
{"from":"barnie","message":"yes, maybe I will, but I am yet an outsider coming from Java background, and not the person to get it there speedily. And in the meantime you may evolve to make it more difficult to use all modules","timestamp":1500458632235}
{"from":"barnie","message":"how much would it cost you to add this in the core design?","timestamp":1500458665161}
{"from":"barnie","message":"not much overhead I presume","timestamp":1500458672162}
{"from":"mafintosh","message":"barnie: i'm still a bit unsure what you need. the core abstraction *is* hypercore :)","timestamp":1500458733839}
{"from":"mafintosh","message":"and thats not gonna change","timestamp":1500458763371}
{"from":"mafintosh","message":"and i think your application sounds like a good fit","timestamp":1500458777402}
{"from":"barnie","message":"i say that's the wire protocol abstraction, you have no data communiction protocol abstraction (if that's the right word)","timestamp":1500458806455}
{"from":"barnie","message":"that is already good to hear, thx!!","timestamp":1500458824457}
{"from":"mafintosh","message":"barnie: hypercore is the data one","timestamp":1500458842839}
{"from":"barnie","message":"it's the raw stream one, AFAICS","timestamp":1500458857840}
{"from":"barnie","message":"and some protocol messages","timestamp":1500458868801}
{"from":"mafintosh","message":"you can send custom messages over the hypercore-protocol stream","timestamp":1500458899549}
{"from":"barnie","message":"do you advice me to extend the protocol-buffer schema's with my message types?","timestamp":1500458901052}
{"from":"mafintosh","message":"i'd advise you to model it on top of hypercore directly","timestamp":1500458925175}
{"from":"barnie","message":"ya, that's the do-it-myself-and-you-do-it-differently abstraction on top","timestamp":1500458930616}
{"from":"mafintosh","message":"then you wont have to worry about integrity/auth","timestamp":1500458945042}
{"from":"barnie","message":"the forks being created as you grow","timestamp":1500458947667}
{"from":"mafintosh","message":"just think of hypercore as your pubsub layer","timestamp":1500458986049}
{"from":"barnie","message":"i agree fully with that, just think dat needs an additional message layer","timestamp":1500459012615}
{"from":"barnie","message":"it only defines formats, not message types","timestamp":1500459023389}
{"from":"barnie","message":"thats for the application designers to do","timestamp":1500459030536}
{"from":"barnie","message":"like a spec","timestamp":1500459046134}
{"from":"mafintosh","message":"Yea there is room for that on top of hypercore","timestamp":1500459056831}
{"from":"mafintosh","message":"I agree","timestamp":1500459060001}
{"from":"barnie","message":"then in hyperdrive you just have thin File + Chunk msg wrapper (couple of bytes overhead)","timestamp":1500459086221}
{"from":"barnie","message":"and it'll have become an application of the message bus","timestamp":1500459098565}
{"from":"mafintosh","message":"I wouldnt at that point call it a message bus","timestamp":1500459153427}
{"from":"barnie","message":"maybe not the right word, agree","timestamp":1500459166809}
{"from":"mafintosh","message":"Cause we do lot of random access on the messages","timestamp":1500459170711}
{"from":"mafintosh","message":"Log is basically the core of it all","timestamp":1500459186458}
{"from":"barnie","message":"ya, but with random access you mean once they have been stored in the log / feed","timestamp":1500459211995}
{"from":"barnie","message":"i am talking on the state where they still roam the network","timestamp":1500459228058}
{"from":"barnie","message":"the random accessing is fine, if I don't want that, because i have event sourcing, then thats application-specific, no problem.","timestamp":1500459259111}
{"from":"mafintosh","message":"ogd: hyperdb is a hash array mapped trie based kv store that is multi writer and allows selective replication of just the keys you want","timestamp":1500459259310}
{"from":"barnie","message":"(that's a really cool thing hyperdb!)","timestamp":1500459295975}
{"from":"mafintosh","message":"ya network roaming is important","timestamp":1500459381188}
{"from":"mafintosh","message":"Thats why hypercore is a minimal dep","timestamp":1500459395719}
{"from":"mafintosh","message":"Cause its the network/state related one","timestamp":1500459407813}
{"from":"barnie","message":"agree. should stay that way","timestamp":1500459423395}
{"from":"barnie","message":"WDYT about the rest?","timestamp":1500459439527}
{"from":"mafintosh","message":"So a server that can replicate a log you deploy today will forever work with super applications we build","timestamp":1500459439965}
{"from":"barnie","message":"yes","timestamp":1500459453377}
{"from":"mafintosh","message":"barnie: i think there is merit for a common set of messages of top above for sure","timestamp":1500459485419}
{"from":"mafintosh","message":"and someone should experiment with building that","timestamp":1500459501129}
{"from":"mafintosh","message":"as a tiny module","timestamp":1500459507252}
{"from":"barnie","message":"the more i hear you talking the more i think you need some kind of small message abstraction on which other devs can build","timestamp":1500459508865}
{"from":"barnie","message":"exactly","timestamp":1500459521998}
{"from":"mafintosh","message":"barnie: you might be able to do it simply as a set of protocol-buffer messages","timestamp":1500459581418}
{"from":"mafintosh","message":"that is then appended to a hypercore","timestamp":1500459594985}
{"from":"barnie","message":"yes, but that binds you to a specific tech (protbuf)","timestamp":1500459607341}
{"from":"mafintosh","message":"any schema would do","timestamp":1500459620542}
{"from":"barnie","message":"but I think i'll a some time to investigate. also looking at mobile device usages","timestamp":1500459641449}
{"from":"barnie","message":"porting to android / ios","timestamp":1500459651228}
{"from":"mafintosh","message":"barnie: sounds good. keep me in the loop","timestamp":1500459658942}
{"from":"barnie","message":"yes, will do. I'll copy this discussion in an issue for later reference, okay?","timestamp":1500459684361}
{"from":"mafintosh","message":"barnie: we found out we can run node on android and everything (the dat stack) already works","timestamp":1500459688396}
{"from":"mafintosh","message":"sure","timestamp":1500459690141}
{"from":"barnie","message":"mafintosh: you mean with termux, but that requires 2 installs","timestamp":1500459709356}
{"from":"barnie","message":"or embedding termux","timestamp":1500459726205}
{"from":"barnie","message":"or you found other means?","timestamp":1500459752515}
{"from":"barnie","message":"browserify?","timestamp":1500459756289}
{"from":"mafintosh","message":"using termux ya","timestamp":1500459776109}
{"from":"mafintosh","message":"my point is mostly that it can be made to work without rooting the phone","timestamp":1500459801971}
{"from":"barnie","message":"for an end-user installation process, that means embedding","timestamp":1500459802224}
{"from":"barnie","message":"don't know whether its too invasive as well","timestamp":1500459811855}
{"from":"barnie","message":"yeah, non-rooting is cool! its a great project","timestamp":1500459830461}
{"from":"barnie","message":"i think with shimming in react-native i can come quite far","timestamp":1500459918220}
{"from":"barnie","message":"and point out some locations where the code is making it difficult","timestamp":1500459937190}
{"from":"mafintosh","message":"ya that'd be great as well","timestamp":1500460034864}
{"from":"barnie","message":"i keep you posted. thanks for your input!","timestamp":1500460056060}
{"from":"barnie","message":"mafintosh: fyi https://github.com/datproject/dat/issues/826","timestamp":1500460919966}
{"from":"barnie","message":"mafintosh: one more thing. https://github.com/whilo confused me for someone representing Dat Project. Is looking for cooperation on incorporating CRDT's","timestamp":1500461570113}
{"from":"barnie","message":"he's at christian@topiq.es if you're interested","timestamp":1500461754268}
{"from":"yoshuawuyts","message":"ogd: how'd you feel about renaming the miss* streams lib to streams-collection or smth? - want to use it on projects, but I end up getting the name wrong so often that it's usually just easier to get the individual modules","timestamp":1500465132055}
{"from":"yoshuawuyts","message":"(this comes after having getting the name wrong twice just now, and just having to google it)","timestamp":1500465166732}
{"from":"yoshuawuyts","message":"also happy to just fork an publish lol - but yeah","timestamp":1500465184431}
{"from":"karissa","message":"yoshuawuyts: yeah I agree","timestamp":1500466803289}
{"from":"karissa","message":"yoshuawuyts: as a child I was indoctrinated to be able to recite and spell from memory every state of the United States","timestamp":1500466884853}
{"from":"yoshuawuyts","message":"karissa: hah, yeah us non-US folks usually don't know too much about US geography haha","timestamp":1500467356162}
{"from":"karissa","message":"what a pity, missing out on such key knowledge!","timestamp":1500468772344}
{"from":"louisc","message":"I only just found out how say Arkansas","timestamp":1500469709989}
{"from":"louisc","message":"wut","timestamp":1500469729492}
{"from":"mafintosh","message":"yoshuawuyts: why don't you use the individual libs?","timestamp":1500471782935}
{"from":"mafintosh","message":"yoshuawuyts: https://github.com/yoshuawuyts/http-sse <-- you should make this a writable stream","timestamp":1500474924275}
{"from":"mafintosh","message":"i have this on the client https://github.com/mafintosh/event-source-stream","timestamp":1500474956485}
{"from":"dat-gitter","message":"(benrogmans) Hey guys quick question about dat-node: is there any mkdir method available like Hyperdrive has? Looking for something like dat.archive.mkdir()","timestamp":1500476618643}
{"from":"yoshuawuyts","message":"mafintosh: re:miss - I always use the individual ones for modules; but for client work I've found it nice to just hand over projects with fewer dependencies in package json - more discoverable but also keeps certain forms of code style politics in check","timestamp":1500477387432}
{"from":"mafintosh","message":"@benroans .archive.mkdir should do it :)","timestamp":1500477475206}
{"from":"pfrazee","message":"mafintosh: hey does hyperdrive still support static hash-addressed archives? It looks like that mightve been removed","timestamp":1500477990873}
{"from":"dat-gitter","message":"(benrogmans) thx @mafintosh now it's working, I probably messed up my directory paths","timestamp":1500478178521}
{"from":"pfrazee","message":"ogd: in my house, we call the cat flap a \"cat fupa\"","timestamp":1500479295084}
{"from":"yoshuawuyts","message":"mafintosh: re:http-sse - wasn't sure how to make it a read stream, b/c the write API is `.write(name, data)`; SSE messages take two arguments; unless we want to make it an object stream or smth","timestamp":1500479326755}
{"from":"barnie","message":"mafintosh: i added some first meat to the 'message abstraction layer' discussion: https://github.com/datproject/dat/issues/826#issuecomment-316431570","timestamp":1500479982516}
{"from":"barnie","message":"btw it also has a sideways connection to SSE, which i see is being discussed now","timestamp":1500480218583}
{"from":"dat-gitter","message":"(benrogmans) @mafintosh maybe this is a bug?","timestamp":1500480558476}
{"from":"dat-gitter","message":"(benrogmans)","timestamp":1500480558614}
{"from":"dat-gitter","message":"(benrogmans) This works like a charm (-rw-r--r-- 1 root root 32):","timestamp":1500480558751}
{"from":"dat-gitter","message":"(benrogmans) [full message: https://gitter.im/datproject/discussions?at=596f842ef5b3458e30615e88]","timestamp":1500480558751}
{"from":"dat-gitter","message":"(benrogmans) Or am I doing it wrong?","timestamp":1500480602305}
{"from":"mafintosh","message":"jhand: o/ is that a bug?","timestamp":1500481556935}
{"from":"jhand","message":"mafintosh: lemme see","timestamp":1500483353199}
{"from":"jhand","message":"mafintosh: did the utp windows thing get fixed? feel like I saw something about that but can't find it","timestamp":1500483374995}
{"from":"jhand","message":"@benrogmans can confirm I'm getting empty file too. It works with plain hyperdrive & dat-storage so is something in dat-node. I'll open issue and try to debug later.","timestamp":1500484217246}
{"from":"jhand","message":"https://github.com/datproject/dat-node/issues/163","timestamp":1500484374785}
{"from":"barnie","message":"jhand: would you consider keeping that issue open a bit longer?","timestamp":1500484956410}
{"from":"barnie","message":"discussions project last issue is 1 year old and has only 33 watchers vs. 300 on dat project","timestamp":1500484978145}
{"from":"barnie","message":"https://github.com/datproject/dat/issues/826#issuecomment-316431570","timestamp":1500485002937}
{"from":"ogd","message":"barnie: we created the discussions repo for discussions like those, we try to keep the other repos less noisy when possible","timestamp":1500485897719}
{"from":"barnie","message":"understand, but if its about a possibly fundamental change it helps if as many people possible are aware. But np, i'll recreate it in Discussions","timestamp":1500485992006}
{"from":"ogd","message":"jhand: mafintosh im gonna get started on the http transport","timestamp":1500486120038}
{"from":"mafintosh","message":"ogd: wooot","timestamp":1500487180006}
{"from":"ogd","message":"mafintosh: im gonna try to do it as a stream that plugs into hypercore.replicate(), the stream would issue a number of http requests internally and keep state. the server would translate calls to fs calls into the .dat folder. does that seem like the right approach?","timestamp":1500487401141}
{"from":"mafintosh","message":"ogd: and not as a storage driver?","timestamp":1500487439422}
{"from":"ogd","message":"mafintosh: ahhhhh","timestamp":1500487494761}
{"from":"ogd","message":"mafintosh: then i can wrap it in a hypercore instance and call replicate to pipe it somewhere else...","timestamp":1500487519125}
{"from":"bret","message":"https://www.youtube.com/watch?v=EClPAFPeXIQ","timestamp":1500487528176}
{"from":"bret","message":"new file coin paper https://filecoin.io/filecoin.pdf","timestamp":1500487537239}
{"from":"mafintosh","message":"ogd: yea exactly :)","timestamp":1500487556653}
{"from":"ogd","message":"https://usercontent.irccloud-cdn.com/file/ZMe4xdbO/Screen%20Shot%202017-07-19%20at%2011.07.38%20AM.png","timestamp":1500487673191}
{"from":"bret","message":"i know right?","timestamp":1500487798376}
{"from":"ogd","message":"bret: did you happen to listen to the zcash radiolab this week","timestamp":1500487849060}
{"from":"bret","message":"naw","timestamp":1500487872648}
{"from":"ogd","message":"mafintosh: you should listen to it too lol. i have a funny story about it","timestamp":1500487873452}
{"from":"bret","message":"is it good?","timestamp":1500487874251}
{"from":"ogd","message":"yea","timestamp":1500487876048}
{"from":"bret","message":"ok ill listen to it","timestamp":1500487880514}
{"from":"ogd","message":"http://www.radiolab.org/story/ceremony/","timestamp":1500487891079}
{"from":"bret","message":"hows it going over on the east side? i haven't seen you in months","timestamp":1500487902112}
{"from":"ogd","message":"https://usercontent.irccloud-cdn.com/file/SuB0qYNe/Screen%20Shot%202017-07-19%20at%2011.12.13%20AM.png","timestamp":1500487950306}
{"from":"bret","message":"dawww","timestamp":1500488149159}
{"from":"bret","message":"ogd: join #cats","timestamp":1500488158389}
{"from":"mafintosh","message":"ogd: haha, okay i will","timestamp":1500488611972}
{"from":"taravancil","message":"455","timestamp":1500489222730}
{"from":"taravancil","message":"^ sorry! kitten on the keyboard","timestamp":1500489228702}
{"from":"pfrazee","message":"mafintosh: do we still have static archive support?","timestamp":1500490273045}
{"from":"bret","message":"lol","timestamp":1500491532044}
{"from":"bret","message":"pfrazee: i hope so!","timestamp":1500491543208}
{"from":"pfrazee","message":"bret: I actually cant find it in the code atm","timestamp":1500491561828}
{"from":"bret","message":"pfrazee: being able to swap out the static subset of an 'owned/writable' tree was a cool feature I wanted to expand on","timestamp":1500491572930}
{"from":"pfrazee","message":"bret: me too","timestamp":1500491592588}
{"from":"bret","message":"it was one concept I was going to rely on to not re-create napster","timestamp":1500491601946}
{"from":"bret","message":"also maybe do some interesting 'family/freinds' sharing data-distribution features with","timestamp":1500491638458}
{"from":"bret","message":"e.g. not napster, but you can get your data from your friends if there is library overlap","timestamp":1500491660096}
{"from":"pfrazee","message":"bret: yeah there's a lot of value in the content hashed archives","timestamp":1500491931475}
{"from":"pfrazee","message":"(I was just saying this on twitter) strong security guarantee, there's no \"owner\", the data-distribution you just mentioned","timestamp":1500491963610}
{"from":"bret","message":"pfrazee: isn't the process of 'forking' removing the writer hash from the content tree and adding a new one?","timestamp":1500491979898}
{"from":"pfrazee","message":"I want to get a feature in beaker where you can \"snapshot\" an archive to get a static version of it","timestamp":1500491982797}
{"from":"pfrazee","message":"bret: that's true","timestamp":1500492055805}
{"from":"barnie","message":"mafintosh, jhand: completely cleaned up discussion on Vision + Future of Dat: https://github.com/datproject/discussions/issues/58","timestamp":1500495455709}
{"from":"barnie","message":"including the Message-based abstration layer design issue","timestamp":1500495598326}
{"from":"barnie","message":"https://github.com/datproject/discussions/issues/61","timestamp":1500495635560}
{"from":"jhand","message":"barnie: thank you for moving them to that repo.","timestamp":1500497058619}
{"from":"barnie","message":"np, I hope it will get attention given the low activity there","timestamp":1500497161016}
{"from":"jhand","message":"barnie: each issue you've written are pretty extensive discussions that makes quite a few points, making them hard to engage in. It may get more traction if you spend some time distilling them. To comprehend what you are discussing, you are asking others to spend a lot of time reading and understanding.","timestamp":1500497380607}
{"from":"barnie","message":"yes, i've broken them down into 5 parts and skimmed most of the rest of the stuff (https://github.com/datproject/discussions/issues/58 is ToC)","timestamp":1500497430514}
{"from":"barnie","message":"and I have patience, don't worry","timestamp":1500497537624}
{"from":"barnie","message":"this was about the moving, not the time","timestamp":1500497560979}
{"from":"jhand","message":"For example, reading the executive summary, it is not clear to me what parts #2, #2a, or #4 have to do with it without diving into those issues.","timestamp":1500497582912}
{"from":"barnie","message":"true, it was written as a general overview. maybe i should remove it, if it distracts","timestamp":1500497648629}
{"from":"barnie","message":"sorry for providing so much feedback :)","timestamp":1500497703239}
{"from":"mafintosh","message":"pfrazee: might have accidentally removed it in the refactor of hyperdrive","timestamp":1500498189382}
{"from":"mafintosh","message":"pfrazee: but yea hypercore still supports it","timestamp":1500498195645}
{"from":"pfrazee","message":"mafintosh: ok","timestamp":1500498344560}
{"from":"mafintosh","message":"pfrazee: do you need it?","timestamp":1500498424802}
{"from":"pfrazee","message":"mafintosh: not nearly as much as anything else on your plate","timestamp":1500498435946}
{"from":"mafintosh","message":"cool :)","timestamp":1500498445198}
{"from":"pfrazee","message":"and if you did implement it now I wouldnt be able to capitalize on it nearterm","timestamp":1500498446898}
{"from":"pfrazee","message":"mafintosh: I've been idly wondering, though, if it would use the dat:// scheme or if we need something like dat+blake2b:// or dat+static://","timestamp":1500498467060}
{"from":"mafintosh","message":"pfrazee: dat supports multiple hashes / static / live over the same protocol","timestamp":1500498506615}
{"from":"mafintosh","message":"only requirement is that a dat always uses the same hash function (ie no multihash)","timestamp":1500498525703}
{"from":"pfrazee","message":"mafintosh: yeah you dont think there will ever be a case where we need prior knowledge?","timestamp":1500498527823}
{"from":"pfrazee","message":"even with multiple hashes you could just try more than one","timestamp":1500498549420}
{"from":"pfrazee","message":"though that's not very efficient","timestamp":1500498554168}
{"from":"mafintosh","message":"multiwriter supports each feed having its own hash function tho","timestamp":1500498574159}
{"from":"pfrazee","message":"oh yeah?","timestamp":1500498583696}
{"from":"mafintosh","message":"for each feed in multiwriter","timestamp":1500498591022}
{"from":"mafintosh","message":"pfrazee: but i think the one-hash per feed is an ok limitation in practice","timestamp":1500498611878}
{"from":"pfrazee","message":"yeah","timestamp":1500498624928}
{"from":"mafintosh","message":"supporting multple means non-static length of the hash","timestamp":1500498625798}
{"from":"mafintosh","message":"which makes *everything* complicated storage wise","timestamp":1500498639142}
{"from":"pfrazee","message":"yeah","timestamp":1500498699130}
{"from":"barnie","message":"jhand: maybe now you're more content: https://github.com/datproject/discussions/issues/58","timestamp":1500499322140}
{"from":"ogd","message":"mafintosh: i dont know why this module exists really but i feel compelled to use it cause you use it https://github.com/calvinmetcalf/process-nextick-args","timestamp":1500500260645}
{"from":"mafintosh","message":"ogd: its just a polyfill for old node versions","timestamp":1500500284641}
{"from":"mafintosh","message":"ogd: i don't think you need it anymore","timestamp":1500500291229}
{"from":"ogd","message":"mafintosh: ah ok","timestamp":1500500293973}
{"from":"mafintosh","message":"nexttick supports passing args now","timestamp":1500500306051}
{"from":"mafintosh","message":"(passing args just saving you a function context in a hot code path)","timestamp":1500500319061}
{"from":"ogd","message":"ahh","timestamp":1500500323466}
{"from":"substack","message":"nice","timestamp":1500500413574}
{"from":"mafintosh","message":"ogd: this is supported in any node verison >0.12 i just looked up","timestamp":1500500578077}
{"from":"ogd","message":"mafintosh: for the http storage provider... should i use multi random access to make one where all read operations are http requests and all write operations use the default dat-node style local fs storage?","timestamp":1500501409575}
{"from":"mafintosh","message":"ogd: just plain readonly-http","timestamp":1500501451062}
{"from":"mafintosh","message":"ogd: and then do the replicate \"trick\"","timestamp":1500501458762}
{"from":"ogd","message":"mafintosh: so im implementing the abstract-random-access api for my http thing and im not sure what to do with .write","timestamp":1500501489674}
{"from":"ogd","message":"and .del","timestamp":1500501495622}
{"from":"mafintosh","message":"ogd: just have them throw","timestamp":1500501506495}
{"from":"mafintosh","message":"you just need to be readable","timestamp":1500501518169}
{"from":"ogd","message":"mafintosh: ah ok","timestamp":1500501520730}
{"from":"ogd","message":"taravancil: based on your cats input it seems its trying to restrict write access on your filesystem","timestamp":1500501970107}
{"from":"ogd","message":"mafintosh: so if i wanna plug in my http readonly provider to dat-node, should i just create a dat-node with the default local fs hyperdrive and then call replicate on dat.archive? will subsequent calls to dat.archive.get magically know to 'resolve' the query to the active replicated remote (hope you get what i mean)","timestamp":1500502405493}
{"from":"mafintosh","message":"ogd: ya","timestamp":1500502460609}
{"from":"ogd","message":"mafintosh: another way of asking, if a hypercore a is replicating from hypercore b, do calls to a.get for content wait to ask b if that content exists?","timestamp":1500502471499}
{"from":"mafintosh","message":"ogd: yup","timestamp":1500502488439}
{"from":"ogd","message":"mafintosh: alright cool otherwise i was kind of confused as to how that would work","timestamp":1500502493244}
{"from":"ogd","message":"mafintosh: replicate() is pretty magic","timestamp":1500502498303}
{"from":"ogd","message":"mafintosh: its kinda like adding a remote source when you replicate, that the local dat uses to resolve things on demand","timestamp":1500502520992}
{"from":"mafintosh","message":"ogd: replicate() just replicates all data. .get(..) just prioritises getting that specific piece first","timestamp":1500502534474}
{"from":"ogd","message":"mafintosh: magic in a good way lol","timestamp":1500502534814}
{"from":"ogd","message":"mafintosh: ah gotcha","timestamp":1500502544778}
{"from":"mafintosh","message":"haha ya","timestamp":1500502546413}
{"from":"mafintosh","message":"godo","timestamp":1500502547482}
{"from":"mafintosh","message":"good :)","timestamp":1500502550552}
{"from":"mafintosh","message":"ogd: any success?","timestamp":1500503063644}
{"from":"ogd","message":"mafintosh: trying now","timestamp":1500503085087}
{"from":"ogd","message":"mafintosh: im doing localReplicate.pipe(httpDrive.replicate()).pipe(localReplicate)","timestamp":1500503088209}
{"from":"ogd","message":"mafintosh: localReplicate is the 'real' hyperdrive from dat-node","timestamp":1500503097651}
{"from":"mafintosh","message":"nice","timestamp":1500503097789}
{"from":"ogd","message":"mafintosh: but its causing a write to httpDrive for some reason:","timestamp":1500503118059}
{"from":"ogd","message":"https://www.irccloud.com/pastebin/dtTjAOY2/","timestamp":1500503120510}
{"from":"mafintosh","message":"ogd: try to just call the callback without an error in write","timestamp":1500503148109}
{"from":"mafintosh","message":"i think this is a known issue. it might always try to persist the public key","timestamp":1500503197668}
{"from":"mafintosh","message":"i'll make an issue for it","timestamp":1500503202281}
{"from":"mafintosh","message":"ya here, https://github.com/mafintosh/hypercore/blob/master/index.js#L219","timestamp":1500503252178}
{"from":"mafintosh","message":"issue, https://github.com/mafintosh/hypercore/issues/108","timestamp":1500503300620}
{"from":"ogd","message":"mafintosh: hmm 'Error: First shared hypercore must be the same'","timestamp":1500503402419}
{"from":"ogd","message":"mafintosh: do i have to set a key somewhere or something","timestamp":1500503422679}
{"from":"mafintosh","message":"ogd: you have to set the key for the local dat","timestamp":1500503449106}
{"from":"mafintosh","message":"ogd: wait for your http one to fire .on('ready') then instantiate the dat-node one with httpDat.key","timestamp":1500503470474}
{"from":"mafintosh","message":"and then do .replicate()","timestamp":1500503477173}
{"from":"ogd","message":"ah ok","timestamp":1500503487686}
{"from":"ogd","message":"mafintosh: hangs here now https://github.com/datproject/dat-http/blob/master/test.js#L26","timestamp":1500503662314}
{"from":"ogd","message":"mafintosh: battery abt to die, might be offline for a little bit","timestamp":1500503674402}
{"from":"mafintosh","message":"ogd: can i clone that and run it?","timestamp":1500503689143}
{"from":"ogd","message":"mafintosh: yea","timestamp":1500503693734}
{"from":"ogd","message":"mafintosh: npm test","timestamp":1500503699053}
{"from":"mafintosh","message":"ogd: sweet, will take a look","timestamp":1500503709023}
{"from":"mafintosh","message":"ogd: got it working now!","timestamp":1500505941743}
{"from":"mafintosh","message":"https://github.com/datproject/dat-http/pull/1","timestamp":1500506114079}
{"from":"jhand","message":"nice","timestamp":1500509053581}
{"from":"ogd","message":"mafintosh: awesome thanks","timestamp":1500509762459}
{"from":"ogd","message":"mafintosh: is setting length in the .open callback mandatory?","timestamp":1500509899623}
{"from":"bret","message":"aschrijver carpet bombed your issues. maybe worth getting him in irc before writing up everything too much","timestamp":1500510169016}
{"from":"ogd","message":"bret: i think ashrijver is barnie","timestamp":1500510218584}
{"from":"bret","message":"oh nice already here! it is i who needs to catch up","timestamp":1500510244576}
{"from":"bret","message":"I do want an iOS implementation of hypercore/drive","timestamp":1500513557553}
{"from":"ralphtheninja","message":"karissa: got the SO account back","timestamp":1500528561313}
{"from":"barnie","message":"FYI I posted a question on porting hypercore to mobile: https://stackoverflow.com/questions/45209191/is-it-possible-to-shim-nodes-fs-readfilesync-in-react-native","timestamp":1500538865354}
{"from":"mafintosh","message":"ogd: no thats a bug i'll fix today","timestamp":1500543511561}
{"from":"mafintosh","message":"ogd, substack hypercore 6.6.6 removes the storage.length requirement","timestamp":1500545226065}
{"from":"karissa","message":"ralphtheninja: nice!","timestamp":1500546315647}
{"from":"karissa","message":"ralphtheninja: are you happy to make a few tags for dat?","timestamp":1500546511742}
{"from":"ralphtheninja","message":"karissa: absolutely, let me know what you need","timestamp":1500547534994}
{"from":"ralphtheninja","message":"karissa: anyone with reputation 1500+ can create tags .. and you do that when asking a question","timestamp":1500547651820}
{"from":"karissa","message":"ralphtheninja: nice. maybe a 'dat' tag","timestamp":1500547682015}
{"from":"ralphtheninja","message":"karissa: so I guess I need to create a few questions","timestamp":1500547686645}
{"from":"karissa","message":"'hyperdrive'","timestamp":1500547698730}
{"from":"ralphtheninja","message":"'dat-project' maybe?","timestamp":1500547738626}
{"from":"karissa","message":"ya","timestamp":1500547745804}
{"from":"ralphtheninja","message":"so what we could do is try to figure out a common question .. use those tags with it .. and then answer it ourselves","timestamp":1500547800728}
{"from":"karissa","message":"ralphtheninja: here are some ^_^ https://docs.datproject.org/faq","timestamp":1500547893268}
{"from":"ralphtheninja","message":"sweet","timestamp":1500547900072}
{"from":"ralphtheninja","message":"karissa: https://stackoverflow.com/questions/45212858/what-is-hyperdrive-and-how-is-that-different-from-dat","timestamp":1500548642640}
{"from":"ralphtheninja","message":"threw in node.js and p2p there as well","timestamp":1500548656909}
{"from":"ralphtheninja","message":"already created tags of course but will signal to people following those tags","timestamp":1500548675982}
{"from":"ralphtheninja","message":"if no one answers we can always provide an answer ourselves .. better to let the community try answering it","timestamp":1500548740865}
{"from":"mafintosh","message":"ralphtheninja: hey!","timestamp":1500549805747}
{"from":"ralphtheninja","message":"mafintosh: yo yo","timestamp":1500549811365}
{"from":"mafintosh","message":"i was just in stockholm on a 3 day vacation","timestamp":1500549814757}
{"from":"mafintosh","message":"getting my yearly köttbullar (meatballs) ration","timestamp":1500549829870}
{"from":"ralphtheninja","message":"haha","timestamp":1500549834564}
{"from":"ralphtheninja","message":"I recently came home from HackOn in Amsterdam","timestamp":1500549860264}
{"from":"ralphtheninja","message":"was a guy there talking about decentralized web and he mentioned beaker browser","timestamp":1500549881063}
{"from":"mafintosh","message":"cool!","timestamp":1500549888305}
{"from":"mafintosh","message":"is that a hacker camp?","timestamp":1500549891952}
{"from":"ralphtheninja","message":"yep","timestamp":1500549904325}
{"from":"ralphtheninja","message":"hosted by ADM https://adm.amsterdam/","timestamp":1500549924066}
{"from":"mafintosh","message":"ogd: hey, you wanna work on an implementers guide for dat with me when i come over?","timestamp":1500549948720}
{"from":"ralphtheninja","message":"it's an area in the amsterdam harbour that they have been squatting for 20 years, very cool place","timestamp":1500549949428}
{"from":"mafintosh","message":"whoa looks nice","timestamp":1500549968260}
{"from":"mafintosh","message":"how long were you there for?","timestamp":1500549972225}
{"from":"ralphtheninja","message":"six days about","timestamp":1500549995745}
{"from":"ralphtheninja","message":"https://hackon.nl/ for more info","timestamp":1500550024722}
{"from":"ralphtheninja","message":"someone gave my question a minus :/ .. anyway there are some tags there now at least","timestamp":1500550269158}
{"from":"mafintosh","message":"ralphtheninja: ah yes. think i saw this mentioned at ccc","timestamp":1500550389851}
{"from":"ralphtheninja","message":"mafintosh: going to https://sha2017.org/ in a couple of weeks","timestamp":1500550450951}
{"from":"mafintosh","message":"whoa nice","timestamp":1500550629929}
{"from":"mafintosh","message":"all about nl camping","timestamp":1500550638591}
{"from":"ralphtheninja","message":"mafintosh: they have a badge too of course :) https://wiki.sha2017.org/w/Projects:Badge","timestamp":1500550762736}
{"from":"ralphtheninja","message":"micro python + white screen","timestamp":1500550778108}
{"from":"mafintosh","message":"whoa","timestamp":1500550796407}
{"from":"mafintosh","message":"someone needs to get node on these","timestamp":1500550804073}
{"from":"ralphtheninja","message":"hehe","timestamp":1500550808863}
{"from":"ralphtheninja","message":"they even made a repository and an online editor where you can write micro python in the browser and publish to devices","timestamp":1500550843073}
{"from":"ralphtheninja","message":"it's using the ESP32 chip, not sure if node can run on it","timestamp":1500550866790}
{"from":"dat-gitter","message":"(benrogmans) @ralphtheninja upvoted & answered ;)","timestamp":1500557373555}
{"from":"dat-gitter","message":"(e-e-e) Hey guys - @sdockray and I just bundled our alpha release of the federated library app we have been developing using dat as its core. Its still super rough but mostly working if you want to have a play. https://github.com/e-e-e/dat-library/releases/tag/v0.1.0","timestamp":1500558736963}
{"from":"dat-gitter","message":"(e-e-e) not sure if it actually works on windows or not","timestamp":1500558754323}
{"from":"dat-gitter","message":"(e-e-e) Incidently also our first electron app. :smile:","timestamp":1500558840924}
{"from":"dat-gitter","message":"(benrogmans) Cool stuff @e-e-e","timestamp":1500559915307}
{"from":"karissa","message":"Benrogmans nice!","timestamp":1500560092343}
{"from":"karissa","message":"E-e-e ooo can't wait to try it","timestamp":1500560105989}
{"from":"ralphtheninja","message":"oh cool someone from gitter answered .. thanks benrogmans!","timestamp":1500561953774}
{"from":"ralphtheninja","message":"karissa: also, there is a small wiki page for each tag, I wrote the following for dat and dat-project https://stackoverflow.com/tags/dat/info","timestamp":1500563312827}
{"from":"ralphtheninja","message":"karissa: let me know if you need another text in there, write a .md file in a gist and then I can copy from there or something like that","timestamp":1500563337015}
{"from":"karissa","message":"ralphtheninja: looks great!","timestamp":1500563472914}
{"from":"pfrazee","message":"ralphtheninja: nice!","timestamp":1500563552492}
{"from":"pfrazee","message":"we should populate it with some Q & As","timestamp":1500563573053}
{"from":"ralphtheninja","message":"pfrazee: +1 and maybe also add links to beaker browser as related project","timestamp":1500565064397}
{"from":"ralphtheninja","message":"-maybe :)","timestamp":1500565070117}
{"from":"pfrazee","message":":) yeah","timestamp":1500565083397}
{"from":"pfrazee","message":"ralphtheninja: is there a way to follow a tag so I can get notified of new questions?","timestamp":1500565654391}
{"from":"pfrazee","message":"I guess there's RSS","timestamp":1500565676267}
{"from":"ralphtheninja","message":"pfrazee: yep, hover over a tag in a question and pick 'subscribe'","timestamp":1500565685363}
{"from":"pfrazee","message":"ralphtheninja: oh nice thanks","timestamp":1500565693713}
{"from":"ralphtheninja","message":"there's rss as well","timestamp":1500565701061}
{"from":"ralphtheninja","message":"hmm it would be nice if questions could show up here","timestamp":1500565720241}
{"from":"ralphtheninja","message":"unless it gets too spammy of course","timestamp":1500565727291}
{"from":"pfrazee","message":"I just found a tool to get daily digests of questions emailed","timestamp":1500565857210}
{"from":"pfrazee","message":"https://stackexchange.com/filters","timestamp":1500565879846}
{"from":"pfrazee","message":"so Im using that","timestamp":1500565883186}
{"from":"ralphtheninja","message":"whatever floats your boat :D","timestamp":1500565928415}
{"from":"pfrazee","message":":)","timestamp":1500566272979}
{"from":"jhand","message":"@benrogmans got the writeFile bug figured out. you can pass in {indexing: false} to Dat as an option to fix for now. related to this todo: https://github.com/datproject/dat-node/blob/master/index.js#L86-L87","timestamp":1500569704593}
{"from":"jhand","message":"mafintosh: does making empty directories not work in hyperdrive? or maybe a dat-storage issue.","timestamp":1500569760372}
{"from":"jhand","message":"what does this do https://github.com/datproject/dat-storage/commit/38f7d2fd80768fca89b1de69c9a5d6c5b1a0635a","timestamp":1500569809431}
{"from":"ralphtheninja","message":"ogd: https://gist.github.com/maxogden/19d0fa61e36261d61b8776fa36a81f80#gistcomment-2153303","timestamp":1500571088642}
{"from":"ralphtheninja","message":"ogd: note, I might be missing something","timestamp":1500571154517}
{"from":"ralphtheninja","message":"ogd: nevermind .. the first 'lil-pids' is the name of the server","timestamp":1500571310957}
{"from":"ralphtheninja","message":"service :)","timestamp":1500571319959}
{"from":"ogd","message":"mafintosh: hey check out this test https://github.com/datproject/dat-http/blob/master/test.js#L12 i changed the setup to write 2 files to the dat, hello.txt and numbers.txt. at some point near the end of that test finishing the process crashes with one of either 'Error: Request Error 404, http://localhost:9988/numbers.txt' or 'Error: File is closed","timestamp":1500572378642}
{"from":"ogd","message":" at RandomAccessFile._read (/Users/max/src/js/dat-node/node_modules/random-access-file/index.js:97:19)'","timestamp":1500572378690}
{"from":"ogd","message":"mafintosh: im thinking its because replication is triggering a request for numbers.txt, even though i called .close on the hyperdrive instances?","timestamp":1500572423641}
{"from":"cblgh","message":"karissa / someone else: when i do hyperdiscoveryInstance.on(\"connection\", function(peer, type) { console.log(peer.key.toString(\"hex\")) }) the key is the same as for the host process","timestamp":1500574016802}
{"from":"cblgh","message":"i thought it would rather be the key of the peer","timestamp":1500574037394}
{"from":"ogd","message":"cblgh: key is the dat key, basically the swarm","timestamp":1500574052085}
{"from":"cblgh","message":"ultimately i'd want to use it to somehow connect them to my hyperdb instance","timestamp":1500574054501}
{"from":"cblgh","message":"ohhhhhh oh","timestamp":1500574057008}
{"from":"cblgh","message":"hmm can a peer include data somehow?","timestamp":1500574080255}
{"from":"ogd","message":"cblgh: we give peers a random 32 byte id in discovery-swarm","timestamp":1500574108128}
{"from":"ogd","message":"cblgh: not sure how/if its surfaced or transmitted over the wire...","timestamp":1500574119194}
{"from":"cblgh","message":"ah that could be an avenue","timestamp":1500574130734}
{"from":"cblgh","message":"basically i want to have some kind of discovery mechanism for my distributed mud","timestamp":1500574142194}
{"from":"cblgh","message":"mafintosh was working on something but told me to figure out another approach until it surfaces iirc","timestamp":1500574171438}
{"from":"cblgh","message":"as it is now i just have the two hypercore keys in my swarm, but that doesn't work if you want other people to be able to join","timestamp":1500574235783}
{"from":"jhand","message":"cblgh: I think the id is sent in the handshake","timestamp":1500574256330}
{"from":"jhand","message":"cblgh: check peer.id and peer.remoteId","timestamp":1500574280716}
{"from":"cblgh","message":"will do!","timestamp":1500574300132}
{"from":"jhand","message":"another option would be to use a named network https://github.com/mafintosh/peer-network and send discovery info over that.","timestamp":1500574417217}
{"from":"mafintosh","message":"ogd: i","timestamp":1500574505208}
{"from":"mafintosh","message":"will take a look after dinner","timestamp":1500574518233}
{"from":"ogd","message":"mafintosh: cool. i noticed its opening content.tree like 10x a second during replication lol","timestamp":1500574531900}
{"from":"cblgh","message":"jhand: oh wow this is very appealing and precisely why i thought i'd surface the question in here","timestamp":1500574538403}
{"from":"cblgh","message":"damn dhts are good","timestamp":1500574557740}
{"from":"mafintosh","message":"ogd: all partial reads","timestamp":1500574768411}
{"from":"mafintosh","message":"to calculate the length","timestamp":1500574789636}
{"from":"ralphtheninja","message":"cblgh: distributed mud sounds awesome :)","timestamp":1500575276169}
{"from":"cblgh","message":"ralphtheninja: ya it's just a simple exercise to make something with all the cool dat primitives","timestamp":1500575609385}
{"from":"ogd","message":"mafintosh: i pushed a test you can run to reproduce","timestamp":1500575782205}
{"from":"cblgh","message":"hmm i want to catch the error from when i try to connect to a peer-network instance when there isn't one (and then create one in turn)","timestamp":1500575884237}
{"from":"cblgh","message":"but i can't for some reason","timestamp":1500575894156}
{"from":"cblgh","message":"https://gist.github.com/cblgh/fb521cdb63f0253538c07add860a6fa9","timestamp":1500575895431}
{"from":"ogd","message":"cblgh: try stream.on('error","timestamp":1500575960988}
{"from":"cblgh","message":"ohh","timestamp":1500575971614}
{"from":"cblgh","message":"ogd: hey that worked!","timestamp":1500576054298}
{"from":"cblgh","message":"is .emit a node thing somehow?","timestamp":1500576061507}
{"from":"cblgh","message":"searched around a bit, both in duplexify's code and on the web but couldn't find anything called emit","timestamp":1500576076838}
{"from":"cblgh","message":"not sure where it comes from","timestamp":1500576089533}
{"from":"dat-gitter","message":"(ralphtheninja) cblgh yes, it's an emitter","timestamp":1500576089817}
{"from":"ogd","message":"emit is part of require('events').EventEmitter, and all streams inherit from that class","timestamp":1500576094051}
{"from":"dat-gitter","message":"(ralphtheninja) event emitter*","timestamp":1500576094902}
{"from":"cblgh","message":"ah thanks","timestamp":1500576108793}
{"from":"cblgh","message":"i'm going to read up on streams","timestamp":1500576115176}
{"from":"ogd","message":"cblgh: rule of thumb for streams, is any time you create a stream you should bind an on('error') handler","timestamp":1500576122570}
{"from":"dat-gitter","message":"(ralphtheninja) cblgh swedish?","timestamp":1500576326341}
{"from":"jhand","message":"ogd: do you think we should make a dat unwritable when you export? we could either delete the secret key or at least remove the owner file","timestamp":1500578129364}
{"from":"ogd","message":"jhand: hmmm for now maybe not automatically but there should be a command to flip the owner bit i guess. or just change it so if the file doesnt exist it means you arent owner","timestamp":1500578273351}
{"from":"pfrazee","message":"jhand: is it possible somebody would do an export just to create a backup?","timestamp":1500578362789}
{"from":"ogd","message":"yea exactly","timestamp":1500578374045}
{"from":"jhand","message":"pfrazee: right ya.","timestamp":1500578376964}
{"from":"jhand","message":"just trying to avoid the multiple owners issue, even on the same machine","timestamp":1500578388067}
{"from":"pfrazee","message":"jhand: yeah","timestamp":1500578399000}
{"from":"jhand","message":"we could have `dat keys export` vs `dat keys backup` or something","timestamp":1500578406944}
{"from":"ogd","message":"i think we should go lower level with this api in general","timestamp":1500578443291}
{"from":"ogd","message":"just have a way to print the key so you can copy paste it into the other dat on some other location to make that dat the owner","timestamp":1500578457802}
{"from":"jhand","message":"ogd: cool, i'll head to ❤️ and we can look at what I have so far","timestamp":1500578463061}
{"from":"pfrazee","message":"also for now I think we can get away without saving the user from this particular mistake. Some people will cut off fingers","timestamp":1500578494660}
{"from":"jhand","message":"pfrazee: i already did that once to myself lol","timestamp":1500578521297}
{"from":"pfrazee","message":"jhand: hah damn","timestamp":1500578534830}
{"from":"pfrazee","message":"9 fingers left!","timestamp":1500578542744}
{"from":"dat-gitter","message":"(ralphtheninja) positive thinking lol","timestamp":1500578565871}
{"from":"cblgh","message":"ralphtheninja: yup! live in malmö","timestamp":1500579493989}
{"from":"cblgh","message":"ogd: ah thanks, that's useful","timestamp":1500579539820}
{"from":"cblgh","message":"shit i fucked my screen session again","timestamp":1500579565013}
{"from":"cblgh","message":"brb","timestamp":1500579566901}
{"from":"mafintosh","message":"cblgh: you're in malmö? i'm just across the strait in cph","timestamp":1500579717765}
{"from":"cblgh","message":"mafintosh: !!","timestamp":1500579892465}
{"from":"cblgh","message":"mafintosh: i was in cph last month for an ipfs meetup","timestamp":1500579901830}
{"from":"mafintosh","message":"cblgh: oh my friend teo runs those haha","timestamp":1500579925365}
{"from":"mafintosh","message":"we hack all the time","timestamp":1500579931051}
{"from":"cblgh","message":"ahh yeah he mentioned someone involved with dat","timestamp":1500579959855}
{"from":"cblgh","message":"that's raad","timestamp":1500579964516}
{"from":"ogd","message":"hehe i guess you could call mafintosh 'involved with dat'","timestamp":1500580033898}
{"from":"ogd","message":"hes contributed a module or two","timestamp":1500580040874}
{"from":"ogd","message":":P","timestamp":1500580043621}
{"from":"blahah","message":"lol","timestamp":1500580053039}
{"from":"ogd","message":"blahah hey did you take ppl on a safari and didnt invite me?","timestamp":1500580072138}
{"from":"blahah","message":"haha not yet, just my family","timestamp":1500580089458}
{"from":"ogd","message":"blahah: ah i saw some posts from collaborative knowledge where everyone was on a safari and i figured it must have been your doing","timestamp":1500580116818}
{"from":"blahah","message":"we're actually right now camping between a pride of lions and a very large group of hyenas","timestamp":1500580119527}
{"from":"blahah","message":"ohhh yeah coko visited a while back","timestamp":1500580132986}
{"from":"blahah","message":"for team meetup","timestamp":1500580139789}
{"from":"blahah","message":"still planning safarijs","timestamp":1500580147772}
{"from":"cblgh","message":"blahah: woah where are you?","timestamp":1500580182657}
{"from":"blahah","message":"masai mara","timestamp":1500580189560}
{"from":"blahah","message":"kenya :)","timestamp":1500580203251}
{"from":"cblgh","message":"i bet you have the best vistas to code in","timestamp":1500580282221}
{"from":"blahah","message":"well... tomorrow we'll be working from the bank of the mara river while watching the migration","timestamp":1500580312671}
{"from":"blahah","message":"that should be cool","timestamp":1500580325934}
{"from":"ogd","message":"blahah: migration of birds?","timestamp":1500580386625}
{"from":"cblgh","message":"bring umbrellas","timestamp":1500580470430}
{"from":"blahah","message":"no, wildebeest, zebra, gazelles and some other antelope","timestamp":1500580497601}
{"from":"blahah","message":"https://en.wikipedia.org/wiki/Serengeti#Great_migration","timestamp":1500580498194}
{"from":"blahah","message":"~2 million wildebeest cross the mara river in a few weeks","timestamp":1500580515176}
{"from":"cblgh","message":"wow","timestamp":1500580541372}
{"from":"ogd","message":"blahah: ok you win","timestamp":1500580716046}
{"from":"ogd","message":"blahah: coolest remote working situation","timestamp":1500580720810}
{"from":"blahah","message":":)","timestamp":1500580736044}
{"from":"ogd","message":"blahah: im building a nocturnal bird migration detection microphone right now for population counting","timestamp":1500580744127}
{"from":"ogd","message":"blahah: if you wanna deploy one in a flight path in kenya that would be cool :D","timestamp":1500580756742}
{"from":"blahah","message":"oooh nice we were talking about doing that here","timestamp":1500580764986}
{"from":"blahah","message":"we wanna mount directional microphones on our vehicles and have them auto-identify species by sound","timestamp":1500580788227}
{"from":"blahah","message":"cool for birds, frogs, insects etc.","timestamp":1500580807000}
{"from":"blahah","message":"definitely interested in migration detection","timestamp":1500580829355}
{"from":"ogd","message":"blahah: oh yea you can use the same thing im building for that","timestamp":1500581166767}
{"from":"blahah","message":"ogd: sweet","timestamp":1500581176672}
{"from":"ogd","message":"karissa: pretty cool jayrbolton migrated dat-pki crypto to be all libsodium based","timestamp":1500582498332}
{"from":"pfrazee","message":"ogd: oh good, that was the biggest negative","timestamp":1500582560793}
{"from":"karissa","message":"ogd: yeah totally. We did it together in less than a day, not too hard actually","timestamp":1500582614508}
{"from":"ogd","message":"karissa: thats awesome","timestamp":1500582690018}
{"from":"ogd","message":"karissa: was looking at how the google drive client does encryption https://github.com/google/skicka#encryption","timestamp":1500582722000}
{"from":"ogd","message":"karissa: and was thinking would be cool to have similar docs for dat cli at some point","timestamp":1500582733526}
{"from":"jhand","message":" mafintosh is this fine to use for writing the secret key? https://github.com/datproject/dat/pull/828/files#diff-c37e859e0678fccdd97a931e965b0047R101","timestamp":1500584631546}
{"from":"taravancil","message":"jondashkyle did you see this? https://twitter.com/textfiles/status/888093838107189249","timestamp":1500584959094}
{"from":"jondashkyle","message":"hah, quite lame.","timestamp":1500584992410}
{"from":"jondashkyle","message":"just shows the degree to which these types of products exist separate from actual internet culture","timestamp":1500585018591}
{"from":"jondashkyle","message":"(to request the internet archive to stop archiving the internet)","timestamp":1500585032065}
{"from":"taravancil","message":"yeah, so much for the open web being readable","timestamp":1500585078753}
{"from":"larpanet","message":"@EMovhfIrFk4NihAKnRNhrfRaqIhBv1Wj8pTxJNgvCCY=.ed25519:This channel is for dats, if you post a dat, put it in this channel or tag it with [#dat](#dat)! ... http://wx.larpa.net:8807/%25f3lVIjn1MtVI7XSadKnmJLTxF3TvDilm7RBgVuFRJJM%3D.sha256","timestamp":1500585257555}
{"from":"larpanet","message":"@EMovhfIrFk4NihAKnRNhrfRaqIhBv1Wj8pTxJNgvCCY=.ed25519:This channel is for dats, if you post a dat, put it in this channel or tag it with #dat! (because of... http://wx.larpa.net:8807/%25f3lVIjn1MtVI7XSadKnmJLTxF3TvDilm7RBgVuFRJJM%3D.sha256","timestamp":1500585257839}
{"from":"domanic","message":"sorry, that posted twice because it was both in the dat channel and tagged with #dat","timestamp":1500585347031}
{"from":"ogd","message":"heh","timestamp":1500585441370}
{"from":"ogd","message":"larpanet, excellent","timestamp":1500585460779}
{"from":"ogd","message":"jondashkyle: can you even see the license of a track through soundcloud.com?","timestamp":1500585565720}
{"from":"jondashkyle","message":"it’s an input field when adding a track, can grab through the api","timestamp":1500585594739}
{"from":"ralphtheninja","message":"cblgh: cool, you can always hit me up if you have node related questions","timestamp":1500585934361}
{"from":"ralphtheninja","message":"support in swedish :D","timestamp":1500585945127}
{"from":"mafintosh","message":"jhand: ya but its a bit iffy :)","timestamp":1500585982491}
{"from":"mafintosh","message":"ralphtheninja, cblgh we need to do a hacking thing soon","timestamp":1500586041320}
{"from":"ralphtheninja","message":"yep, whenever is convenient","timestamp":1500586126984}
{"from":"ralphtheninja","message":"mafintosh: any plans on ccc this year?","timestamp":1500586160479}
{"from":"mafintosh","message":"ya def gonna go if i get a ticket again","timestamp":1500586176495}
{"from":"mafintosh","message":"my parents live 1h away so super easy for me","timestamp":1500586185652}
{"from":"mafintosh","message":"assuming its still in hamburg","timestamp":1500586191166}
{"from":"ralphtheninja","message":"mafintosh: it's in leipzig this year","timestamp":1500586280807}
{"from":"mafintosh","message":"whoa","timestamp":1500586287224}
{"from":"ralphtheninja","message":"they are renovating in hamburg","timestamp":1500586290139}
{"from":"jhand","message":"mafintosh: doing something in my tests to make this error always happen now in node v4 + v6 https://travis-ci.org/datproject/dat/jobs/255835965#L939","timestamp":1500586777301}
{"from":"mafintosh","message":"jhand: ohhhh interesting","timestamp":1500586888708}
{"from":"jhand","message":"mafintosh: this may have fixed it: https://github.com/datproject/dat/pull/828/commits/61b55f0434d7f65dd90802337880e2b7bd1823ac","timestamp":1500587151355}
{"from":"jhand","message":"mafintosh: was closing and then trying to read it again right away without waiting for the cb","timestamp":1500587179372}
{"from":"jhand","message":"except the test before that is passing now too, so may have just been coincidence that it failed a bunch in a row...","timestamp":1500587224799}
{"from":"ralphtheninja","message":"mafintosh: are there any implementations of the dat protocol in other languages?","timestamp":1500587806171}
{"from":"ralphtheninja","message":"mafintosh: python would def be interesting to have","timestamp":1500587817051}
{"from":"ralphtheninja","message":"if we can make it run with micropython there's a huge amount of devices we could talk to","timestamp":1500587843992}
{"from":"pfrazee","message":"if I had the time I'd write a C implementation","timestamp":1500587844157}
{"from":"ralphtheninja","message":"c would be even better","timestamp":1500587851487}
{"from":"ralphtheninja","message":"I'd be up for helping out, but know too little about the protocol and the lingo just yet","timestamp":1500587905649}
{"from":"ralphtheninja","message":"then we could just build a native addon for node :P","timestamp":1500587932719}
{"from":"mafintosh","message":"i'd love a c impl","timestamp":1500587979859}
{"from":"ralphtheninja","message":"could potentially also mean improvements in speed","timestamp":1500587981633}
{"from":"ralphtheninja","message":"libdat :D","timestamp":1500588034154}
{"from":"pfrazee","message":"lovethatlibdat","timestamp":1500588058518}
{"from":"ralphtheninja","message":"liblovethatlibdat :P","timestamp":1500588076395}
{"from":"ralphtheninja","message":"newb question: what's ScienceFair?","timestamp":1500588377922}
{"from":"jhand","message":"ralphtheninja: its a sweet app for reading papers! http://sciencefair-app.com/","timestamp":1500588645034}
{"from":"ralphtheninja","message":"jhand: really nice!","timestamp":1500589066048}
{"from":"ralphtheninja","message":"blahah: I ran into some problems when installing the dependencies","timestamp":1500589961842}
{"from":"ralphtheninja","message":"I'm on linux","timestamp":1500589966692}
{"from":"ralphtheninja","message":"on sciencefair that is :)","timestamp":1500589977795}
{"from":"ralphtheninja","message":"I changed the postinstall script a bit and then a single 'npm i' is all that is required","timestamp":1500590026890}
{"from":"blahah","message":"ralphtheninja: oh? #sciencefair for chat and of course PRs welcome (although it works on the Linux build server)","timestamp":1500590448121}
{"from":"dat-gitter","message":"(sdockray) mafintosh: (OR anyone who might know this): if i inspect the metadata entries of a dat i make on my linux or osx, the name is \"/some/path/to/a/file\" -- if i made the same dat on a windows machine, would the name use windows path separator (i.e. \\\\some\\\\path\\\\to\\\\a\\\\file)?","timestamp":1500602871280}
{"from":"dat-gitter","message":"(sdockray) the reason I'm wondering is because we are parsing that path for useful information from directory structure... BUT windows is having problems here and I'm thinking its because we are splitting by path separator","timestamp":1500602934629}
{"from":"dat-gitter","message":"(sdockray) splitting by `path.sep`","timestamp":1500602958864}
{"from":"dat-gitter","message":"(sdockray) i just noticed this comment: https://github.com/mafintosh/hyperdrive/issues/112#issuecomment-256614168","timestamp":1500606057914}
{"from":"dat-gitter","message":"(sdockray) meaning maybe i can assume the paths in the entry name will be unix style?","timestamp":1500606083898}
{"from":"dat-gitter","message":"(sdockray) errr... but this just uses `name: path.join()` which suggests that the system-specific path separator is written into the hypercore entries.. https://github.com/mafintosh/mirror-folder/blob/master/index.js","timestamp":1500606400483}
{"from":"pfrazee","message":"@sdockray you might need to file an issue","timestamp":1500607081002}
{"from":"dat-gitter","message":"(sdockray) i will - because i don't have access to a windows machine i guess first i am just fishing for what the windows metadata entries look like... i.e. does the name look the same as it would if the dat were created on linux?","timestamp":1500607175890}
{"from":"pfrazee","message":"yeah I dont know","timestamp":1500607625703}
{"from":"dat-gitter","message":"(sdockray) will hunt for node modules that allow guessing path separator from a string representation of a file path rather than just inheriting it from OS","timestamp":1500607732055}
{"from":"dat-gitter","message":"(sdockray) see if i can fix the bug blind :) if it works (i.e. i did identify the bug properly) ill file the issue","timestamp":1500607770492}
{"from":"barnie","message":"hi everyone! i've investigated some message layer implementation ideas wrt hypercore(-protocol): https://github.com/datproject/discussions/issues/64","timestamp":1500615482026}
{"from":"barnie","message":"@mafintosh and other dat experts: i need some feedback before continuing","timestamp":1500615648983}
{"from":"mafintosh","message":"@sdockray its always /","timestamp":1500616791898}
{"from":"mafintosh","message":"ie unix style","timestamp":1500616805714}
{"from":"mafintosh","message":"windows paths are normalised tho","timestamp":1500616819531}
{"from":"barnie","message":"mafintosh: hi!","timestamp":1500617130219}
{"from":"mafintosh","message":"barnie: morning","timestamp":1500617168832}
{"from":"barnie","message":"good morning. I've some design input on messaging layer impl","timestamp":1500617199148}
{"from":"barnie","message":"need feedback","timestamp":1500617212288}
{"from":"barnie","message":"you have time somewhere today, or tomorrow?","timestamp":1500617241383}
{"from":"barnie","message":"2 pages","timestamp":1500617246986}
{"from":"dat-gitter","message":"(sdockray) mafintosh: thanks. that makes sense (and would also explain our bug, as splitting a forward-slashed path with path.sep on windows won't work)","timestamp":1500617419288}
{"from":"mafintosh","message":"@sdockray ah yea. just always use / :)","timestamp":1500617461981}
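Since hyperdrive entry names are always unix-style (normalised to `/` even for dats created on Windows, per mafintosh), the fix @sdockray describes is to split on the literal separator instead of `path.sep`. A minimal sketch; `entrySegments` is a hypothetical helper name, not part of any dat module.

```javascript
// hyperdrive metadata entry names always use forward slashes,
// so split on '/' literally rather than the OS-specific path.sep
// (path.sep is '\\' on Windows and would fail to split these names).
function entrySegments (entryName) {
  return entryName.split('/').filter(Boolean)
}

console.log(entrySegments('/some/path/to/a/file'))
// → [ 'some', 'path', 'to', 'a', 'file' ]
```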
{"from":"mafintosh","message":"barnie: cool, might have time later","timestamp":1500617483033}
{"from":"dat-gitter","message":"(e-e-e) thanks mafintosh:","timestamp":1500617493176}
{"from":"barnie","message":"great! thx.","timestamp":1500617494110}
{"from":"mafintosh","message":"ralphtheninja: if you wanna start hacking on a c lib i'd be more than happy to help","timestamp":1500617686042}
{"from":"mafintosh","message":"ralphtheninja: simply getting hypercore to work would be amazing","timestamp":1500617711428}
{"from":"mafintosh","message":"libhyper","timestamp":1500618645386}
{"from":"barnie","message":"does libhypercore already exist? i'd take that","timestamp":1500618694308}
{"from":"dat-gitter","message":"(sdockray) if anyone runs windows? and could test one thing in this app, we'd appreciate it! https://github.com/e-e-e/dat-library/releases/download/v0.1.1/dat-library-setup-0.1.1.exe","timestamp":1500619100405}
{"from":"mafintosh","message":"@sdockray i can test it later on my nuc","timestamp":1500620376977}
{"from":"cblgh","message":"ralphtheninja: oh thanks for the offer, will do! :3","timestamp":1500624417146}
{"from":"cblgh","message":"mafintosh: ralphtheninja i'm definitely game for a hack thing!","timestamp":1500624488003}
{"from":"ralphtheninja","message":"mafintosh: it def sounds like a fun thing to do :)","timestamp":1500645226646}
{"from":"ralphtheninja","message":"something else that would be cool is a p2p dating app based on dat :)","timestamp":1500645264307}
{"from":"ralphtheninja","message":"there are many centralized services taking huge profit on something as \"simple\" as dating","timestamp":1500645307598}
{"from":"ralphtheninja","message":"and also many singles in sweden :)","timestamp":1500645329515}
{"from":"bret","message":"Hey, bae, u into dat too?","timestamp":1500650762117}
{"from":"bret","message":"ralphtheninja: you thinking of doing a c implementation of hypercore?","timestamp":1500650790042}
{"from":"bret","message":"I thought about doing a go version since it sounded easier and could still be linked like a c lib","timestamp":1500650841641}
{"from":"ralphtheninja","message":"bret: we started talking about alternative implementations of the dat protocol yesterday","timestamp":1500650847015}
{"from":"ralphtheninja","message":"and I mentioned that python would be nice to have","timestamp":1500650867749}
{"from":"bret","message":"I really think a mobile lib would open up a lot of opportunities","timestamp":1500650874009}
{"from":"ralphtheninja","message":"then pfrazee mentioned writing a c lib instead, which obv is a lot better since you can easily use c functions in other languages","timestamp":1500650894858}
{"from":"pfrazee","message":"if go links just as well tho","timestamp":1500650922412}
{"from":"bret","message":"I'm not sure it does. It claims to","timestamp":1500650937743}
{"from":"pfrazee","message":"doesnt go ship with a GC too","timestamp":1500650950949}
{"from":"pfrazee","message":"?","timestamp":1500650951599}
{"from":"ralphtheninja","message":"bret: I haven't committed to it, but I would be happy to help out","timestamp":1500651015466}
{"from":"bret","message":"I've never done something like that before. Would be a big task for me","timestamp":1500651081345}
{"from":"ralphtheninja","message":"that's what I feel too atm, but it's changing gradually each day :)","timestamp":1500651147221}
{"from":"pfrazee","message":"if either of you can invent a cloning machine, I'll dedicate my copy to it","timestamp":1500651172453}
{"from":"pfrazee","message":"plus I'll make my copy get better at C++ so it can work on electron","timestamp":1500651186322}
{"from":"ralphtheninja","message":"lol","timestamp":1500651206361}
{"from":"mafintosh","message":"I'd prefer c over go for sure","timestamp":1500652322838}
{"from":"mafintosh","message":"no gc needed","timestamp":1500652329083}
{"from":"barnie","message":"mafintosh: created test on auto-generating protobuf docs for hypercore-protocol with Travis","timestamp":1500652960942}
{"from":"barnie","message":"works from inline comments, https://github.com/datproject/docs/blob/master/papers/dat-paper.md#4-dat-network-protocol need not be outdated","timestamp":1500653015975}
{"from":"barnie","message":"currently empty, but here: https://aschrijver.github.io/hypercore-protocol/","timestamp":1500653053132}
{"from":"pfrazee","message":"barnie: oh nice","timestamp":1500655824763}
{"from":"barnie","message":"with some more fiddling it could be auto-added to docs.datproject.org maybe","timestamp":1500655900663}
{"from":"barnie","message":"https://github.com/datproject/discussions/issues/66","timestamp":1500655959063}
{"from":"dat-gitter","message":"(whilo) hey","timestamp":1500657113625}
{"from":"dat-gitter","message":"(whilo) @barnie what problems do you try to solve with your messaging layer?","timestamp":1500657226462}
{"from":"dat-gitter","message":"(whilo) replikativ will get a new one too for the next major release. we already have a design, but it would be nice to share some efforts","timestamp":1500657257770}
{"from":"dat-gitter","message":"(whilo) messaging is a fairly low-level concept and just an interface though. one needs to be aware on what kind of problem to solve by it imo","timestamp":1500657292743}
{"from":"barnie","message":"hi whilo. did you read the proposal?","timestamp":1500657297135}
{"from":"barnie","message":"https://github.com/datproject/discussions/issues/62","timestamp":1500657343227}
{"from":"barnie","message":"and the next 2 documents evaluate design options","timestamp":1500657375490}
{"from":"barnie","message":"sorry whilo, have gotta go for a while","timestamp":1500657451978}
{"from":"pfrazee","message":"https://twitter.com/BeakerBrowser/status/888458769654566912 we wrote up a response to tom's post","timestamp":1500660129824}
{"from":"yoshuawuyts","message":"pfrazee: good post, enjoyed reading it ✨","timestamp":1500663660803}
{"from":"pfrazee","message":"yoshuawuyts: thanks 🌈","timestamp":1500663678755}
{"from":"barnie","message":"good post, agree!","timestamp":1500663685260}
{"from":"yoshuawuyts","message":"pfrazee: good post, enjoyed reading it ✨","timestamp":1500663699275}
{"from":"pfrazee","message":"lol","timestamp":1500663716733}
{"from":"pfrazee","message":"barnie: thanks!","timestamp":1500663721647}
{"from":"creationix","message":"pfrazee: love the two posts. I've got some ideas to solve the open questions in your post.","timestamp":1500664897091}
{"from":"pfrazee","message":"creationix: great","timestamp":1500665234925}
{"from":"cblgh","message":"thirding the sentiments, good read pfrazee!","timestamp":1500668554049}
{"from":"cblgh","message":"also impressed with the speed of response","timestamp":1500668576462}
{"from":"cblgh","message":"that fire","timestamp":1500668577778}
{"from":"pfrazee","message":":D","timestamp":1500668582359}
{"from":"pfrazee","message":"cblgh: thanks!","timestamp":1500668587431}
{"from":"jondashkyle","message":"pfrazee taravancil v solid","timestamp":1500668601409}
{"from":"taravancil","message":"thanks!","timestamp":1500668614301}
{"from":"jondashkyle","message":"hitting some of these same questions in conversations i'm having with others about dat","timestamp":1500668655477}
{"from":"jondashkyle","message":"sort of difficult to articulate off the cuff so articles like these are super useful in follow up","timestamp":1500668670352}
{"from":"cblgh","message":"oh yes taravancil too!","timestamp":1500668701656}
{"from":"cblgh","message":"re ipfs vs dat, i def feel like dat is easier to develop for atm","timestamp":1500668727413}
{"from":"cblgh","message":"depending on your application ofc, but dat feels more hackfriendly for some reason","timestamp":1500668749499}
{"from":"cblgh","message":"while ipfs has kind of a top-down waterfall kinda feel? idk!","timestamp":1500668796580}
{"from":"cblgh","message":"both are cool af though, obv","timestamp":1500668803733}
{"from":"pfrazee","message":"cblgh: yeah IPFS is very opinionated","timestamp":1500668889061}
{"from":"millette","message":"https://github.com/ipfs/ipfs#protocol-implementations is a distinguishing feature - although maturity plays a big role","timestamp":1500668904123}
{"from":"pfrazee","message":"millette: yeah agree","timestamp":1500668920047}
{"from":"millette","message":"dat is still very entrenched in js, like scuttlebutt, but you gotta start somewhere :-)","timestamp":1500669019589}
{"from":"cblgh","message":"definitely, but starting in js you get a lot of stuff for free","timestamp":1500669075842}
{"from":"cblgh","message":"any list that shows what's missing from ipfs's js impl?","timestamp":1500669198461}
{"from":"cblgh","message":"looking through the js-ipfs repo atm but not seeing anything","timestamp":1500669212535}
{"from":"cblgh","message":"i guess this? https://github.com/ipfs/js-ipfs#api","timestamp":1500669251597}
{"from":"cblgh","message":"so missing dht & pin seems like","timestamp":1500669266126}
{"from":"creationix","message":"millette: until recently, the DAT whitepaper wasn't detailed enough to implement the protocol. It's much better now and I intend to make a Lua, C, or Rust version when I get time (not sure when that till be though)","timestamp":1500669431617}
{"from":"millette","message":"yay :-)","timestamp":1500669458464}
{"from":"millette","message":"cblgh, I don't follow ipfs much","timestamp":1500669482009}
{"from":"creationix","message":"I really liked IPFSs internal data structure (the merkle tree) when I first saw it. But the way they added abstractions on top seems kinda complicated","timestamp":1500669541593}
{"from":"millette","message":"I gave scuttlebutt a try a little while ago, saw people trying to implement it in go. It's always good to see multiple implementations.","timestamp":1500669558523}
{"from":"creationix","message":"the way dat layers on top of hypercore looks a lot more elegant to me","timestamp":1500669577958}
{"from":"creationix","message":"yeah, I'm not a fan of the bloat that comes along with the npm ecosystem","timestamp":1500669600795}
{"from":"millette","message":"yeah, I found dat easier to understand too (hypercore/drive, etc.)","timestamp":1500669603305}
{"from":"creationix","message":"though it seems mafintosh","timestamp":1500669622518}
{"from":"creationix","message":"...took my complaints seriously and trimmed up a lot of the dependencies","timestamp":1500669638090}
{"from":"jondashkyle","message":"yeah also LOL at the filecoin ICO","timestamp":1500669648835}
{"from":"jondashkyle","message":"have to be making >$200k/year or have networth of >$1m","timestamp":1500669661449}
{"from":"cblgh","message":"my path for all of this is ipfs <-> ssb <-> dat","timestamp":1500669664118}
{"from":"cblgh","message":"jondashkyle: yeah that's probably gonna be a pattern going forward","timestamp":1500669676983}
{"from":"cblgh","message":"trying to avoid future lawsuits and stuff","timestamp":1500669684848}
{"from":"jondashkyle","message":"for sure…","timestamp":1500669687486}
{"from":"millette","message":"what, no freenet?","timestamp":1500669690375}
{"from":"jondashkyle","message":"hahaha","timestamp":1500669695764}
{"from":"cblgh","message":"*googles freenet*","timestamp":1500669699399}
{"from":"millette","message":"https://en.wikipedia.org/wiki/Freenet","timestamp":1500669722020}
{"from":"millette","message":"not the free isp kind (those were the days)","timestamp":1500669737459}
{"from":"cblgh","message":" 249 Followers 249 Likes","timestamp":1500669815998}
{"from":"cblgh","message":":<","timestamp":1500669816698}
{"from":"cblgh","message":"https://freenetproject.org/","timestamp":1500669821617}
{"from":"millette","message":"Had my first email address thanks to http://victoria.tc.ca/ way back when...","timestamp":1500669889760}
{"from":"cblgh","message":"this site feels very wholesome","timestamp":1500669964014}
{"from":"mafintosh","message":"Am now also running Dat on my phone","timestamp":1500673989195}
{"from":"pfrazee","message":"nice","timestamp":1500674034289}
{"from":"creationix","message":"mafintosh: how'd you pull that off","timestamp":1500674060090}
{"from":"mafintosh","message":"Termux on android","timestamp":1500674114739}
{"from":"mafintosh","message":"Just bought a Pixel after I found out this was possible","timestamp":1500674139811}
{"from":"ogd","message":"mafintosh: dang should i get a pixel","timestamp":1500674166397}
{"from":"ogd","message":"mafintosh: would be cool to have a dat proxy that lets you save pages offline","timestamp":1500674195998}
{"from":"mafintosh","message":"ogd: ya","timestamp":1500674325867}
{"from":"mafintosh","message":"ogd: usbc, same size / design as 6s, lighter, runs node and Dat","timestamp":1500674387668}
{"from":"ogd","message":"ooo","timestamp":1500674398642}
{"from":"mafintosh","message":"I got the 128gb cause of Dat haha","timestamp":1500674427834}
{"from":"ogd","message":"lol","timestamp":1500674457472}
{"from":"noffle","message":"termux is great. I use it with airpaste to send photos around instead of fiddling with android camera protocols","timestamp":1500674513981}
{"from":"mafintosh","message":"Whoa hadnt even thought of that","timestamp":1500674570983}
{"from":"ogd","message":"mafintosh: you and joe should do the first phone-to-phone dat transfer","timestamp":1500674596574}
{"from":"mafintosh","message":"Ya","timestamp":1500674656346}
{"from":"noffle","message":"having most of npm on my phone is super cool","timestamp":1500674671288}
{"from":"mafintosh","message":"Just ran p2p-test on my phone","timestamp":1500674737400}
{"from":"ralphtheninja","message":"noffle: what kind of phone do you have?","timestamp":1500674768155}
{"from":"ralphtheninja","message":"oh android nvm","timestamp":1500674788468}
{"from":"ralphtheninja","message":"on f-droid too! :)","timestamp":1500674810686}
{"from":"noffle","message":"ralphtheninja: moto x","timestamp":1500674911338}
{"from":"ogd","message":"can you set the font size on termux to be bigger/smaller","timestamp":1500675012639}
{"from":"mafintosh","message":"noffle: how do i ctrl-c?","timestamp":1500675025902}
{"from":"jhand","message":"mafintosh: dat://df7955a905ca4d4567571c274e0fc731584861a0e87c449a6e80955c606f20bb","timestamp":1500675118031}
{"from":"jhand","message":"mafintosh: volume down is ctrl","timestamp":1500675130143}
{"from":"jhand","message":"https://termux.com/touch-keyboard.html","timestamp":1500675164254}
{"from":"jhand","message":"Sharing that dat from traffic coming back from gorge","timestamp":1500675185761}
{"from":"mafintosh","message":"jhand: you sharing that now? Was busy peerflixing haha","timestamp":1500675514016}
{"from":"jhand","message":"mafintosh: ya think so","timestamp":1500675542383}
{"from":"ogd","message":"trying to one up jhand with coolest dat location https://usercontent.irccloud-cdn.com/file/5zuqkIKD/IMG_1179.JPG","timestamp":1500675558562}
{"from":"jhand","message":"Way cooler than mine lol. I was in troutdale","timestamp":1500675594221}
{"from":"ogd","message":"jhand: cant connect from my att tethered laptop","timestamp":1500675629540}
{"from":"jhand","message":"Ya no idea if it works over lte","timestamp":1500675675661}
{"from":"ogd","message":"jhand: i added it to the bot and it got 11% on my laptop","timestamp":1500675809361}
{"from":"mafintosh","message":"Ya same. Run p2p-test at some point","timestamp":1500675819164}
{"from":"jhand","message":"Just got home so may have connected to WiFi and stopped","timestamp":1500675848064}
{"from":"ogd","message":"holePunchable: false","timestamp":1500675855749}
{"from":"ralphtheninja","message":"wtf, I'm doing 'npm version patch' and npm errors because package-lock.json is in .gitignore","timestamp":1500676150821}
{"from":"jhand","message":"My lte is hole punch able","timestamp":1500676151354}
{"from":"ralphtheninja","message":"so apparently now npm version command wants to do 'git add package-lock.json' .. annoying","timestamp":1500676210225}
{"from":"ogd","message":"jhand: whoa cool","timestamp":1500676329089}
{"from":"ogd","message":"ralphtheninja: yea ouch","timestamp":1500676340596}
{"from":"millette","message":"ralphtheninja, https://codeburst.io/disabling-package-lock-json-6be662f5b97d might help (haven't tried)","timestamp":1500676728548}
{"from":"ogd","message":"bought a pixel muahahahah","timestamp":1500677151138}
{"from":"ogd","message":"gotta be in the cool kids club","timestamp":1500677157388}
{"from":"ralphtheninja","message":"millette: thanks a million!","timestamp":1500677990418}
{"from":"ralphtheninja","message":"echo 'package-lock=false' >> .npmrc","timestamp":1500678003683}
{"from":"millette","message":"and it solves the npm version patch problem?","timestamp":1500678026528}
{"from":"ralphtheninja","message":"did the trick, but had to commit that and also remove previously generated package-lock.json .. then npm version patch worked fine!","timestamp":1500678027440}
{"from":"millette","message":"good to know","timestamp":1500678038541}
{"from":"ralphtheninja","message":"note though that this will prevent npm from generating package-lock.json in that project","timestamp":1500678060653}
{"from":"ralphtheninja","message":"completely","timestamp":1500678064966}
{"from":"ralphtheninja","message":"'npm config set package-lock false' also useful","timestamp":1500678115922}
{"from":"ralphtheninja","message":"global setting","timestamp":1500678123247}
{"from":"ogd","message":"https://www.irccloud.com/pastebin/KarPOcH3/","timestamp":1500678243645}
{"from":"ogd","message":"dat replicating over http o/","timestamp":1500678249066}
{"from":"ralphtheninja","message":"nice","timestamp":1500678324105}
{"from":"ogd","message":"https://www.npmjs.com/package/dat-http","timestamp":1500678885086}
{"from":"ogd","message":"mafintosh: do you have ideas for an elegant way i can implement caching for o/","timestamp":1500679096180}
{"from":"ogd","message":"mafintosh: basically i wanna have a sparse fs in memory and for byte ranges i dont have yet, issue range requests to get them to satisfy the .read call. i also was gonna request 500kb at least per request to proactively fill cache","timestamp":1500679153688}
{"from":"domanic","message":"ogd: my aligned-block-file module does that (am discussing merging api with random-access-file, with mafintosh too)","timestamp":1500685091378}
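The caching scheme ogd sketches above (a sparse in-memory store that rounds reads up to ~500KB range requests) can be illustrated in plain Node. This `ChunkCache` is a hypothetical name, not part of dat-http or aligned-block-file, and `fetchRange` is an assumed stand-in for an HTTP Range request:

```javascript
// Minimal sketch of a chunk-aligned read cache, assuming a synchronous
// fetchRange(start, end) -> Buffer stand-in for an HTTP Range request.
class ChunkCache {
  constructor (fetchRange, chunkSize = 500 * 1024) {
    this.fetchRange = fetchRange
    this.chunkSize = chunkSize
    this.chunks = new Map() // chunk index -> Buffer
  }

  // Satisfy a read by filling any missing chunks first, then slicing.
  // Each miss fetches a whole chunk, proactively filling the cache.
  read (start, length) {
    const end = start + length
    const first = Math.floor(start / this.chunkSize)
    const last = Math.floor((end - 1) / this.chunkSize)
    for (let i = first; i <= last; i++) {
      if (!this.chunks.has(i)) {
        this.chunks.set(i, this.fetchRange(i * this.chunkSize, (i + 1) * this.chunkSize))
      }
    }
    const parts = []
    for (let i = first; i <= last; i++) {
      const chunk = this.chunks.get(i)
      const from = i === first ? start - i * this.chunkSize : 0
      const to = i === last ? end - i * this.chunkSize : chunk.length
      parts.push(chunk.slice(from, to))
    }
    return Buffer.concat(parts)
  }
}
```

A real implementation would be async and handle the end-of-file boundary; this only shows the chunk-rounding idea.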
{"from":"dat-gitter","message":"(e-e-e) pfrazee: Tom’s write up is really great - I concur with his observation on the friendly and collaborative culture that Dat + Beaker have fostered / are fostering. Its primarily the reason that I personally have chosen to play with dat over other alternatives.","timestamp":1500686347087}
{"from":"louisc","message":"dat stands for distributed awesome togetherness","timestamp":1500715003633}
{"from":"barnie","message":"FYI demonstration of auto-generated protobuf schema docs using TravisCI : https://github.com/aschrijver/hypercore-protocol/tree/feature/document-generator","timestamp":1500719245687}
{"from":"dat-gitter","message":"(scriptjs) @mafintosh: gmorning. How production ready is the ‘production ready’ branch in hyperdb? What is left to resolve?","timestamp":1500719973569}
{"from":"barnie","message":"answering SO, promoting Dat in the process :D https://stackoverflow.com/questions/40295215/is-there-a-way-to-create-a-type-alias-in-protobuf-proto2/45254099#45254099","timestamp":1500724731320}
{"from":"barnie","message":"and here: https://stackoverflow.com/questions/4992199/generate-protobuf-documentation/45253665#45253665","timestamp":1500725320468}
{"from":"mafintosh","message":"Sweet","timestamp":1500727105654}
{"from":"dat-gitter","message":"(sdockray) mafintosh: sorry, qq, but is there a quick way to get the size (byteLength) of a sparse hyperdrive?","timestamp":1500727307089}
{"from":"dat-gitter","message":"(sdockray) right now archive.content.byteLength is 0 when sparse","timestamp":1500727337630}
{"from":"mafintosh","message":"@sdockray call .update(cb) on the content feed","timestamp":1500727380307}
{"from":"mafintosh","message":"same goes for the metadata one","timestamp":1500727391470}
{"from":"dat-gitter","message":"(sdockray) will try right now","timestamp":1500727399475}
{"from":"mafintosh","message":"That waits and fetches an update for the feed","timestamp":1500727424470}
{"from":"dat-gitter","message":"(sdockray) mafintosh: that worked - thanks!!","timestamp":1500727800211}
{"from":"dat-gitter","message":"(rjsteinert) Hi folks - I'm wondering if it's possible now for peers of a dat archive to send messages to each other that do not get written to disk. Right now if you have the secret key for a dat archive, you can send messages to peers that get written to disk; this would be a message that you can send to peers without the secret key, and that disappears if no one is listening for these kinds of \"self destruct\" messages. If a peer how","timestamp":1500733344012}
{"from":"dat-gitter","message":"for example facilitate peer to peer communication for something like a chat application that the dat archive is hosting the code for.","timestamp":1500733344121}
{"from":"cblgh","message":"@rjsteinert not entirely sure what you're after, and someone else might have a better answer, but i'm trying out https://github.com/mafintosh/peer-network for that kind of usecase","timestamp":1500733631531}
{"from":"dat-gitter","message":"(rjsteinert) This has interesting applications for granting multiple peers permission to write to a single archive. For example, if the peer who has the secret key wanted to write the chat history to the archive, then as peers send chat messages as \"self destruct\" messages, the person with the secret key inside of their application can decide if those messages are written to the archive.","timestamp":1500733706375}
{"from":"dat-gitter","message":"(rjsteinert) @cblgh I'll check that out","timestamp":1500733725974}
{"from":"dat-gitter","message":"(rjsteinert) In my above example, the peer that has write access becomes the consensus maker of state in the application that is reduced from the messages of peers without write access. Very Redux.","timestamp":1500734393073}
{"from":"mafintosh","message":"@rjsteinert you can send custom messages over the wire that isnt persisted","timestamp":1500734624884}
{"from":"mafintosh","message":"if you tell me a bit more about your usecase that'd be super helpful","timestamp":1500734663198}
{"from":"dat-gitter","message":"(rjsteinert) @cblgh peer-network looks promising, part of the full dat stack I'm assuming. That is the part of the stack we would want to target to enable this kind of communication I think.","timestamp":1500734784159}
{"from":"dat-gitter","message":"(rjsteinert) @mafintosh @cblgh I'm on a WebRTC connection over at https://talky.io/beaker if you want to join, will be easier to explain this for me by talking :)","timestamp":1500734849442}
{"from":"dat-gitter","message":"(rjsteinert) @mafintosh I have a couple of use cases, they all circle around the ability for peers of a dat archive, while using Beaker Browser, to send messages to each other that don't persist. This enables use cases like peers telling each other about forks so a master peer can collect form response data from their fork. Another use case I'm interested in is a master peer behaving as consensus maker of a live document between ","timestamp":1500735172850}
{"from":"mafintosh","message":"@rjsteinert yea sounds good. Sending custom messages as a protocol extension is def the way to go for this. I can share some more details on how to do this later if you ping me.","timestamp":1500736923793}
{"from":"jhand","message":"hooked up hyperirc to a hypercored websocket http://dat-chat.netlify.com/","timestamp":1500751697829}
{"from":"jhand","message":"https://github.com/joehand/hyperirc-web","timestamp":1500751704661}
{"from":"emilbayes","message":"taravancil: pfrazee Have you looked at password based key derivation? It is feasible to generate ed25519 keys, that are safe, given a good enough password","timestamp":1500751764165}
{"from":"dat-gitter","message":"(lukeburns) jhand: sweet","timestamp":1500751765809}
{"from":"emilbayes","message":"it does however involve some more crypto stuff, but nothing we don't have in sodium-native already","timestamp":1500751782397}
{"from":"emilbayes","message":"(however some stuff is still lacking in sodium-javascript)","timestamp":1500751797610}
{"from":"dat-gitter","message":"(lukeburns) would be cool to implement something like gitter that syncs with irc on top of dat","timestamp":1500751820506}
{"from":"pfrazee","message":"emilbayes: we haven't dug into it yet but we're interested for sure. What's a usecase we might use?","timestamp":1500751822656}
{"from":"pfrazee","message":"jhand: dang man that's nice","timestamp":1500751834576}
{"from":"pfrazee","message":"jhand: you going to tweet it?","timestamp":1500751850688}
{"from":"emilbayes","message":"pfrazee: Ah, so looking at the critique of keys not being portable","timestamp":1500751856850}
{"from":"emilbayes","message":"pfrazee: Saw this tweet https://twitter.com/gratidue/status/888559934169894913","timestamp":1500751878066}
{"from":"pfrazee","message":"jhand: (it seems to say \"Connecting...\" permanently under the channel name)","timestamp":1500751880078}
{"from":"jhand","message":"pfrazee: ya not quite. still want to fix a few bugs","timestamp":1500751890355}
{"from":"pfrazee","message":"jhand: cool","timestamp":1500751894818}
{"from":"jhand","message":"pfrazee: ya think my websocket thing went down for a second","timestamp":1500751899465}
{"from":"jhand","message":"should be back up if you refresh","timestamp":1500751910880}
{"from":"pfrazee","message":"emilbayes: yeah so the idea would be that you could sit down at any computer, type in the password, and your keys will regen right?","timestamp":1500751914474}
{"from":"emilbayes","message":"And changing keys will not be like changing passwords,","timestamp":1500751916706}
{"from":"emilbayes","message":"pfrazee: exactly","timestamp":1500751921719}
{"from":"jhand","message":"pfrazee: don't think i put in the logic for reconnections, only if it failed","timestamp":1500751929106}
{"from":"emilbayes","message":"pfrazee: Given that you're the only person that knows the password, it would still be single writer","timestamp":1500751937576}
{"from":"pfrazee","message":"jhand: ok works now","timestamp":1500751949061}
{"from":"jhand","message":"yay","timestamp":1500751972935}
{"from":"pfrazee","message":"emilbayes: yeah we'd still need to coordinate the multiple devices somehow, right?","timestamp":1500751985289}
{"from":"pfrazee","message":"emilbayes: we also have a lot of dats that need their keys managed","timestamp":1500752002496}
{"from":"emilbayes","message":"pfrazee: Yeah well, this is under the assumption that you only write from a single computer at any given time","timestamp":1500752008364}
{"from":"emilbayes","message":"pfrazee: You can use a Key Derivation Function to generate multiple keys from a single master key","timestamp":1500752031275}
{"from":"emilbayes","message":"pfrazee: So you have a single master password and maybe a name for each of your dat's and the name is used to partition your master key","timestamp":1500752054692}
{"from":"emilbayes","message":"pfrazee: This is *kinda* the idea: https://github.com/emilbayes/mindvault","timestamp":1500752099173}
{"from":"pfrazee","message":"emilbayes: that makes sense. I'll keep that in mind. If we can figure out how to coordinate the multiwriter, then this would be pretty great. Though, people are also really bad about passwords. What are the odds that folks would collide their passwords... pretty high, right?","timestamp":1500752109676}
{"from":"jhand","message":"@lukeburns definitely would be cool to make that! want to get this up at chat.datproject.org and make it a bit easier for folks to come in a chat without using gitter/irc.","timestamp":1500752116621}
{"from":"pfrazee","message":"\"password123\" and such","timestamp":1500752118978}
{"from":"emilbayes","message":"pfrazee: No, so the idea is that you use something unique like their email as a salt","timestamp":1500752152905}
{"from":"pfrazee","message":"emilbayes: ah that makes sense","timestamp":1500752167671}
{"from":"emilbayes","message":"pfrazee: But because the \"hash\" is shared (ie the public key) you need to have a strong password","timestamp":1500752184523}
{"from":"pfrazee","message":"emilbayes: well, to deal with multiwriter, maybe you keep a file somewhere on the machine that has a device-specific salt","timestamp":1500752185893}
{"from":"emilbayes","message":"pfrazee: That would work too!","timestamp":1500752195516}
{"from":"emilbayes","message":"pfrazee: And be even better since you could have a much stronger salt","timestamp":1500752208847}
{"from":"pfrazee","message":"emilbayes: we can also use some software that only accepts passwords of acceptable entropy","timestamp":1500752232525}
{"from":"pfrazee","message":"\"not strong enough\"","timestamp":1500752235711}
{"from":"emilbayes","message":"but since the key derivation happens on the users computer you could make it take something like 5s of CPU time so you get really high entropy","timestamp":1500752247499}
{"from":"pfrazee","message":"yeah","timestamp":1500752264692}
{"from":"emilbayes","message":"pfrazee: That's exactly why the mindvault example uses the eff diceware list","timestamp":1500752265663}
{"from":"emilbayes","message":"then you can know exactly how random the password is","timestamp":1500752276222}
{"from":"emilbayes","message":"if passwords are open ended you can only guesstimate","timestamp":1500752293303}
{"from":"emilbayes","message":"(which is something i'm also working on ^^)","timestamp":1500752305704}
{"from":"pfrazee","message":"yeah that's interesting. The alternative is something like \"device pairing\" I think. Using QR codes to exchange a shared secret to a phone, and then using that shared secret to sync more data","timestamp":1500752366347}
{"from":"pfrazee","message":"or you could use a flow like magic wormhole to do the initial device pairing","timestamp":1500752389295}
{"from":"emilbayes","message":"pfrazee: which would be good too, but then you need to do it up front and can't just sit down at any computer","timestamp":1500752406626}
{"from":"pfrazee","message":"emilbayes: right, the password method has the advantage of being \"eventually consistent\"","timestamp":1500752422745}
{"from":"pfrazee","message":"so to speak ;)","timestamp":1500752429691}
{"from":"emilbayes","message":"pfrazee: I'd rather say always available :p","timestamp":1500752439759}
{"from":"pfrazee","message":"right right","timestamp":1500752453455}
{"from":"emilbayes","message":"but another app that does this is minilock","timestamp":1500752477381}
{"from":"emilbayes","message":"and I do this for a grain exchange so the crypto checks out. It's just a bit hard when you don't have a trusted third party, because then if you lose your keys or forget your password, access is lost forever","timestamp":1500752542327}
{"from":"emilbayes","message":"but! there's ways around that like secret sharing and \"commitments\"","timestamp":1500752566952}
{"from":"pfrazee","message":"right. We were just discussing that this morning. We *could* run a trusted cloud provider but that feels kind of gross. I mean, I dont really like the idea of using apple's icloud already, so why would beaker be any different","timestamp":1500752594886}
{"from":"pfrazee","message":"I'd rather have a way to sync/pair devices on the lan","timestamp":1500752621157}
{"from":"pfrazee","message":"maybe support both","timestamp":1500752648063}
{"from":"pfrazee","message":"emilbayes: so you've deployed the system you're suggesting?","timestamp":1500752716637}
{"from":"emilbayes","message":"pfrazee: So what I do is that we split the key in parts using a threshold scheme. So it takes eg. 5 shares to reconstruct your keys. My TTP takes 3 shares and your colleagues (which also trust the third party) have a share each. If you have 5 colleagues they can help recover your key, or you can ask two and me and I will help recover","timestamp":1500752728201}
{"from":"emilbayes","message":"pfrazee: I've built it, but not deployed to users yet","timestamp":1500752744045}
{"from":"pfrazee","message":"yeah","timestamp":1500752746267}
{"from":"emilbayes","message":"or not recover the key itself, but a kind of \"revocation cert\" that you committed to when you made your feed","timestamp":1500752776942}
{"from":"emilbayes","message":"so there is a fair bit of complexity in this, but I feel like most of it can be hidden with good ux","timestamp":1500752804514}
{"from":"pfrazee","message":"yeah. I'll do a bit of research and see what other crypto projects are doing. I think people may prefer to not need to remember anything, even if that solution is more resilient","timestamp":1500752926101}
{"from":"ralphtheninja","message":"emilbayes: sounds really cool :)","timestamp":1500753079418}
{"from":"emilbayes","message":"ralphtheninja: :D","timestamp":1500753259295}
{"from":"emilbayes","message":"pfrazee: I'm also not sure how trust is going to work with hyperdb","timestamp":1500753291258}
{"from":"yoshuawuyts","message":"pfrazee: might be fun to talk with shibacomputer about key exchange UX, he's designed a fair few UIs for crypto apps","timestamp":1500753294305}
{"from":"pfrazee","message":"emilbayes: last I heard it'll be an owner hypercore that can add and remove author hypercores","timestamp":1500753337932}
{"from":"pfrazee","message":"yoshuawuyts: shibacomputer: Im game","timestamp":1500753353747}
{"from":"emilbayes","message":"pfrazee: Yeah, but I just wonder how you build trust in other feeds. Like, it would be very suspicious if i had a bunch of messages, and then my last message is just \"pointing to my new feed B\". How can I know that it wasn't an adversary that did that","timestamp":1500753424428}
{"from":"dat-gitter","message":"(lukeburns) emilbayes: pfrazee: does this do what you need? https://github.com/sodium-friends/sodium-native#key-derivation","timestamp":1500753474693}
{"from":"emilbayes","message":"lukeburns: that's the kdf api I talked about ^^","timestamp":1500753498837}
{"from":"emilbayes","message":"but it assumes that the master key is high entropy","timestamp":1500753506934}
{"from":"dat-gitter","message":"(lukeburns) emilbayes: gotcha","timestamp":1500753518049}
{"from":"emilbayes","message":"otherwise \"garbage in, garbage out\"","timestamp":1500753518370}
{"from":"dat-gitter","message":"(lukeburns) emilbayes: mmm so you need to be able to verify that two subkeys were generated from the same master?","timestamp":1500753546958}
{"from":"emilbayes","message":"lukeburns: I take your password and run it through argon2 until it is too painful and then use that as the master key","timestamp":1500753570855}
{"from":"dat-gitter","message":"(lukeburns) re trust ^","timestamp":1500753588930}
{"from":"emilbayes","message":"lukeburns: You can't do that, it's more a way to make many keys from one single key","timestamp":1500753593313}
{"from":"emilbayes","message":"lukeburns: crypto_kdf just runs your master key through blake2b with subkeyId as salt","timestamp":1500753642380}
{"from":"emilbayes","message":"so it's just a cryptographic one-way hash function, not a strong kdf like you might use for password hashing","timestamp":1500753684286}
{"from":"dat-gitter","message":"(lukeburns) emilbayes: right. i'm responding to the trust issue ^ peers would need to verify whether two keys belong to the same person","timestamp":1500753726595}
{"from":"emilbayes","message":"lukeburns: Ah yeah, so this won't help there","timestamp":1500753746509}
{"from":"dat-gitter","message":"(lukeburns) unless i'm misunderstanding","timestamp":1500753751654}
{"from":"emilbayes","message":"the only way I know of currently without a trusted third party is to make a commitment, so eg. taking your next keypair or a message that says \"when I reveal the value that hashes to X, you know these are my new keys\"","timestamp":1500753829624}
{"from":"emilbayes","message":"and hash it","timestamp":1500753838529}
{"from":"emilbayes","message":"and put the hash into your feed as your first message. Because if you don't trust the first message, you won't trust any subsequent messages","timestamp":1500753863678}
{"from":"emilbayes","message":"but I don't feel like that solves the underlying problem, because then you just have two keys to deal with, or you need to have a message that's very hard to guess and then you might as well have a password","timestamp":1500753936002}
{"from":"emilbayes","message":"pfrazee: Bitcoin BIP's might be worth looking at too. They've tried to solve this problem too","timestamp":1500754050135}
{"from":"dat-gitter","message":"(lukeburns) emilbayes: https://www.ietf.org/proceedings/interim-2017-cfrg-01/slides/slides-interim-2017-cfrg-01-sessa-bip32-ed25519-01.pdf","timestamp":1500754068439}
{"from":"dat-gitter","message":"(lukeburns) would require some messing with libsodium and hypercore","timestamp":1500754104326}
{"from":"emilbayes","message":"lukeburns Cool will look! Looks like dominictarr/private-box on the surface","timestamp":1500754149992}
{"from":"dat-gitter","message":"(lukeburns) emilbayes: so your idea is -- X is the first message of feed 2 and then feed 1 reveals Y that hashes to X to prove control of feed 2?","timestamp":1500754179326}
{"from":"emilbayes","message":"using ECC scalar multiplication","timestamp":1500754198779}
{"from":"emilbayes","message":"lukeburns: yeah, classic commitments","timestamp":1500754205975}
{"from":"emilbayes","message":"lukeburns: oh no, misunderstood","timestamp":1500754219553}
{"from":"emilbayes","message":"lukeburns: feed 1's very first message is hash(next key), and then when you want to change keys for whatever reason, you reveal `next key`","timestamp":1500754260272}
{"from":"emilbayes","message":"so that's why it's not very nice","timestamp":1500754270402}
{"from":"dat-gitter","message":"(lukeburns) ah","timestamp":1500754286610}
{"from":"emilbayes","message":"lukeburns: so those slides are exactly what crypto_kdf does, just that libsodium uses blake2b, while they use hmac-sha512 if I read it right","timestamp":1500754420870}
{"from":"dat-gitter","message":"(lukeburns) emilbayes: i don't think that's correct","timestamp":1500754486414}
{"from":"dat-gitter","message":"(lukeburns) emilbayes: at least i was under the impression that bip32 allows one to verify that multiple derived keys belong to the same hierarchy","timestamp":1500754584856}
{"from":"emilbayes","message":"lukeburns: so in the second to last slide they show how a hash of the master key and the A public key (?) is put through an HMAC to generate other key pairs","timestamp":1500754618367}
{"from":"emilbayes","message":"lukeburns: maybe I will read the BIP. would be awesome if it worked!","timestamp":1500754648495}
{"from":"emilbayes","message":"substack did some work on hierarchical keys too for groups, but I don't think you can prove causality in that system either","timestamp":1500754693123}
{"from":"dat-gitter","message":"(lukeburns) emilbayes: here's a more detailed implementation https://github.com/WebOfTrustInfo/rebooting-the-web-of-trust-fall2016/blob/master/topics-and-advance-readings/HDKeys-Ed25519.pdf","timestamp":1500754795907}
{"from":"emilbayes","message":"lukeburns: so from the bip it's obvious that you can't deduce causality either","timestamp":1500754826492}
{"from":"pfrazee","message":"emilbayes: I'll check out BIP","timestamp":1500754835296}
{"from":"emilbayes","message":"lukeburns: will look at that paper","timestamp":1500754842109}
{"from":"emilbayes","message":"lukeburns: that paper is what crypto_kdf does, crypto_kdf is just slightly different","timestamp":1500754966655}
{"from":"dat-gitter","message":"(lukeburns) emilbayes: see last sentence of section II","timestamp":1500755025235}
{"from":"emilbayes","message":"lukeburns: that just means that if you have the master key and the \"chain code\" you can derive the child key pair","timestamp":1500755090255}
{"from":"emilbayes","message":"chain code is called subkeyId in libsodium","timestamp":1500755098191}
{"from":"pfrazee","message":"(ps check out our hot new web zone https://bluelinklabs.com/)","timestamp":1500755103618}
{"from":"emilbayes","message":"it just says that it is deterministic","timestamp":1500755121720}
{"from":"emilbayes","message":"lukeburns: I will try and read the paper, because there is some stuff in there that's different from crypto_kdf. I want to make sure that's just because they're using a hmac","timestamp":1500755240407}
{"from":"dat-gitter","message":"(lukeburns) emilbayes: with master secret key and chain code you derive child keypairs. pretty sure the main idea here is that anyone with a master *public* key and chain code can derive child *public* key","timestamp":1500755266790}
{"from":"dat-gitter","message":"(lukeburns) so can prove the hierarchy","timestamp":1500755301623}
{"from":"jhand","message":"pfrazee: don't think the style is loading on your bll site ;)","timestamp":1500755323273}
{"from":"pfrazee","message":"jhand: where we're going, we dont *need* styles","timestamp":1500755350950}
{"from":"emilbayes","message":"lukeburns: reading closely now. One thing they do have that libsodium doesn't is that you can generate the public and private keys independently of each other, which seems pretty cool","timestamp":1500755490321}
{"from":"dat-gitter","message":"(lukeburns) emilbayes: ya! that's the same property that allows you to prove key hierarchy","timestamp":1500755575221}
{"from":"emilbayes","message":"lukeburns: So you can derive a public key independently of the private key. That seems really cool! However this is well within the \"don't roll your own crypto\" area for me so I don't have the guts to implement it :p","timestamp":1500756062328}
{"from":"emilbayes","message":"lukeburns: Saved the paper! thank you","timestamp":1500756188917}
{"from":"emilbayes","message":"lukeburns: did you go to this workshop?","timestamp":1500756253371}
{"from":"dat-gitter","message":"(lukeburns) emilbayes: nope, came across all this when i was looking at the possibility of deterministically deriving relationship dats for dat-pki (https://github.com/jayrbolton/dat-pki/issues/9)","timestamp":1500756332605}
{"from":"dat-gitter","message":"(lukeburns) emilbayes: there are some forces out there pushing for this, so maybe we'll find ourselves with the necessary tools in libsodium to implement this someday","timestamp":1500756413408}
{"from":"emilbayes","message":"reading the issue now","timestamp":1500756459198}
{"from":"emilbayes","message":"lukeburns: ha, great: https://ristretto.group/","timestamp":1500756531250}
{"from":"dat-gitter","message":"(lukeburns) lol ik","timestamp":1500756569917}
{"from":"emilbayes","message":"lukeburns: did you get to study a lot of number theory and abstract algebra as part of your physics degree?","timestamp":1500756624200}
{"from":"dat-gitter","message":"(lukeburns) emilbayes: nah just independent learning","timestamp":1500756670347}
{"from":"emilbayes","message":"lukeburns: oh okay, hardcore stuff to go at on your own :p Looking at the decaf paper now","timestamp":1500756807177}
{"from":"dat-gitter","message":"(rjsteinert) @mafintosh Some app developers and I are hanging out on https://talky.io/tangerine and would love to discuss messaging over dat","timestamp":1500756861455}
{"from":"dat-gitter","message":"(lukeburns) emilbayes: decaf is one approach -- another proposed by tor folks is described here: https://moderncrypto.org/mail-archive/curves/2017/000866.html","timestamp":1500757288837}
{"from":"emilbayes","message":"lukeburns: will read","timestamp":1500757417006}
{"from":"emilbayes","message":"lukeburns: wow that font is confusing. Didn't realise it was mod l until late","timestamp":1500757438334}
{"from":"dat-gitter","message":"(lukeburns) emilbayes: yeah that got me too haha","timestamp":1500757457681}
{"from":"dat-gitter","message":"(lukeburns) the 1s and ls are hard to distinguish","timestamp":1500757484794}
{"from":"emilbayes","message":"lukeburns: I feel like Pythagoras and the slave. It all makes sense but I don't have enough background to know the implications","timestamp":1500757550828}
{"from":"ralphtheninja","message":"pfrazee: awesome, this is web 3.0 right?","timestamp":1500757589972}
{"from":"pfrazee","message":"ralphtheninja: yes it is. As we all know, design is a flat circle","timestamp":1500757608931}
{"from":"ralphtheninja","message":"hehe","timestamp":1500757616339}
{"from":"pfrazee","message":"it repeats itself and also flat is super in right now","timestamp":1500757639639}
{"from":"ralphtheninja","message":"minimalistic badassery :P","timestamp":1500757689223}
{"from":"pfrazee","message":":D","timestamp":1500757779019}
{"from":"dat-gitter","message":"(lukeburns) emilbayes: the main thing is: there's some bit-twiddling that happens to secret keys before they are used to prevent certain (subgroup) attacks but at the cost of algebraic properties needed to implement bip32. https://moderncrypto.org/mail-archive/curves/2017/000866.html describes a way to accomplish the same thing while preserving algebraic properties","timestamp":1500757880996}
{"from":"dat-gitter","message":"(lukeburns) (same link)","timestamp":1500757892195}
{"from":"emilbayes","message":"lukeburns: right. I guess this won't see any review from djb :p","timestamp":1500758004072}
{"from":"emilbayes","message":"looking at the *25519-dalek repos now","timestamp":1500758076873}
{"from":"emilbayes","message":"lukeburns: eager to see what frank says about the bip32-ed25519: https://twitter.com/emilbayes/status/888877931774312449","timestamp":1500760022498}
{"from":"dat-gitter","message":"(rjsteinert) @mafintosh I also just saw you had gotten dat working on your phone in termux. I had just tried that yesterday but was unable to `dat clone` or `dat share`. I'm on a Nexus 5x.","timestamp":1500760043053}
{"from":"jhand","message":"@rjsteinert what happens when you run them? Im using nexus 5x too","timestamp":1500760089901}
{"from":"dat-gitter","message":"(rjsteinert) @jhand By run do you mean just type `dat`? I do see the CLI help when I do that.","timestamp":1500760729640}
{"from":"jhand","message":"what part of `dat clone` were you unable to run? was it an error or did it just not connect","timestamp":1500760796642}
{"from":"dat-gitter","message":"(rjsteinert) @jhand, wait, reviewing those termux logs...","timestamp":1500760802157}
{"from":"dat-gitter","message":"(rjsteinert) @jhand, let's see if I can get a log out of this","timestamp":1500760822435}
{"from":"dat-gitter","message":"(rjsteinert) @jhand @mafintosh @pfrazee OMG I was just able to `dat share` in Termux on my Nexus 5x and clone that over on my laptop!","timestamp":1500761151848}
{"from":"pfrazee","message":"@rjsteinert nice!","timestamp":1500761166241}
{"from":"jhand","message":"sweet","timestamp":1500761172007}
{"from":"dat-gitter","message":"(lukeburns) emilbayes: sweet also curious to learn about his thoughts","timestamp":1500761235448}
{"from":"emilbayes","message":"lukeburns: he replied","timestamp":1500761327240}
{"from":"dat-gitter","message":"(rjsteinert) @jhand @mafintosh @pfrazee and now I have dat shared from my laptop and dat cloned on my phone! w00t! This is huge for us folks who work in the offline world where Android is King.","timestamp":1500761335176}
{"from":"pfrazee","message":"yeah that's great news","timestamp":1500761353223}
{"from":"dat-gitter","message":"(rjsteinert) @jhand @mafintosh @pfrazee Spreading verifiable content and apps offline in the developing world and in places where the Internet is censored is a big challenge. This has potential.","timestamp":1500761459139}
{"from":"dat-gitter","message":"(rjsteinert) @jhand @mafintosh @pfrazee We've done this in the past with CouchDB on Android but the fact that only someone with the secret key can write to a dat archive means that the person on the motorcycle spreading content/apps from town to town can't mess with content. Not to rag on the folks who do that hard work but just to make the point. Folks receiving content can rest assured that the content is not somehow tampered wi","timestamp":1500761592359}
{"from":"pfrazee","message":"@rjsteinert yeah","timestamp":1500761618779}
{"from":"substack","message":"related to this problem, is there a way to `dat share` but only on the local network?","timestamp":1500761629647}
{"from":"substack","message":"same for `dat sync` and the other commands","timestamp":1500761646907}
{"from":"dat-gitter","message":"(rjsteinert) @substack That's what i've been doing with dat as a proof of concept","timestamp":1500761648228}
{"from":"dat-gitter","message":"(rjsteinert) @substack but only on Mac and Linux boxes up to this point","timestamp":1500761662095}
{"from":"substack","message":"I mean, if I have an internet uplink but it's metered and I don't want to waste bytes if I know that another computer on the local network has the data","timestamp":1500761681338}
{"from":"pfrazee","message":"probably would be pretty easy to add that switch to dat cli","timestamp":1500761717044}
{"from":"dat-gitter","message":"(rjsteinert) @substack That is a HUGE motivation for the folks we work with who need to deploy to hundreds of tablets in bandwidth constrained environments","timestamp":1500761720331}
{"from":"substack","message":"rjsteinert: I also worked on a project with ddem folk to do offline p2p mapping and this exact problem comes up a lot","timestamp":1500761752788}
{"from":"substack","message":"http://www.digital-democracy.org/blog/osm-p2p/","timestamp":1500761766140}
{"from":"dat-gitter","message":"(rjsteinert) @substack I was on a call with Taylor Savage, the PM of Google Chrome, with a bunch of other offline folks like myself and it became apparent that this is a huge issue for Progressive Web Apps","timestamp":1500761810749}
{"from":"substack","message":"and once hyperlog can work on top of hyperdb, then everything can sync over dat too","timestamp":1500761816126}
{"from":"substack","message":"another thing that would help a lot is a faster block storage than indexeddb","timestamp":1500761858208}
{"from":"substack","message":"like https://github.com/jakearchibald/byte-storage but that API is still taking shape","timestamp":1500761895471}
{"from":"dat-gitter","message":"(rjsteinert) @substack I'm trying to get Taylor's attention towards what dat and beaker browser are doing https://twitter.com/rjsteinert/status/888116930770485248","timestamp":1500761920126}
{"from":"dat-gitter","message":"(rjsteinert) \"@TangerineTool needs offline distribution: One tablet updates online, the update spreads via LAN while offline. #BeakerBrowserForAndroid?\"","timestamp":1500761942885}
{"from":"substack","message":"rjsteinert: are you distributing the app updates p2p offline too?","timestamp":1500762009588}
{"from":"dat-gitter","message":"(rjsteinert) @substack Cool to see Jake Archibald working on some tech that will be useful for Dat. He's a big pioneer in the PWA space and helped develop Service Worker in Chrome.","timestamp":1500762050045}
{"from":"dat-gitter","message":"(rjsteinert) @substack We've distributed app updates in the past P2P offline by using CouchDB, more recently APK files. Both suffer from issues that would be solved by using dat, not to mention a better UX if we can nail a Beaker Browser for Android.","timestamp":1500762116688}
{"from":"substack","message":"would you ship html in dat or the apk files?","timestamp":1500762196538}
{"from":"substack","message":"I guess using beaker would mean html","timestamp":1500762219268}
{"from":"dat-gitter","message":"(rjsteinert) The hope is we can ship a browser in an APK that can talk dat, then distribute applications via HTML in a dat archive","timestamp":1500762253537}
{"from":"substack","message":"and add links to the homescreen to open beaker?","timestamp":1500762304759}
{"from":"dat-gitter","message":"(rjsteinert) @substack Ya, a link from the homescreen would be a good starting point. A UX in a browser that understands the dats you have similar to how Beaker Browser does it gets us there as well.","timestamp":1500762371573}
{"from":"dat-gitter","message":"(rjsteinert) @substack We are also thinking through the problem of how we distribute around data that users generate in an app. At the moment our thinking is similar to the one database per user model that is common in CouchDB. In this case it's users using a Dat Archive as their app, then they have another Dat Archive where their own data is stored. The trick is developing some kind of messaging between dat archives so that peopl","timestamp":1500762466762}
{"from":"dat-gitter","message":"dat archives thus aggregate them into one application state.","timestamp":1500762466886}
{"from":"dat-gitter","message":"(rjsteinert) @mafintosh mentioned earlier today it's possible to send messages that are not written to disk between peers of a dat archive. This would allow the peers of a particular dat app archive to know of the existence of the individual user dat archives","timestamp":1500762542798}
{"from":"dat-gitter","message":"(lukeburns) @rjsteinert some talk of push messaging on dat-pki https://github.com/jayrbolton/dat-pki/issues/7 that would probably also need messaging between dat archives for key exchange","timestamp":1500762677015}
{"from":"dat-gitter","message":"(rjsteinert) Thanks for the link @lukeburns","timestamp":1500762707959}
{"from":"dat-gitter","message":"(rjsteinert) @lukeburns Sounds right to me","timestamp":1500762780372}
{"from":"dat-gitter","message":"(rjsteinert) \"The receiving peer needs to learn about the sending peer's key somehow (in order to get the relationship dat)\"","timestamp":1500762784582}
{"from":"dat-gitter","message":"(lukeburns) emilbayes: what do you think? is there another way to solving this problem without key hierarchy verification?","timestamp":1500762914606}
{"from":"dat-gitter","message":"(rjsteinert) Heading offline for a couple hours. See y'all later.","timestamp":1500762991121}
{"from":"substack","message":"rjsteinert: you can also look at hyperlog as a data model that you can build materialized views (like a kv store on top of)","timestamp":1500762995847}
{"from":"substack","message":"and also hyperdb which has a kv store itself","timestamp":1500763002787}
{"from":"pfrazee","message":"HN can get toxic but god help you on reddit","timestamp":1500767646891}
{"from":"dat-gitter","message":"(e-e-e) @rjsteinert - regarding messages - mafintosh: in the past pointed me to hyperdb for an example of how he has implemented messaging through hypercore-protocol. https://github.com/mafintosh/hyperdb/blob/master/index.js#L53","timestamp":1500768294761}
{"from":"dat-gitter","message":"(e-e-e) and https://github.com/mafintosh/hypercore-protocol/blob/master/index.js#L31","timestamp":1500768316304}
{"from":"dat-gitter","message":"(e-e-e) Although if you are relying on the dat with write permissions seeing messages - couldn’t you face issues where a peer is only connected to other peers and not the master dat. In which case any messages it sends would be lost.","timestamp":1500768448540}
{"from":"dat-gitter","message":"(lukeburns) @e-e-e could get around by passing messages through peers","timestamp":1500768684914}
{"from":"pfrazee","message":"I've been thinking about this messaging issue","timestamp":1500768946299}
{"from":"pfrazee","message":"what we need to be able to do is establish data channels between computers","timestamp":1500768957616}
{"from":"pfrazee","message":"and it seems like there's an opportunity to reuse some of dat's infrastructure to accomplish that, because, clearly, dat is already doing it","timestamp":1500768983794}
{"from":"dat-gitter","message":"(e-e-e) @lukeburns but then even then no guarantee","timestamp":1500768999248}
{"from":"pfrazee","message":"though we need to take care about the nature of dat's channels. IIRC, Dat encrypts its messaging channels but doesn't authenticate the messages, because it assumes the data structures (hypercore) will be authenticated","timestamp":1500769046756}
{"from":"pfrazee","message":"but at a basic level there's really 2 kinds of scenarios we need to handle: here is an unguessable shared secret, I'd like to establish a messaging channel with anybody that holds that secret","timestamp":1500769095875}
{"from":"pfrazee","message":"that's 1","timestamp":1500769100962}
{"from":"pfrazee","message":"and 2 is, here's a pubkey, I'd like to establish a messaging channel with the holder of its matching private key","timestamp":1500769128507}
{"from":"ralphtheninja","message":"makes sense","timestamp":1500769168640}
{"from":"dat-gitter","message":"(e-e-e) pfrazee: looking at what mafintosh: has in place already it seems like not a lot of extra work to get this working.","timestamp":1500769183967}
{"from":"pfrazee","message":"@e-e-e: potentially. What mafintosh is suggesting is, use the hypercore logs to communicate","timestamp":1500769211170}
{"from":"dat-gitter","message":"(e-e-e) is 1 the same as becoming a peer generally, you both have the public key.","timestamp":1500769225620}
{"from":"pfrazee","message":"right","timestamp":1500769241605}
{"from":"pfrazee","message":"let me continue braindumping here","timestamp":1500769249396}
{"from":"dat-gitter","message":"(e-e-e) go go","timestamp":1500769254313}
{"from":"pfrazee","message":"there are 3 options I can see","timestamp":1500769254419}
{"from":"pfrazee","message":"1) we use webrtc data channels. This would require a signalling solution that we're all happy with. Something like a federated signalling service. For use case 1, that","timestamp":1500769288379}
{"from":"pfrazee","message":"'s super easy. For use case 2, we need key distribution which is trusted","timestamp":1500769302135}
{"from":"pfrazee","message":"2) we add some capabilities to dat","timestamp":1500769346300}
{"from":"pfrazee","message":"'s infrastructure (oh boy I have fat fingers rn)","timestamp":1500769355514}
{"from":"pfrazee","message":"which makes it possible for peers to connect using hyperdiscovery, and then we add *authenticated* encrypted channels on top of dat's use of tcp and utp","timestamp":1500769432360}
{"from":"pfrazee","message":"and we send messages using the channels, more or less as sockets","timestamp":1500769454619}
{"from":"pfrazee","message":"3) we add the same discovery capabilities to dat, but all peers in a session create temporary hypercores, and use the new discovery capabilities to simply exchange the pubkey of their newly created hypercores","timestamp":1500769493090}
{"from":"pfrazee","message":"then all parties read each others' logs -- it's basically a pull based channel","timestamp":1500769507592}
{"from":"pfrazee","message":"pros and cons for the 3 options:","timestamp":1500769517499}
{"from":"pfrazee","message":"all 3 are probably going to need proxy fallbacks at some point, because we still don't get reliable connections between peers (nats and firewalls and etc)","timestamp":1500769559077}
{"from":"pfrazee","message":"1 has authenticated channels already so that's a plus, but it depends on new infrastructure to be deployed (the signalling services)","timestamp":1500769583663}
{"from":"pfrazee","message":"2 requires a little bit of work to be done to add authenticated messaging","timestamp":1500769605839}
{"from":"pfrazee","message":"and I'm not sure whether there are message-acks in 1, 2, or 3 now that I think of it, but it would be very nice if there was. I'm pretty sure there are in 1.","timestamp":1500769687105}
{"from":"pfrazee","message":"whichever we use, what I know Beaker needs is the ability for the 2 cases -- shared secret token and channels to a specific user's pubkey -- to be just APIs that any app can use. \"Setup this messaging channel please, I want to do a chat, or I want to act like a server for this forum\"","timestamp":1500769780635}
{"from":"pfrazee","message":"so the infrastructure aspects (ie the signalling servers) need to be hidden away","timestamp":1500769797913}
{"from":"pfrazee","message":"end brain dump","timestamp":1500769801233}
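Option 3 from the brain dump — every peer creates its own append-only log, peers exchange log keys, and each side reads the others' logs as a pull-based channel — can be modeled in-memory like this. The `Peer`/`Log` classes below are invented for the sketch; real code would use hypercore logs and hyperdiscovery for the key exchange.

```javascript
// Toy in-memory model of "hypercore messaging" (option 3). No networking,
// no crypto — just the pull-based channel shape being discussed.
class Log {
  constructor () { this.entries = [] }
  append (msg) { this.entries.push(msg) }
}

class Peer {
  constructor (name) {
    this.name = name
    this.log = new Log() // my outgoing channel
    this.subscriptions = new Map() // peer name -> { log, offset }
  }
  // Stand-in for the key exchange: learn of another peer's log, follow it.
  follow (peer) {
    this.subscriptions.set(peer.name, { log: peer.log, offset: 0 })
  }
  send (msg) { this.log.append(msg) }
  // Pull anything appended since we last looked. The offset acts as an
  // implicit ack point: consumed entries could be pruned on both ends.
  poll () {
    const received = []
    for (const [name, sub] of this.subscriptions) {
      while (sub.offset < sub.log.entries.length) {
        received.push({ from: name, msg: sub.log.entries[sub.offset++] })
      }
    }
    return received
  }
}

const alice = new Peer('alice')
const bob = new Peer('bob')
alice.follow(bob); bob.follow(alice)

alice.send('hey bob')
const inbox = bob.poll() // [{ from: 'alice', msg: 'hey bob' }]
```

A third party (e.g. hashbase, as pfrazee suggests) could follow the same logs and replay them, acting as a proxy when the peers can't connect directly.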
{"from":"pfrazee","message":"actually we may not *require* trusted key distribution for webrtc or any others. Copy-pasting through a trusted channel is fine for the early days","timestamp":1500769909162}
{"from":"pfrazee","message":"trusted key distribution just makes it possible for the apps/browser to automatically say \"hey here's paul, 99% sure that's him\"","timestamp":1500769939280}
{"from":"pfrazee","message":"another observation- webrtc has the TURN protocol which is (IIRC) an abstracted-away fallback proxy https://en.wikipedia.org/wiki/Traversal_Using_Relays_around_NAT","timestamp":1500770108977}
{"from":"pfrazee","message":"and fallback proxies are going to be pretty important, so it'd be nice to have that already done for us. (Then again, Im not sure it's very easy to deploy TURN)","timestamp":1500770189841}
{"from":"dat-gitter","message":"(lukeburns) pfrazee: number 2 is the two person case of number 1 right?","timestamp":1500770456735}
{"from":"pfrazee","message":"all 3 of those options should support the 2 scenarios: a shared-secret \"group\" connection and a pubkey \"individual\" connection","timestamp":1500770502380}
{"from":"pfrazee","message":"giving names to all 3 of those options, in order: webrtc messaging, tcp/utp messaging, and hypercore messaging","timestamp":1500770570960}
{"from":"pfrazee","message":"oh you know, here's another negative for webrtc messaging: it's not feasible to do outside the browser right now :p","timestamp":1500770607325}
{"from":"dat-gitter","message":"(e-e-e) I might be wrong but 2 and 3 are kind of similar - as 2 would use hypercore-messaging but of the dat, and 3 would use hypercore-messaging but with a log recorded. Or am I misunderstanding?","timestamp":1500770688341}
{"from":"pfrazee","message":"yeah that's right","timestamp":1500770702706}
{"from":"dat-gitter","message":"(lukeburns) The second scenario is the same as the first where only two people have the same secret key ^","timestamp":1500770706119}
{"from":"pfrazee","message":"@lukeburns oh yes that's right","timestamp":1500770721221}
{"from":"dat-gitter","message":"(e-e-e) so the two options could actually coexist. with an option of acting as a log.","timestamp":1500770723748}
{"from":"pfrazee","message":"I'm *assuming* that, to establish these channels, we'll use a shared secret on the discovery network","timestamp":1500770760947}
{"from":"dat-gitter","message":"(e-e-e) those acting as a log could record transactions and help propagate the messages to other peers in the network - including acknowledgement.","timestamp":1500770813988}
{"from":"pfrazee","message":"that's a possibility, I'd need to explore it a bit more","timestamp":1500770846125}
{"from":"pfrazee","message":"on the plus side, the hypercore-messaging solution just needs one addition (AFAIK): the ability for 2+ peers with a shared secret to exchange the pubkeys of their logs, and therefore start their subscriptions to each other","timestamp":1500770887793}
{"from":"pfrazee","message":"and yeah @e-e-e the logs can basically be \"proxied\"","timestamp":1500770908087}
{"from":"pfrazee","message":"but that would require other peers to subscribe to the logs","timestamp":1500770924067}
{"from":"pfrazee","message":"so, for instance, my computer could setup a session with me and 2 other peers. There are therefore 3 hypercore logs. My PC could push all 3 to hashbase and say \"follow these please, in case the network gets ducky.\" Hashbase would then basically be a proxy for us","timestamp":1500770994871}
{"from":"dat-gitter","message":"(e-e-e) yep - sounds reasonable.","timestamp":1500771024606}
{"from":"dat-gitter","message":"(e-e-e) I was playing around with messaging a while ago - https://github.com/e-e-e/hyperchat - not sure if any of that could be of use. Each peer had their own log - recorded all the events they heard","timestamp":1500771360991}
{"from":"dat-gitter","message":"(e-e-e) @lukeburns pointed me to key exchange via handshake","timestamp":1500771401655}
{"from":"dat-gitter","message":"(e-e-e) only implemented a poc via console logs atm.","timestamp":1500771420604}
{"from":"dat-gitter","message":"(e-e-e) Just thinking about scaling though, if you use a hypercore to log all messages and acknowledgments - won't that become large quickly? How do you purge the logs while still having a trusted log?","timestamp":1500771660379}
{"from":"ralphtheninja","message":"this sounds more like bitcoin each day :)","timestamp":1500771709444}
{"from":"ralphtheninja","message":"at least similar problems to solve","timestamp":1500771714983}
{"from":"dat-gitter","message":"(e-e-e) could I crash your server just by spamming messages.","timestamp":1500771725983}
{"from":"pfrazee","message":"this is where an ack would be helpful","timestamp":1500771976043}
{"from":"pfrazee","message":"because after an ack, the server could delete the old messages","timestamp":1500771988091}
{"from":"pfrazee","message":"and the same is true of the clients, the old messages could be consumed and then deleted","timestamp":1500772000734}
{"from":"domanic","message":"hey, question about how dat compares to ipfs - I'm thinking it's that ipfs advertises peers for each block, but dat follows bittorrent architecture a bit more, so DHT only advertises each dat","timestamp":1500772140798}
{"from":"domanic","message":"which is equivalent to each torrent","timestamp":1500772158157}
{"from":"domanic","message":"(but more flexible) and then peers in that swarm figure out who has what blocks","timestamp":1500772273880}
{"from":"pfrazee","message":"domanic: afaik that's right, maf isnt around","timestamp":1500772325516}
{"from":"domanic","message":"where as in ipfs there isn't an explicit concept of a collective object (repo/dat)","timestamp":1500772333238}
{"from":"pfrazee","message":"right","timestamp":1500772344312}
{"from":"domanic","message":"which means ipfs is oriented towards seamlessly overlapping datasets","timestamp":1500772360307}
{"from":"pfrazee","message":"correct","timestamp":1500772366526}
{"from":"domanic","message":"however i'm gonna guess that blocks can be shared between dats, though","timestamp":1500772391678}
{"from":"pfrazee","message":"they could be but theyre not right now","timestamp":1500772401649}
{"from":"domanic","message":"ah","timestamp":1500772409480}
{"from":"domanic","message":"anyway, I don't think this is a huge problem, unless you have a app that repeats lots","timestamp":1500772452548}
{"from":"pfrazee","message":"yeah, I've been curious whether that just comes down to a performance optimization via dedup","timestamp":1500772496682}
{"from":"domanic","message":"and you are likely to have pulled (what is dat's preferred verb for this?) overlapping repos","timestamp":1500772501627}
{"from":"domanic","message":"pfrazee: I guess if you are making a p2p web then you can all share jquery, etc","timestamp":1500772556377}
{"from":"domanic","message":"or react-native or whatever the kids are using these days","timestamp":1500772574834}
{"from":"pfrazee","message":"domanic: right and dat does support hash-addressed archives, just no dedup *between* archives","timestamp":1500772576694}
{"from":"pfrazee","message":"haha right","timestamp":1500772579865}
{"from":"pfrazee","message":"so dat can still do efficient shared caching","timestamp":1500772610426}
{"from":"pfrazee","message":"domanic: one thing Ive missed from ssb is hash-addressed messages. Dat doesnt expose hash addresses to its blobs yet","timestamp":1500772616173}
{"from":"pfrazee","message":"that guaranteed happens-before relationship is handy","timestamp":1500772626345}
{"from":"domanic","message":"oh so if you replicate a directory tree with dat it doesn't record the hash of the files?","timestamp":1500772673489}
{"from":"pfrazee","message":"there are hashes internally somewhere, but I dont have them exposed in beaker","timestamp":1500772692564}
{"from":"pfrazee","message":"all references, so far, are URLs like \"dat://{pubkey}/foo.json\"","timestamp":1500772715457}
{"from":"domanic","message":"pfrazee: if it's recorded internally it wouldn't be that hard to implement - you wouldn't have to rearchitect dat","timestamp":1500772781238}
{"from":"pfrazee","message":"domanic: yeah totally, just need to get around to it","timestamp":1500772795587}
{"from":"domanic","message":"any way, I think this means that dat is actually somewhat more scalable than ipfs","timestamp":1500772848797}
{"from":"domanic","message":"because peers advertise every block","timestamp":1500772861607}
{"from":"domanic","message":"on the dht in ipfs","timestamp":1500772872917}
{"from":"domanic","message":"which is O(n)","timestamp":1500772892426}
{"from":"domanic","message":"and these are reposted every 24 hours","timestamp":1500772925657}
{"from":"pfrazee","message":"interesting","timestamp":1500772951011}
{"from":"domanic","message":"so if you had lots of small messages ... as in ssb","timestamp":1500772960017}
{"from":"domanic","message":"so far have 150k posts or thereabouts","timestamp":1500772969397}
{"from":"domanic","message":"would be something like 8mb into the DHT every day per peer","timestamp":1500772984000}
{"from":"pfrazee","message":"yikes","timestamp":1500773069056}
{"from":"domanic","message":"more would be spent on that than actually posting messages","timestamp":1500773081073}
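[editor's note] A back-of-envelope check of domanic's ~8 MB/day estimate. The per-record size here is an assumption (roughly a content hash plus peer addressing info), not a measured IPFS figure; the point is only that re-advertising every block daily scales linearly with history size.

```javascript
// If every block (message) is re-advertised to the DHT every 24 hours,
// the daily overhead grows linearly with the number of posts.
// bytesPerProviderRecord is an assumed value for illustration.
const posts = 150000
const bytesPerProviderRecord = 55 // assumed: hash + addressing info
const dailyBytes = posts * bytesPerProviderRecord
console.log((dailyBytes / 1e6).toFixed(1) + ' MB/day') // ≈ 8.3 MB/day
```

Advertising only a single anchor per collection, as dat does, makes this cost O(1) per dat instead of O(n) per block.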
{"from":"pfrazee","message":"have you brought that up to the ipfs folks?","timestamp":1500773128587}
{"from":"domanic","message":"just thought of this now...","timestamp":1500773137097}
{"from":"pfrazee","message":"yeah","timestamp":1500773161278}
{"from":"domanic","message":"you could fix it in ipfs by moving towards a pattern more like dat... giving applications control over what blocks to advertise","timestamp":1500773210049}
{"from":"domanic","message":"and I guess, finding peers by tracing backlinks until you find an advertised block","timestamp":1500773248669}
{"from":"ralphtheninja","message":"it's very interesting to see how different trade offs play out","timestamp":1500775082512}
{"from":"pfrazee","message":"yoshuawuyts: if you think of it, you should add blue link labs to the choo site's list of ppl using it","timestamp":1500775103479}
{"from":"ralphtheninja","message":"I'll guess we'll know more as adoption also increases","timestamp":1500775104845}
{"from":"pfrazee","message":"ralphtheninja: yeah","timestamp":1500775114344}
{"from":"ralphtheninja","message":"domanic: o/ btw :)","timestamp":1500775157755}
{"from":"domanic","message":"ralphtheninja: having a tradeoff means you havn't solved the problem ;)","timestamp":1500775207317}
{"from":"domanic","message":"solution gives you best of both","timestamp":1500775218979}
{"from":"yoshuawuyts","message":"pfrazee: yay! PR / issue welcome! Not behind computer rn, might forget otherwise haha","timestamp":1500775292057}
{"from":"ralphtheninja","message":"domanic: not exactly sure what you mean (maybe we are using different lingo), but you always make trade offs and you can never get rid of them .. I merely meant that ipfs and dat have different trade offs .. for instance the thing with blocks and scalability .. ipfs might not scale as well but instead have other benefits because of that trade off, no?","timestamp":1500775417687}
{"from":"domanic","message":"ralphtheninja: let me give an example","timestamp":1500775466073}
{"from":"domanic","message":"from ssb","timestamp":1500775473590}
{"from":"ralphtheninja","message":"shoot!","timestamp":1500775480333}
{"from":"domanic","message":"in the naive replication protocol that we started with, there was a tradeoff","timestamp":1500775491737}
{"from":"domanic","message":"the tradeoff was between redundancy and latency","timestamp":1500775532098}
{"from":"pfrazee","message":"yoshuawuyts: cool will do","timestamp":1500775545394}
{"from":"domanic","message":"if you connected to N peers, you'd tend to receive every new message N times","timestamp":1500775550156}
{"from":"domanic","message":"so there was a multiplier on bandwidth, but if you only connected to 1 peer, you can't be sure the network isn't split","timestamp":1500775596851}
{"from":"domanic","message":"but in the new version, using EBT, you can connect to many peers but still only receive the message once","timestamp":1500775644197}
{"from":"domanic","message":"you get to have redundancy AND low latency","timestamp":1500775659876}
{"from":"ralphtheninja","message":"aah ok, I get it :)","timestamp":1500775663698}
{"from":"domanic","message":"better design removes the tradeoff","timestamp":1500775709636}
{"from":"ralphtheninja","message":"so you are essentially saying that trade offs can be solved, but I guess you do that by more complexity?","timestamp":1500775713935}
{"from":"ralphtheninja","message":"or maybe remove complexity? :)","timestamp":1500775758735}
{"from":"domanic","message":"not always, because the best designs are often simpler","timestamp":1500775762947}
{"from":"domanic","message":"but I guess they are harder to find","timestamp":1500775778192}
{"from":"domanic","message":"the main \"tradeoff\" with ssb is that it targets a specific application architecture (social networks) and optimizes that","timestamp":1500775841357}
{"from":"ralphtheninja","message":"domanic: def get your point .. but you can still compare designs of different protocols (dat vs ipfs) and come to some conclusions about the differences in their designs .. e.g. this block thing","timestamp":1500776197593}
{"from":"ralphtheninja","message":"domanic: let me turn it around a bit, what does the ipfs design give them that dat doesn't have?","timestamp":1500776248141}
{"from":"domanic","message":"ralphtheninja: yes my tradeoff vs. solution idea isn't hard and fast","timestamp":1500776259997}
{"from":"domanic","message":"it looks like quicksort is simply better than bubble sort but that isn't true!","timestamp":1500776283600}
{"from":"domanic","message":"bubble sort can be better than quick sort if the input is already nearly sorted!","timestamp":1500776298207}
{"from":"ralphtheninja","message":"hehe","timestamp":1500776316343}
{"from":"domanic","message":"if you have just one pair swapped out of order ABDCEF","timestamp":1500776418866}
{"from":"domanic","message":"then quicksort is N*log(N) but bubble sort is only 2*N","timestamp":1500776467477}
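[editor's note] Domanic's bubble-sort point can be demonstrated directly: with an early-exit flag, bubble sort finishes a nearly sorted input in about two passes (O(n) work), whereas quicksort pays O(n log n) comparisons regardless of input order. A minimal sketch:

```javascript
// Bubble sort with an early-exit flag: stop as soon as a full pass
// makes no swaps. On domanic's ABDCEF example (one adjacent pair out
// of order) it needs only 2 passes.
function bubbleSort (arr) {
  const a = arr.slice()
  let passes = 0
  let swapped = true
  while (swapped) {
    swapped = false
    passes++
    for (let i = 0; i < a.length - 1; i++) {
      if (a[i] > a[i + 1]) {
        ;[a[i], a[i + 1]] = [a[i + 1], a[i]]
        swapped = true
      }
    }
  }
  return { sorted: a, passes }
}

console.log(bubbleSort(['A', 'B', 'D', 'C', 'E', 'F']))
// → { sorted: ['A','B','C','D','E','F'], passes: 2 }
```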
{"from":"domanic","message":"ralphtheninja: currently, ipfs doesn't have a \"collection\" concept","timestamp":1500776547596}
{"from":"domanic","message":"so it can deduplicate blocks seamlessly","timestamp":1500776569897}
{"from":"ralphtheninja","message":"nods","timestamp":1500776590338}
{"from":"ralphtheninja","message":"but can't you look at a block as a collection?","timestamp":1500776625719}
{"from":"ralphtheninja","message":"or is a collection always a set of things?","timestamp":1500776633935}
{"from":"domanic","message":"well, a dat link is all the blocks in that dat","timestamp":1500776646698}
{"from":"domanic","message":"you could interpret an ipfs link as a reference to all the blocks linked recursively","timestamp":1500776679147}
{"from":"ralphtheninja","message":"yep","timestamp":1500776685941}
{"from":"ralphtheninja","message":"it's more 'nested' somehow","timestamp":1500776710158}
{"from":"domanic","message":"if you had a way that peers could agree (without coordinating too much) which is the anchor for the swarm, that would approach dat","timestamp":1500776729647}
{"from":"ralphtheninja","message":"what do you mean by anchor?","timestamp":1500776770778}
{"from":"domanic","message":"bittorrent has an infohash... to implement this on ipfs, you'd have one block that contained the hash of all subblocks","timestamp":1500776814306}
{"from":"domanic","message":"then you only put the providers of the top block in the DHT","timestamp":1500776833776}
{"from":"domanic","message":"then ipfs would be bittorrent","timestamp":1500776846489}
{"from":"domanic","message":"i'm not sure what dat uses to calculate a dat link","timestamp":1500776878216}
{"from":"ralphtheninja","message":"me neither","timestamp":1500776889576}
{"from":"ralphtheninja","message":"not yet :)","timestamp":1500776893152}
{"from":"domanic","message":"and I know dat can do mutable data which must mean signatures at some level","timestamp":1500776919542}
{"from":"domanic","message":"but I think it still uses requests for each block?","timestamp":1500776935219}
{"from":"domanic","message":"(much like bittorrent)","timestamp":1500776944906}
{"from":"ralphtheninja","message":"over my head atm, but I'm happy that we are talking about it","timestamp":1500776980480}
{"from":"ralphtheninja","message":"or you are talking about it rather :)","timestamp":1500777019732}
{"from":"domanic","message":"mafintosh could answer these questions better","timestamp":1500777071616}
{"from":"evbogue","message":"I'm trying to parse these messages better. Does Domanic === dominic tarr, or is this a different person? It's been while since I parsed irc handles.","timestamp":1500779012268}
{"from":"domanic","message":"evbogue: yeah this is dominic","timestamp":1500779034768}
{"from":"evbogue","message":"k","timestamp":1500779108992}
{"from":"evbogue","message":"gb said: 'of course it's dominic, duh'","timestamp":1500779116956}
{"from":"domanic","message":"who else would it be?","timestamp":1500779137527}
{"from":"evbogue","message":"I just got off a double shift where I got banished to the party room, so I'm trying to get back into the loop","timestamp":1500779153839}
{"from":"domanic","message":"banished to the party room?","timestamp":1500779171747}
{"from":"evbogue","message":"yup","timestamp":1500779181197}
{"from":"domanic","message":"oh you mean the private function","timestamp":1500779182146}
{"from":"evbogue","message":"I mean the carolina ale house party room","timestamp":1500779195034}
{"from":"evbogue","message":"where people come have birthdays, and they say they're going to have 20 people, but then 5 show up","timestamp":1500779206733}
{"from":"evbogue","message":"and they tip almost nothing","timestamp":1500779211842}
{"from":"domanic","message":"oh sad","timestamp":1500779218777}
{"from":"evbogue","message":"well, reality","timestamp":1500779226080}
{"from":"evbogue","message":"dish would be more fun","timestamp":1500779238688}
{"from":"domanic","message":"at least they have 5 friends though","timestamp":1500779277788}
{"from":"evbogue","message":"yah, it could be worse.","timestamp":1500779299435}
{"from":"evbogue","message":"I should tell them to advertise their parties on ssb-gatherings, but I'm not sure if I can guarantee more people at their parties yet","timestamp":1500779343568}
{"from":"evbogue","message":"anyway, there's something oddly brilliant about the thread above that I'm trying to decode","timestamp":1500779356025}
{"from":"evbogue","message":"what i mean is whatever pfrazee is getting at","timestamp":1500779432575}
{"from":"evbogue","message":"or maybe dat is just trying to reinvent ssb?","timestamp":1500779612342}
{"from":"evbogue","message":"I'm not sure, I'm exhausted","timestamp":1500779624679}
{"from":"evbogue","message":"I should give up on trying to make sense of anything","timestamp":1500779630838}
{"from":"domanic","message":"evbogue: which thing are you refering to?","timestamp":1500779634540}
{"from":"evbogue","message":"I hadn't been on irc in while, so I just logged in and let it run","timestamp":1500779658916}
{"from":"evbogue","message":"and there is a convo above where pfrazee is braindumping","timestamp":1500779668741}
{"from":"evbogue","message":"because you were doing irc bots, and I wanted to see them work","timestamp":1500779680873}
{"from":"evbogue","message":"I usually just hang out on ssb","timestamp":1500779689955}
{"from":"evbogue","message":"I think braindumping is cool. it's a lot easier to do that on irc I imagine than ssb, because it all goes away","timestamp":1500779735150}
{"from":"evbogue","message":"anyway, I should go to bed","timestamp":1500779805771}
{"from":"evbogue","message":"tell Americans that you're in West Virginia","timestamp":1500779816344}
{"from":"evbogue","message":"when they think you're not foreign enough","timestamp":1500779825561}
{"from":"evbogue","message":"goodnight #dat. gb says hellos and goodnights and things.","timestamp":1500779876387}
{"from":"dat-gitter","message":"(lukeburns) domanic: a live dat link is an ed25519 public key","timestamp":1500784285360}
{"from":"dat-gitter","message":"(rjsteinert) @lukeburns @substack @mafintosh Unfortunately I'm seeing a bug when connecting to dat archives when offline with Android https://github.com/datproject/dat/issues/829","timestamp":1500793982538}
{"from":"mafintosh","message":"@rjsteinert thanks, will take a look","timestamp":1500795039865}
{"from":"dat-gitter","message":"(rjsteinert) Thanks @mafintosh","timestamp":1500796106430}
{"from":"ralphtheninja[m]","message":"test","timestamp":1500817156328}
{"from":"mafintosh","message":"test ack","timestamp":1500817388145}
{"from":"creationix","message":"domanic, ralphtheninja","timestamp":1500821306793}
{"from":"creationix","message":"sorry I missed your conversation. I've done *lots* of experimentation in this area since even before IPFS was known","timestamp":1500821355787}
{"from":"creationix","message":"I don't understand the SSB protocol yet, I worry a lot about its performance and haven't made time to dig in deeper.","timestamp":1500821488820}
{"from":"creationix","message":"What I understand about IPFS is it's basically the thing I would have made if left to my own devices. Most of my experiments tend to make similar decisions.","timestamp":1500821488926}
{"from":"creationix","message":"But once I dug in and understood DAT, I learned some really neat stuff.","timestamp":1500821489032}
{"from":"creationix","message":"Dat's not quite ready for me to use. (need multi-writer, need working video streaming, need key and identity management, etc..)","timestamp":1500821570683}
{"from":"creationix","message":"Knowing me I might end up making something new inspired by dat, but hopefully I don't have to. I'm trying to be a good collaborator.","timestamp":1500821612372}
{"from":"karissa","message":"It'd be great to have some key management on top of hyperdb. Have you checked that out? creationix","timestamp":1500821832972}
{"from":"creationix","message":"is hyperdb out yet?","timestamp":1500821926721}
{"from":"ralphtheninja","message":"creationix: I can't say that I understand dat at all atm, trying to grok it in little pieces each day","timestamp":1500821958462}
{"from":"mafintosh","message":"creationix: nope :)","timestamp":1500821974578}
{"from":"ralphtheninja","message":"trying to play around and focus on having fun :)","timestamp":1500821978097}
{"from":"mafintosh","message":"creationix: i'm building all our native deps on arm64 right now","timestamp":1500821993481}
{"from":"mafintosh","message":"for prebuilds","timestamp":1500821998343}
{"from":"creationix","message":"mafintosh: nice, that's what our production board is :D","timestamp":1500822009354}
{"from":"mafintosh","message":"creationix: and my phone hehe","timestamp":1500822018908}
{"from":"mafintosh","message":"creationix: https://www.scaleway.com <-- had reasonable prices for servers","timestamp":1500822037867}
{"from":"creationix","message":"those are pretty powerful?","timestamp":1500822099829}
{"from":"mafintosh","message":"only used them for builds so far","timestamp":1500822114619}
{"from":"mafintosh","message":"but seem ok","timestamp":1500822119179}
{"from":"creationix","message":"I've got the expressobin on my desk, it's a little slow but works for small builds","timestamp":1500822152415}
{"from":"mafintosh","message":"creationix: ie your comments above. what do you mean by video streaming?","timestamp":1500822156767}
{"from":"creationix","message":"*espressobin","timestamp":1500822160245}
{"from":"mafintosh","message":"creationix: the server builds sodium 10-20x faster than my pi3","timestamp":1500822183314}
{"from":"creationix","message":"my pi3 is about that much faster than my espressobin","timestamp":1500822212360}
{"from":"creationix","message":"so yeah, they are good","timestamp":1500822219176}
{"from":"creationix","message":"by video streaming I mean I need an http or fuse interface that gives me a url I can point to a browser's html5 video tag and have it stream well","timestamp":1500822261443}
{"from":"creationix","message":"I've had serious issues with both beaker and dathttpd","timestamp":1500822277327}
{"from":"creationix","message":"when I wrote my own system before digging in to dat, mine streamed well","timestamp":1500822302072}
{"from":"mafintosh","message":"creationix: i fixed a streaming bug two days ago","timestamp":1500822319539}
{"from":"creationix","message":"cool, that might help","timestamp":1500822330019}
{"from":"mafintosh","message":"don't know if it is in beaker yet","timestamp":1500822341039}
{"from":"creationix","message":"if I get the latest dathttpd will it have it?","timestamp":1500822357225}
{"from":"creationix","message":"maybe not, npm likes to default to the lock file now","timestamp":1500822370231}
{"from":"mafintosh","message":"gah","timestamp":1500822381339}
{"from":"mafintosh","message":"pfrazee: o/","timestamp":1500822385698}
{"from":"mafintosh","message":"(update the lock)","timestamp":1500822392219}
{"from":"creationix","message":"not sure, didn't check","timestamp":1500822452239}
{"from":"pfrazee","message":"ok I'll update the lock","timestamp":1500822526402}
{"from":"pfrazee","message":"also beaker has a crash bug on videos due to electron, I cant seem to get anyone to fix it","timestamp":1500822556218}
{"from":"mafintosh","message":"creationix: have you tried streaming into chrome using dat cli?","timestamp":1500822725939}
{"from":"mafintosh","message":"for video","timestamp":1500822727698}
{"from":"creationix","message":"you mean the `-http` flag?","timestamp":1500822758155}
{"from":"mafintosh","message":"--http but yea","timestamp":1500822765499}
{"from":"mafintosh","message":"that should work if you reinstall dat and get the fix","timestamp":1500822779220}
{"from":"creationix","message":"good point","timestamp":1500822784055}
{"from":"creationix","message":"is there a way to choose what interface/ip it binds to?","timestamp":1500822805416}
{"from":"mafintosh","message":"the http?","timestamp":1500822814479}
{"from":"mafintosh","message":"hmm good question","timestamp":1500822817298}
{"from":"creationix","message":"yeah, I think it's just `127.0.0.1`","timestamp":1500822824592}
{"from":"creationix","message":"also is scaleway's arm64 virtualized?","timestamp":1500822949723}
{"from":"mafintosh","message":"creationix: no but you can add the option here: https://github.com/datproject/dat-node/blob/master/lib/serve.js","timestamp":1500822954960}
{"from":"mafintosh","message":"creationix: only virtual it seems","timestamp":1500823016638}
{"from":"creationix","message":"because I've got a 16-core rackmount server in my lower floor. I might be able to host my own (just with crappy bandwidth)","timestamp":1500823047922}
{"from":"mafintosh","message":"nice","timestamp":1500823176819}
{"from":"mafintosh","message":"creationix: one of our first dat-as-a-service things could then be a prebuild service on your rack","timestamp":1500823207020}
{"from":"creationix","message":"streaming seems improved in the latest dat cli","timestamp":1500823292454}
{"from":"creationix","message":"I did a `dat clone --sparse --http ...`","timestamp":1500823312723}
{"from":"creationix","message":"poor mans beaker ;)","timestamp":1500823321272}
{"from":"pfrazee","message":"beaker is already the poor man's beaker","timestamp":1500823351248}
{"from":"mafintosh","message":"creationix: sweet, so it worked?","timestamp":1500823371740}
{"from":"mafintosh","message":"pretty embarrassing i didn't have a test for it","timestamp":1500823386072}
{"from":"creationix","message":"mostly. I had trouble with one video, but most of them worked right away without a hitch","timestamp":1500823389146}
{"from":"mafintosh","message":"i accidentally broke it a week ago","timestamp":1500823396761}
{"from":"mafintosh","message":"or two","timestamp":1500823399078}
{"from":"creationix","message":"ahh","timestamp":1500823401623}
{"from":"creationix","message":"yeah, before it was telling me all my video files were corrupt","timestamp":1500823411940}
{"from":"mafintosh","message":"yea","timestamp":1500823418881}
{"from":"creationix","message":"but nginx could serve/stream them just fine","timestamp":1500823419977}
{"from":"mafintosh","message":"creationix: please let me know if you experience more issues like this","timestamp":1500823435898}
{"from":"creationix","message":"ok","timestamp":1500823439998}
{"from":"creationix","message":"I wanted to dig in myself, but I got lost in the code","timestamp":1500823458026}
{"from":"mafintosh","message":"ya that part is a bit non-trivial","timestamp":1500823496799}
{"from":"mafintosh","message":"wooot, my sodium prebuild for arm64 seems to work","timestamp":1500823563567}
{"from":"mafintosh","message":"used dat to transfer it back to my machine","timestamp":1500823573259}
{"from":"mafintosh","message":"full circle","timestamp":1500823575599}
{"from":"creationix","message":"lol","timestamp":1500823635035}
{"from":"creationix","message":"can you see this repo https://git.daplie.com/creationix/privtree/blob/master/libs/fuse.js","timestamp":1500823639425}
{"from":"creationix","message":"this was the thing I wrote before digging deeply into dat","timestamp":1500823654896}
{"from":"creationix","message":"in particular, my streaming support https://git.daplie.com/creationix/privtree/blob/master/libs/fuse.js#L75-105","timestamp":1500823705730}
{"from":"mafintosh","message":"ah nice","timestamp":1500823746359}
{"from":"mafintosh","message":"looks pretty straight forward","timestamp":1500823756467}
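[editor's note] For readers without access to the git.daplie.com links above: the streaming read handler creationix describes has the general shape of a fuse-bindings `read` operation — `(path, fd, buffer, length, position, cb)`, calling back with the number of bytes read. A self-contained sketch backed by an in-memory file rather than a real archive (all names here are illustrative):

```javascript
// Minimal read-only fuse-style read handler. A real implementation
// would return a negative errno for a missing path; this sketch just
// reports 0 bytes read in that case.
const files = { '/hello.txt': Buffer.from('hello from a fuse sketch\n') }

function read (path, fd, buffer, length, position, cb) {
  const file = files[path]
  if (!file) return cb(0)
  if (position >= file.length) return cb(0) // past EOF
  const slice = file.slice(position, position + length)
  slice.copy(buffer)
  cb(slice.length) // fuse convention: callback with bytes read
}

// Simulate the kernel asking for the first 5 bytes:
const buf = Buffer.alloc(5)
read('/hello.txt', 42, buf, 5, 0, (bytes) => {
  console.log(bytes, buf.toString()) // 5 'hello'
})
```

Because reads are random-access `(position, length)` requests, this maps naturally onto hyperdrive once a random-access read API (the PR mafintosh links above) is available.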
{"from":"mafintosh","message":"creationix: we need fuse for dat :)","timestamp":1500823843887}
{"from":"creationix","message":"yep, I'd love to write it, I'm just not sure the hyperdrive api is powerful enough","timestamp":1500823872257}
{"from":"creationix","message":"but I don't know how to get at the raw objects in dat","timestamp":1500823885870}
{"from":"mafintosh","message":"creationix: we mostly just need to land https://github.com/mafintosh/hyperdrive/pull/169","timestamp":1500823952902}
{"from":"mafintosh","message":"nothing blocking it","timestamp":1500823959258}
{"from":"creationix","message":"yeah, that might do it","timestamp":1500823971677}
{"from":"creationix","message":"you don't have to implement *all* of fuse at first","timestamp":1500823983272}
{"from":"mafintosh","message":"true","timestamp":1500823990318}
{"from":"mafintosh","message":"but the read api is pretty essential","timestamp":1500823999738}
{"from":"creationix","message":"read-only is a lot easier than writing","timestamp":1500824001479}
{"from":"mafintosh","message":"ya","timestamp":1500824009597}
{"from":"mafintosh","message":"i'll try and land that soon and give you a ping then","timestamp":1500824030338}
{"from":"creationix","message":"cool","timestamp":1500824033968}
{"from":"creationix","message":"though I guess basic writing shouldn't be too hard. Just buffer somewhere and append to the log on file close?","timestamp":1500824059788}
{"from":"creationix","message":"though I guess basic writing shouldn't be too hard. Just buffer somewhere and append to the log on file descriptor close?","timestamp":1500824080671}
{"from":"mafintosh","message":"yea exactly","timestamp":1500824089901}
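[editor's note] The write path creationix sketches — buffer writes per open file descriptor, then append the whole file to the log when the descriptor closes — in a minimal form. The `log` array is a stand-in for an append-only feed; all names are invented for illustration.

```javascript
// Buffer writes per fd; append once on close.
const log = []
const openFds = new Map() // fd -> { path, chunks }

function open (path, fd) { openFds.set(fd, { path, chunks: [] }) }
function write (fd, data) { openFds.get(fd).chunks.push(Buffer.from(data)) }
function close (fd) {
  const { path, chunks } = openFds.get(fd)
  openFds.delete(fd)
  log.push({ path, data: Buffer.concat(chunks) }) // one append per close
}

open('/notes.txt', 1)
write(1, 'hello ')
write(1, 'world')
close(1)
console.log(log[0].path, log[0].data.toString()) // /notes.txt hello world
```

This keeps partial writes out of the feed: other peers only ever see whole files, which matches the append-on-close behaviour discussed above.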
{"from":"mafintosh","message":"utp-native now also has prebuilds for arm64","timestamp":1500824119378}
{"from":"pfrazee","message":"creationix: you *might* find some of this useful https://github.com/pfrazee/hyperdrive-staging-area","timestamp":1500824129604}
{"from":"mafintosh","message":"oh yea","timestamp":1500824395859}
{"from":"creationix","message":"pfrazee: where does it store them?","timestamp":1500824549433}
{"from":"pfrazee","message":"creationix: the hypercore/drive data?","timestamp":1500824867575}
{"from":"mafintosh","message":"muhaha","timestamp":1500825667259}
{"from":"mafintosh","message":"can now build prebuilds on my pixel","timestamp":1500825675744}
{"from":"mafintosh","message":"so nice dat runs without any native code","timestamp":1500825694398}
{"from":"mafintosh","message":"so you can use it to bootstrap a build process that builds its native deps","timestamp":1500825706938}
{"from":"creationix","message":"pfrazee: the data in the staging area","timestamp":1500825910220}
{"from":"pfrazee","message":"creationix: the staging area itself is the target folder","timestamp":1500825937379}
{"from":"pfrazee","message":"it's not like gi","timestamp":1500825945539}
{"from":"pfrazee","message":"git*","timestamp":1500825947020}
{"from":"pfrazee","message":"the \"staging area\" is the active folder, and then there's the published hyperdrive. No in-between","timestamp":1500825962776}
{"from":"creationix","message":"ok, you just somehow flag it to not import it","timestamp":1500825970184}
{"from":"pfrazee","message":"you do your publishes explicitly","timestamp":1500825982893}
{"from":"pfrazee","message":"staging.commit()","timestamp":1500825986473}
{"from":"cblgh","message":"mafintosh: getting ready to make my mud repo public","timestamp":1500825988672}
{"from":"cblgh","message":"buggy af atm though","timestamp":1500825993184}
{"from":"creationix","message":"pfrazee: I mean if there is a `dat sync` running in the folder and you write a file to it, it gets published","timestamp":1500826012240}
{"from":"mafintosh","message":"cblgh: cool. buggy cause of hyperdb?","timestamp":1500826029880}
{"from":"mafintosh","message":"dat://7d6f0686628bdd0464a12334c23f0c6dbb4119c147f378cc5ff0288fa83f7094","timestamp":1500826033261}
{"from":"cblgh","message":"mafintosh: ya haha","timestamp":1500826035842}
{"from":"mafintosh","message":"cblgh: good :) then i'll make it better","timestamp":1500826046559}
{"from":"cblgh","message":"awyisssss","timestamp":1500826053741}
{"from":"mafintosh","message":"ignore that dat link","timestamp":1500826071960}
{"from":"cblgh","message":"weird stuff happens when peer A starts a server and peer B joins it for the first time","timestamp":1500826074221}
{"from":"mafintosh","message":"just so i can transfer files from my pixel to my laptop hehe","timestamp":1500826084114}
{"from":"cblgh","message":"peer A will crash (because it doesn't know who the fuck B is)","timestamp":1500826088884}
{"from":"mafintosh","message":"ah yes","timestamp":1500826097563}
{"from":"cblgh","message":"but currently using peer-network to at least make sure that everyone that connects will get the same order of feeds into hyperdb","timestamp":1500826129678}
{"from":"mafintosh","message":"cblgh: i have a cross atlantic flight on tue. hope to fix a bunch of issues there","timestamp":1500826130159}
{"from":"cblgh","message":"ohhhhhh yes","timestamp":1500826135131}
{"from":"creationix","message":"cblgh: are you sure that's the right dat link? I'm getting \"key required to clone\"","timestamp":1500826170427}
{"from":"creationix","message":"oh, no it was mafintosh that shared the dat link","timestamp":1500826220845}
{"from":"mafintosh","message":"ya","timestamp":1500826236830}
{"from":"mafintosh","message":"creationix: my android builds","timestamp":1500826245281}
{"from":"mafintosh","message":"was easier to paste the link here to get it to my laptop","timestamp":1500826256046}
{"from":"creationix","message":"wonder why my dat won't take it","timestamp":1500826267907}
{"from":"creationix","message":"nevermind, doesn't matter","timestamp":1500826363528}
{"from":"pfrazee","message":"creationix: right, with the staging you can write a cli that doesnt autopublish, but instead shows a diff","timestamp":1500826697674}
{"from":"creationix","message":"so in the case of fuse mount, that's almost always going to be read-only just because the one machine with write access, probably already has the files on it's real filesystem.","timestamp":1500826802353}
{"from":"creationix","message":"I guess things will get more interesting once we get multi done","timestamp":1500826825429}
{"from":"cblgh","message":"mafintosh: https://github.com/cblgh/hyperdungeon","timestamp":1500827297915}
{"from":"cblgh","message":"i'm kinda getting back into the swing of things with node, so structure is a bit messy","timestamp":1500827358281}
{"from":"mafintosh","message":"creationix: agreed","timestamp":1500827436828}
{"from":"creationix","message":"millette: what is cdns?","timestamp":1500827769864}
{"from":"millette","message":"content delivery networkS","timestamp":1500827790124}
{"from":"creationix","message":"oh right, I was hoping it was some new fancy distributed dns","timestamp":1500827917915}
{"from":"creationix","message":"would love a way to have domain-like names for dats without having to use dns or https","timestamp":1500827967911}
{"from":"creationix","message":"not really a fan of things like filecoin though","timestamp":1500827980096}
{"from":"creationix","message":"or namecoin or whatever it was","timestamp":1500827991089}
{"from":"millette","message":"fixed the capitalisation in https://github.com/beakerbrowser/beaker/issues/552#issuecomment-317212747 to make CDNs more obvious.","timestamp":1500828054629}
{"from":"creationix","message":"heh, thanks","timestamp":1500828105374}
{"from":"millette","message":"just war-dial dat hashes until you find something interesting :-)","timestamp":1500828161038}
{"from":"creationix","message":"like that's feasible :P","timestamp":1500828279433}
{"from":"dat-gitter","message":"(lukeburns) pfrazee: re yesterday's conversation: I'm going back on what I said -- i think there *is* a difference between scenario 1 and 2 -- say you've figured out scenario 1 e.g. with a feed exchange on discovery channel using the shared secret -- scenario 2 is the same as scenario 1 insofar as it's just the case with two peers, but it's different in that you still need to communicate to the peer you are trying to communicate w","timestamp":1500828384545}
{"from":"dat-gitter","message":"channel) which presumably already occurred in scenario 1.","timestamp":1500828384653}
{"from":"pfrazee","message":"@lukeburns yeah scenario 2 is basically trying to guarantee your recipients via keypair sigs","timestamp":1500828430444}
{"from":"pfrazee","message":"presumably you could implement scenario 2 on top of scenario 1, as an authenticated method to exchange a shared secret","timestamp":1500828462027}
{"from":"pfrazee","message":"whereas scenario 1 just expects users to manually copy/paste the shared secret to each other","timestamp":1500828482787}
{"from":"pfrazee","message":"which is what we already do for eg webrtc video chat rooms","timestamp":1500828497438}
{"from":"pfrazee","message":"talky.io/reasonably-unguessable-roomname-which-acts-as-the-shared-secret","timestamp":1500828521368}
{"from":"dat-gitter","message":"(lukeburns) Hmm 🤔","timestamp":1500828574509}
{"from":"dat-gitter","message":"(lukeburns) Is a peer's public key a hypercore feed or a hyperdrive key?","timestamp":1500828756851}
{"from":"pfrazee","message":"peers dont have public keys yet","timestamp":1500828772111}
{"from":"dat-gitter","message":"(lukeburns) Thought scenario 2 was you have a peer's pub key","timestamp":1500828801612}
{"from":"pfrazee","message":"yeah","timestamp":1500828861075}
{"from":"pfrazee","message":"I may have misunderstood what you meant but my point is, there's not a pubkey system for peers *yet* so it could be anything","timestamp":1500828899749}
{"from":"pfrazee","message":"a pubkey for a drive, or a core, or for nothing but the connection","timestamp":1500828917861}
{"from":"dat-gitter","message":"(lukeburns) Oh oh gotcha","timestamp":1500828976305}
{"from":"dat-gitter","message":"(lukeburns) One thought is to give each peer a feed and have replicators coordinate to funnel up messages to the writer of the feed, who can then publish a receipt on the feed","timestamp":1500829115968}
{"from":"dat-gitter","message":"(lukeburns) As a way to solve scenario 2 without relying on scenario 1","timestamp":1500829190104}
{"from":"dat-gitter","message":"(lukeburns) to establish a direct secure channel","timestamp":1500829251077}
{"from":"dat-gitter","message":"(lukeburns) (via messaging on hypercore-protocol level)","timestamp":1500829474813}
{"from":"dat-gitter","message":"(rjsteinert) @mafintosh, are those Arm prebuilds you mentioned perhaps a fix for the issue with offline Dat discovery on Android?","timestamp":1500830414542}
{"from":"dat-gitter","message":"(rjsteinert) I'd be super happy to try them out :)","timestamp":1500830434471}
{"from":"mafintosh","message":"@rjsteinert ya the utp-native might help with that","timestamp":1500830460067}
{"from":"mafintosh","message":"are you using termux?","timestamp":1500830464114}
{"from":"dat-gitter","message":"(rjsteinert) yup","timestamp":1500830471812}
{"from":"dat-gitter","message":"(rjsteinert) And running an ssh server on termux, going in through my laptop, reduces eye strain ;)","timestamp":1500830490439}
{"from":"dat-gitter","message":"(rjsteinert) Btw, I'm fairly command line literate in case that helps with explaining how I can test things","timestamp":1500830541704}
{"from":"mafintosh","message":"@rjsteinert whoooa!!! how do i do that?","timestamp":1500830588749}
{"from":"mafintosh","message":"the ssh server","timestamp":1500830591689}
{"from":"dat-gitter","message":"(rjsteinert) getting link...","timestamp":1500830609282}
{"from":"dat-gitter","message":"(rjsteinert) https://oliverse.ch/tech/2015/11/06/run-an-ssh-server-on-your-android-with-termux.html","timestamp":1500830630569}
{"from":"dat-gitter","message":"(rjsteinert) Followed that other than generating public keys. Already have them a quick curl >> ~/.ssh/authorized_keys away","timestamp":1500830674123}
{"from":"mafintosh","message":"worked","timestamp":1500830842172}
{"from":"mafintosh","message":"curl https://github.com/mafintosh.keys >> ~/.ssh/authorized_keys","timestamp":1500830861706}
{"from":"mafintosh","message":"#protip","timestamp":1500830867707}
{"from":"creationix","message":"yep, do that all the time","timestamp":1500830877340}
{"from":"mafintosh","message":"how cool is this","timestamp":1500830911260}
{"from":"creationix","message":"I even use that API for validate published to lit.luvit.io (people publish by signing with their ssh private key instead of me needing a login for them)","timestamp":1500830916079}
{"from":"mafintosh","message":"feel like someone needs to write a simple \"android for node people\" blog post","timestamp":1500830933467}
{"from":"dat-gitter","message":"(rjsteinert) @mafintosh thanks for reminding me about that endpoint","timestamp":1500831082717}
{"from":"dat-gitter","message":"(rjsteinert) (github keys endpoint)","timestamp":1500831117075}
{"from":"dat-gitter","message":"(rjsteinert) @creationix I was poking in the dark with some concepts throwing down some google foo and came across this https://medium.freecodecamp.org/building-a-node-js-application-on-android-part-1-termux-vim-and-node-js-dfa90c28958f","timestamp":1500831170694}
{"from":"dat-gitter","message":"(rjsteinert) I'm starting to get excited about Termux. It's GPLv3 and has a lot of packages you can install with apt.","timestamp":1500831215534}
{"from":"dat-gitter","message":"(rjsteinert) Oh woops, that was supposed to be @mafintosh, I thought creationix said the thing about \"android for node people\". Talking with folks on IRC through gitter is not ideal.","timestamp":1500831299202}
{"from":"mafintosh","message":"@rjsteinert yea that's what i'm using as well. works so good","timestamp":1500831339007}
{"from":"dat-gitter","message":"(rjsteinert) Y'all have any IRC clients you would recommend for Mac and Linux?","timestamp":1500831340249}
{"from":"mafintosh","message":"irccloud is great if you don't wanna deal with it","timestamp":1500831357754}
{"from":"dat-gitter","message":"(rjsteinert) @mafintosh, thanks for recommendation. I remember seeing that service right before I sucked into the black hole of slack.","timestamp":1500831449085}
{"from":"cblgh","message":"mafintosh: wishlist for plane fixes https://gist.github.com/cblgh/84aab540cd5ecf4ec79c97779e060e9b","timestamp":1500831469351}
{"from":"dat-gitter","message":"(rjsteinert) @mafintosh If I could demo two androids on an offline LAN syncing dat archives, it would be a very compelling case at my job to start pointing our efforts at work towards working with Dat.","timestamp":1500831984596}
{"from":"mafintosh","message":"@rjsteinert that doesn't work now?","timestamp":1500832006926}
{"from":"dat-gitter","message":"(rjsteinert) @mafintosh Oh, does it just not work between an android and a mac?","timestamp":1500832058609}
{"from":"dat-gitter","message":"(rjsteinert) (when using offline lan)","timestamp":1500832067377}
{"from":"mafintosh","message":"@rjsteinert that works for me as well","timestamp":1500832075566}
{"from":"mafintosh","message":"using my google pixel and macbook","timestamp":1500832081688}
{"from":"dat-gitter","message":"(rjsteinert) That's great!","timestamp":1500832123432}
{"from":"dat-gitter","message":"(rjsteinert) I wonder what my problem is https://github.com/datproject/dat/issues/829#issue-244893620","timestamp":1500832130650}
{"from":"mafintosh","message":"@rjsteinert it could just be mdns on your local network that is weird","timestamp":1500832195708}
{"from":"dat-gitter","message":"(rjsteinert) Definitely something weird going on. The peers connect and disconnect over and over, no files ever transferred.","timestamp":1500832322872}
{"from":"dat-gitter","message":"(rjsteinert) @mafintosh, Android does not support mDNS in the avahi-daemon way where it \"just works\". My suspicion is that this is why Dat was choking","timestamp":1500832444966}
{"from":"mafintosh","message":"@rjsteinert dat has its own embedded mdns stack","timestamp":1500832476741}
{"from":"mafintosh","message":"if it connects and then disconnects it sounds like a different issue actually","timestamp":1500832489567}
{"from":"dat-gitter","message":"(rjsteinert) @mafintosh, if the mdns stack in dat can just work on Android I would be ecstatic.","timestamp":1500832795942}
{"from":"mafintosh","message":"@rjsteinert it does work for me","timestamp":1500832816867}
{"from":"dat-gitter","message":"(rjsteinert) +1","timestamp":1500832823481}
{"from":"mafintosh","message":"@rjsteinert do you know any projects for android that bundle node and a webview?","timestamp":1500832935671}
{"from":"mafintosh","message":"electron style","timestamp":1500832938832}
{"from":"dat-gitter","message":"(rjsteinert) My coworker Chris Kelley is trying to figure that out right at this moment","timestamp":1500832968246}
{"from":"mafintosh","message":"nice","timestamp":1500833070686}
{"from":"mafintosh","message":"@rjsteinert i'm thinking its probably simple enough to build that actually","timestamp":1500833086926}
{"from":"mafintosh","message":"super minimal","timestamp":1500833092827}
{"from":"dat-gitter","message":"(rjsteinert) mafintosh, Cordova (aka PhoneGap) is kind of the go to for putting apps in a webview on Android","timestamp":1500833217928}
{"from":"mafintosh","message":"@rjsteinert but that does give me the node api does it?","timestamp":1500833240846}
{"from":"mafintosh","message":"doesn't","timestamp":1500833245167}
{"from":"dat-gitter","message":"(rjsteinert) Exactly","timestamp":1500833251968}
{"from":"dat-gitter","message":"(rjsteinert) It does not","timestamp":1500833257831}
{"from":"dat-gitter","message":"(rjsteinert) It's really just a question of how to sustainably get node into an APK in general","timestamp":1500833281653}
{"from":"dat-gitter","message":"(rjsteinert) Then something else handles the webview magic. Cordova tooling is arguably overkill, but it comes with one nice plugin called \"Crosswalk\" that packages chrome into your APK so that you have a consistent rendering web view across Android Versions.","timestamp":1500833345395}
{"from":"dat-gitter","message":"(rjsteinert) A proof of concept beaker browser might be a fork of Termux with a webview","timestamp":1500833471145}
{"from":"dat-gitter","message":"(rjsteinert) *proof of concept beaker browser for android","timestamp":1500833482691}
{"from":"dat-gitter","message":"(chrisekelley) @ mafintosh I'm trying to chase down the node-on-Android feature too. My current plan is to see what it would take to make a Cordova plugin for J2V8. But this may have already been done by the folks who make Tabris - https://github.com/eclipsesource/tabris-js","timestamp":1500833512064}
{"from":"mafintosh","message":"@chrisekelley would building node as a shared lib not work?","timestamp":1500833559981}
{"from":"millette","message":"hmm https://github.com/eclipsesource/J2V8 Java Bindings for V8","timestamp":1500833672375}
{"from":"dat-gitter","message":"(rjsteinert) @mafintosh by \"build as a shared lib\" do you mean build it as a binary that can be used in a shell like termux or as in a node binary packaged up in a Java library?","timestamp":1500833680731}
{"from":"millette","message":"and article https://eclipsesource.com/blogs/2016/07/20/running-node-js-on-the-jvm/","timestamp":1500833697202}
{"from":"mafintosh","message":"@rjsteinert the last part","timestamp":1500833745940}
{"from":"dat-gitter","message":"(rjsteinert) @chrisekelley You mentioned to me earlier that J2V8 might not be able to run on more recent versions of Android?","timestamp":1500833765415}
{"from":"cblgh","message":"staltz is working on similar stuff with his ssb android client https://github.com/staltz/mmmmm-mobile","timestamp":1500833769632}
{"from":"millette","message":"https://github.com/dna2github/NodeBase is recent too","timestamp":1500833769742}
{"from":"mafintosh","message":"build an .so file we bind against","timestamp":1500833773639}
{"from":"cblgh","message":"from readme","timestamp":1500833781930}
{"from":"cblgh","message":" Which in turn uses NodeBase (node.js v7 compiled for android arm devices)","timestamp":1500833782036}
{"from":"mafintosh","message":"nice","timestamp":1500834100499}
{"from":"dat-gitter","message":"(rjsteinert) @cblgh I don't yet grok what NodeBase APK is intended to do but it looks like it's an example of using node in an APK which is great. Wonder if it's using J2V8...","timestamp":1500834112589}
{"from":"mafintosh","message":"just cloned the node source on my pixel","timestamp":1500834176861}
{"from":"mafintosh","message":"will try and build a shared lib and see what happens hehe","timestamp":1500834186764}
{"from":"dat-gitter","message":"(rjsteinert) I've gotta go move some file cabinets. Carbon copies, ugh.","timestamp":1500834194564}
{"from":"ralphtheninja[m]","message":"mafintosh++","timestamp":1500834213495}
{"from":"dat-gitter","message":"(rjsteinert) You're a rockstar mafintosh","timestamp":1500834217813}
{"from":"dat-gitter","message":"(chrisekelley) @mafintosh looks like there is better support in node for building as a shared lib - https://github.com/nodejs/build/issues/359","timestamp":1500834304772}
{"from":"mafintosh","message":"@chrisekelley ya saw that the other day as well :)","timestamp":1500834486363}
{"from":"mafintosh","message":"got it configured!","timestamp":1500834510947}
{"from":"mafintosh","message":"now running make","timestamp":1500834513175}
{"from":"dat-gitter","message":"(chrisekelley) @rjsteinert yeah, about J2V8-on-Android 5: we might run into the same issue that the old couchbase mobile client faced, when something changed in the Android API that prevented the Erlang interpreter from running. I'll check that out...","timestamp":1500834518430}
{"from":"mafintosh","message":"haha totally working","timestamp":1500834535000}
{"from":"mafintosh","message":"now building v8 on the pixel lol","timestamp":1500834540859}
{"from":"dat-gitter","message":"(chrisekelley) @mafintosh that is great!","timestamp":1500834596870}
{"from":"cblgh","message":"lol","timestamp":1500834631549}
{"from":"cblgh","message":"hackintosh","timestamp":1500834638461}
{"from":"mafintosh","message":"@chrisekelley @rjsteinert is the webview rendered by different engines on android depending on phones?","timestamp":1500834805731}
{"from":"pfrazee","message":"https://github.com/brave/browser-android-tabs","timestamp":1500834861609}
{"from":"pfrazee","message":"https://github.com/brave/browser-ios","timestamp":1500834867141}
{"from":"pfrazee","message":"should check into how theyre doing things","timestamp":1500834881638}
{"from":"dat-gitter","message":"(chrisekelley) @mafintosh yeah, webview versions are all over the map. With cordova, you have 2 browser views - webview, which is tied to the specific android webview shipped with the phone - and is frozen in time - and chromeview, which is updated by Google regularly. This is why we started using crosswalk plugin. Here are the versions that ship w/ crosswalk:: https://github.com/crosswalk-project/crosswalk-website/wiki/Release-da","timestamp":1500835051845}
{"from":"dat-gitter","message":"longer being developed - stopped at crosswalk 23/chrome 54","timestamp":1500835051950}
{"from":"dat-gitter","message":"(ryanwarsaw) that sucks, cross walk was a dream to work with :rainbow:","timestamp":1500835246002}
{"from":"ralphtheninja[m]","message":"why isn't it no longer maintained?","timestamp":1500835296546}
{"from":"ralphtheninja[m]","message":"just strange observation if it's that useful I mean","timestamp":1500835332181}
{"from":"ralphtheninja[m]","message":"and my grammar blows :)","timestamp":1500835379429}
{"from":"dat-gitter","message":"(ryanwarsaw) https://crosswalk-project.org/blog/crosswalk-final-release.html","timestamp":1500835389839}
{"from":"dat-gitter","message":"(chrisekelley) @ralphtheninja[m] Intel was the main sponsor of Crosswalk; I guess they decided that PWA's are going to be more common. But yeah, it's a pity they won't continue developing it.","timestamp":1500835401829}
{"from":"dat-gitter","message":"(ryanwarsaw) it had some issues of its own tbh","timestamp":1500835444232}
{"from":"dat-gitter","message":"(lukeburns) does dat have a page of published archives like https://archives.ipfs.io/?","timestamp":1500835489323}
{"from":"dat-gitter","message":"(ryanwarsaw) mainly just weird scenarios where things wouldn’t function properly, or how not light-weight it tended to be","timestamp":1500835499263}
{"from":"dat-gitter","message":"(ryanwarsaw) 5.0+ the integrated web views aren’t that bad, but the APK file, once everything was packed up on crosswalk, was pretty huge","timestamp":1500835536488}
{"from":"ralphtheninja[m]","message":"it would be nice to run electron directly on Android","timestamp":1500835595373}
{"from":"pfrazee","message":"@lukeburns there are some datasets on https://hashbase.io","timestamp":1500835612772}
{"from":"ralphtheninja[m]","message":"or simply just write the whole UI for the OS in electron","timestamp":1500835635640}
{"from":"dat-gitter","message":"(ryanwarsaw) is this for dat apps? Cause dat’s pretty simple to the point where it’s probably easier to just make a native app","timestamp":1500835733280}
{"from":"cblgh","message":"put up these examples which show how to use hypercore with hyperdiscovery https://github.com/cblgh/hypercore-examples/","timestamp":1500835920618}
{"from":"ralphtheninja[m]","message":"cool","timestamp":1500835934151}
{"from":"cblgh","message":"really simple, but i didn't understand how they worked together at first","timestamp":1500835951633}
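For context, wiring the two together looks roughly like the following sketch, assuming the hypercore and hyperdiscovery npm module APIs of that era (the constructor shapes and event signatures are assumptions based on the conversation, not verified here; both are third-party packages, not Node built-ins):

```javascript
// Sketch: a hypercore feed announced on the network via hyperdiscovery.
// Assumes `npm install hypercore hyperdiscovery`; './my-feed' is illustrative.
const hypercore = require('hypercore')
const discovery = require('hyperdiscovery')

const feed = hypercore('./my-feed', { valueEncoding: 'utf-8' })

feed.on('ready', function () {
  feed.append('hello world')
  // hyperdiscovery handles the swarming (mdns + dht) on the feed's discovery key
  const swarm = discovery(feed)
  swarm.on('connection', function (peer, info) {
    console.log('replicating with a peer over', info.type)
  })
})
```

The point that was confusing at first: hypercore only handles the append-only log and replication streams, while hyperdiscovery is what actually finds peers and pipes those streams between them.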
{"from":"ralphtheninja[m]","message":"ryan asking me? I dunno .. I was just rambling :)","timestamp":1500835966666}
{"from":"dat-gitter","message":"(lukeburns) pfrazee: am wondering more about archives that demo dat's capacity for large datasets","timestamp":1500836029918}
{"from":"pfrazee","message":"karissa might be able to answer that","timestamp":1500836056672}
{"from":"karissa","message":"Lukeburns check out datproject.org and scroll down to see three examples","timestamp":1500837604545}
{"from":"karissa","message":"lukeburns you can also click \"explore\" for user generated ones","timestamp":1500837622294}
{"from":"dat-gitter","message":"(lukeburns) karissa: thx!","timestamp":1500837830330}
{"from":"dat-gitter","message":"(lukeburns) wasn't sure if there was something else hiding somewhere","timestamp":1500837855598}
{"from":"pfrazee","message":"@rj not sure if you're here (dat-gitter reveals no secrets) but I fixed the video streaming on https hashbase","timestamp":1500840954707}
{"from":"larpanet","message":"@C6fAmdXgqTDbmZGAohUaYuyKdz3m6GBoLLtml3fUn+o=.ed25519:... project i've been working on: hyperdungeon it's an experiment in using the hyper* tech that powers ... http://wx.larpa.net:8807/%25FuEII4iyL5zQQwFTtT3%2FYfIRaBvYnMzrOD9sw5YL3rQ%3D.sha256","timestamp":1500841355135}
{"from":"pfrazee","message":"cblgh: hyperdungeon looks cool","timestamp":1500842259285}
{"from":"cblgh","message":"pfrazee: thanks!","timestamp":1500842775373}
{"from":"pfrazee","message":"creationix: Ive updated dathttpd","timestamp":1500843193392}
{"from":"ralphtheninja","message":"pfrazee: is it possible to share things between multiple instances of beaker? e.g. the library etc","timestamp":1500843605484}
{"from":"ralphtheninja","message":"bookmarks etc","timestamp":1500843613043}
{"from":"pfrazee","message":"ralphtheninja: not right now","timestamp":1500843616391}
{"from":"ralphtheninja","message":"would be a nice feature though","timestamp":1500843623565}
{"from":"pfrazee","message":"ralphtheninja: yeah that's the kind of feature we need to do after we've got product market fit","timestamp":1500843697710}
{"from":"ralphtheninja","message":"I've always had this problem of syncing my bookmarks :)","timestamp":1500843754597}
{"from":"pfrazee","message":"yeah :) we're thinking about doing installable web apps in the next version of beaker","timestamp":1500843772670}
{"from":"ralphtheninja","message":"using mozillas sync in firefox atm, works fine, but still would be nicer p2p of course","timestamp":1500843774455}
{"from":"pfrazee","message":"so we might be able to do the sync in userland","timestamp":1500843834120}
{"from":"ralphtheninja","message":"pfrazee: what do you mean by installable web apps? I thought it supported that already?","timestamp":1500843872353}
{"from":"ralphtheninja","message":"better packaged in the ui?","timestamp":1500843886640}
{"from":"pfrazee","message":"ralphtheninja: check out the specs under https://github.com/beakerbrowser/beaker#documentation","timestamp":1500843907039}
{"from":"pfrazee","message":"proposed specs","timestamp":1500843908928}
{"from":"pfrazee","message":"starting with installable web apps","timestamp":1500843912207}
{"from":"ralphtheninja","message":"thanks!","timestamp":1500843918499}
{"from":"pfrazee","message":"it's basically a way to get special perms","timestamp":1500843920055}
{"from":"pfrazee","message":"sure","timestamp":1500843921979}
{"from":"cblgh","message":"ralphtheninja: i use xmarks with firefox","timestamp":1500844479303}
{"from":"cblgh","message":"it's great","timestamp":1500844497639}
{"from":"ralphtheninja[m]","message":"pfrazee: brilliant docs and specs .. a pleasure reading :)","timestamp":1500845539161}
{"from":"pfrazee","message":"ralphtheninja[m]: thanks!","timestamp":1500845555925}
{"from":"pfrazee","message":"we should be able to pass more responsibility into userland with IWAs (installable web apps)","timestamp":1500845590829}
{"from":"pfrazee","message":"which will be a good thing","timestamp":1500845598103}
{"from":"pfrazee","message":"then you can do bookmark sync however you like","timestamp":1500845659561}
{"from":"ralphtheninja[m]","message":"I get the feeling you want to stay close to the current browser world, with already existing standards and what not","timestamp":1500845680716}
{"from":"pfrazee","message":"to a degree that's true","timestamp":1500845694446}
{"from":"pfrazee","message":"what Im suggesting with bookmark sync is already true via web extensions","timestamp":1500845736228}
{"from":"ralphtheninja[m]","message":"yep","timestamp":1500845784101}
{"from":"pfrazee","message":"so what we're basically trying to do is demonstrate: with a web-friendly package format like dat, you can have the actual apps do more powerful things","timestamp":1500845801037}
{"from":"ralphtheninja[m]","message":"yeah it's supercool :)","timestamp":1500845835408}
{"from":"pfrazee","message":":) yeah hopefully people will feel like it's all a set of sensible ideas for standards","timestamp":1500845917807}
{"from":"ralphtheninja[m]","message":"also, you can look at beaker as an electron \"shell\" that's already there for you, so it will basically be like an electron app store, which means beaker sort of refactors out common electron window code","timestamp":1500845922434}
{"from":"ralphtheninja[m]","message":"if that makes sense","timestamp":1500845925934}
{"from":"pfrazee","message":"yeah that does make sense","timestamp":1500845934651}
{"from":"pfrazee","message":"we did consider doing just an app platform, not a web browser, at one point - same thinking","timestamp":1500845948475}
{"from":"ralphtheninja[m]","message":"it is an app platform :)","timestamp":1500845982859}
{"from":"pfrazee","message":"ya","timestamp":1500845985533}
{"from":"ralphtheninja[m]","message":"but more than that I guess","timestamp":1500845987758}
{"from":"pfrazee","message":"the difference wouldve been things like, whether to do a browser chrome, whether to show the url bar","timestamp":1500846011681}
{"from":"ralphtheninja[m]","message":"aah right","timestamp":1500846028314}
{"from":"ralphtheninja[m]","message":"pfrazee: you still need to do some packaging of your js, e.g. browserify etc if you want to write node style apps, or can beaker help with that as well?","timestamp":1500846289711}
{"from":"pfrazee","message":"ralphtheninja[m]: that's an interesting situation. We could easily start to do es6 import/export, once support for it is 100% in chromium, but...","timestamp":1500846369514}
{"from":"pfrazee","message":"personally I prefer to do a precompiled bundle because it improves load time","timestamp":1500846383610}
{"from":"ralphtheninja[m]","message":"we need dominic to help us remove the trade off :P","timestamp":1500846458809}
{"from":"pfrazee","message":"oh?","timestamp":1500846468456}
{"from":"ralphtheninja[m]","message":"we were discussing trade offs in the channel yesterday, you have to scroll up a bit :)","timestamp":1500846508752}
{"from":"pfrazee","message":"oh hah right","timestamp":1500846516442}
{"from":"pfrazee","message":"well the precompiled bundle isnt so bad because you can still include the source","timestamp":1500846534012}
{"from":"ralphtheninja[m]","message":"it would be really nice to not having to browserify in the first place and have fast load times I mean","timestamp":1500846546826}
{"from":"ralphtheninja[m]","message":"true","timestamp":1500846551748}
{"from":"pfrazee","message":"yeah. Well for development, the es6 imports should be pretty quick because your stuff will be cached","timestamp":1500846573881}
{"from":"pfrazee","message":"es6 imports *might* already work in beaker, I havent tested recently","timestamp":1500846583941}
{"from":"pfrazee","message":"let me try it real quick","timestamp":1500846588150}
{"from":"ralphtheninja[m]","message":"make a video if successful :D","timestamp":1500846638982}
{"from":"pfrazee","message":"will do :)","timestamp":1500846703896}
{"from":"pfrazee","message":"ok not yet","timestamp":1500846801142}
{"from":"ralphtheninja[m]","message":"know anything about the time scale / road map for that?","timestamp":1500846836097}
{"from":"ralphtheninja[m]","message":"support for import in chromium that is","timestamp":1500846853133}
{"from":"pfrazee","message":"it just got added to canary","timestamp":1500846999706}
{"from":"pfrazee","message":"so, for beaker, thatd be like... 3 months prob","timestamp":1500847013242}
{"from":"ralphtheninja[m]","message":"nice","timestamp":1500847091466}
{"from":"dat-gitter","message":"(ryanwarsaw) hey… does anyone know why when I’m working on dat-desktop the top bar doesn’t appear sometimes?","timestamp":1500847773115}
{"from":"dat-gitter","message":"(ryanwarsaw) Haven’t made any changes, just cloned the repo and ran the install, bundle, start commands","timestamp":1500847827203}
{"from":"yoshuawuyts","message":"ryanwarsaw: might be because desktop crashes midway through initialization, before it can get to rendering the top bar - it only starts rendering after dat is connected to the network and ready to use","timestamp":1500847831428}
{"from":"yoshuawuyts","message":"ryanwarsaw: s/connected to the network //","timestamp":1500847869848}
{"from":"yoshuawuyts","message":"it's just after boot is done","timestamp":1500847875195}
{"from":"dat-gitter","message":"(ryanwarsaw) huh. I’m definitely connected to the network so I’m not sure what could be the cause","timestamp":1500847888818}
{"from":"yoshuawuyts","message":"ryanwarsaw odd. Maybe it's a UI lib problem then. We're currently rewriting some of 'em because they were flaky, WIP PR here https://github.com/choojs/nanocomponent/pull/41#issuecomment-317285646","timestamp":1500847947030}
{"from":"dat-gitter","message":"(ryanwarsaw) https://i.imgur.com/lenupuj.png","timestamp":1500848145353}
{"from":"dat-gitter","message":"(ryanwarsaw) I readjusted the window height but the black bar at the bottom is also stuck at that height so it’s a bit odd","timestamp":1500848218998}
{"from":"dat-gitter","message":"(ryanwarsaw) I just ran the tests and a good way through, it sorted itself out, so that’s weird","timestamp":1500848839227}
{"from":"dat-gitter","message":"(rjsteinert) mafintosh, do you think my mdns issue might be due to the router? Not sure how to get to the bottom of this fail of syncing on Android when offline.","timestamp":1500851280275}
{"from":"ralphtheninja[m]","message":"pfrazee: how much control over chromium do you have using electron for beaker? would it be possible to block e.g. script-tags in html etc?","timestamp":1500853041950}
{"from":"pfrazee","message":"ralphtheninja[m]: I could definitely do that via csp. There are some other means for doing that","timestamp":1500853069576}
{"from":"pfrazee","message":"ralphtheninja[m]: and then if I had a good C++ dev I could make mods to electron itself","timestamp":1500853082575}
{"from":"ralphtheninja[m]","message":"oh nice","timestamp":1500853110829}
{"from":"ralphtheninja[m]","message":"the csp is some sort of dsl right? I don't really understand all the parts","timestamp":1500853182270}
{"from":"pfrazee","message":"yeah it is","timestamp":1500854316017}
{"from":"pfrazee","message":"https://content-security-policy.com/","timestamp":1500854320371}
{"from":"ralphtheninja[m]","message":"hehe a new world is opening up here :)","timestamp":1500854448906}
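As a concrete instance of the policy language being referenced, a single CSP directive is enough to block all script execution on a page (a sketch; see the linked reference for the full directive set):

```
Content-Security-Policy: script-src 'none'
```

Delivered either as an HTTP response header or via a `<meta http-equiv>` tag, this would refuse to run any script tags, which is the kind of control being asked about for beaker.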
{"from":"pfrazee","message":":D","timestamp":1500855535214}
{"from":"ralphtheninja[m]","message":"https://sha2017.org/","timestamp":1500897880789}
{"from":"ralphtheninja[m]","message":"200 something tickets left if anyone is interested","timestamp":1500897891291}
{"from":"cblgh","message":"haha their kartents look cool","timestamp":1500901247775}
{"from":"cblgh","message":"http://kartent.com/en-uk/1/kartent","timestamp":1500901249106}
{"from":"ralphtheninja[m]","message":"yes :)","timestamp":1500901640861}
{"from":"pfrazee","message":"texas never has any of the cool confs!","timestamp":1500908248753}
{"from":"dat-gitter","message":"(rjsteinert) @pfrazee SXSW?","timestamp":1500912510373}
{"from":"dat-gitter","message":"(rjsteinert) :)","timestamp":1500912511465}
{"from":"pfrazee","message":"mehhh :)","timestamp":1500912536131}
{"from":"dat-gitter","message":"(rjsteinert) ;)","timestamp":1500914751303}
{"from":"TheLink","message":"pfrazee: http://raneto.com/ might also be a nice use case for beaker","timestamp":1500921306488}
{"from":"ralphtheninja[m]","message":"pfrazee: you should come to europe more often (or perhaps you do already)","timestamp":1500921361253}
{"from":"pfrazee","message":"TheLink: oh that's nice","timestamp":1500921369099}
{"from":"pfrazee","message":"ralphtheninja[m]: we have no travel budget! I can only take sponsored trips","timestamp":1500921380195}
{"from":"TheLink","message":"fresh from HN :P","timestamp":1500921402167}
{"from":"ralphtheninja[m]","message":"gotcha .. we'll have to fix that then :)","timestamp":1500921480348}
{"from":"pfrazee","message":"yeah :)","timestamp":1500921557621}
{"from":"ralphtheninja[m]","message":"TheLink: so use raneto for beaker knowledge base, then serve it over dat and display inside beaker?","timestamp":1500921781387}
{"from":"TheLink","message":"I rather meant serverless hosting this app in beaker like this other wiki demo","timestamp":1500921881604}
{"from":"ralphtheninja[m]","message":"aaah even better :)","timestamp":1500921903141}
{"from":"ogd","message":" haha i like that we're calling it 'serverless'","timestamp":1500921933495}
{"from":"ogd","message":"gotta take that term back :)","timestamp":1500921939626}
{"from":"TheLink","message":"I'm just a tool :B","timestamp":1500921954970}
{"from":"pfrazee","message":"lol","timestamp":1500923714220}
{"from":"bret","message":"ogd: do you have zcash>","timestamp":1500932119109}
{"from":"bret","message":"?","timestamp":1500932119968}
{"from":"ogd","message":"bret: nope","timestamp":1500932125717}
{"from":"bret","message":"I forget who coined \"actually serverless\"","timestamp":1500932135331}
{"from":"bret","message":"I just listened to that radiolab","timestamp":1500932155778}
{"from":"cblgh","message":"+1 for actually serverless","timestamp":1500932479981}
{"from":"cblgh","message":"lol","timestamp":1500932480700}
{"from":"millette","message":"it's due time for cloudless computing","timestamp":1500934752631}
{"from":"pfrazee","message":"https://usercontent.irccloud-cdn.com/file/OVPgTpzL/Screen%20Shot%202017-07-24%20at%205.34.45%20PM.png","timestamp":1500935711409}
{"from":"pfrazee","message":"dat app in progress (nexus)","timestamp":1500935719140}
{"from":"ralphtheninja","message":"bret: radiolab!","timestamp":1500935748672}
{"from":"ralphtheninja","message":"love radiolab","timestamp":1500935753501}
{"from":"pfrazee","message":"bret: that was me :D","timestamp":1500935761169}
{"from":"pfrazee","message":"https://pfrazee.hashbase.io/blog/actually-serverless","timestamp":1500935775000}
{"from":"bret","message":"pfrazee: nail'd it","timestamp":1500935777001}
{"from":"ralphtheninja","message":"hehe","timestamp":1500935781184}
{"from":"bret","message":"m'serverless","timestamp":1500935814129}
{"from":"bret","message":"https://www.youtube.com/watch?v=AsJ93s9CXEg","timestamp":1500935862586}
{"from":"bret","message":"oops wrong link","timestamp":1500935910923}
{"from":"pfrazee","message":"haha","timestamp":1500935946631}
{"from":"bret","message":"this is a better actually serverless theme song https://www.youtube.com/watch?v=eAbkh4TMRqg","timestamp":1500936237102}
{"from":"bret","message":"it even captures the competing technology rivalries","timestamp":1500936297000}
{"from":"ralphtheninja[m]","message":"pfrazee: are you going to put dat nexus app on hashbase?","timestamp":1500940377786}
{"from":"pfrazee","message":"ralphtheninja[m]: yeah soon as it's ready","timestamp":1500940424931}
{"from":"ralphtheninja[m]","message":"nice","timestamp":1500940436427}
{"from":"ralphtheninja[m]","message":"starting to like async/await","timestamp":1500940574069}
{"from":"pfrazee","message":"yeah I <3 it","timestamp":1500940601895}
{"from":"ralphtheninja[m]","message":"pfrazee: do you have any good reads for async/await? like what to think of etc, blog article","timestamp":1500941134391}
{"from":"bret","message":"ralphtheninja[m]: you see https://github.com/juliangruber/async-stream ?","timestamp":1500941139523}
{"from":"pfrazee","message":"ralphtheninja[m]: not off the top of my head but when I get back to my desk I can find something","timestamp":1500941189699}
{"from":"ralphtheninja[m]","message":"would appreciate it, thanks!","timestamp":1500941241337}
{"from":"dat-gitter","message":"(rjsteinert) @ralphtheninja I wrote this up a couple of weeks ago https://github.com/rjsteinert/end-callback-hell-using-async-and-await","timestamp":1500941272289}
{"from":"ralphtheninja[m]","message":"bret: nice, looks a bit like dominics pull streams","timestamp":1500941317930}
{"from":"dat-gitter","message":"(rjsteinert) I'm not an await/async expert, there are still amazing things I'm finding I can do all the time","timestamp":1500941320335}
{"from":"ralphtheninja[m]","message":"rjsteinert thanks!","timestamp":1500941342706}
{"from":"bret","message":"ralphtheninja[m]: yeah like an es6 approach to pull streams","timestamp":1500941375668}
{"from":"dat-gitter","message":"(rjsteinert) Using `await` in a `while` is awesome https://github.com/rjsteinert/end-callback-hell-using-async-and-await/blob/master/advanced-usage-ftw/src/index.js#L26","timestamp":1500941466969}
{"from":"pfrazee","message":"for await is going to be sweet for streams","timestamp":1500941655908}
{"from":"pfrazee","message":"when it lands","timestamp":1500941659645}
{"from":"ralphtheninja[m]","message":"hehe i was just thinking about the let + while combo .. aye need something better to make it really nice","timestamp":1500941813401}
{"from":"ralphtheninja[m]","message":"nicer* :)","timestamp":1500941824699}
{"from":"ralphtheninja[m]","message":"rjsteinert: do you need the extra parens around the await there?","timestamp":1500942137356}
{"from":"yoshuawuyts","message":"mafintosh: what's still left to do before the next release of hyperdb?","timestamp":1500942580302}
{"from":"yoshuawuyts","message":"mafintosh: saw the prod-ready branch was last updated 3 weeks ago - is the API stable enough to experiment with?","timestamp":1500942614929}
{"from":"dat-gitter","message":"(sdockray) yoshuawuyts - if i'm not mistaken he's fixing a bunch of stuff with it on a long flight today or tomorrow... cblgh wrote something up above: https://gist.github.com/cblgh/84aab540cd5ecf4ec79c97779e060e9b","timestamp":1500942983675}
{"from":"yoshuawuyts","message":"sdockray - ah cool, thanks! :D","timestamp":1500943024320}
{"from":"ralphtheninja","message":"pfrazee: added complete node docs to hashbase https://hashbase.io/ralphtheninja/node-821-docs","timestamp":1500952727149}
{"from":"pfrazee","message":"ralphtheninja: haha wow","timestamp":1500952752026}
{"from":"ralphtheninja","message":"would be really nice to see a link to hashbase on beaker://start/ page","timestamp":1500952784349}
{"from":"pfrazee","message":"yeah that slipped my mind, I was going to add a couple default bookmarks to 0.7.4","timestamp":1500952868231}
{"from":"pfrazee","message":"I might put out a 0.7.5 in the near term, and I can put them in then","timestamp":1500953134680}
{"from":"pfrazee","message":"Im thinking about a module for indexing and querying datarchives ... built on indexeddb","timestamp":1500953175591}
{"from":"pfrazee","message":"anybody have a favorite indexeddb wrapper/module?","timestamp":1500953194862}
{"from":"ralphtheninja","message":"pfrazee: there's maxs level-js https://github.com/maxogden/level.js#readme","timestamp":1500953405636}
{"from":"ralphtheninja","message":"you can also use https://github.com/Level/level-browserify which is on top of level-js","timestamp":1500953434100}
{"from":"pfrazee","message":"cool that makes sense","timestamp":1500953467354}
{"from":"creationix","message":"pfrazee: I have a wrapper I like, let me dig it up","timestamp":1500955238363}
{"from":"creationix","message":"found it https://github.com/jakearchibald/idb-keyval","timestamp":1500955277526}
{"from":"creationix","message":"I considered level.js, but I like keeping my dependencies lean and I only wanted/needed simple get/set/del/list","timestamp":1500955317288}
{"from":"creationix","message":"> idb-keyval is less than 500 bytes","timestamp":1500955365285}
{"from":"pfrazee","message":"cool","timestamp":1500955371479}
{"from":"creationix","message":"my tweaked version is here https://github.com/creationix/revision/blob/master/src/libs/idb-keyval.ts","timestamp":1500955535826}
{"from":"creationix","message":"none are cjs though, wouldn't be hard to adopt if needed","timestamp":1500955554022}
{"from":"G-Ray_","message":"What happens if two persons try to write in the same hypercore feed ? (they both have the private key)","timestamp":1500975436790}
{"from":"ralphtheninja","message":"quit","timestamp":1500991323998}
{"from":"ralphtheninja","message":"ups","timestamp":1500991325676}
{"from":"tmbdev","message":"I'm wondering what the largest data set sizes are people have successfully served with dat. Are 5 TB datasets OK?","timestamp":1501002414756}
{"from":"dat-gitter","message":"(tmbdev) Also, are there Python APIs for partial downloads, like there are for JavaScript?","timestamp":1501002685656}
{"from":"dat-gitter","message":"(tmbdev) Finally, is there a description of the privacy/security model? Does the dat://... name act as a cryptographic key? Is there any way of finding a dataset without the dat://... name?","timestamp":1501002766648}
{"from":"karissa","message":"tmbdev there could be. With dat sync --http, you're running a local server to connect to your dat. Could write a python lib that sends http requests to that server","timestamp":1501002804575}
{"from":"karissa","message":"tmbdev we have a minimal registry that doesn't back up your data, only gives your data shortnames. it's here: https://datproject.org/explore. if you want to backup your data, you can use hashbase.io","timestamp":1501002857821}
{"from":"tmbdev","message":"So, how does a \"dat clone\" talk to a \"dat share\" when both are behind firewalls?","timestamp":1501003017387}
{"from":"karissa","message":"tmbdev 'dat' the cli tool accepts any http url that follows any of the following conventions https://www.npmjs.com/package/dat-link-resolve","timestamp":1501003102820}
{"from":"karissa","message":"tmbdev for your above question about short links","timestamp":1501003114929}
{"from":"tmbdev","message":"OK, it sounds like everything published with \"dat share\" is then automatically public (or at least not secure). (That's, of course, OK for many datasets, but I just want to understand it.)","timestamp":1501003218724}
{"from":"karissa","message":"tmbdev its secure","timestamp":1501003251013}
{"from":"karissa","message":"tmbdev only people with the original link are able to connect to your dat","timestamp":1501003267729}
{"from":"pfrazee","message":"we're making some tshirts if anybody wants to get in on it https://teespring.com/beaker-p2p-web","timestamp":1501003276958}
{"from":"karissa","message":"tmbdev https://docs.datproject.org/faq#security-and-privacy","timestamp":1501003283891}
{"from":"ralphtheninja[m]","message":"T-shirts!!","timestamp":1501003300633}
{"from":"pfrazee","message":"whoop","timestamp":1501003318313}
{"from":"karissa","message":"nice!","timestamp":1501003326927}
{"from":"pfrazee","message":"theyre a bit pricier than we wanted but hey","timestamp":1501003347360}
{"from":"karissa","message":"pfrazee yeah they don't get cheap unless you ship them in bulk i think","timestamp":1501003376239}
{"from":"pfrazee","message":"yeah","timestamp":1501003382675}
{"from":"ralphtheninja[m]","message":"tmbdev it's like a bitcoin address, anyone can see the balance on that account, but only if you share that address","timestamp":1501003403254}
{"from":"tmbdev","message":"Ah, thanks, I had looked at the docs but missed the FAQ section.","timestamp":1501003413909}
{"from":"karissa","message":"tmbdev thinking maybe we should put security and privacy above the FAQ.","timestamp":1501003540386}
{"from":"tmbdev","message":"I'm still wondering how \"dat\" works through firewalls (it does seem to, I tried).","timestamp":1501003548228}
{"from":"karissa","message":"tmbdev ppl seem to miss it in there","timestamp":1501003549108}
{"from":"karissa","message":"tmbdev i think utp","timestamp":1501003558921}
{"from":"karissa","message":"tmbdev hole-punching","timestamp":1501003562336}
{"from":"pfrazee","message":"yeah actually I dont know how utp does hole punching either","timestamp":1501003585933}
{"from":"pfrazee","message":"need to read up on that","timestamp":1501003593811}
{"from":"pfrazee","message":"I've assumed it was unicorns","timestamp":1501003605114}
{"from":"pfrazee","message":"but now that doesnt sound right","timestamp":1501003611189}
{"from":"ralphtheninja[m]","message":"hehe","timestamp":1501003617967}
{"from":"tmbdev","message":"You mean UPNP?","timestamp":1501003678646}
{"from":"tmbdev","message":"That's disabled on all my routers; it can't punch a hole through the firewall.","timestamp":1501003692075}
{"from":"pfrazee","message":"no its UTP","timestamp":1501003692143}
{"from":"pfrazee","message":"UPNP does device/service discovery on the lan","timestamp":1501003706440}
{"from":"tmbdev","message":"UPnP is also used for talking to routers in order to open ports.","timestamp":1501003723248}
{"from":"pfrazee","message":"UTP is analogous to TCP; was originally built for bittorrent","timestamp":1501003724058}
{"from":"pfrazee","message":"oh is it?","timestamp":1501003729546}
{"from":"pfrazee","message":"interesting, I didnt know that, and now I think it's probably smart to disable it","timestamp":1501003739973}
{"from":"pfrazee","message":"of course that's what hole punching is anyway","timestamp":1501003751004}
{"from":"ralphtheninja[m]","message":"tmbdev: https://github.com/mafintosh/utp-native","timestamp":1501003760322}
{"from":"pfrazee","message":">_>","timestamp":1501003761810}
{"from":"jhand","message":"the hole punching is done similar to the udp mechanism in this post https://www.zerotier.com/blog/state-of-nat-traversal.shtml","timestamp":1501003854645}
{"from":"pfrazee","message":"jhand: thanks","timestamp":1501003882928}
{"from":"tmbdev","message":"Oh, I see, when two UDP servers start talking simultaneously to each other, the NATs simultaneously route incoming packets to each server so that they all can keep talking to each other.","timestamp":1501003927828}
{"from":"tmbdev","message":"OK, thanks for taking the time to answer my questions. I'm working on large scale deep learning. Dat looks useful for distributing datasets manually.","timestamp":1501004034089}
{"from":"karissa","message":"nice","timestamp":1501004057731}
{"from":"karissa","message":"tmbdev what kind of data?","timestamp":1501004073762}
{"from":"tmbdev","message":"The lack of a native Python API limits its ability to be used as a direct source of training data.","timestamp":1501004128669}
{"from":"tmbdev","message":"The biggest datasets are video (petabytes), but also images, printed documents, and audio.","timestamp":1501004160586}
{"from":"karissa","message":"since dat downloads to the fs, you could use fs operation calls from python. but that means you need to download dat separately","timestamp":1501004193992}
{"from":"cblgh","message":"karissa: are dat archives regarded as private until the key is shared?","timestamp":1501004224801}
{"from":"tmbdev","message":"For deep learning, the ability to integrate with the P2P layer is very useful, since we can use the training data as it comes off the wire.","timestamp":1501004249419}
{"from":"karissa","message":"cblgh: yeah. third parties also can't sniff what data you're sharing without the original key","timestamp":1501004255056}
{"from":"cblgh","message":"karissa: couldn't you canvas the entire DHT, gathering the keys that exist on the nodes?","timestamp":1501004275100}
{"from":"karissa","message":"tmbdev ah that makes sense.","timestamp":1501004275377}
{"from":"cblgh","message":"tought just popped into my head","timestamp":1501004285180}
{"from":"cblgh","message":"thought*","timestamp":1501004294293}
{"from":"karissa","message":"cblgh we share discovery keys not the original keys. discovery keys are hashes of the original keys. so you need the original to compute the discovery key","timestamp":1501004305429}
{"from":"cblgh","message":"ohhhhh","timestamp":1501004313201}
{"from":"cblgh","message":"cool!","timestamp":1501004314301}
{"from":"tmbdev","message":"Oh, btw, is \"dat\" going to be a supported package in any upcoming Ubuntu release?","timestamp":1501004350668}
{"from":"cblgh","message":"karissa: hm so how does a discovery key result in an archive?","timestamp":1501004386343}
{"from":"cblgh","message":"oh wait","timestamp":1501004420735}
{"from":"cblgh","message":"no","timestamp":1501004420786}
{"from":"cblgh","message":"i get it lol","timestamp":1501004423831}
{"from":"cblgh","message":"took me a while sorry","timestamp":1501004428339}
{"from":"cblgh","message":"you take hash(feed key) and use that to query the DHT","timestamp":1501004448182}
{"from":"cblgh","message":"i was thinking the discovery key was just removing the request by one step, so that you could use the discovery key + node ip to query for the actual key","timestamp":1501004476284}
{"from":"cblgh","message":"but no","timestamp":1501004479356}
{"from":"dat-gitter","message":"(scriptjs) @jhand On dat gif’s for terminal here https://github.com/datproject/dat is colorized in the terminal. I didn’t see any color in the sources.","timestamp":1501004767353}
{"from":"jhand","message":"@scriptjs its a bit hidden: https://github.com/datproject/dat/blob/2d643947e5bdc931b05c629de5d7e2f066067b57/src/ui/elements/version.js","timestamp":1501004818003}
{"from":"jhand","message":"https://github.com/datproject/dat/blob/2d643947e5bdc931b05c629de5d7e2f066067b57/src/ui/elements/key.js","timestamp":1501004825214}
{"from":"dat-gitter","message":"(scriptjs) @jhand thanks","timestamp":1501004869428}
{"from":"cblgh","message":"jhand: that nat article was really good, thanks","timestamp":1501006063044}
{"from":"jhand","message":"via https://github.com/mafintosh/awesome-p2p","timestamp":1501006116398}
{"from":"cblgh","message":"ohhh these all look great","timestamp":1501006369954}
{"from":"cblgh","message":"i like that there's a manageable amount of links","timestamp":1501006393013}
{"from":"noffle","message":"nice","timestamp":1501006405782}
{"from":"cblgh","message":"some awesome repos just go off the ledge with 300 links","timestamp":1501006408544}
{"from":"ralphtheninja[m]","message":"t-shirts ordered!","timestamp":1501006696568}
{"from":"ralphtheninja[m]","message":"cblgh: aye, just too much","timestamp":1501006705669}
{"from":"ralphtheninja[m]","message":"that feeling when reading a blob article with links in every sentence","timestamp":1501006729541}
{"from":"pfrazee","message":"ralphtheninja[m]: nice!","timestamp":1501008473399}
{"from":"pfrazee","message":"https://github.com/beakerbrowser/injestdb just a first pass on a readme","timestamp":1501008646252}
{"from":"pfrazee","message":"^ this is me building a fast / practical solution to the \"we need to be able to do aggregate queries on dats\" problem","timestamp":1501008688944}
{"from":"pfrazee","message":"building on indexeddb takes the pressure off since it's userland. If there's anything that we learn has to be done by beaker itself, we can do a v2","timestamp":1501008785290}
{"from":"karissa","message":"oh cool. was thinking about this. i wondered, also, if putting a sqlite file in a dat would work fine. not in browser but on fs..","timestamp":1501008834000}
{"from":"pfrazee","message":"karissa: I was thinking about that too, and until dat can deduplicate some updates, I think it's going to be somewhat inefficient","timestamp":1501008885305}
{"from":"pfrazee","message":"you'd have to open the sqlite db, do work, then commit a new file, and anybody receiving the db would have to download the entire thing fresh","timestamp":1501008910016}
{"from":"pfrazee","message":"whereas breaking your dataset up into files takes better advantage of how dat distributes data atm","timestamp":1501008944287}
{"from":"karissa","message":"right, i see how dedupe really helps","timestamp":1501008954488}
{"from":"pfrazee","message":"yeah. Though I have no idea how well a sqlite db would diff & dedupe","timestamp":1501008980922}
{"from":"millette","message":"pouchdb/couchdb sync ftw","timestamp":1501008994207}
{"from":"millette","message":"If you're going higher level than indexdb","timestamp":1501009021796}
{"from":"pfrazee","message":"millette: yeah but once dat has multiwriter, pouch/couch is the old school","timestamp":1501009077559}
{"from":"millette","message":"multiwriter, is that like the next Duke Nukem? ;-)","timestamp":1501009193958}
{"from":"millette","message":"yeah, can't wait for the possibilities it will open, seriously","timestamp":1501009221629}
{"from":"ralphtheninja","message":"s/blob/blog obviously","timestamp":1501009292570}
{"from":"pfrazee","message":"haha","timestamp":1501009760131}
{"from":"dat-gitter","message":"(tmbdev) Is there any way for static seeding/hosting of dat repositories? Or do I have to actively run a \"dat share\" for every repository? What about hosting data on Amazon S3?","timestamp":1501017595154}
{"from":"ogd","message":"tmbdev theres an experimental http transport which would let you use any http server as the backend, but for now you have to run 'dat share' for everything. or use another tool like hypercore-archiver","timestamp":1501017715743}
{"from":"ogd","message":"https://github.com/mafintosh/hypercore-archiver https://github.com/mafintosh/hypercore-archiver-bot https://github.com/datproject/dat-http","timestamp":1501017738989}
{"from":"dat-gitter","message":"(tmbdev) Is there support for registering such data sources with the current registry? Something like \"dat share http://myserver.org/mydata\" or \"dat share s3://user/bucket\"? Or do I need to run something somewhere?","timestamp":1501017898191}
{"from":"ogd","message":"tmbdev right now you have to have a copy of the data on a normal filesystem in order to import it, but i was just talking to jhand about the idea of importing from a set of urls or auto-crawling http to import files","timestamp":1501018010391}
{"from":"dat-gitter","message":"(tmbdev) Or, to pop up a level, I have a few dozen large data repositories (many are several TB large) that I would like to be able to host on S3. What's a good way of doing that?","timestamp":1501018019611}
{"from":"ogd","message":"tmbdev thats a common use case we have heard, i think you'd wanna wait for the http transport to get into the main dat-node distribution. what we do now is use large disk volumes attached to cloud compute nodes","timestamp":1501018066987}
{"from":"dat-gitter","message":"(tmbdev) Keeping a local machine or an EC2 job running just to keep rarely accessed data available via \"dat\" seems both costly and deleterious to uptime.","timestamp":1501018072361}
{"from":"ogd","message":"tmbdev but block storage is cheaper","timestamp":1501018074950}
{"from":"dat-gitter","message":"(tmbdev) ^uptime^availability","timestamp":1501018088705}
{"from":"ogd","message":"tmbdev yea the downside is s3 only allows access over http, so we had to do quite a bit of work to make the dat protocol pure http friendly (rather than running an interactive server process e.g. how git works), but we're close to that now","timestamp":1501018124521}
{"from":"dat-gitter","message":"(tmbdev) I (we) use S3 for a lot of other purposes; it's a good protocol for big data and deep learning.","timestamp":1501018162915}
{"from":"dat-gitter","message":"(tmbdev) S3 also supports BitTorrent, by the way.","timestamp":1501018172771}
{"from":"ogd","message":"oh yea cool","timestamp":1501018218012}
{"from":"dat-gitter","message":"(tmbdev) Maybe in the short term, I will write a small script that automatically runs \"dat share\" for every subdirectory in a directory and starts new jobs if necessary. Of course, if that were built in (\"dat shareall\") other people might find it a useful short term solution as well.","timestamp":1501018288467}
{"from":"dat-gitter","message":"(tmbdev) BTW, block store on Amazon (EBS) is limited to 20TB in size.","timestamp":1501018379931}
{"from":"ogd","message":"tmbdev ooh that script sounds interesting. our current workflow is to have a list of dats to run 'dat share' in that we manually update","timestamp":1501018518729}
{"from":"ogd","message":"tmbdev we use https://github.com/mafintosh/lil-pids for the config file, it handles the process monitoring","timestamp":1501018561524}
{"from":"ogd","message":"karissa: dat login localhost:8080 not working for me w/ latest dat-cli and datproject.org, do you know if thats a known issue?","timestamp":1501018900342}
{"from":"cblgh","message":"pfrazee: nice with injest","timestamp":1501019171003}
{"from":"pfrazee","message":"cblgh: thanks","timestamp":1501019185242}
{"from":"cblgh","message":"intended to be basically be a multi-reader?","timestamp":1501019186269}
{"from":"cblgh","message":"-be, getting late here lol","timestamp":1501019195166}
{"from":"pfrazee","message":"multi-reader?","timestamp":1501019202149}
{"from":"cblgh","message":"cf hyperdb being a multi-writer","timestamp":1501019241367}
{"from":"cblgh","message":"basically wondering if it is intended to for reading many feeds only","timestamp":1501019256878}
{"from":"pfrazee","message":"oh I see hah","timestamp":1501019263700}
{"from":"cblgh","message":"or if you also plan to add writing somehow","timestamp":1501019265002}
{"from":"pfrazee","message":"it does have writing actually","timestamp":1501019270592}
{"from":"cblgh","message":"oh?","timestamp":1501019274548}
{"from":"cblgh","message":"how does that work?","timestamp":1501019306703}
{"from":"pfrazee","message":"yeah. It's mainly meant to abstract the FS so that aggregate querying is easier","timestamp":1501019309981}
{"from":"pfrazee","message":"each \"table\" has methods for adding, updating, and deleting records, and records have a way to map to files","timestamp":1501019338463}
{"from":"cblgh","message":"oh okay","timestamp":1501019516091}
{"from":"cblgh","message":"so i understand the name now lol","timestamp":1501019524929}
{"from":"karissa","message":"ogd: try localhost:8080/api/v1","timestamp":1501019564095}
{"from":"ogd","message":"karissa: hmm no luck. ill look into it","timestamp":1501019623479}
{"from":"karissa","message":"Ogd ok. and add http:// to the front?","timestamp":1501019697318}
{"from":"ogd","message":"karissa: oh yea that fixed it. ill add some tests","timestamp":1501019726811}
{"from":"karissa","message":"ogd: ah ok :) bad ux...should try to add http prob","timestamp":1501019760113}
{"from":"pfrazee","message":"cblgh: haha right","timestamp":1501019837425}
{"from":"pfrazee","message":"eats files -> ....emits tables","timestamp":1501019851025}
{"from":"yoshuawuyts","message":"pfrazee: what was the graph query thing you were investigating a while back? - want to brush up on some of my reading","timestamp":1501030489488}
{"from":"pfrazee","message":"yoshuawuyts: SPARQL. I looked around at some of the software. Didnt find any dropins for node/electron","timestamp":1501037425940}
{"from":"dat-gitter","message":"(tmbdev) I've experimented a bit with \"dat\" and it seems very slow. For example, for a 1.3G dataset consisting of 500000 files, it looks like it's going to take about 24h to publish with \"dat\"; tarring them up takes a few minutes. Is there any chance it's going to get speeded up in upcoming release?","timestamp":1501051806005}
{"from":"substack","message":"tmbdev: I published a 40G archive and it took as long, with 200000 files","timestamp":1501052326140}
{"from":"dat-gitter","message":"(tmbdev) It seems to me that \"dat\" really should be I/O limited, not compute limited.","timestamp":1501052469002}
{"from":"mafintosh","message":"@tmbdev ya an issue related to that is fixed in the next bigger release","timestamp":1501053222126}
{"from":"substack","message":"mafintosh: hey what are you doing in america","timestamp":1501054900074}
{"from":"mafintosh","message":"substack: hanging out with max","timestamp":1501055370694}
{"from":"substack","message":"cool thing: https://substack.neocities.org/mixmap/demos/zoom.html","timestamp":1501055575175}
{"from":"substack","message":"will p2p that demo onto beaker and ipfs once I'm on fast net","timestamp":1501055733018}
{"from":"yoshuawuyts","message":"pfrazee thank you!","timestamp":1501061120399}
{"from":"creationix","message":"does dat never work again once you install zerotier?","timestamp":1501077649829}
{"from":"pfrazee","message":"creationix: I think you're the first to try it","timestamp":1501077716530}
{"from":"creationix","message":"also when creating a bare dat (`latest: false`), how do I import files?","timestamp":1501077806997}
{"from":"pfrazee","message":"mafintosh: had a thought yesterday. multi writer is AP and a must have for offline use, but suppose I had a reliable connection- wouldn't it be nice to have a CP mode, with support for transactions! so...","timestamp":1501077808207}
{"from":"creationix","message":"if I import using the node API, it creates the metadata, but the actual contents are dropped on the floow","timestamp":1501077826823}
{"from":"creationix","message":"*floor","timestamp":1501077832655}
{"from":"pfrazee","message":"creationix: the file content isn't written locally?","timestamp":1501077900238}
{"from":"pfrazee","message":"mafintosh: so what if we simply cooked up a remote-dat rpc interface over a websocket, and then used a cloud service to coordinate the writers","timestamp":1501077981042}
{"from":"pfrazee","message":"mafintosh: signing keys stay on the users devices but all writes are centrally coordinated","timestamp":1501078020697}
{"from":"pfrazee","message":"creationix: oh by import you mean, download?","timestamp":1501078047047}
{"from":"pfrazee","message":"mafintosh: to interface with this, in beaker, the dev would simply specify whether they want CP or AP. if you choose AP, you have to handle conflicts but it works offline","timestamp":1501078222323}
{"from":"creationix","message":"import https://gist.github.com/creationix/863d2a6e268a0d08170f70a07c49b044#file-import-js-L17","timestamp":1501078847409}
{"from":"creationix","message":"when I later try to read the files using hyperdrive (the .archive object), it can read directories just fine","timestamp":1501078893531}
{"from":"creationix","message":"but file contents cause it to crash","timestamp":1501078898976}
{"from":"creationix","message":"looking at the files, the content log is basically empty","timestamp":1501078912145}
{"from":"creationix","message":"`content.data` is 0 bytes","timestamp":1501078932322}
{"from":"cblgh","message":"pfrazee: AP?","timestamp":1501079676550}
{"from":"bret","message":"mafintosh ogd in Pdx?","timestamp":1501083955836}
{"from":"karissa","message":"regarding cp and ap https://martin.kleppmann.com/2015/05/11/please-stop-calling-databases-cp-or-ap.html","timestamp":1501084568184}
{"from":"karissa","message":"found that post randomly","timestamp":1501084628624}
{"from":"karissa","message":"lots of relevant definitions in there","timestamp":1501084662550}
{"from":"dat-gitter","message":"(scriptjs) @cblgh I think what pfrazee is getting at is whether the database is transactional or eventually consistent. CouchDB is an example of the former where you need to be concerned with handling conflicts by whatever strategy you wish to employ. SLEEP used in hypercore was originally derived from CouchDB’s replication strategy and it makes great sense that @mafintosh is working on a solution to coordinate multiple writes.","timestamp":1501085367735}
{"from":"pfrazee","message":"@scriptjs @cblgh karissa yes karissa and scriptjs are right","timestamp":1501085473791}
{"from":"pfrazee","message":"CP - makes sure that the data is always consistent and doesnt conflict. AP - makes sure that the data is always writable and can be accessed offline. The tradeoffs are pretty direct: CP requires a good network link with the other computers, and AP requires the application to handle conflict-states","timestamp":1501085543993}
{"from":"pfrazee","message":"CP is conceptually (and practically) easier for programmers. Conflicts are a PITA. But then, offline usability is really important in certain usecases","timestamp":1501085618016}
{"from":"pfrazee","message":"this usually ends up being a question about your requirements","timestamp":1501085675560}
{"from":"pfrazee","message":"if you're building hospital software for rural areas or developing nations (I forget the name of that project) then you need AP, because the network is not reliable","timestamp":1501085709576}
{"from":"pfrazee","message":"that requirement ends up determining whether you're going to use CouchDB, basically, because it's the best AP DB on the market right now. And as much as I love Couch, the truth is that you almost *never* use it unless you have an offline requirement","timestamp":1501085781949}
{"from":"pfrazee","message":"so if a CP-dat mode works, then we get a pretty unique opportunity to support both AP and CP, which would be really cool because you wouldn't have to change your stack based on the offline requirement","timestamp":1501085958200}
{"from":"pfrazee","message":"though I think couchdb is similar, actually-- I think you can RPC out to couch server to get CP","timestamp":1501086052580}
{"from":"pfrazee","message":"yes actually you definitely can","timestamp":1501086065341}
{"from":"dat-gitter","message":"(scriptjs) @phrazee yes that would be cool and in the short term allow hyperdb some time to grow. It’d be quite important to coordinate on an API that makes sense.","timestamp":1501086186941}
{"from":"dat-gitter","message":"(scriptjs) @pfrazee Are you starting something or is this a proposal at this time?","timestamp":1501086406658}
{"from":"mafintosh","message":"pfrazee: funny was just thinking about this yesterday for a chat app","timestamp":1501086419580}
{"from":"creationix","message":"heh, I very much care about the \"hospital software for rural areas or developing nations\" use case","timestamp":1501086421613}
{"from":"pfrazee","message":"creationix: haha relevant to your interests","timestamp":1501086438658}
{"from":"creationix","message":"and just supporting crappy networks in general","timestamp":1501086441198}
{"from":"pfrazee","message":"@scriptjs I'm thinking pretty seriously about it","timestamp":1501086462634}
{"from":"pfrazee","message":"mafintosh: yeah it should be pretty straightforward to get an RPC interface done over websockets. I'd need to read up on the perils of remote transactions, because I'd like to have my file-locking transactions API","timestamp":1501086499319}
{"from":"creationix","message":"have we talked about arbitrary RPC commands between machines for general application use (not to support file sharing)?","timestamp":1501086566063}
{"from":"creationix","message":"I think the model I need is a first layer of rpc (or any control protocol) where file sharing/syncing is a specific implementation on top of that","timestamp":1501086625843}
{"from":"pfrazee","message":"that's roughly == to the conversation about setting up messaging channels via dat infrastructure","timestamp":1501086627585}
{"from":"creationix","message":"cool, just wanted to clarify","timestamp":1501086644294}
{"from":"pfrazee","message":"creationix: here's how I'd look at that: we have websockets right now, and dat-rpc would work fine over websockets for the CP usecase","timestamp":1501086659120}
{"from":"creationix","message":"so I'm willing to test/prototype this idea now if I can","timestamp":1501086659471}
{"from":"pfrazee","message":"the p2p messaging channels conversation is about getting a reliable p2p websocket","timestamp":1501086678087}
{"from":"creationix","message":"by p2p websocket you just mean \"send a message to an address, and it gets it eventually\"","timestamp":1501086733461}
{"from":"creationix","message":"and it can send messages back to you","timestamp":1501086745722}
{"from":"pfrazee","message":"I'd prefer that it gets there reliably and acks","timestamp":1501086749600}
{"from":"pfrazee","message":"yeah, I'd like a TCP-like connection","timestamp":1501086757130}
{"from":"pfrazee","message":"so yes","timestamp":1501086767958}
{"from":"pfrazee","message":"and we either need to create a federated signalling service for webrtc, or we need to add some capabilities to hyperdiscovery to bootstrap these channels over UTP/TCP","timestamp":1501086797298}
{"from":"creationix","message":"so you want ack at the network level, not depend on application protocols to implement acks","timestamp":1501086797450}
{"from":"pfrazee","message":"I think that'd be wise yeah","timestamp":1501086805901}
{"from":"creationix","message":"I'm concerned about the cost of that ack and the false confidence it would bring. What if the remote party was just a proxy to another service and there was a problem there","timestamp":1501086841665}
{"from":"dat-gitter","message":"(scriptjs) so this is a small messaging layer on hypercore","timestamp":1501086850070}
{"from":"creationix","message":"the proxy would have to integrate at the network level somehow or you'd never know","timestamp":1501086858498}
{"from":"pfrazee","message":"@scriptjs *possibly*, using hypercore is one of our options but not a given","timestamp":1501086864942}
{"from":"pfrazee","message":"creationix: if the proxy lies about acks then it's a bad proxy","timestamp":1501086874092}
{"from":"creationix","message":"mmm, federated webrtc signalizing. Can't we just use a dht like webtorrent does? (or at least I assume it does)","timestamp":1501086913683}
{"from":"pfrazee","message":"thing is, making apps implement acks seems like premature optimization: the only reason an app wouldnt want acks builtin is because they need to stream large volumes of time-sensitive data (eg a video game) and I grant that's a valid usecase, but not the main usecase for web","timestamp":1501086918985}
{"from":"pfrazee","message":"mafintosh has had some ideas for a DHT that might be able to support that, but we'd need a new DHT","timestamp":1501086938123}
{"from":"dat-gitter","message":"(scriptjs) @pfrazee I think @mafintosh made one","timestamp":1501086959094}
{"from":"creationix","message":"lol, I made a new word \"signalizing\"","timestamp":1501086960894}
{"from":"dat-gitter","message":"(scriptjs) that might be used","timestamp":1501086978264}
{"from":"pfrazee","message":"lol","timestamp":1501086978535}
{"from":"dat-gitter","message":"(scriptjs) let me see if i can find it","timestamp":1501086992394}
{"from":"dat-gitter","message":"(scriptjs) hehe","timestamp":1501086993531}
{"from":"pfrazee","message":"@scriptjs I think I remember that. FWIW a DHT solution will need more time to mature than federated services would","timestamp":1501087015635}
{"from":"dat-gitter","message":"(scriptjs) https://github.com/mafintosh/hyperdht","timestamp":1501087017087}
{"from":"pfrazee","message":"yeah that's it","timestamp":1501087032361}
{"from":"creationix","message":"for small networks, simple gossip would work right?","timestamp":1501087051321}
{"from":"creationix","message":"but how do you get introduced is the problem I guess","timestamp":1501087060447}
{"from":"pfrazee","message":"well you can intro via multicast announcements","timestamp":1501087070303}
{"from":"pfrazee","message":"if by small networks you mean a LAN","timestamp":1501087077208}
{"from":"pfrazee","message":"gossip only works for log distribution","timestamp":1501087085091}
{"from":"creationix","message":"no, I mean small number of participants, on diverse physical networks","timestamp":1501087093796}
{"from":"pfrazee","message":"yeah introduction is the challenge","timestamp":1501087103519}
{"from":"creationix","message":"so my personal devices and those of my direct family","timestamp":1501087118643}
{"from":"creationix","message":"(granted, my home wifi has 50+ devices at times, I'm atypical)","timestamp":1501087136610}
{"from":"pfrazee","message":"it's looking like establishing p2p sockets is pretty high priority, but there's enough uncertainty about how to build the stack that it's hard to make it \"next priority\"","timestamp":1501087216377}
{"from":"pfrazee","message":"I think the next tech push will be stabilizing multiwriter","timestamp":1501087236665}
{"from":"pfrazee","message":"and I'm going to explore dat-rpc at the same time","timestamp":1501087248554}
{"from":"creationix","message":"I don't mind taking that on (at least exploring and prototyping outside of beaker)","timestamp":1501087251707}
{"from":"pfrazee","message":"creationix: that'd be smart","timestamp":1501087274792}
{"from":"dat-gitter","message":"(scriptjs) I like the idea of doing this on hypercore. @pfrazee what were your other options besides DHT","timestamp":1501087287382}
{"from":"bret","message":"anyone know of a module that either links or copies, depending on if the two paths live on different devices?","timestamp":1501087301967}
{"from":"creationix","message":"so ZeroTier has this (ironically) tiered approach","timestamp":1501087309186}
{"from":"bret","message":"https://github.com/mafintosh/content-addressable-blob-store/pull/13","timestamp":1501087311915}
{"from":"pfrazee","message":"ok I have to go afk, but creationix ping me later if you want to do some prototyping","timestamp":1501087325623}
{"from":"creationix","message":"pfrazee: sure thing","timestamp":1501087333776}
{"from":"mafintosh","message":"bret: copy wouldn't work in content-...","timestamp":1501087363308}
{"from":"mafintosh","message":"bret: then its not atomic","timestamp":1501087377156}
{"from":"mafintosh","message":"Unless you copy to a local tmp file","timestamp":1501087407103}
{"from":"creationix","message":"mafintosh: I can prototype message queues using dat infra. right?","timestamp":1501087419964}
{"from":"creationix","message":"what is the missing piece, do we want first-class browser support?","timestamp":1501087449212}
{"from":"bret","message":"mafintosh: oh good point","timestamp":1501087457354}
{"from":"bret","message":"mafintosh: should we fallback to a local tempdir if the OS one is on a different device?","timestamp":1501087486119}
{"from":"mafintosh","message":"bret: or just always use a local one","timestamp":1501087578753}
{"from":"mafintosh","message":"./tmp inside the cabs folder","timestamp":1501087596293}
{"from":"bret","message":"thats basically what it was doing before","timestamp":1501087609837}
{"from":"creationix","message":"the last rpc over message channel protocol I designed is https://github.com/virgo-agent-toolkit/super-agent#websocket-client","timestamp":1501087638876}
{"from":"mafintosh","message":"bret: ah, then yea use a local one if the other one is bad","timestamp":1501087654485}
{"from":"mafintosh","message":"bret: you can check by trying to rename a file","timestamp":1501087671339}
{"from":"bret","message":"mafintosh: \"Comparing fs.stat device ids for each path should tell you whether they are on the same device.\"","timestamp":1501087692574}
{"from":"bret","message":"that too?","timestamp":1501087699628}
{"from":"mafintosh","message":"bret: even better","timestamp":1501087704502}
{"from":"bret","message":"sweet","timestamp":1501087707703}
{"from":"bret","message":"ok ill make an issue","timestamp":1501087713178}
{"from":"bret","message":"mafintosh: how long are you in Pdx?","timestamp":1501087731284}
{"from":"mafintosh","message":"bret: and both me and ogd are in pdx ya","timestamp":1501087733000}
{"from":"mafintosh","message":"2.5 weeks!","timestamp":1501087741175}
{"from":"mafintosh","message":"pfrazee: if you have free time you should come hack :D","timestamp":1501087773545}
{"from":"bret","message":"oh hell ya","timestamp":1501087832567}
{"from":"bret","message":"pfrazee: come to pdx1","timestamp":1501087838613}
{"from":"bret","message":"!!!","timestamp":1501087839862}
{"from":"bret","message":"hack hack hack","timestamp":1501087852689}
{"from":"creationix","message":"If I didn't have 5 kids I'd be booking a flight there :P But my wife would kill me for leaving here alone","timestamp":1501087953246}
{"from":"bret","message":"bring them all!","timestamp":1501088023882}
{"from":"bret","message":"hey kids, ever publish a module?","timestamp":1501088042037}
{"from":"bret","message":"creationix: you live/work in a remote area? I'm thinking of moving out of the city soon","timestamp":1501088246909}
{"from":"creationix","message":"if you're paying, we'll be there!","timestamp":1501088266512}
{"from":"creationix","message":"6 plane tickets (one is free) isn't cheap","timestamp":1501088276040}
{"from":"creationix","message":"yeah, I live at the edge of a small town. It's great!","timestamp":1501088292951}
{"from":"creationix","message":"just don't get too many physical meetups with other programmers","timestamp":1501088304121}
{"from":"pfrazee","message":"yeah Im in creationix's boat, no travel budget","timestamp":1501088316212}
{"from":"creationix","message":"I did once take my oldest to Paris to co-present a talk on teaching kids to program using robots. It was epic","timestamp":1501088339778}
{"from":"creationix","message":"pfrazee: at least you live near an airport. But still, it can be expensive","timestamp":1501088386410}
{"from":"pfrazee","message":"creationix: true","timestamp":1501088394530}
{"from":"pfrazee","message":"ok Im going heads down for some work on injestdb. back later","timestamp":1501088403259}
{"from":"creationix","message":"nearest real airport to me is DFW","timestamp":1501088406980}
{"from":"bret","message":"teleportation can't come soon enough","timestamp":1501088506761}
{"from":"creationix","message":"I'll settle for bullet trains","timestamp":1501088886314}
{"from":"creationix","message":"domestic flying is torture with kids","timestamp":1501088899884}
{"from":"noffle","message":"any dat folks coming down to the bay area?","timestamp":1501090332373}
{"from":"pfrazee","message":"noffle: are you in the bay?","timestamp":1501090810757}
{"from":"noffle","message":"yes, in oakland","timestamp":1501090943701}
{"from":"pfrazee","message":"ah cool","timestamp":1501091042289}
{"from":"bret","message":"mafintosh: whats the idea behind doing 4.6.0 git commit subjects but v4.6.0 git tags for module releases ?","timestamp":1501091732784}
{"from":"ogd","message":"mafintosh: cool hashing thing blahah showed me http://ivory.idyll.org/blog/2016-sourmash.html","timestamp":1501092703063}
{"from":"bret","message":"neat pic http://ivory.idyll.org/blog/images/sourmash-urchin.png","timestamp":1501093957795}
{"from":"ogd","message":"ya science rulz","timestamp":1501094016720}
{"from":"pfrazee","message":"mafintosh: but can you put a price on that","timestamp":1501096914787}
{"from":"pfrazee","message":"the real plane ticket was the friends we made along the way","timestamp":1501096928141}
{"from":"creationix","message":"cel: how do I add the ssb helper to git?","timestamp":1501098206862}
{"from":"creationix","message":"nevermind, I should google before asking https://github.com/clehner/git-remote-ssb","timestamp":1501098251963}
{"from":"cel","message":"npm i -g git-ssb","timestamp":1501098273403}
{"from":"cel","message":"you will also need sbot server (or Patchwork) running","timestamp":1501098300560}
{"from":"cel","message":"or you can fetch from the HTTP url which is a gateway to git-ssb","timestamp":1501098360596}
{"from":"creationix","message":"substack: can I get a ssb invite, I assume you're still using that","timestamp":1501099384692}
{"from":"creationix","message":"my old profile is long gone","timestamp":1501099391235}
{"from":"creationix","message":"cel: btw, I got it installed over the https interface, but it reminded me to get patchwork up again","timestamp":1501099443993}
{"from":"cel","message":"creationix: cool","timestamp":1501099450759}
{"from":"creationix","message":"cel: what's a good pub to join","timestamp":1501099452427}
{"from":"cel","message":"https://github.com/ssbc/scuttlebot/wiki/Pub-servers#pub-has-website-which-gives-out-invites","timestamp":1501099457262}
{"from":"cel","message":"any of those should be good","timestamp":1501099479984}
{"from":"creationix","message":"cel: I'm @yEaK9okGIxoentGzOtN0QydDdn902lypmJkGh4j2paU=.ed25519","timestamp":1501099637805}
{"from":"creationix","message":"I guess this time I should backup my keys somewhere right?","timestamp":1501099648735}
{"from":"cel","message":"cool. yes, good idea","timestamp":1501099672327}
{"from":"yoshuawuyts","message":"Want to hear a fun joke?","timestamp":1501109214413}
{"from":"yoshuawuyts","message":"What do Python and Node have in common?","timestamp":1501109228731}
{"from":"yoshuawuyts","message":"Ecosystem schisms https://github.com/nodejs/node-eps/blob/master/002-es-modules.md#32-determining-if-source-is-an-es-module","timestamp":1501109243475}
{"from":"yoshuawuyts","message":":(","timestamp":1501109270520}
{"from":"mafintosh","message":"yoshuawuyts: there prob isnt another way to do it","timestamp":1501110307731}
{"from":"ralphtheninja","message":"yoshuawuyts: what kind of schisms are there in node land these days?","timestamp":1501120254676}
{"from":"ogd","message":"mafintosh: https://github.com/datproject/dat/issues/827#issuecomment-318236396","timestamp":1501120777395}
{"from":"ralphtheninja","message":"I'm not on twitter anymore, so in the dark :)","timestamp":1501121952916}
{"from":"substack","message":"ralphtheninja: a few times core has broken something that didn't need fixing but people's complaint were heard","timestamp":1501123233487}
{"from":"ralphtheninja[m]","message":"substack: aah, like readline in 8.1.something?","timestamp":1501123508603}
{"from":"ralphtheninja[m]","message":"I haven't been following core that much lately, maybe i should now that I have more time on my hands, or maybe I shouldn't because I don't like politics","timestamp":1501123571939}
{"from":"substack","message":"I've always been out on the fringes of userland","timestamp":1501124126414}
{"from":"karissa","message":"looks like I'll be speaking at nodeconf Argentina","timestamp":1501126896535}
{"from":"aldebrn","message":"pfrazee has mentioned Dat has content-addressable links a couple of times, any links that describe how that works?","timestamp":1501128505059}
{"from":"mafintosh","message":"karissa: wooooot! i was there last year. lots of fun","timestamp":1501129524530}
{"from":"mafintosh","message":"aldebrn: there is a paper","timestamp":1501129536858}
{"from":"mafintosh","message":"aldebrn: https://datproject.org/paper","timestamp":1501129598839}
{"from":"mafintosh","message":"ralphtheninja[m]: spend time here in userland instead of joining the bike shed","timestamp":1501129647611}
{"from":"aldebrn","message":"mafintosh: thanks, I will read it more carefully presently but quick question: I understood that (Rabin) chunks are content-addressed, is there any way to refer to a file by its hash in Dat?","timestamp":1501129840055}
{"from":"mafintosh","message":"aldebrn: the entire dat itself is content addressed","timestamp":1501129926415}
{"from":"noffle","message":"ralphtheninja[m]: quit twitter?","timestamp":1501129994273}
{"from":"aldebrn","message":"mafintosh: Wow, ok! I'll read carefully to understand. I was likely misled into a simplistic understanding because the same contents can get different Dat archive ids, but if I understand you, underneath that id is the content-addressed goodness.","timestamp":1501130065154}
{"from":"mafintosh","message":"aldebrn: the id of the dat is a public key signing the content","timestamp":1501130095570}
{"from":"mafintosh","message":"so internally it is content addressed logs signed by a key pair","timestamp":1501130119207}
{"from":"mafintosh","message":"there is a static mode also, but we don't really use that","timestamp":1501130130909}
{"from":"aldebrn","message":"mafintosh: so peers in a Dat swarm are, quite like BitTorrent, asking for chunk hashes and getting the original chunks in exchange","timestamp":1501130239448}
{"from":"mafintosh","message":"aldebrn: yea exactly","timestamp":1501130267969}
{"from":"mafintosh","message":"except its live","timestamp":1501130273043}
{"from":"mafintosh","message":"so you can update the content","timestamp":1501130280430}
{"from":"aldebrn","message":"I ask all this because I was about to make a Dat blog article that said \"this dataset is obtained by running the script with hash X on the original data with hash Y\" and wanted to see if I could make X and Y more than just the text","timestamp":1501130351906}
{"from":"aldebrn","message":"It's not sufficient to refer to Dats hosting X and Y with their ids, because those could go away someday, even while the underlying data itself lived somewhere on the network.","timestamp":1501130404049}
{"from":"noffle","message":"is a \"dat\" synonymous with a \"hyperdrive\"? or is there other data around it?","timestamp":1501130811744}
{"from":"karissa","message":"noffle: yeah. there's a dat-storage module. It provides nice defaults for hyperdrive including key management in the home dir to prevent accidental key leakage","timestamp":1501132434437}
{"from":"noffle","message":"I'll check that out","timestamp":1501134758206}
{"from":"G-Ray","message":"What happens if two persons try to update an hypercore/hyperdrive at the time? (they both have the private key)","timestamp":1501141330780}
{"from":"blahah","message":"karissa noffle also the peer discovery defaults, the remote storage, some extra metadata stuff for hyperdrives","timestamp":1501143333663}
{"from":"blahah","message":"dat is like a layer above hyperdrive that turns it into a user-facing set of tools and services","timestamp":1501143418721}
{"from":"ralphtheninja","message":"mafintosh: that's what I'm doing :)","timestamp":1501159370612}
{"from":"ralphtheninja","message":"noffle: yes but that was years ago","timestamp":1501159378034}
{"from":"dat-gitter","message":"(benrogmans) @mafintosh just out of curiosity, what kind of consensus algorithm are you using for the multi-writer feature of the Dat protocol?","timestamp":1501161808306}
{"from":"dat-gitter","message":"(benrogmans) And how's the progress ;) ?","timestamp":1501161827465}
{"from":"mafintosh","message":"@benrogmans it's eventually consistent","timestamp":1501162808916}
{"from":"mafintosh","message":"If two writable peers update a key at the same time the key has two values","timestamp":1501162859124}
{"from":"mafintosh","message":"Api wise get(key) always returns an array","timestamp":1501162879563}
{"from":"mafintosh","message":"Up to you to pick the \"best\" one","timestamp":1501162902777}
{"from":"mafintosh","message":"If you put then the value overrides any existing ones deterministically","timestamp":1501162965302}
{"from":"mafintosh","message":"@benrogmans i have one bug left and a bunch of docs :) had a very productive flight the other day","timestamp":1501163058417}
{"from":"dat-gitter","message":"(benrogmans) Ok, so it is pretty straightforward","timestamp":1501163073391}
{"from":"dat-gitter","message":"(benrogmans) That sounds promising :D","timestamp":1501163078342}
{"from":"dat-gitter","message":"(benrogmans) Have you considered more complex algorithms like Paxos / Raft?","timestamp":1501163100050}
{"from":"dat-gitter","message":"(benrogmans) I guess this article might answer my question: http://guide.couchdb.org/draft/consistency.html","timestamp":1501163604871}
{"from":"ralphtheninja","message":"cblgh: I guess that's what pfrazee talked about yesterday (AP/CP) ^","timestamp":1501170144127}
{"from":"mafintosh","message":"cblgh: lots of good plane hacking done btw :)","timestamp":1501170473765}
{"from":"dat-gitter","message":"(benrogmans) @ralphtheninja oh wow, I missed the discussion above... Have to scroll up and catch up on the chat","timestamp":1501170984405}
{"from":"pfrazee","message":"mafintosh: do you have a command like a linter that detects if you have es6 features in the code?","timestamp":1501174450015}
{"from":"mafintosh","message":"pfrazee: no","timestamp":1501174459495}
{"from":"mafintosh","message":"pfrazee: i use some es6 sometimes","timestamp":1501174465732}
{"from":"mafintosh","message":"never babel","timestamp":1501174468362}
{"from":"pfrazee","message":"ok","timestamp":1501174472552}
{"from":"mafintosh","message":"and never promises","timestamp":1501174626377}
{"from":"mafintosh","message":"haha","timestamp":1501174627345}
{"from":"pfrazee","message":":D","timestamp":1501174636039}
{"from":"jhand","message":"test","timestamp":1501175160149}
{"from":"jhand","message":"test2","timestamp":1501176021235}
{"from":"cblgh","message":"mafintosh: niiice","timestamp":1501177274177}
{"from":"cblgh","message":"ralphtheninja: ah yeah thanks","timestamp":1501177299352}
{"from":"ralphtheninja","message":"jhand: test ack","timestamp":1501179124140}
{"from":"jhand","message":"yay! think http://dat-chat.netlify.com/ is mostly working now","timestamp":1501179143215}
{"from":"jhand","message":"been trying to figure out why it won't let me set the domain to chat.datproject.org","timestamp":1501179163625}
{"from":"pfrazee","message":"nice, so that's a bot that's rebroadcasting chat events on a hypercore?","timestamp":1501179293651}
{"from":"jhand","message":"pfrazee: ya! so we have the hyperirc running on a server, collecting logs here. Then I have hypercored sharing it over websockets and it reads from that","timestamp":1501179353109}
{"from":"pfrazee","message":"very nice!","timestamp":1501179365233}
{"from":"jhand","message":"pfrazee: want to add the other channels we're tailing over hyperirc to that ui too!","timestamp":1501179392220}
{"from":"pfrazee","message":"jhand: yeah that'll be cool","timestamp":1501179416103}
{"from":"ogd","message":"jhand: http://chat.datproject.org/ works for me","timestamp":1501179754306}
{"from":"jhand","message":"ogd: ya just got it working. netlify wouldn't let me set the domain to chat.datproject.org because it \"wasn't unique\". so i set it to chat.datproject then changed it to the right one...","timestamp":1501179802648}
{"from":"ogd","message":"lol","timestamp":1501179867287}
{"from":"pfrazee","message":"gave a talk at bleeding edge austin last night and everybody said Dat was the coolest","timestamp":1501180267342}
{"from":"ogd","message":"woot","timestamp":1501180292339}
{"from":"ogd","message":"is that like straight edge","timestamp":1501180297428}
{"from":"ogd","message":"cause if those ppl like dat then we are def cool","timestamp":1501180307996}
{"from":"pfrazee","message":"yeah, but for web tech","timestamp":1501180308195}
{"from":"ogd","message":"nice so they only use gpl","timestamp":1501180317926}
{"from":"pfrazee","message":"exactly","timestamp":1501180326503}
{"from":"pfrazee","message":"it's a cool meetup, the organizer does a slideshow at the start showing what new features are in browsers since the last meetup","timestamp":1501180348292}
{"from":"ogd","message":"ohhh i see","timestamp":1501180355146}
{"from":"pfrazee","message":"yah","timestamp":1501180381129}
{"from":"ogd","message":"i would be interested in a meetup where the organizer did a summary of ways ppl have used blockchains to steal millions of dollars in the last month","timestamp":1501180383475}
{"from":"obensource","message":"lol!","timestamp":1501180473911}
{"from":"ogd","message":"obensource: i have an amplifier in my basement with your name on it","timestamp":1501180509498}
{"from":"ogd","message":"obensource: can i drop it off at your homestead some time?","timestamp":1501180522003}
{"from":"obensource","message":"ogd: aha!","timestamp":1501180527515}
{"from":"obensource","message":"ogd: yarp, but we just moved to the middle of nowhere to save money.","timestamp":1501180543228}
{"from":"obensource","message":"ogd: i'll have to come pick it up sometime","timestamp":1501180553812}
{"from":"ogd","message":"obensource: do u have goats","timestamp":1501180554431}
{"from":"obensource","message":"ogd: naw... I wish","timestamp":1501180570333}
{"from":"obensource","message":"ogd: with my folks in St. Helens","timestamp":1501180578590}
{"from":"obensource","message":"ogd: For up to a year to pay off debt, college etc, while we look for a place","timestamp":1501180606071}
{"from":"obensource","message":"ogd: have a garden tho! :D","timestamp":1501180626807}
{"from":"ogd","message":"obensource: nice. i believe parts of Twilight were filmed in st helens","timestamp":1501180647732}
{"from":"obensource","message":"ogd: they were! Actually in the lot next to the building where Liz/I got married!","timestamp":1501180670416}
{"from":"ogd","message":"lol","timestamp":1501180689248}
{"from":"obensource","message":"@ogd: I'm in pdx ~4 days a week, let's make it happen! I wanna hang with you dude! :D","timestamp":1501180714085}
{"from":"ogd","message":"another banger from dshr http://blog.dshr.org/2017/07/decentralizedt-long-term-preservation.html","timestamp":1501180782284}
{"from":"obensource","message":"That's rad","timestamp":1501180819048}
{"from":"obensource","message":"pfrazee: I'd love for you to give a remote talk about beaker or what you're most stoked on sometime soon at pdxnode. It would be ~8:30pm on a 2nd Thursday, TX time. Would you be up for something like that? :D","timestamp":1501180891308}
{"from":"obensource","message":"We use talky.io for our remote talks./","timestamp":1501180922284}
{"from":"pfrazee","message":"obensource: for sure","timestamp":1501180977807}
{"from":"obensource","message":"RAD","timestamp":1501180981575}
{"from":"pfrazee","message":":) when are you thinking?","timestamp":1501180994232}
{"from":"obensource","message":"pfrazee: Could you do September?","timestamp":1501181043759}
{"from":"pfrazee","message":"obensource: sure, sept 5?","timestamp":1501181123828}
{"from":"obensource","message":"pfrazee: September, 14th. :D","timestamp":1501181132054}
{"from":"pfrazee","message":"oh lol wow, calendars","timestamp":1501181146315}
{"from":"obensource","message":"lol","timestamp":1501181155681}
{"from":"pfrazee","message":"yeah that works, got it marked","timestamp":1501181158255}
{"from":"obensource","message":"AWESOME!","timestamp":1501181163285}
{"from":"obensource","message":"RAD","timestamp":1501181164386}
{"from":"beardicus","message":"can remote viewers join these remote talks?","timestamp":1501181166623}
{"from":"obensource","message":"beardicus: not yet, but I'm working on that with our videographer","timestamp":1501181181920}
{"from":"obensource","message":"So hopefully by September yeah","timestamp":1501181191236}
{"from":"beardicus","message":"sweet.","timestamp":1501181199886}
{"from":"obensource","message":"pfrazee: David Dias is going to be talking in person about IPFS, etc–should be an awesome night! :D","timestamp":1501181256217}
{"from":"pfrazee","message":"obensource: cool. How much time should I aim for?","timestamp":1501181275304}
{"from":"pfrazee","message":"beardicus: if tara and I recorded our talk, would you be interested in watching it on youtube?","timestamp":1501181312040}
{"from":"obensource","message":"pfrazee: Whatever you're comfortable with, I'd say ~30-35 minutes is perfect with a little time for questions at the end","timestamp":1501181324858}
{"from":"pfrazee","message":"obensource: yeah sounds good","timestamp":1501181362699}
{"from":"obensource","message":"pfrazee: awesome, so stoked! Thanks man! \\o/","timestamp":1501181386442}
{"from":"pfrazee","message":"obensource: you bet, thanks for asking","timestamp":1501181395149}
{"from":"pfrazee","message":"ogd: \"Blockchain is a technology in search of a problem to solve, being pushed by ideology into areas where the unsolved problems are not technological.\" those are some firey words","timestamp":1501181562054}
{"from":"pfrazee","message":"from dshr","timestamp":1501181568334}
{"from":"ogd","message":"real talk","timestamp":1501181589832}
{"from":"obensource","message":"beardicus: sorry I misunderstood your question I think. So if you have a question for the speaker or something you could join the chat call and ask if you'd like. I thought you were asking about live streaming.","timestamp":1501181627654}
{"from":"beardicus","message":"i was thinking about watching the livestream, maybe being able to ask questions. but yeah, just watching a recording would be cool.","timestamp":1501181688822}
{"from":"obensource","message":"beardicus: nice. Yeah, definitely working on a livestream–but thanks for the inspiration for remote questions! Gonna roll with that. :D","timestamp":1501181730010}
{"from":"obensource","message":"beardicus: We also post all our videos up at https://www.youtube.com/channel/UCI8MIw5A7ALtIvNHsrYJbjg","timestamp":1501181760916}
{"from":"beardicus","message":"excellent, thank you for doing that. lots of great talks on youtube... good resource.","timestamp":1501182059348}
{"from":"obensource","message":"beardicus: of course, yeah! Glad we're doing video now.","timestamp":1501182139242}
{"from":"obensource","message":"Thanks!","timestamp":1501182146884}
{"from":"noffle","message":"jhand: is that with hyperirc?","timestamp":1501183123262}
{"from":"jhand","message":"noffle: yep! then we are hosting the irc hypercore feed over websockets.","timestamp":1501183221802}
{"from":"ogd","message":"mafintosh: jhand check out the bittorrent part haha https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4869913/","timestamp":1501184241472}
{"from":"creationix","message":"Is dat in the browser still a thing? https://docs.datproject.org/browser","timestamp":1501185283703}
{"from":"creationix","message":"could a server be implemented that shared dats over both webrtc and normal dat utp?","timestamp":1501185320026}
{"from":"creationix","message":"(I'm thinking something simple like a headless chrome talking to node over websockets)","timestamp":1501185357373}
{"from":"creationix","message":"does it have it's own DHT over webrtc or is there a different way to find peers?","timestamp":1501185648193}
{"from":"bret","message":"ogd: bittorrent sync is pretty damn good. I've switched my dropbox over to it","timestamp":1501186097572}
{"from":"ogd","message":"creationix: we ported most of it to wasm and have experimental support running webrtc in a service worker. mafintosh has more of an understanding of what parts work well right now. but you can browserify hyperdrive","timestamp":1501186171968}
{"from":"ogd","message":"creationix: webrtc on the server is difficult though, no good way to run it in node","timestamp":1501186185911}
{"from":"ogd","message":"creationix: peer discovery pretty much has to be centralized","timestamp":1501186198073}
{"from":"creationix","message":"but could a dht similar to bittorrent's be built using webrtc or is the signaling too hard there?","timestamp":1501186259267}
{"from":"creationix","message":"I've never worked with data channel, so I don't know the APIs very well","timestamp":1501186277605}
{"from":"mafintosh","message":"creationix: no only reliable transport apis are available in webrtc","timestamp":1501186309551}
{"from":"mafintosh","message":"And we need a udp-like interface for scalable dhts","timestamp":1501186333442}
{"from":"cblgh","message":"such a tragedy that udp isn't easily available in browser, and webrtc isn't easily available outside it","timestamp":1501186361869}
{"from":"creationix","message":"how 'bout we all focus on getting web udp pushed through","timestamp":1501186380093}
{"from":"creationix","message":"if that's even feasible","timestamp":1501186386253}
{"from":"creationix","message":"if the spec is simple, the need is clearly defined and it's secure enough to convince browser vendors, it should be fairly easy to add","timestamp":1501186420821}
{"from":"creationix","message":"I'm worried that anything remotely useful won't be considered secure though","timestamp":1501186439190}
{"from":"ogd","message":"i commented on that web udp repo feross was sharing around about this if you wanna chime in https://github.com/Maksims/web-udp-public/issues/1","timestamp":1501186462522}
{"from":"ogd","message":"the ppl who run that repo dont really understand udp i think though","timestamp":1501186505039}
{"from":"creationix","message":"yeah, what he's talking about isn't the same thing at all","timestamp":1501186685257}
{"from":"creationix","message":"I just want something that allows P2P connections with hole-punching and implementing DHTs with zero external services","timestamp":1501186727024}
{"from":"creationix","message":"connecting to known bootstrap nodes in the DHT should be enough to get on the network","timestamp":1501186745846}
{"from":"creationix","message":"can you hole-punch using websockets or is TCP too hard?","timestamp":1501186774082}
{"from":"creationix","message":"I know UDP hole punch is a lot simpler","timestamp":1501186782777}
{"from":"creationix","message":"of course the other problem is browsers can't be websocket servers, just clients","timestamp":1501186824690}
{"from":"pfrazee","message":"creationix: yeah I'd say *that* is a different featureset than udp as well, and I agree with you","timestamp":1501187035451}
{"from":"pfrazee","message":"I'm cool w/the UDP stuff but my interest is reliable TCP-like channels btwn peers","timestamp":1501187086817}
{"from":"bret","message":"https://webkit.org/blog/7790/update-on-web-cryptography/","timestamp":1501187282694}
{"from":"creationix","message":"ogd: have you seen this https://github.com/Maksims/web-udp-public","timestamp":1501187315442}
{"from":"creationix","message":"pfrazee: web utp?","timestamp":1501187333212}
{"from":"ogd","message":"creationix: thats what i linked you 10 mins ago","timestamp":1501187360162}
{"from":"creationix","message":"is that the same one, hmmf","timestamp":1501187371115}
{"from":"pfrazee","message":"bret: oh cool","timestamp":1501187390341}
{"from":"pfrazee","message":"creationix: that might be the right idea. Or maybe something that does either UTP or TCP like dat already does","timestamp":1501187421341}
{"from":"ogd","message":"i just want quic bindings everywhere :)","timestamp":1501187451875}
{"from":"pfrazee","message":"ogd: webrtc has a transparent fallback to proxies via TURN, right?","timestamp":1501187501527}
{"from":"ogd","message":"yea if you have it in the config","timestamp":1501187511942}
{"from":"pfrazee","message":"yeah. That's pretty valuable too","timestamp":1501187520101}
{"from":"pfrazee","message":"mafintosh: wdyt of web-utp","timestamp":1501187552738}
{"from":"creationix","message":"maybe something higher-level like web-dht would be easier","timestamp":1501187570546}
{"from":"ogd","message":"i dont think we know how to build good dhts lol","timestamp":1501187612211}
{"from":"creationix","message":"scaling / performance issues?","timestamp":1501187633409}
{"from":"ogd","message":"sybil attacks mostly","timestamp":1501187673686}
{"from":"ogd","message":"lots of crypto questions in there that im not sure would a good idea to try to standardize","timestamp":1501187697477}
{"from":"creationix","message":"hmm","timestamp":1501187753235}
{"from":"mafintosh","message":"pfrazee: i'd like to see an api","timestamp":1501187947973}
{"from":"mafintosh","message":"pfrazee: but i like the idea of getting a udp-like transport in browsers","timestamp":1501187962479}
{"from":"pfrazee","message":"yeah","timestamp":1501188011691}
{"from":"pfrazee","message":"https://filecoin.io/proof-of-replication.pdf this got released today","timestamp":1501191985537}
{"from":"mafintosh","message":"pfrazee: an interesting side effect of the encrypted files thing is that you always need to have local copies of the files you store on the network","timestamp":1501193323131}
{"from":"mafintosh","message":"i.e. you cannot back something up and then delete local data","timestamp":1501193336220}
{"from":"mafintosh","message":"cause then you cannot send challenges","timestamp":1501193348670}
{"from":"pfrazee","message":"mafintosh: Im not familiar enough with the challenges, what are they?","timestamp":1501193856760}
{"from":"mafintosh","message":"pfrazee: in the file coin paper","timestamp":1501193867437}
{"from":"pfrazee","message":"oh I see","timestamp":1501193875670}
{"from":"pfrazee","message":"yeah Ive gotta read that later","timestamp":1501193893112}
{"from":"ralphtheninja[m]","message":"pfrazee: noticed you had a screencast on making a rss reader","timestamp":1501195599994}
{"from":"ralphtheninja[m]","message":"but I guess I need to slow that speed down a bit while watching :)","timestamp":1501195601486}
{"from":"ralphtheninja[m]","message":"I'd love it if there were beaker workshops","timestamp":1501195602904}
{"from":"pfrazee","message":"ralphtheninja[m]: we're doing an educational push through august. I'll include some videos","timestamp":1501195657529}
{"from":"ralphtheninja[m]","message":"sweet","timestamp":1501195782407}
{"from":"ralphtheninja[m]","message":"gah, keep forgetting about the beaker channel","timestamp":1501195856386}
{"from":"pfrazee","message":"https://github.com/beakerbrowser/node-dat-archive just published","timestamp":1501199932724}
{"from":"dat-gitter1","message":"(rjsteinert) @pfrazee \"we're doing an educational push through august\" +1","timestamp":1501199946128}
{"from":"pfrazee","message":"@rjsteinert :) we're going to release a set of apps and do a blog post on each one explaining them","timestamp":1501200002999}
{"from":"pfrazee","message":"then some more tutorials","timestamp":1501200020486}
{"from":"dat-gitter1","message":"(rjsteinert) @pfrazee Very cool. Many different architecture ideas to be explained.","timestamp":1501200032615}
{"from":"pfrazee","message":"@rjsteinert yeah for sure. The more we've gone out and given talks, the more I've found that to be true","timestamp":1501200059984}
{"from":"pfrazee","message":"mafintosh: jhand: yall should know about this- https://github.com/beakerbrowser/node-dat-archive","timestamp":1501200099605}
{"from":"dat-gitter1","message":"(rjsteinert) @pfrazee Very nice to be able to work with those archives we create in Beaker in other contexts. Do you have a specific use case in mind?","timestamp":1501200148470}
{"from":"dat-gitter1","message":"(rjsteinert) Perhaps building it also into dat CLI so the two are cross compatible?","timestamp":1501200166908}
{"from":"pfrazee","message":"@rjsteinert yeah - though you can do that already with dat-node. The *main* reason I wrote node-dat-archive is so that I can write tests for https://github.com/beakerbrowser/injestdb that run in node","timestamp":1501200198906}
{"from":"dat-gitter1","message":"(rjsteinert) @pfrazee Listening to your Austin buddy Shakey Graves btw","timestamp":1501200207152}
{"from":"pfrazee","message":":)","timestamp":1501200214883}
{"from":"dat-gitter1","message":"(rjsteinert) @pfrazee \"Abstracts over the DatArchive API to provide a simple database-like interface\" -- Oh wow. So perhaps a bunch of dat data you can query via Dat Archive API down the line? I was thinking in my apps I'd use a folder of files as a key value store but querying is a logical progression.","timestamp":1501200361409}
{"from":"pfrazee","message":"@rjsteinert exactly yeah","timestamp":1501200421389}
{"from":"pfrazee","message":"it's treating the files like a KV store, like you say","timestamp":1501200444125}
{"from":"pfrazee","message":"expects them to be .json","timestamp":1501200451885}
{"from":"pfrazee","message":"and indexes them for queries","timestamp":1501200468867}
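The approach pfrazee describes above can be sketched in plain JavaScript — a hypothetical, in-memory illustration of treating an archive's `.json` files as records and indexing one field for queries (InjestDB's real API differs; `buildIndex` and the sample file paths are invented for this sketch):

```javascript
// Hypothetical sketch: treat { path: jsonString } as the files of an archive,
// parse each .json file into a record, and index records by one field.
function buildIndex (files, field) {
  const records = []
  const index = new Map()
  for (const [path, text] of Object.entries(files)) {
    if (!path.endsWith('.json')) continue // only .json files become records
    const record = JSON.parse(text)
    record._url = path // remember which file the record came from
    records.push(record)
    // group records by the indexed field's value for fast lookup
    const key = record[field]
    if (!index.has(key)) index.set(key, [])
    index.get(key).push(record)
  }
  return { records, query: (value) => index.get(value) || [] }
}

// Usage: two "broadcast" files indexed by author
const db = buildIndex({
  '/broadcasts/1.json': '{"author":"alice","text":"hello"}',
  '/broadcasts/2.json': '{"author":"bob","text":"hi"}'
}, 'author')
console.log(db.query('alice').length) // 1
```

The real library also has to re-index when a followed archive changes, which is where the download-and-index cost mentioned below comes from.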
{"from":"dat-gitter1","message":"(rjsteinert) @pfrazee That's nice to handle that on a lower level. In our offline apps we usually pack an android app with a big ol' JSON file and then on first boot load that into PouchDB where we can query it. That's duplication of data, that's a whole other system, that takes time away from boot.","timestamp":1501200579280}
{"from":"pfrazee","message":"@rjsteinert yeah though this will have some download-and-index time cost","timestamp":1501200623971}
{"from":"pfrazee","message":"any time you add a new remote dat to the dataset","timestamp":1501200637284}
{"from":"pfrazee","message":"prob not too bad though","timestamp":1501200649391}
{"from":"dat-gitter1","message":"(rjsteinert) Jamming on some application architecture ideas like with you would be great some time soon. Chris Kelley and I have this Tangerine v3 deadline in the meantime which is suckin our time but we're getting people excited about dat in browsers. Our deploy strategy will be folks creating content locally, publishing to dat, adding to hashbase, then our \"dumb\" tablets can get the PWA from hashbase's HTTPS \"bridge from the dat world\"","timestamp":1501200774810}
{"from":"dat-gitter1","message":"(rjsteinert) Yet, we're still trying to find time to make these \"dumb\" tablets smarter with what we're calling Bunsen.","timestamp":1501200801589}
{"from":"dat-gitter1","message":"(rjsteinert) Not much there, but hey we have a logo ;) https://github.com/chrisekelley/bunsen/pull/2/files","timestamp":1501200821058}
{"from":"dat-gitter1","message":"(rjsteinert) Our next phase (starting next week? week after next?) is working on a UI for editing forms in a WYSIWYG thus my experimenting with Beaker and WYSIWYGs.","timestamp":1501200890970}
{"from":"dat-gitter1","message":"(rjsteinert) The architecture we're playing around with for the proof of concept bunsen is having folks start a node server in Termux on Android which publishes an HTTP API similar to hashbase. Then the cordova app we're working on talks to that HTTP API, looks like a web browser, but has features similar to Beaker.","timestamp":1501201015967}
{"from":"pfrazee","message":"@rjsteinert excuse the break, I stepped afk. I love that you called it bunsen","timestamp":1501201138682}
{"from":"dat-gitter1","message":"(rjsteinert) We're hoping we can figure out the issue with Dat issue on Android where it finds peers but never syncs data https://github.com/datproject/dat/issues/829","timestamp":1501201167013}
{"from":"dat-gitter1","message":"(rjsteinert) Chris came up with the name, it's a muppet reference https://upload.wikimedia.org/wikipedia/en/d/dd/Dr._Bunsen_Honeydew.jpg","timestamp":1501201228971}
{"from":"pfrazee","message":"yeah that sounds like a great pathway to mvp","timestamp":1501201285996}
{"from":"dat-gitter1","message":"(rjsteinert) Awesome, thanks for the vote of confidence :)","timestamp":1501201375064}
{"from":"dat-gitter1","message":"(rjsteinert) I have some Tangerine Forms to wrangle with now. Catch y'all later.","timestamp":1501201506788}
{"from":"mafintosh","message":"@rjsteinert woot","timestamp":1501203251435}
{"from":"mafintosh","message":"i'm taking another stab at node-as-a-shared-lib on android right now","timestamp":1501203274915}
{"from":"dat-gitter1","message":"(rjsteinert) @mafintosh That's great! Your help is very much appreciated!","timestamp":1501203494355}
{"from":"dat-gitter1","message":"(rjsteinert) @mafintosh If you want to get our attention, this issue is where we'll be making an effort to brain dump https://github.com/chrisekelley/bunsen/issues/1","timestamp":1501203571879}
{"from":"mafintosh","message":"ah sweet","timestamp":1501203586356}
{"from":"beardicus","message":" /join #beaker","timestamp":1501204531960}
{"from":"mafintosh","message":"@rjsteinert have a shared library now!","timestamp":1501208332838}
{"from":"creationix","message":"mafintosh: cool","timestamp":1501208931186}
{"from":"karissa","message":"mafintosh: whaaaat","timestamp":1501210508488}
{"from":"karissa","message":"mafintosh: clearly I need to get a shiny new android too...","timestamp":1501210527650}
{"from":"mafintosh","message":"karissa: yuh!!","timestamp":1501211989945}
{"from":"mafintosh","message":"karissa: only took 2-3h to compile lol","timestamp":1501212027034}
{"from":"dat-gitter1","message":"(HughIsaacs2) Coincidentally as I open this up and see this conversation, I was just reading about Node ChakraCore for iOS: http://www.janeasystems.com/blog/node-js-meets-ios/","timestamp":1501212229993}
{"from":"TheLink","message":"mafintosh: is there any news regarding the --http problem with dat-cli (folder not getting removed)","timestamp":1501233863145}
{"from":"chrisekelley","message":"@pfrazee thanks for the tip on node-dat-archive; gonna try to keep compatible with the DatArchive API for bunsen.","timestamp":1501239377401}
{"from":"mafintosh","message":"TheLink: what was this issue again?","timestamp":1501245588339}
{"from":"TheLink","message":"mafintosh: metadata sync works correctly but if for example an index.html and dependencies like images are opened in a browser while the containing folder gets deleted, the folder and opened files don't get deleted in the client","timestamp":1501247063210}
{"from":"TheLink","message":"(client is run with the --http flag)","timestamp":1501247094607}
{"from":"mafintosh","message":"TheLink: and this was reproducible?","timestamp":1501247098738}
{"from":"TheLink","message":"mafintosh: I reproduced it this morning with current dat","timestamp":1501247118201}
{"from":"mafintosh","message":"TheLink: let me just try :)","timestamp":1501247196914}
{"from":"mafintosh","message":"TheLink: couldnt reproduce","timestamp":1501247681284}
{"from":"TheLink","message":"hmm","timestamp":1501247691638}
{"from":"mafintosh","message":"TheLink: any big files referenced by the index.html page?","timestamp":1501247709104}
{"from":"TheLink","message":"it's a quite big index.html with lots of resources linked","timestamp":1501247737113}
{"from":"mafintosh","message":"TheLink: can you share it with me?","timestamp":1501247767203}
{"from":"TheLink","message":"mafintosh: yes, moment - I'll zip the folder","timestamp":1501247788172}
{"from":"TheLink","message":"--> pm","timestamp":1501247903321}
{"from":"TheLink","message":"mafintosh: perhaps it's important that you delete the folder before sync completes","timestamp":1501247935402}
{"from":"mafintosh","message":"TheLink: ya thats what i think as well","timestamp":1501247985301}
{"from":"mafintosh","message":"will take a look, thx","timestamp":1501247993624}
{"from":"TheLink","message":"mafintosh: steps: 1) dat share the folder; 2) open the client synced folder in a web browser; 3) delete the folder on the seed before sync completes","timestamp":1501247999811}
{"from":"mafintosh","message":"TheLink: ah cool, this time i could repro :)","timestamp":1501248425666}
{"from":"TheLink","message":"yay \\o/","timestamp":1501248435209}
{"from":"mafintosh","message":"wait or could i","timestamp":1501248474943}
{"from":"mafintosh","message":"one sec","timestamp":1501248476695}
{"from":"mafintosh","message":"TheLink: can you confirm this for me?","timestamp":1501248585351}
{"from":"mafintosh","message":"TheLink: if you restart the client dat *after* the folder still exists, it gets deleted","timestamp":1501248602169}
{"from":"TheLink","message":"mafintosh: moment, I can test that","timestamp":1501248623454}
{"from":"mafintosh","message":"TheLink: ya, it seems for me that it breaks sync if i don't restart it actually","timestamp":1501248677761}
{"from":"mafintosh","message":"oh this time restart didn't work","timestamp":1501248745074}
{"from":"mafintosh","message":"anyway i can repro","timestamp":1501248748972}
{"from":"mafintosh","message":"thats the important thing :)","timestamp":1501248754227}
{"from":"TheLink","message":"mafintosh: yes, restart didn't help here","timestamp":1501248927207}
{"from":"TheLink","message":"but it's not always working the same","timestamp":1501248942958}
{"from":"TheLink","message":"I also had cases where the folder got deleted when I had the index open in the browser","timestamp":1501248984049}
{"from":"TheLink","message":"it's not reliable","timestamp":1501248985043}
{"from":"chrisekelley","message":"@mafintosh - Can you offer any pointers to making a shared lib of node that works on Android? I'm working w/ @rjsteinert on the bunsen Android client; it would be great to skip using termux and go direct to node on Android within our Cordova app (probably tying them together using a Cordova plugin).","timestamp":1501249042396}
{"from":"mafintosh","message":"chrisekelley: i built it simply by ssh'ing into turmux, install build deps,","timestamp":1501249095011}
{"from":"mafintosh","message":"then called ./configure with python configure --without-snapshot --openssl-no-asm --shared --dest-os=android","timestamp":1501249160888}
{"from":"mafintosh","message":"chrisekelley: i'm putting together a sample app today","timestamp":1501249247932}
{"from":"mafintosh","message":"chrisekelley: is your phone arm64?","timestamp":1501249255109}
{"from":"mafintosh","message":"oh and you have to comment out #define HAVE_GETSERVBYPORT_R 1 in deps/cares/config/android/ares_config.h","timestamp":1501249317683}
{"from":"mafintosh","message":"after that a simple 'make' will do the job","timestamp":1501249329812}
{"from":"mafintosh","message":"takes forever tho","timestamp":1501249333148}
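Collecting the steps mafintosh lists above into one sequence (flags and the patched header path exactly as quoted; assumes you are inside Termux, in the node 8.2.1 source tree, with build deps installed):

```sh
# configure node as a shared library targeting android
python configure --without-snapshot --openssl-no-asm --shared --dest-os=android

# before building, comment out this line in deps/cares/config/android/ares_config.h:
#   #define HAVE_GETSERVBYPORT_R 1

# then build (produces libnode as a shared library; takes a long time on-device)
make
```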
{"from":"chrisekelley","message":"@mafintosh it's a nexus 5x, yeah, I'm pretty sure it is arm64","timestamp":1501249348419}
{"from":"chrisekelley","message":"@mafintosh yeah, Maybe this will be the thing that bricks my 5X for the second time ;-)","timestamp":1501249405314}
{"from":"chrisekelley","message":"@mafintosh and thanks a lot, looking forward to checking out your sample app!","timestamp":1501249434159}
{"from":"mafintosh","message":"chrisekelley: oh and i built node 8.2.1 (latest)","timestamp":1501249472623}
{"from":"creationix","message":"I've got a nexus 5x for testing as well","timestamp":1501249994412}
{"from":"mafintosh","message":"TheLink: seems reliable enough for me to test with :)","timestamp":1501250002953}
{"from":"TheLink","message":"great :)","timestamp":1501250015833}
{"from":"TheLink","message":"it's the only thing left atm that bugs me ;)","timestamp":1501250032687}
{"from":"TheLink","message":"only feature requests left otherwise :B","timestamp":1501250063287}
{"from":"mafintosh","message":"nice","timestamp":1501250228876}
{"from":"mafintosh","message":"TheLink: ill try and look into it today then :)","timestamp":1501251365770}
{"from":"TheLink","message":"mafintosh: no hurry, especially if you have more fun stuff to do","timestamp":1501251399051}
{"from":"TheLink","message":"I'll keep nagging ;)","timestamp":1501251408941}
{"from":"mafintosh","message":"TheLink: please do","timestamp":1501251466071}
{"from":"TheLink","message":":)","timestamp":1501251478195}
{"from":"mafintosh","message":"chrisekelley: also i can just share the lib i built with you if you want","timestamp":1501251483587}
{"from":"chrisekelley","message":"@mafintosh - that would be great!","timestamp":1501251515759}
{"from":"mafintosh","message":"chrisekelley: dat://45f6c4f6b7ef32497a98ef6566d8f7a7038a7d13ab1902d7c06a011e71708ab4","timestamp":1501252286934}
{"from":"mafintosh","message":"chrisekelley: clone it in termux","timestamp":1501252308074}
{"from":"mafintosh","message":"to run the test","timestamp":1501252319234}
{"from":"mafintosh","message":"otherwise just on your laptop to get the libs","timestamp":1501252327918}
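A small hypothetical helper for links like the one shared above — dat:// links of this form carry a 32-byte key as 64 lowercase hex characters, which is easy to sanity-check before cloning (`isValidDatUrl` is invented here; it does not cover dat links that use hostnames):

```javascript
// Hypothetical check: does this dat:// link carry a 64-hex-character key?
function isValidDatUrl (url) {
  return /^dat:\/\/[0-9a-f]{64}\/?$/.test(url)
}

console.log(isValidDatUrl('dat://45f6c4f6b7ef32497a98ef6566d8f7a7038a7d13ab1902d7c06a011e71708ab4')) // true
console.log(isValidDatUrl('dat://not-a-key')) // false
```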
{"from":"ralphtheninja[m]","message":"I so want to be able to click that link in mobile and open up mobile version of beaker","timestamp":1501252354770}
{"from":"chrisekelley","message":"@mafintosh - thanks!","timestamp":1501253032610}
{"from":"chrisekelley","message":"Within termux, I chmod'd test, ran ./test, but got CANNOT LINK EXECUTABLE \"./test\": library \"libnode.so.57\" not found","timestamp":1501253113015}
{"from":"mafintosh","message":"did you run make first?","timestamp":1501253142200}
{"from":"chrisekelley","message":"nope, here we go :-)","timestamp":1501253183174}
{"from":"chrisekelley","message":"Closer, I get \"make: 'embedded-node' is up to date\", but when I run test I get the error as before.","timestamp":1501253512872}
{"from":"karissa","message":"poor archive bot","timestamp":1501253962809}
{"from":"karissa","message":"down again","timestamp":1501253965320}
{"from":"mafintosh","message":"chrisekelley: try rm embedded-node test","timestamp":1501254385648}
{"from":"mafintosh","message":"chrisekelley: and then make again","timestamp":1501254390370}
{"from":"mafintosh","message":"chrisekelley: i'm very close to having it work inside android studio now","timestamp":1501254402583}
{"from":"chrisekelley","message":"@mafintosh - didn't work, but no worries - focus on android studio. that is great news!","timestamp":1501254601265}
{"from":"mafintosh","message":"chrisekelley: works!","timestamp":1501264647385}
{"from":"chrisekelley","message":"@mafintosh Wow, that is great!","timestamp":1501268542788}
{"from":"mafintosh","message":"chrisekelley: gonna try and package it up","timestamp":1501270449779}
{"from":"mafintosh","message":"karissa: https://twitter.com/mafintosh/status/891019639571922945 <-- time to get an android :D","timestamp":1501271399355}
{"from":"substack","message":"mafintosh: inside an android app as in you have a single apk file?","timestamp":1501272031899}
{"from":"mafintosh","message":"substack: ya","timestamp":1501272039848}
{"from":"mafintosh","message":"substack: not spawning, i'm linking against libnode","timestamp":1501272050983}
{"from":"mafintosh","message":"that i built on android","timestamp":1501272054506}
{"from":"mafintosh","message":"jhand: http://10.1.10.170:8080 (on your nexus)","timestamp":1501276329477}
{"from":"mafintosh","message":"jafow1: http://hasselhoff.mafintosh.com:8080","timestamp":1501276402645}
{"from":"mafintosh","message":"jhand: o.","timestamp":1501276406753}
{"from":"mafintosh","message":"jhand: http://hasselhoff.mafintosh.com:8080","timestamp":1501276410604}
{"from":"mafintosh","message":"there we go","timestamp":1501276412379}
{"from":"bret","message":"neat","timestamp":1501277982525}
{"from":"dat-gitter1","message":"(scriptjs) I’d like to get some feedback on how others are testing their peer to peer apps with hypercore, hyperdrive, or dat with multiple peers on same machine. I am passing custom ports for discovery to do this presently.","timestamp":1501281623303}
{"from":"mafintosh","message":"substack, do you own a 32bit android phone by any chance?","timestamp":1501282080613}
{"from":"substack","message":"I think so","timestamp":1501282704608}
{"from":"substack","message":"it's 5 years old","timestamp":1501282711709}
{"from":"mafintosh","message":"substack: do you know if it is armv7?","timestamp":1501282838346}
{"from":"substack","message":"I think so?","timestamp":1501283277272}
{"from":"ralphtheninja[m]","message":"gradle .. shivers\u000f","timestamp":1501286316039}
{"from":"barnie","message":"I see development activities running NodeJS on android. found this on running node on iOS: http://www.janeasystems.com/blog/node-js-meets-ios/","timestamp":1501313586982}
{"from":"barnie","message":"But: Do you see running node on mobile as _the_ solution to get Dat running on phones, or should there be native / RN implementations instead?","timestamp":1501313685209}
{"from":"TheLink","message":"am I assuming correctly that seed and clients can never switch status because only the seed has the \"master key\"?","timestamp":1501321754244}
{"from":"TheLink","message":"just had a curious case where I had to delete a file in the seed twice because after first deletion it \"came back\" and the deletion wasn't synced to the client","timestamp":1501321809838}
{"from":"TheLink","message":"the 2nd time it worked as expected","timestamp":1501321828424}
{"from":"chrisekelley","message":"@mafintosh re: node on Android - this is fantastic! Gonna test it out now.","timestamp":1501323980024}
{"from":"chrisekelley","message":"@mafintosh got it to build and run on my nexus 5X. Whoo hoo! This is a big deal; many kudos mafintosh.","timestamp":1501324979705}
{"from":"barnie","message":"mafintosh: your node-on-android is super!!!","timestamp":1501330091918}
{"from":"barnie","message":"did you start from https://github.com/sixo/node-example ?","timestamp":1501331291735}
{"from":"barnie","message":"what are the plans on node-on-android wrt Dat? and iOS?","timestamp":1501331318745}
{"from":"barnie","message":"is this an improved awesome list layout? https://github.com/aschrijver/awesome-dat/blob/fresh/awesome/readme.md (only first part done)","timestamp":1501351345802}
{"from":"pfrazee","message":"barnie: I dig that","timestamp":1501351693400}
{"from":"dat-gitter1","message":"(rjsteinert) @barnie Nice","timestamp":1501351757094}
{"from":"barnie","message":"i created an image of the ecosystem. its confusing, thats why i added the icons on the awesome list","timestamp":1501351760942}
{"from":"barnie","message":"https://user-images.githubusercontent.com/5111931/28670190-d0b07094-72d7-11e7-9d06-7eb1aadb3c53.png","timestamp":1501351761826}
{"from":"dat-gitter1","message":"(rjsteinert) pfrazee: We moved our bunsen mvp issue to https://github.com/bunsenbrowser/bunsen/issues/1","timestamp":1501351820284}
{"from":"pfrazee","message":"@rjsteinert nice!","timestamp":1501351888965}
{"from":"pfrazee","message":"barnie: yeah I agree. I'm 100% in support of what you're doing there","timestamp":1501351911781}
{"from":"dat-gitter1","message":"(rjsteinert) @pfrazee Chris Kelley is having some luck setting up the development environment using mafintosh's Node-on-Android build. Looks like we may be able to not have to depend on Termux and we'll be productive developing Beaker as an isolated app soon.","timestamp":1501352000342}
{"from":"pfrazee","message":"@rjsteinert that's *extremely* exciting","timestamp":1501352017428}
{"from":"karissa","message":"aawesome","timestamp":1501352047595}
{"from":"barnie","message":"mafintosh: android project is awesome :)","timestamp":1501352292501}
{"from":"barnie","message":"do you just integrate the gradle project that is there in a larger js/node project?","timestamp":1501352320566}
{"from":"barnie","message":"i want to try with RN (ignite CLI), browserifying was hell","timestamp":1501352518350}
{"from":"karissa","message":"selective sync coming to cli https://github.com/datproject/dat/pull/834","timestamp":1501354439709}
{"from":"pfrazee","message":"karissa: nice!","timestamp":1501354703924}
{"from":"lachenmayer","message":"hi substack, i have just added a PR for https://github.com/substack/static-module/pull/33 which should solve https://github.com/substack/static-module/issues/23. could you take a look at it & let me know if that's as expected? thanks! :)","timestamp":1501354726226}
{"from":"lachenmayer","message":"this is currently preventing me from getting dat-js to work in a project that uses webpack with a brfs transform","timestamp":1501354764111}
{"from":"lachenmayer","message":"weirdly, browserify doesn't throw an error, even though running brfs directly without browserify does throw the error","timestamp":1501354811451}
{"from":"karissa","message":"lachenmayer: oh cool you're using dat-js! good to hear. heads up, dat-js is currently on the older version of hyperdrive, which is ok if you're only connecting to browser clients but it is not currently compatible with server-side dats (that includes beaker). this PR is a wip for updating to hyperdrive 9 https://github.com/datproject/dat-js/pull/7","timestamp":1501354878986}
{"from":"lachenmayer","message":"i see, thanks karissa!","timestamp":1501355140492}
{"from":"lachenmayer","message":"will probably pick this up again once that's sorted :)","timestamp":1501355239160}
{"from":"pfrazee","message":"https://github.com/beakerbrowser/injestdb v1 is *almost* done","timestamp":1501355263257}
{"from":"lachenmayer","message":"very very cool","timestamp":1501355572348}
{"from":"barnie","message":"pfrazee: injestdb rocks!","timestamp":1501355774328}
{"from":"pfrazee","message":"barnie: thanks!","timestamp":1501355781755}
{"from":"karissa","message":"pfrazee: i'm still not sure I understand the indexing you've got there - 'origin' is an implied field?","timestamp":1501355831740}
{"from":"pfrazee","message":"karissa: _origin and _url are autogenerated","timestamp":1501355842972}
{"from":"pfrazee","message":"_origin is the url of the archive that the record belongs to","timestamp":1501355853931}
{"from":"pfrazee","message":"_url is the url of the file that stores the record","timestamp":1501355861600}
{"from":"karissa","message":"where does that come from?","timestamp":1501355865219}
{"from":"pfrazee","message":"each record has a 1:1 relationship with a file in a dat archive","timestamp":1501355872830}
{"from":"pfrazee","message":"the definition of the schemas implies that mapping","timestamp":1501355882591}
{"from":"pfrazee","message":"eg, the broadcasts table in the readme maps to /broadcasts/*.json","timestamp":1501355899862}
{"from":"karissa","message":"ah interesting","timestamp":1501355909359}
{"from":"karissa","message":"so _origin is like owner","timestamp":1501356095630}
{"from":"karissa","message":"owner's url","timestamp":1501356120380}
{"from":"karissa","message":"in this social profile model","timestamp":1501356120824}
{"from":"karissa","message":"looks fun. i think i'll try to build something on it","timestamp":1501356151442}
{"from":"pfrazee","message":"karissa: yeah correct","timestamp":1501356674240}
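The autogenerated fields described above can be illustrated with plain string handling — given the archive's URL and the record's file path, derive `_origin` and `_url` (a hypothetical helper for illustration, not InjestDB's actual code; the placeholder key is invented):

```javascript
// Hypothetical illustration of the 1:1 record-to-file mapping:
// _origin is the url of the archive the record belongs to,
// _url is the url of the file that stores the record.
function recordUrls (archiveUrl, filePath) {
  const origin = archiveUrl.replace(/\/+$/, '') // normalize: no trailing slash
  return {
    _origin: origin,
    _url: origin + filePath // filePath is absolute in the archive, e.g. /broadcasts/1.json
  }
}

// Usage with a placeholder 64-hex-character archive key
const key = 'ab'.repeat(32)
const rec = recordUrls('dat://' + key + '/', '/broadcasts/1.json')
console.log(rec._origin === 'dat://' + key)          // true
console.log(rec._url.endsWith('/broadcasts/1.json')) // true
```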
{"from":"substack","message":"lachenmayer: why does pump do that? require('fs') returns {} in browserify","timestamp":1501356805678}
{"from":"substack","message":"anyways published","timestamp":1501356908856}
{"from":"substack","message":"probably pump shouldn't require('fs') at all","timestamp":1501357054902}
{"from":"barnie","message":"browserify by default points to an empty mock for fs","timestamp":1501357949052}
{"from":"barnie","message":"there's an impl for async fs methods though","timestamp":1501357968599}
{"from":"dat-gitter1","message":"(scriptjs) @pfrazee With the database being a mapping of files to archives for persistence, it’ll be interesting to see how performant it is as it develops. If it's all good, I can't see any real problem using it with dat in node since you are not really relying on everything in memory with fake-indexeddb","timestamp":1501358088877}
{"from":"dat-gitter1","message":"(scriptjs) just the mechanics of that lib outside a browser","timestamp":1501358148158}
{"from":"pfrazee","message":"@scriptjs yeah I think for this to be usable in node, we'd need to swap fake-indexeddb with leveldb","timestamp":1501358152499}
{"from":"pfrazee","message":"not too hard to do, I think","timestamp":1501358193202}
{"from":"pfrazee","message":"but yeah I'm curious to see how things go in just the browser first","timestamp":1501358204725}
{"from":"dat-gitter1","message":"(scriptjs) me also just to see how fast queries are","timestamp":1501358220094}
{"from":"pfrazee","message":"yeah","timestamp":1501358223999}
{"from":"pfrazee","message":"I know indexeddb isnt considered the *fastest* but it should be a decent starting point","timestamp":1501358243519}
{"from":"dat-gitter1","message":"(scriptjs) I think the analogy of tables and rows works well for many also. I think something like this together with what @mafintosh is doing on hyperdb gives you choices the sql like experience vs a map reduce experience like pouch/couch in nosql that is schemaless","timestamp":1501358386879}
{"from":"dat-gitter1","message":"(scriptjs) @pfrazee its all good.","timestamp":1501358453693}
{"from":"pfrazee","message":"yeah","timestamp":1501358491293}
{"from":"creationix","message":"Does simply having ZeroTier devices on the lan break dat?","timestamp":1501378369466}
{"from":"creationix","message":"I replaced my laptop's harddrive and installed fresh OSX, grabbed latest beaker and still can't browse dat urls","timestamp":1501378389229}
{"from":"creationix","message":"hmm, might be a beaker thing. CLI dat can clone some repos","timestamp":1501378615585}
{"from":"creationix","message":" I mean desktop dat client","timestamp":1501378645101}
{"from":"creationix","message":"and if I download the repo in dat desktop, then suddenly beaker can find it (using dat desktop as a local peer I assume)","timestamp":1501378714348}
{"from":"creationix","message":"pfrazee: any idea^","timestamp":1501378723150}
{"from":"pfrazee","message":"creationix: you're correct that beaker is locally peering with the dat desktop","timestamp":1501378817757}
{"from":"pfrazee","message":"creationix: that has the signs of a beaker bug. Do hashbase dat URLs work?","timestamp":1501378878767}
{"from":"creationix","message":"pfrazee: nope","timestamp":1501380374785}
{"from":"creationix","message":"not raw dat urls either","timestamp":1501380381754}
{"from":"creationix","message":"same if I host them myself using dathttpd on digital ocean","timestamp":1501381635271}
{"from":"creationix","message":"dat desktop loads them just fine, beaker can only find local peers","timestamp":1501381648639}
{"from":"pfrazee","message":"creationix: sanity check, whats the firewall setting in osx settings?","timestamp":1501382269559}
{"from":"creationix","message":"whatever is default. I literally just installed sierra","timestamp":1501382488050}
{"from":"creationix","message":"pfrazee: yep, firewall off","timestamp":1501382547801}
{"from":"creationix","message":"desktop dat has no problems finding peers","timestamp":1501382555594}
{"from":"pfrazee","message":"creationix: hm. Only way to figure that out is to crack open the DEBUG= sessions","timestamp":1501382613790}
{"from":"creationix","message":"pfrazee: here is with `DEBUG='*'` trying to go to dat://beakerbrowser.com https://gist.github.com/creationix/41106f0c491611125365fa63297486b6","timestamp":1501382909042}
{"from":"pfrazee","message":"creationix: thanks. I'll compare to what I get. Looks like no peers are being emitted by the discovery swarm","timestamp":1501383070238}
{"from":"creationix","message":"hmm, `DEBUG='*'` doesn't work with dat desktop","timestamp":1501383200299}
{"from":"pfrazee","message":"that's a shame","timestamp":1501383217563}
{"from":"creationix","message":"here is one using DAT cli which also works fine https://gist.github.com/creationix/28d1e93e16b875eaa5c691609012b318","timestamp":1501383325291}
{"from":"pfrazee","message":"there's a dns discovery probe that's not occurring in beaker for you","timestamp":1501383411957}
{"from":"pfrazee","message":"are you running from master or a prebuild?","timestamp":1501383421874}
{"from":"creationix","message":"dmg from website","timestamp":1501383449512}
{"from":"creationix","message":"I can try from master to compare","timestamp":1501383455242}
{"from":"pfrazee","message":"I just wanted to confirm some software versions... though that's an unlikely cause","timestamp":1501383502938}
{"from":"creationix","message":"heh, I left `DEBUG='*'` exported. Running npm install is *fun*","timestamp":1501383552881}
{"from":"pfrazee","message":"hah","timestamp":1501383568438}
{"from":"pfrazee","message":"ok I've gotta go but I'll look into that tomorrow","timestamp":1501383585627}
{"from":"creationix","message":"thanks","timestamp":1501383628834}
{"from":"creationix","message":"fwiw, the fresh build from master seems to be working","timestamp":1501383678976}
{"from":"creationix","message":"I've got lots of internet peers for both my library and sites I browse","timestamp":1501383707218}
{"from":"creationix","message":"and still no peers with official beaker binary","timestamp":1501383769024}
{"from":"creationix","message":"hmm, and now beaker from master can't find peers, but it loads content from cache just fine","timestamp":1501383912425}
{"from":"creationix","message":"oh well, g'night","timestamp":1501383921575}
{"from":"barnie","message":"terminology-wise is a dat the same as a dat archive?","timestamp":1501392460776}
{"from":"green-coder","message":"Dat could not find any connections for that link. There may not be any sources online.","timestamp":1501407403212}
{"from":"green-coder","message":"Hi, I tried to follow the website's guide with 'dat clone 0e582a1fb660c43b6082b2c9c2f7eae3547e454e6e6206e37d1377462d93fd71' to download the mac's desktop client but got the message above.","timestamp":1501407425683}
{"from":"TheLink","message":"green-coder: this hash works for me in dat-cli","timestamp":1501408260762}
{"from":"TheLink","message":"hmm, not downloading but it got the metadata","timestamp":1501408396558}
{"from":"green-coder","message":"@TheLink I still do not work for me. Should the user do something special with his firewall ?","timestamp":1501409482642}
{"from":"green-coder","message":"It still does not","timestamp":1501409509298}
{"from":"TheLink","message":"you could try if disabling the firewall helps","timestamp":1501410073418}
{"from":"green-coder","message":"I cannot let my computer go DMZ, that's a little too much","timestamp":1501410934777}
{"from":"TheLink","message":"dat uses port 3282 udp/tcp(?) iirc","timestamp":1501417575450}
{"from":"TheLink","message":"you could also run \"dat doctor 0e582a1fb660c43b6082b2c9c2f7eae3547e454e6e6206e37d1377462d93fd71\" and see what it says","timestamp":1501417642895}
{"from":"TheLink","message":"hmm, \"dat doctor\" without the hash probably","timestamp":1501417727422}
{"from":"green-coder","message":"dat doctor 9ee3a3bc71e0dc26ca2ea93222a3fef708df1284499a71319f9bacffb071832c","timestamp":1501418011136}
{"from":"TheLink","message":"seems like I can connect successfully","timestamp":1501418161757}
{"from":"green-coder","message":"Yes, and I did not touch my firewall. So the problem may simply be the data not being shared on the network.","timestamp":1501418265578}
{"from":"green-coder","message":"or it is, from behind firewalls","timestamp":1501418333594}
{"from":"confiks","message":"I was wondering is it's possibly (in principle) to do 'oblivious transfer' with Dat, so that a peer serving some dataset cannot know the actual contents of the dataset.","timestamp":1501418725789}
{"from":"confiks","message":"I understand that the public key, the dat address is now also the decryption key. I also read that a discovery key, being the hash of the public key, is used for finding peers which have a certain dataset. But as far as I could see, for the actual transfer to happen, the full dat address must be available on both sides.","timestamp":1501418797679}
{"from":"confiks","message":"s/the dat address is now also/the dat address, is also/","timestamp":1501418830361}
{"from":"confiks","message":"Would this be possible in principle, or are there any fundamental problems? I understand that with the content not readable, the integrity of the dataset cannot be verified (using for example the discovery key).","timestamp":1501418916025}
{"from":"confiks","message":"I probably don't really know what I'm talking about, so you may go right ahead and change the framing of the question.","timestamp":1501419234897}
{"from":"pfrazee","message":"https://gist.github.com/pfrazee/bf13db9dea21936af320c512811c2a2b","timestamp":1501428931602}
{"from":"ralphtheninja[m]","message":"wow cool","timestamp":1501429074658}
{"from":"barnie","message":"@karissa: i'm almost done with dat-awesome https://github.com/aschrijver/awesome-dat/blob/fresh/awesome/readme.md","timestamp":1501429377685}
{"from":"barnie","message":"@karissa: can you create a new repo 'awesome' for it in datproject, then I'll PR","timestamp":1501429412546}
{"from":"barnie","message":"its a complete overhaul","timestamp":1501429418912}
{"from":"pfrazee","message":"barnie: that's great work","timestamp":1501429428174}
{"from":"pfrazee","message":"really appreciate that","timestamp":1501429433872}
{"from":"barnie","message":"ya, I think it makes it much clearer where's what","timestamp":1501429451510}
{"from":"pfrazee","message":"I agree","timestamp":1501429461003}
{"from":"barnie","message":"would like more entries in 'Applications' section","timestamp":1501429489190}
{"from":"barnie","message":"I probably missed a bunch","timestamp":1501429516547}
{"from":"pfrazee","message":"we're still a little light on apps","timestamp":1501429591499}
{"from":"pfrazee","message":"tara and I are going to be releasing a bunch this month though","timestamp":1501429603002}
{"from":"barnie","message":"cool","timestamp":1501429611412}
{"from":"pfrazee","message":"I said earlier it's the month of education. It's really more the month of content","timestamp":1501429612109}
{"from":"pfrazee","message":"apps + tutorials","timestamp":1501429616143}
{"from":"barnie","message":"ha ha","timestamp":1501429622155}
{"from":"barnie","message":"comes with the job","timestamp":1501429638747}
{"from":"pfrazee","message":"yep yep","timestamp":1501429651006}
{"from":"barnie","message":";)","timestamp":1501429658486}
{"from":"bret","message":"Can I hang out with you guys today ogd mafintosh ?","timestamp":1501435126024}
{"from":"bret","message":"You in town today?","timestamp":1501435141241}
{"from":"G-Ray","message":"great pfrazee","timestamp":1501444156114}
{"from":"mafintosh","message":"bret: we were away camping","timestamp":1501461128134}
{"from":"bret","message":"Oh nice","timestamp":1501461892766}
{"from":"bret","message":"mafintosh/ogd are you around this week?","timestamp":1501461909337}
{"from":"barnie","message":"mafintosh: on node-on-android..","timestamp":1501487686669}
{"from":"barnie","message":"did some googling. you heard of J2V8?","timestamp":1501487704163}
{"from":"barnie","message":"https://github.com/eclipsesource/J2V8","timestamp":1501487721709}
{"from":"barnie","message":"https://www.infoq.com/presentations/node4j-nodejs-java","timestamp":1501487788424}
{"from":"barnie","message":"currently node 7.4.0 i think. runs on android","timestamp":1501487819945}
{"from":"barnie","message":"commercial product build on top of it: https://tabrisjs.com/","timestamp":1501487905738}
{"from":"FrozenFox","message":"Hello world. Where is the cryptography hiding in the documentation and/or source code? After spending too long searching through github, the most I see is hashing and a few *mentions* to libsodium?","timestamp":1501515091252}
{"from":"barnie","message":"I don't think there is too much documentation on that, but hypercore / hyperdrive use 'sodium-universal' and node's 'crypto..","timestamp":1501515242434}
{"from":"FrozenFox","message":"Evidence? :)","timestamp":1501515309928}
{"from":"FrozenFox","message":"I have at least two suspicions for potential design flaws, but I need code to audit..","timestamp":1501515375087}
{"from":"pfrazee","message":"FrozenFox: https://github.com/mafintosh/hypercore is probably the place to start. It uses https://github.com/sodium-friends/sodium-universal","timestamp":1501515746012}
{"from":"pfrazee","message":"FrozenFox: https://github.com/npmhub/npmhub is really helpful for exploring the dat ecosystem","timestamp":1501515821035}
{"from":"mafintosh","message":"FrozenFox: i can help you navigate as well. What's your concern?","timestamp":1501515950536}
{"from":"FrozenFox","message":"(1) how is an object encrypted? Given sodium, my guess is secret_box, and the key derivation is probably blake2(publickey) ?","timestamp":1501516049848}
{"from":"FrozenFox","message":"(2) what is the layering of encrypting vs signing? If the signature is on the outside, then the public key may be recoverable from the signature. If the signature is on the inside, then the nonce may be changed by anyone who also holds the public key. Similarly, a randomly chosen nonce of length <96bytes isn't considered secure. If this is using the 96-byte nonce, then the security relies on no party repeats the","timestamp":1501516236361}
{"from":"FrozenFox","message":"nonce with different messages. Where these parties include the author (with the private key) and the readers (with the public key).","timestamp":1501516238571}
{"from":"pfrazee","message":"mafintosh: ^","timestamp":1501516286548}
{"from":"FrozenFox","message":"I am not 100% sure about the public key recoverability of ed25519 signatures. ##crypto has not yielded an immediate result.","timestamp":1501516705836}
{"from":"FrozenFox","message":"pfrazee: I thought I said this already, apparently not, thanks for the links. I found the keygen and signatures, but no encryption still.","timestamp":1501517272624}
{"from":"pfrazee","message":"FrozenFox: sure thing, I couldn't tell you where to look for that. When maf gets back he can tell you more","timestamp":1501517312048}
{"from":"pfrazee","message":"FrozenFox: tara recognized your name - are you from austin?","timestamp":1501517330019}
{"from":"FrozenFox","message":"s/in/ralia","timestamp":1501517347059}
{"from":"FrozenFox","message":"pfrazee, tara: ^ = Australia. :)","timestamp":1501518128050}
{"from":"pfrazee","message":"FrozenFox: oh hah that took me a second","timestamp":1501518144946}
{"from":"pfrazee","message":"ok cool, must be a different frozen fox","timestamp":1501518152063}
{"from":"FrozenFox","message":"OR a recall, although probably another.","timestamp":1501518181186}
{"from":"FrozenFox","message":"mis-recall","timestamp":1501518184651}
{"from":"taravancil","message":"nice! i mentioned it to pfrazee because i know of someone who goes by frozen fox and is also interested in crypto stuff","timestamp":1501518189261}
{"from":"FrozenFox","message":":)","timestamp":1501518240126}
{"from":"mafintosh","message":"FrozenFox: all encryption is in hypercore-protocol","timestamp":1501518554619}
{"from":"mafintosh","message":"FrozenFox: its using xsalsa20 (non authenticated as the underlying protocol has integrity checks)","timestamp":1501518602949}
{"from":"mafintosh","message":"And we never reuse nonces","timestamp":1501518614675}
{"from":"FrozenFox","message":"mafintosh: Thanks.","timestamp":1501518617873}
{"from":"FrozenFox","message":"Signatures inside xsalsa?","timestamp":1501518628919}
{"from":"mafintosh","message":"Ya encryption comes as the last step","timestamp":1501518652235}
{"from":"mafintosh","message":"On the entire protocol stream","timestamp":1501518661537}
{"from":"FrozenFox","message":"Good. But... Unauthenticated encryption may be tampered, which can sometimes lead to full recovery of the message. I strongly advise against using xsalsa20 without poly1305.","timestamp":1501518722071}
{"from":"mafintosh","message":"Ya it can be tampered with but the underlying data will fail it's merkle tree checks then","timestamp":1501518808115}
{"from":"FrozenFox","message":"Yes, but when or how it fails if it is not constant time, will cause issues.","timestamp":1501518837496}
{"from":"FrozenFox","message":"Or similar side channels that you may not be defending against.","timestamp":1501518854764}
{"from":"FrozenFox","message":"Is any subset of the packet valid if standalone? I.e. a node of the tree may be a valid root if interpreted as one.","timestamp":1501518888252}
{"from":"mafintosh","message":"Nope","timestamp":1501518908271}
{"from":"mafintosh","message":"See the crypto.js file in hypercore","timestamp":1501518934946}
{"from":"mafintosh","message":"We guard against it there (see the preimage note)","timestamp":1501518972419}
{"from":"FrozenFox","message":"That helps, but you are stepping on thin ice for a minor performance benefit with possibly degraded security.","timestamp":1501518976635}
{"from":"FrozenFox","message":"Or probably* even","timestamp":1501519002785}
{"from":"FrozenFox","message":"I'm not having luck loading the mafintosh/hypercore-protocol, \"page is taking too long to load\"","timestamp":1501519024898}
{"from":"mafintosh","message":"Can you explain why it's degraded?","timestamp":1501519033627}
{"from":"barnie","message":"FrozenFox: there is also a community contrib that adds private key encryption: https://github.com/jayrbolton/dat-pki","timestamp":1501519034319}
{"from":"barnie","message":"wip","timestamp":1501519065311}
{"from":"mafintosh","message":"FrozenFox: brb","timestamp":1501519177823}
{"from":"barnie","message":"github has some issues, here too","timestamp":1501519191701}
{"from":"FrozenFox","message":"mafintosh: I am not the best to describe why, but I can assure you that the cost of a symmetric MAC is significantly cheaper than analysing the risk of permitting malleable ciphertexts.","timestamp":1501519343420}
{"from":"FrozenFox","message":"Cryptography is allowed to have some redundancy to make it easier to reason with the security of a protocol and correctly analyse the segmentation of the properties expected from each component.","timestamp":1501519596003}
{"from":"FrozenFox","message":"*NEVER* use unauthenticated encryption. If even a single bit may be flipped by an adversary without detection, the ciphertext is malleable, which opens it to many potential attacks.","timestamp":1501519644213}
{"from":"mafintosh","message":"FrozenFox: yes i understand unauthenticated encryption (it was a consisous choice)","timestamp":1501519674675}
{"from":"mafintosh","message":"FrozenFox: you should look at our paper. hypercore/dat is designed to run completely without an encryption layer. we just add it for privacy","timestamp":1501519709369}
{"from":"FrozenFox","message":"One that I fear may bite you down the line, for no real performance gain. If you need the performance, do not use javascript...","timestamp":1501519713414}
{"from":"mafintosh","message":"its not about performance","timestamp":1501519725591}
{"from":"mafintosh","message":"by adding macs, we need extra message framing, and the 16 byte overhead per message is *a lot* for us (we send lots of small messages)","timestamp":1501519773887}
{"from":"mafintosh","message":"and our unencrypted messages are *fully* authenticated","timestamp":1501519795787}
{"from":"mafintosh","message":"and its a first class feature that you don't need to extra encryption layer","timestamp":1501519806738}
{"from":"FrozenFox","message":"I mean, if you choose unauthenticated encryption over authenticated encryption; you have two primary motivations (1) faster code, skipping the auth. (2) less code, but you're already depending on libsodium and poly1305 is smaller than blake2.","timestamp":1501519808228}
{"from":"FrozenFox","message":"Or I guess a (3) no 16 bytes for mac, but tiny, even for tiny messages.","timestamp":1501519862228}
{"from":"mafintosh","message":"our tiny messages are 1-2 bytes","timestamp":1501519879230}
{"from":"FrozenFox","message":"Oh. Really tiny!","timestamp":1501519892210}
{"from":"mafintosh","message":"and we send one of those for each piece of data downloaded","timestamp":1501519895402}
{"from":"mafintosh","message":"FrozenFox: for any other protocol i'd agree with you","timestamp":1501519942103}
{"from":"FrozenFox","message":"Considering that you already rely on at least UDP, the 1-2 byte messages aren't going to grow significantly with 16 bytes.","timestamp":1501520016721}
{"from":"FrozenFox","message":"I mean, UDP/IP already outweigh the 16 bytes","timestamp":1501520030106}
{"from":"FrozenFox","message":"8 bytes for just ipv6","timestamp":1501520035736}
{"from":"FrozenFox","message":"16*","timestamp":1501520038943}
{"from":"FrozenFox","message":"4 bytes for v4","timestamp":1501520048464}
{"from":"barnie","message":"those 16 bytes shouldn't matter anyway even if msg is now 2 byte. how many are you sending? how can that be problematic?","timestamp":1501520120941}
{"from":"mafintosh","message":"one per chunk","timestamp":1501520146726}
{"from":"FrozenFox","message":"^","timestamp":1501520147391}
{"from":"blahah","message":"the chatter in the protocol is already a bottleneck on my connection","timestamp":1501520155510}
{"from":"mafintosh","message":"this is something we profiled as an actual bottleneck","timestamp":1501520168948}
{"from":"mafintosh","message":"not something i'm making up :)","timestamp":1501520177472}
{"from":"barnie","message":"ok. thx. good to know","timestamp":1501520193678}
{"from":"FrozenFox","message":"Why/how do you have messages so tiny as 1-2 bytes?","timestamp":1501520200342}
{"from":"FrozenFox","message":"That sounds like a poor protocol alone.","timestamp":1501520218742}
{"from":"mafintosh","message":"it's similar to bittorrents have messages","timestamp":1501520221066}
{"from":"mafintosh","message":"you tell other people in the swarm which pieces you data you aquired","timestamp":1501520234509}
{"from":"mafintosh","message":"(this is all in the paper)","timestamp":1501520236832}
{"from":"FrozenFox","message":"dat-paper.pdf ?","timestamp":1501520296642}
{"from":"barnie","message":"is there some reading on _when_ it becomes problematic, i.e. how many peers, etc?","timestamp":1501520298124}
{"from":"FrozenFox","message":"dat-paper.md ?","timestamp":1501520311873}
{"from":"mafintosh","message":"2 peers, many small chunks","timestamp":1501520316332}
{"from":"barnie","message":"hmm","timestamp":1501520341567}
{"from":"FrozenFox","message":"This sounds like an ineffecient acknowledgment ?","timestamp":1501520350470}
{"from":"mafintosh","message":"similar to bt","timestamp":1501520388475}
{"from":"mafintosh","message":"https://datproject.org/paper","timestamp":1501520388581}
{"from":"FrozenFox","message":"datproject/docs/papers/dat-paper.md?","timestamp":1501520420375}
{"from":"mafintosh","message":"see my link","timestamp":1501520430538}
{"from":"mafintosh","message":"yea the repo is prob that","timestamp":1501520438597}
{"from":"FrozenFox","message":"I prefer .md","timestamp":1501520469275}
{"from":"mafintosh","message":"ok","timestamp":1501520472473}
{"from":"mafintosh","message":"anyways, the choice of xsalsa20 is not an accident as i said. the only thing the encryption adds is privacy. all data integrity happens in the protocol underneath","timestamp":1501520575746}
{"from":"mafintosh","message":"so flipping a bit wont affect any security","timestamp":1501520643174}
{"from":"FrozenFox","message":"For the 1-2 byte messages, you have a hash on these?","timestamp":1501520646674}
{"from":"FrozenFox","message":"Where are these 1-2 byte messages in the paper?","timestamp":1501520667235}
{"from":"mafintosh","message":"no","timestamp":1501520667592}
{"from":"mafintosh","message":"it's a hint the other peer sends you that they might have the data","timestamp":1501520697406}
{"from":"FrozenFox","message":"What happens if I tamper with these 1-2 byte messages?","timestamp":1501520699687}
{"from":"mafintosh","message":"nothing","timestamp":1501520708555}
{"from":"mafintosh","message":"a peer will send you a request for the data","timestamp":1501520718406}
{"from":"mafintosh","message":"and you wont be able to respond","timestamp":1501520726425}
{"from":"mafintosh","message":"and the peer will disconnect from you","timestamp":1501520733486}
{"from":"FrozenFox","message":"I don't like this still. :|","timestamp":1501520764177}
{"from":"mafintosh","message":"authenticated encryption wont change *any* of that","timestamp":1501520784808}
{"from":"mafintosh","message":"you are talking to people you don't trust","timestamp":1501520795428}
{"from":"FrozenFox","message":"But it seems I cannot convince you otherwise, and my primary concern for signature-based public-key recovery has been resolved in ##crypto as probably not applicable to ed25519.","timestamp":1501520799564}
{"from":"ogd","message":"mafintosh: check this out https://developers.google.com/nearby/connections/overview","timestamp":1501520873838}
{"from":"mafintosh","message":"FrozenFox: this is something we thought about a lot, but you have to show me a concrete benefit in our protocol for this to convice me","timestamp":1501520948154}
{"from":"mafintosh","message":"*convince","timestamp":1501520950262}
{"from":"mafintosh","message":"ogd: sweet","timestamp":1501520951518}
{"from":"FrozenFox","message":"mafintosh: Good luck with #dat! I must admit that using the public key for symmetric encryption is a neat idea. You are pre-hashing the curve point before using it as a key right?","timestamp":1501521037618}
{"from":"FrozenFox","message":"With a different hash than used for indexing*","timestamp":1501521048803}
{"from":"mafintosh","message":"FrozenFox: yea","timestamp":1501521078270}
{"from":"mafintosh","message":"FrozenFox: and thanks :) really appreciate the input","timestamp":1501521084415}
{"from":"barnie","message":"mafintosh: about node-on-android.. you looked at J2V8? https://github.com/mafintosh/node-on-android/issues/1#issuecomment-319039616","timestamp":1501521215801}
{"from":"FrozenFox","message":"One final suggestion. You might want to take a look into human pronouncable and memorable phrases. For my own project, with some similar attributes, I'll prefer to use diceware wordlists like https://eff.org/dice for *at least* for ease of use. I.e. it is easier to type words across computers, or speak over the phone, than 58 hex characters.","timestamp":1501521243968}
{"from":"FrozenFox","message":"Or write down, or remember.","timestamp":1501521254090}
{"from":"mafintosh","message":"FrozenFox: nice! is there a node module for that?","timestamp":1501521284878}
{"from":"mafintosh","message":"barnie: ya, i did","timestamp":1501521295860}
{"from":"mafintosh","message":"barnie: but i want just pure google v8","timestamp":1501521303260}
{"from":"mafintosh","message":"and it was surprisingly easy to build + embed","timestamp":1501521318491}
{"from":"barnie","message":"want to avoid the jni","timestamp":1501521320557}
{"from":"mafintosh","message":"why?","timestamp":1501521326058}
{"from":"FrozenFox","message":"No idea, but there is probably a password generator that uses the word lists. But that'll probably pick a random element, and not encode/decode from the base 6^5","timestamp":1501521326642}
{"from":"ralphtheninja[m]","message":"I turn my back .. then 150 new messages :)","timestamp":1501521349507}
{"from":"FrozenFox","message":"mafintosh: I don't do anything with node. :)","timestamp":1501521352473}
{"from":"mafintosh","message":"ralphtheninja[m]: like how you are on us time","timestamp":1501521408827}
{"from":"FrozenFox","message":"mafintosh: An important property here is that as an encoding, it may be translated between languages and representations freely. So you could accept multiple formats and wordlists/languages.","timestamp":1501521448008}
{"from":"mafintosh","message":"FrozenFox: yea i'm into this idea","timestamp":1501521484931}
{"from":"mafintosh","message":"emilbayes was talking about something similar","timestamp":1501521492357}
{"from":"FrozenFox","message":"mafintosh: You could use larger lists or base2-friendly lists, but then you'll need to construct and manage them. I've decided for my projects, that I may as well just adopt the existing work from the eff dice list and just accept the 6^5 base.","timestamp":1501521545391}
{"from":"emilbayes","message":"FrozenFox: mafintosh Here is the gist of it","timestamp":1501521547822}
{"from":"emilbayes","message":"https://github.com/emilbayes/mindvault","timestamp":1501521548351}
{"from":"FrozenFox","message":"Github too unreliable today. I'll check it out tomorrow - and even stay in this channel for now :)","timestamp":1501521603459}
{"from":"FrozenFox","message":"emilbayes: Could you describe your mindvault while github doesn't load it? :)","timestamp":1501521651907}
{"from":"emilbayes","message":"FrozenFox: It's just what you said. EFF diceware, select your entropy level, use email as global salt, hash with strong kdf (argon2i), generate keypair :)","timestamp":1501521657173}
{"from":"emilbayes","message":"takes additional appid to further partition the keyspace, so (password, email, appId) is universally unique","timestamp":1501521745146}
{"from":"emilbayes","message":"Here's the EFF wordlist: https://www.npmjs.com/package/eff-diceware-passphrase","timestamp":1501521791530}
{"from":"FrozenFox","message":"emilbayes: Fun. I have a similar project but much more featureful. Key-value encrypted datastore and hierarchical key derivation for both blinding-based and distinct keys. (Or as silly blockchain people like to say, \"non-hardened and hardened\")","timestamp":1501521794184}
{"from":"emilbayes","message":"actually checked today, and with brotli you can compress the wordlist to 21kb","timestamp":1501521809973}
{"from":"FrozenFox","message":"If you have a slow derivation step in the graph, then it may be cached. I.e. RSA key generation is slow, but can be deterministic. Caching it in the key-value store is cheaper (and safer) than regenerating it.","timestamp":1501521858236}
{"from":"ogd","message":"hey crypto ppl we need to revisit this important research: https://github.com/pfrazee/base-emoji/issues/7","timestamp":1501521864742}
{"from":"ogd","message":":)","timestamp":1501521867848}
{"from":"FrozenFox","message":"Or a sub-graph where another argon2i/scrypt/pbkdf2 call occurs as a child, unlocking the parent which would derive the phrase, can also unlock the cached key, skipping the KDF cost for children.","timestamp":1501521923639}
{"from":"emilbayes","message":"ogd: that's not math :p","timestamp":1501521974981}
{"from":"ogd","message":"haha","timestamp":1501521981969}
{"from":"FrozenFox","message":"emilbayes: I wonder what the rust-written state machine automata (or whatever it is called) would compress it to. (Convert it to a compact and fuzzy queryable state machine / graph)","timestamp":1501522020164}
{"from":"emilbayes","message":"FrozenFox: I have no idea what any of these words mean :p","timestamp":1501522037636}
{"from":"emilbayes","message":"FrozenFox: Why would you want to cache the keys?","timestamp":1501522125243}
{"from":"emilbayes","message":"The whole point is that you generate them when you need them, so you can use you keys regardless of device :)","timestamp":1501522166904}
{"from":"FrozenFox","message":"emilbayes: My variant of what you described is a stateless key derivation framework that can derive passwords, keys, and encrypt arbitrary messages (deterministically, no need to fear or manage nonces); with an optional key-value store which may double as a cache.","timestamp":1501522196036}
{"from":"FrozenFox","message":"One key may be for SSH, or PGP.","timestamp":1501522210993}
{"from":"emilbayes","message":"FrozenFox: RE hierarchical keys, then we had a pretty interesting discussion a couple of days back about an extension to ed25519 where you can generate child keys that can be proven to be derive from a master key, without having access to the master key when generating them","timestamp":1501522236469}
{"from":"pfrazee","message":"ogd: I tried a couple of other pubkey visualizer concepts https://github.com/pfrazee/pubkey-avis https://github.com/pfrazee/bezier-signatures","timestamp":1501522297011}
{"from":"emilbayes","message":"FrozenFox: Yeah okay, so that's why I made this module too ^^ Wanted to be able to stretch weak key material so I could use it with the libsodium primitives","timestamp":1501522307268}
{"from":"FrozenFox","message":"emilbayes: Prove the master is a parent by recording the scalars multiplied to make the children?","timestamp":1501522349557}
{"from":"pfrazee","message":"ogd: I think, if I were to try something new, I'd create a cartoon scene and use the pubkey values to customize it. What color is the hat, is it a boy or a girl in the scene, etc. Things that people would really easily remember","timestamp":1501522361531}
{"from":"FrozenFox","message":"Or even the multiple of the children's scalars, as they commute.","timestamp":1501522362881}
{"from":"ogd","message":"wow i read the last two messages above this one at the same time and got confused as to why the children in the picture had scalars","timestamp":1501522401845}
{"from":"FrozenFox","message":"I'm not sure how this would be done with ed25519, but I certainly know it for curve25519, which is probably the same way. But care must be taken with regard to reusing dh + signatures, but that is already the case without hierarchical keys. :)","timestamp":1501522411453}
{"from":"emilbayes","message":"pfrazee: There's some old sbot discussions about dominic trying to do the same as your pubkey-avis, but people tended to like emoji more because they're instantly familiar and you associate them with something emotional, while abstract art is hard to remember","timestamp":1501522412212}
{"from":"pfrazee","message":"ogd: lol","timestamp":1501522424483}
{"from":"pfrazee","message":"emilbayes: yeah I agree, the emojis get closer to what I think is the right idea, but theyre still *very long* strings","timestamp":1501522444501}
{"from":"FrozenFox","message":"emilbayes: Re caching keys. RSA keygen is slow, so caching it by encrypting it with a key - derived alongside the seed used to generate the RSA key - allows you to skip the delay of keygen.","timestamp":1501522535249}
{"from":"emilbayes","message":"pfrazee: yeah alright, I don't know what the probability of generating a colliding prefix and suffix for a pubkey is. I'm just thinking with the way that commit hashes usually are abbreviated too","timestamp":1501522539231}
{"from":"emilbayes","message":"that maybe you only want to show the first 3 and last 3 emoji and still get good enough security","timestamp":1501522586666}
{"from":"ralphtheninja[m]","message":"mafintosh: wasn't a negative comment, my mind is blown on this channel almost every day :)","timestamp":1501522588960}
{"from":"FrozenFox","message":"emilbayes: Or for child graphs, where you may derive a passphrase for a new instance, you can cache the slow argon2i/scrypt call, with a key derived alongside the phrase. Again, just to skip the delay. No weaker.","timestamp":1501522590442}
{"from":"pfrazee","message":"emilbayes: oh sure. And by that metric, emojis are actually *much* better than hex","timestamp":1501522598241}
{"from":"FrozenFox","message":"But the cache is entirely optional.","timestamp":1501522598913}
{"from":"pfrazee","message":"emilbayes: because each emoji \"digit\" (in the module I used) encodes 8 bits, whereas a hex digit is only 4 bits","timestamp":1501522629642}
{"from":"FrozenFox","message":"I'm out for now. Happy hacking to all #dat'ters. o/","timestamp":1501522733373}
{"from":"emilbayes","message":"seems like there are 481 apple emoji","timestamp":1501522733718}
{"from":"ogd","message":"cya FrozenFox","timestamp":1501522750550}
{"from":"pfrazee","message":"FrozenFox: o/","timestamp":1501522775946}
{"from":"emilbayes","message":"so yeah, you can fit 8 bits into each emoji","timestamp":1501522844544}
{"from":"emilbayes","message":"or 8.9 bits","timestamp":1501522850387}
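The back-of-envelope math above checks out: with 481 distinct emoji, each symbol carries log2(481) ≈ 8.9 bits versus 4 bits per hex digit, so a 256-bit public key needs 64 hex digits but only 29 emoji. A quick sketch (the numbers are the ones from the discussion, not from any particular encoder):

```javascript
// Bits of entropy carried by one symbol of a given alphabet.
const bitsPerSymbol = (alphabetSize) => Math.log2(alphabetSize)

const hexBits = bitsPerSymbol(16)     // 4 bits per hex digit
const emojiBits = bitsPerSymbol(481)  // ~8.91 bits per emoji

// Symbols needed to encode a 256-bit (32-byte) public key:
const symbolsFor = (bits, perSymbol) => Math.ceil(bits / perSymbol)

console.log(symbolsFor(256, hexBits))   // 64 hex digits
console.log(symbolsFor(256, emojiBits)) // 29 emoji
```

(The base-emoji module mentioned earlier uses a 256-emoji alphabet, i.e. exactly 8 bits per symbol, which gives 32 emoji for the same key.)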
{"from":"mafintosh","message":"okay, enough mobile hacking for now","timestamp":1501546111744}
{"from":"dat-gitter1","message":"(sdockray) mafintosh: hyperdb!","timestamp":1501546305733}
{"from":"mafintosh","message":"ya exactly","timestamp":1501546321410}
{"from":"mafintosh","message":":D","timestamp":1501546326732}
{"from":"yoshuawuyts","message":":D","timestamp":1501546333369}
{"from":"dat-gitter1","message":"(sdockray) :)","timestamp":1501546343429}
{"from":"pfrazee","message":"hypermobilelocalblockchain tokens","timestamp":1501549577510}
{"from":"pfrazee","message":"I've been dropping that phrase in conversations at coffee shops to see if a VC appears","timestamp":1501549618444}
{"from":"dat-gitter1","message":"(lukeburns) lol","timestamp":1501551046047}
{"from":"beardicus","message":"you have to say it three times in a row for that to happen.","timestamp":1501554158738}
{"from":"cblgh","message":"ohhhhh hyperdb hacking yessssssssss","timestamp":1501576962028}
{"from":"FrozenFox","message":"mafintosh: Hmm.. You use xsalsa20, where are you sending the nonce? Or are you counting from a pre-established random nonce?","timestamp":1501578385849}
{"from":"FrozenFox","message":"s/where//","timestamp":1501578400345}
{"from":"FrozenFox","message":"If you send the nonce, then you may as well send a MAC. Or ignore the nonce and use a DAE/SIV.","timestamp":1501578460933}
{"from":"mafintosh","message":"FrozenFox: nonce is the first message ever sent over the stream","timestamp":1501579833786}
{"from":"FrozenFox","message":"What?","timestamp":1501579860206}
{"from":"FrozenFox","message":"The nonce for an xsalsa20, isn't encrypted with xsalsa20.","timestamp":1501579870671}
{"from":"mafintosh","message":"FrozenFox: no the nonce is sent unencrypted as the first message of the stream","timestamp":1501579912463}
{"from":"mafintosh","message":"It is the only unencrypted message","timestamp":1501579927675}
{"from":"FrozenFox","message":"Right. So as I said, a counting random nonce.","timestamp":1501579940000}
{"from":"FrozenFox","message":"Are you using TCP or UDP?","timestamp":1501579946566}
{"from":"mafintosh","message":"Tcp and utp","timestamp":1501579953853}
{"from":"mafintosh","message":"Utp is basically tcp over udp","timestamp":1501579972840}
{"from":"FrozenFox","message":"This wouldn't be an issue for TCP due to being reliable and ordered. But UDP may reorder messages.","timestamp":1501579987708}
{"from":"mafintosh","message":"No utp handles that","timestamp":1501579999906}
{"from":"FrozenFox","message":"Does utp correct order and ensure delivery?","timestamp":1501580002689}
{"from":"mafintosh","message":"Yup","timestamp":1501580012368}
{"from":"FrozenFox","message":"And what overheads does utp add on top of these udp packets?","timestamp":1501580024807}
{"from":"mafintosh","message":"Unsure, a tiny header","timestamp":1501580161514}
{"from":"FrozenFox","message":"<16 bytes at least :P","timestamp":1501580174581}
{"from":"FrozenFox","message":"Okay. Carry on with your weird protocol ;)","timestamp":1501580192960}
{"from":"mafintosh","message":"It's not just about the 16 bytes","timestamp":1501580435889}
{"from":"mafintosh","message":"Using secretboxes requires framing which adds more complexity","timestamp":1501580464526}
{"from":"mafintosh","message":"It all adds up","timestamp":1501580470114}
{"from":"mafintosh","message":"For no gain","timestamp":1501580476410}
{"from":"mafintosh","message":"Anywho I'm off to bed","timestamp":1501580533879}
{"from":"FrozenFox","message":"Not no gain... It helps with at least auditing and reasoning with the protocol and the properties of each component.","timestamp":1501580556377}
{"from":"mafintosh","message":"You should audit from the other way","timestamp":1501580611333}
{"from":"mafintosh","message":"Look at our data sharing protocol (before encryption)","timestamp":1501580624615}
{"from":"FrozenFox","message":"The tiny complexity for the framing isn't for *you* to manage. sodium manages it for you.","timestamp":1501580639827}
{"from":"mafintosh","message":"No I need framing when sending those over the wire","timestamp":1501580664866}
{"from":"mafintosh","message":"In xsalsa i dont","timestamp":1501580683091}
{"from":"mafintosh","message":"As it just uses the underlying framing","timestamp":1501580698081}
{"from":"FrozenFox","message":"Yes, but *you* don't manage that. Sodium does.","timestamp":1501580725553}
{"from":"mafintosh","message":"But review the data integrity in the unencrypted protocol","timestamp":1501580744416}
{"from":"FrozenFox","message":"Link?","timestamp":1501580774027}
{"from":"mafintosh","message":"See that, that guarantees \"safe\" data transfer between untrusted parties","timestamp":1501580779142}
{"from":"FrozenFox","message":"raw .md link preferred?","timestamp":1501580780860}
{"from":"mafintosh","message":"Its the paper again","timestamp":1501580790241}
{"from":"mafintosh","message":"All in there","timestamp":1501580793979}
{"from":"FrozenFox","message":"Hm","timestamp":1501580795847}
{"from":"mafintosh","message":"Anything that isn't clear there I'm happy to answer :)","timestamp":1501580832385}
{"from":"FrozenFox","message":"2.1?","timestamp":1501580859236}
{"from":"mafintosh","message":"2.1?","timestamp":1501580873677}
{"from":"FrozenFox","message":"section, content integrity?","timestamp":1501580883071}
{"from":"mafintosh","message":"Prob ya","timestamp":1501580913390}
{"from":"mafintosh","message":"I'd read the entire thing if I was you","timestamp":1501580926372}
{"from":"FrozenFox","message":"With regard to \"bad peers\", the owner of the private key can still be a bad peer. I.e. They may disregard your desired versioning chain and create logical branches.","timestamp":1501580940630}
{"from":"mafintosh","message":"Ya but a signing peer always signs the root of the merkle tree","timestamp":1501581001632}
{"from":"mafintosh","message":"So two peers will know this is happening","timestamp":1501581017124}
{"from":"mafintosh","message":"And have proof the peer is bad","timestamp":1501581025043}
{"from":"mafintosh","message":"(two signed inconsistent roots)","timestamp":1501581046244}
{"from":"FrozenFox","message":"Yes they will, but do you catch this case and complain, or let the user pick a branch?","timestamp":1501581056852}
{"from":"mafintosh","message":"Yes we catch it","timestamp":1501581085885}
{"from":"mafintosh","message":"Of course","timestamp":1501581092387}
{"from":"mafintosh","message":"The dat is declared \"bad\" if that happens","timestamp":1501581111873}
{"from":"FrozenFox","message":"Good :)","timestamp":1501581128701}
{"from":"barnie","message":"hi all, fyi: i added a 'culture swot' to my dat project investigations: https://github.com/datproject/discussions/issues/58","timestamp":1501581293788}
{"from":"mafintosh","message":"Off to bed for reals this time","timestamp":1501581449521}
{"from":"FrozenFox","message":"o/","timestamp":1501581458417}
{"from":"mafintosh","message":"FrozenFox: if you mention me other protocol questions ill get to them later","timestamp":1501581474823}
{"from":"barnie","message":"o/","timestamp":1501581476981}
{"from":"FrozenFox","message":"Sure","timestamp":1501581487183}
{"from":"mafintosh","message":"ogd: just converted 106F to Celsius and now I think we need to leave asap","timestamp":1501602969999}
{"from":"G-Ray","message":"Is it possible to know how many peers share specific files (or index range) in a dat ?","timestamp":1501603835229}
{"from":"jhand","message":"G-Ray: kind of. Each peer object (e.g. archive.content.peers[0]) has a remoteBitfield property. You can traverse that to see what blocks of data that peer has.","timestamp":1501605783923}
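The traversal jhand suggests can be sketched. The real `remoteBitfield` on a hypercore peer is a sparse bitfield object, not a plain array — the array here is a stand-in so the example is self-contained — but the shape of the answer is the same: for a block range, count how many connected peers advertise every block in it.

```javascript
// Model of jhand's suggestion: each peer object exposes a remoteBitfield
// telling you which blocks that peer claims to have. Real hypercore peers
// expose this as a sparse bitfield; a plain boolean array stands in here.
const peers = [
  { id: 'a', remoteBitfield: [true, true, true, false] },
  { id: 'b', remoteBitfield: [true, true, true, true] },
  { id: 'c', remoteBitfield: [false, false, true, true] }
]

// How many peers have every block in [start, end)?
function peersWithRange (peers, start, end) {
  return peers.filter(peer => {
    for (let i = start; i < end; i++) {
      if (!peer.remoteBitfield[i]) return false
    }
    return true
  }).length
}

console.log(peersWithRange(peers, 0, 3)) // 2 (peers a and b)
console.log(peersWithRange(peers, 2, 4)) // 2 (peers b and c)
```

Note this only counts currently connected peers — dat has no global view of everyone who holds a block.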
{"from":"ralphtheninja[m]","message":"mafintosh: 106F .. holy shit","timestamp":1501606650785}
{"from":"pfrazee","message":"yall need to come to texas","timestamp":1501608322464}
{"from":"pfrazee","message":"only place that beats us is arizona","timestamp":1501608347482}
{"from":"pfrazee","message":"when it comes to the heat","timestamp":1501608352507}
{"from":"ralphtheninja[m]","message":"don't understand how people can work in that heat, my brain just doesn't function .. need cool nights","timestamp":1501609294052}
{"from":"mafintosh","message":"ralphtheninja[m]: its the scandinavian curse","timestamp":1501609698004}
{"from":"mafintosh","message":"pfrazee: i've been to texas!!","timestamp":1501609705983}
{"from":"mafintosh","message":"as a teenager","timestamp":1501609712300}
{"from":"pfrazee","message":"mafintosh: oh yeah?","timestamp":1501609722196}
{"from":"mafintosh","message":"ya houston","timestamp":1501609744721}
{"from":"mafintosh","message":"and the surrounding area","timestamp":1501609755136}
{"from":"mafintosh","message":"went to a summer camp as well","timestamp":1501609763005}
{"from":"mafintosh","message":"one of the weirdest experiences i've ever had","timestamp":1501609774706}
{"from":"mafintosh","message":"i got to fire a gun there for the first time!","timestamp":1501609781970}
{"from":"mafintosh","message":"but we weren't allowed to go to the bathroom alone... for safety reasons","timestamp":1501609801167}
{"from":"mafintosh","message":"and boys and girls were segregated, much culture shock","timestamp":1501609815831}
{"from":"ralphtheninja[m]","message":"bathrooms can be dangerous :)","timestamp":1501609922406}
{"from":"ralphtheninja[m]","message":"I've been following the bitcoin hardfork that's happening today .. watching a live stream for 5 hours","timestamp":1501609965614}
{"from":"pfrazee","message":"mafintosh: hah yeah that sounds about right","timestamp":1501610233071}
{"from":"pfrazee","message":"https://twitter.com/pfrazee/status/892442105964113921","timestamp":1501610234901}
{"from":"millette","message":"ralphtheninja[m], what, no wet paint to watch?","timestamp":1501610433865}
{"from":"pfrazee","message":"millette: haha no jokee","timestamp":1501610464797}
{"from":"jhand","message":"I'd recommend this title: http://www.imdb.com/title/tt5375100/.","timestamp":1501610552864}
{"from":"jhand","message":"available for free online! https://www.youtube.com/watch?v=Tpk4q_Zo2ws","timestamp":1501610579449}
{"from":"millette","message":"9.4, nice","timestamp":1501610586085}
{"from":"millette","message":"stealing warhol's thunder","timestamp":1501610603818}
{"from":"millette","message":"great \"Created by director Charlie Lyne to force the British Board of Film Classification to watch many hours of paint drying to protest the practices of the British censors.\"","timestamp":1501610645903}
{"from":"jhand","message":"yea hah. It was a great idea: https://www.kickstarter.com/projects/charlielyne/make-the-censors-watch-paint-drying/description","timestamp":1501610683404}
{"from":"millette","message":"ratings beat the hell out of www.imdb.com/title/tt0196530/","timestamp":1501610769627}
{"from":"ralphtheninja[m]","message":"millette: not today :)","timestamp":1501610990069}
{"from":"mafintosh","message":"pfrazee: whoa great work","timestamp":1501611057269}
{"from":"pfrazee","message":"mafintosh: thanks man","timestamp":1501611068799}
{"from":"ogd","message":"pfrazee: does injestdb replicate with the app?","timestamp":1501611102720}
{"from":"ogd","message":"pfrazee: or is the idea you just use it to do views but you store primary data in dat?","timestamp":1501611133841}
{"from":"ogd","message":"pfrazee: cause either way you should put that in the readme :)","timestamp":1501611158807}
{"from":"pfrazee","message":"ogd: the latter, and will do","timestamp":1501611204937}
{"from":"pfrazee","message":"ogd: https://github.com/beakerbrowser/injestdb#how-it-works does this answer that and I need to put it higher, or do I need better explanation, or both?","timestamp":1501611238507}
{"from":"ogd","message":"pfrazee: i read 'InjestDB abstracts over the DatArchive API to provide a simple database-like interface. ' but it didnt really describe it for me","timestamp":1501611270234}
{"from":"pfrazee","message":"yeah","timestamp":1501611277695}
{"from":"pfrazee","message":"ogd: ok I'll work on that","timestamp":1501611288274}
{"from":"ogd","message":"pfrazee: if you call InjestTable#put, does it update the value in the dat?","timestamp":1501611308533}
{"from":"pfrazee","message":"ogd: yes, mutations are persisted to dat archives","timestamp":1501611336903}
{"from":"ogd","message":"pfrazee: how does it store the IDB in the dat?","timestamp":1501611359674}
{"from":"TheLink","message":"I gues hyperdb and injestdb serve different purposes?","timestamp":1501611377342}
{"from":"TheLink","message":"*guess","timestamp":1501611380550}
{"from":"pfrazee","message":"ogd: it doesnt. I'll write up a better explanation but I'll explain in detail here","timestamp":1501611387602}
{"from":"pfrazee","message":"TheLink: yes, I'll explain that now too","timestamp":1501611392794}
{"from":"TheLink","message":"cool :)","timestamp":1501611410268}
{"from":"pfrazee","message":"Injest sits on top of dat archives. It duplicates data it's handling into indexeddb, and that duplicated data is a throwaway cache-- it can be reconstructed at any time from the dat archives","timestamp":1501611439340}
{"from":"pfrazee","message":"Injest treats individual files in the dat archive as individual records in a table. As a result, there's a direct mapping for each table to a folder of .json files. Eg, if you had a 'tweets' table, it would map to the `/tweets/*.json` files","timestamp":1501611488215}
{"from":"pfrazee","message":"Injest's mutators, such as put or add or update, simply write those json files","timestamp":1501611524863}
{"from":"pfrazee","message":"Injest's readers & query-ers, such as get() or where(), read from the indexeddb cache","timestamp":1501611552455}
{"from":"pfrazee","message":"and then Injest watches the archives for changes to the json files. When they change, it reads them and updates indexeddb, thus the queries stay uptodate","timestamp":1501611586314}
{"from":"ralphtheninja[m]","message":"nice","timestamp":1501611611985}
{"from":"pfrazee","message":"put() -> archive/tweets/12345.json -> indexer -> indexeddb -> get()","timestamp":1501611624208}
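The pipeline pfrazee lays out can be modeled in memory: `put()` writes a JSON file under the table's folder in the archive, an indexer mirrors changed files into a throwaway cache, and reads are served only from the cache. The maps and function names below are a hypothetical stand-in, not injestdb's actual API.

```javascript
// In-memory stand-in for:
// put() -> archive/tweets/<id>.json -> indexer -> indexeddb -> get()
const archive = new Map()   // path -> file contents (the dat archive)
const cache = new Map()     // table/id -> record (the IndexedDB cache)

function put (table, id, record) {
  // Mutators write one JSON file per record into the archive...
  archive.set(`/${table}/${id}.json`, JSON.stringify(record))
  index(table, id) // ...and the file watcher re-indexes what changed.
}

function index (table, id) {
  const raw = archive.get(`/${table}/${id}.json`)
  cache.set(`${table}/${id}`, JSON.parse(raw))
}

// Readers never touch the archive directly; they hit the cache, which
// can be thrown away and rebuilt from the archive at any time.
const get = (table, id) => cache.get(`${table}/${id}`)

put('tweets', '12345', { text: 'hello dat' })
console.log(get('tweets', '12345').text)       // 'hello dat'
console.log(archive.has('/tweets/12345.json')) // true
```

The key property is that the archive is the source of truth and the cache is derived state, which is why deleting the cache is always safe.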
{"from":"pfrazee","message":"I'm not 100% sure of the scope of hyperdb, but afaik it's designed to provide a keyvalue store which can support multiple writers","timestamp":1501611663269}
{"from":"ogd","message":"pfrazee: any particular reason for doing straight IDB as opposed to levelup?","timestamp":1501611665772}
{"from":"pfrazee","message":"therefore injest should be able to sit on top of hyperdb in the future","timestamp":1501611694144}
{"from":"pfrazee","message":"ogd: I wrote injest by copying dexie.js code, because I wanted the same query interface","timestamp":1501611717873}
{"from":"pfrazee","message":"I might have been able to save time with levelup and the level ecosystem, but I had enough unknowns that this felt like the simplest option for me (as the dev)","timestamp":1501611756085}
{"from":"ogd","message":"you should just copy paste all this into the readme","timestamp":1501611811339}
{"from":"pfrazee","message":"ogd: yeah I will. Any other things I should clarify?","timestamp":1501611836577}
{"from":"ogd","message":"not that i can think of","timestamp":1501611851871}
{"from":"pfrazee","message":"ok","timestamp":1501611855192}
{"from":"pfrazee","message":"TheLink: that all clear?","timestamp":1501611860427}
{"from":"ogd","message":"maybe mention how dat is a virtual filesystem so backing the db with a filesystem doesnt incur the same costs as backing it with a real linux filesystem","timestamp":1501611888179}
{"from":"TheLink","message":"pfrazee: as clear as my understanding can grasp it","timestamp":1501611896878}
{"from":"pfrazee","message":"ogd: well truth is, in Beaker, we end up writing the JSON to the FS first. Injest is a bit quick and dirty since it's built in userland","timestamp":1501612001747}
{"from":"ogd","message":"pfrazee: oh i figured you were writing it to a hypercore directly","timestamp":1501612065174}
{"from":"pfrazee","message":"if building something like injest from the ground up, I'd use a more traditional DB design (an mmaped file or something like that) and then asynchronously publish the changes to the dat","timestamp":1501612069820}
{"from":"pfrazee","message":"ogd: beaker writes to the \"staging folder\" first, and then commits that to the hypercore. Not efficient","timestamp":1501612097013}
{"from":"ogd","message":"ah ok","timestamp":1501612103593}
{"from":"ogd","message":"pfrazee: i think having an example tutorial that involved replication would be cool too","timestamp":1501612308097}
{"from":"ogd","message":"pfrazee: like, you clone a dat, boot up the injest code, then you change the upstream, and watch the changes apply","timestamp":1501612356654}
{"from":"ogd","message":"if i wrote this i would have called it CascadeDB for pacific northwest pride","timestamp":1501612393896}
{"from":"pfrazee","message":"ogd: injest is definitely a silly name but it's memorable","timestamp":1501612486719}
{"from":"pfrazee","message":"ogd: I'm going to iron out some bugs with nexus, and then we'll write up some better tutorials","timestamp":1501612503670}
{"from":"ogd","message":"pfrazee: whats nexus","timestamp":1501612518693}
{"from":"pfrazee","message":"ogd: twitter clone we're writing","timestamp":1501612527004}
{"from":"ogd","message":"ah cool","timestamp":1501612532549}
{"from":"mafintosh","message":"jhand: try this on your phone at some point, https://usercontent.irccloud-cdn.com/file/qxGqyrzg/app.apk","timestamp":1501614059417}
{"from":"mafintosh","message":"cool node app","timestamp":1501614059896}
{"from":"ogd","message":"jhand: karissa pfrazee https://github.com/datproject/docs/commit/b48d7e89c9bbafe7930d127daf68c93b80325c10","timestamp":1501615276644}
{"from":"pfrazee","message":"ogd: awesome, pinned for later","timestamp":1501615310906}
{"from":"pfrazee","message":"ogd: mafintosh: I mentioned this earlier, I'm still thinking pretty heavily about implementing hosted dats for transactional strict consensus","timestamp":1501615409494}
{"from":"pfrazee","message":"this would use some kind of dat-rpc interface. Keys would be retained by the client, and signed updates would come from the clients, but the server would coordinate the writes to avoid conflicts","timestamp":1501615453989}
{"from":"ogd","message":"pfrazee: do you have a use case in mind?","timestamp":1501615457131}
{"from":"pfrazee","message":"ogd: exact same as multiwriter, but with different characteristics. I think we can expose it as a configurable flag on the DatArchive interface","timestamp":1501615506846}
{"from":"pfrazee","message":"ogd: eventual consistency is a little harder to write because the developer has to handle conflict states. However, it's the only mode that supports offline writes","timestamp":1501615552984}
{"from":"pfrazee","message":"ogd: I think it'd be really great if we could just support both. If you dont want to handle conflict states, just turn that off when you instantiate the archive, and then multiple writers has to be done via a leader","timestamp":1501615596918}
{"from":"ogd","message":"pfrazee: for that cant you just have the client only use the original authors version?","timestamp":1501615635783}
{"from":"pfrazee","message":"ogd: you lose transactions in that case","timestamp":1501615666197}
{"from":"ogd","message":"pfrazee: yea whats the use case for transactions?","timestamp":1501615699203}
{"from":"pfrazee","message":"ogd: update or upsert without conflicts is a good example","timestamp":1501615732930}
{"from":"pfrazee","message":"ogd: if I can transactionally read, make the changes, then write, then there will never be conflicts and the code is easier to write","timestamp":1501615763250}
{"from":"pfrazee","message":"ogd: but ofc the cost is that you need to be online to do transaction writes with multiple authors, which is why we want dat's builtin multiwriter as the default","timestamp":1501615795278}
{"from":"ogd","message":"say i have a dat. i add you. mafintosh chooses to use my keys only, and disregard your keys. you write a forked key. mafintosh app is unaffected till i choose to merge your key into my feed","timestamp":1501615811183}
{"from":"ogd","message":"its like branches","timestamp":1501615814668}
{"from":"pfrazee","message":"ogd: yeah and that's fine except that branches require merges","timestamp":1501615852209}
{"from":"ogd","message":"pfrazee: so is your idea to just proxy writes to a central writer? or is there something i'm missing","timestamp":1501615927907}
{"from":"pfrazee","message":"ogd: correct. It's a leader system without builtin election","timestamp":1501615946190}
{"from":"pfrazee","message":"ogd: user device retains the signing keys. We just use the leader to ensure there's a single order","timestamp":1501615973897}
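The leader idea pfrazee outlines can be sketched: clients keep their signing keys and produce signed updates, while a single leader only decides the order, applying one transaction at a time so a read-modify-write never interleaves with another. The promise-queue `Leader` class and all names below are hypothetical illustration, not beaker or dat code.

```javascript
// The leader holds no signing keys; it only serializes writes.
// Clients send already-signed updates, and the leader applies them one
// at a time, so read-modify-write transactions never interleave.
class Leader {
  constructor () {
    this.state = new Map()
    this.queue = Promise.resolve()
  }
  // txn is an async function: read current state, return signed writes.
  transact (txn) {
    this.queue = this.queue.then(async () => {
      const writes = await txn(this.state)
      for (const [key, signedValue] of writes) this.state.set(key, signedValue)
    })
    return this.queue
  }
}

const leader = new Leader()
const upsertCounter = (state) => {
  const n = (state.get('counter') || 0) + 1
  return [['counter', n]] // imagine this value is signed by the client
}

// Two concurrent transactions cannot conflict: the leader orders them.
Promise.all([leader.transact(upsertCounter), leader.transact(upsertCounter)])
  .then(() => console.log(leader.state.get('counter'))) // 2
```

The cost, as noted above, is that clients must be online to commit — which is exactly the trade against dat's offline-capable, eventually consistent multiwriter.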
{"from":"ogd","message":"pfrazee: i dunno if i havent had enough coffee today but i havent understood anything you've said today lol","timestamp":1501615975480}
{"from":"pfrazee","message":"lol","timestamp":1501615989289}
{"from":"jhand","message":"lol","timestamp":1501615993136}
{"from":"dat-gitter","message":"(scriptjs) @mafintosh What does the dynamic add-feed thing refer to here? https://github.com/mafintosh/hyperdb/blob/production-ready/index.js#L32","timestamp":1501618044546}
{"from":"mafintosh","message":"@scriptjs it's an event that is emitted when a new feed is added","timestamp":1501621466212}
{"from":"mafintosh","message":"it's using that internally for now","timestamp":1501621474139}
{"from":"mafintosh","message":"which isn't the best idea for various reasons but gets the job done","timestamp":1501621488580}
{"from":"jhand","message":"mafintosh: https://github.com/datproject/dat/blob/master/src/ui/archive.js","timestamp":1501623432953}
{"from":"mafintosh","message":"jhand: https://gist.github.com/mafintosh/460904e61a09269d619f720233e57e48","timestamp":1501625998824}
{"from":"mafintosh","message":"jhand: install and then node neat-log-debug.js 484f5b9847afdd24069006bfba6eae5456495160d2fc5038df6dfcd7dd7320b2","timestamp":1501626034652}
{"from":"jhand","message":"cool free bitcoin o/","timestamp":1501626417813}
{"from":"ralphtheninja[m]","message":"hehe","timestamp":1501628085436}
{"from":"ralphtheninja[m]","message":"I don't know what I'm doing :)","timestamp":1501628838619}
{"from":"mafintosh","message":"ralphtheninja[m]: key has to be a hypercore, run it once without args","timestamp":1501630069267}
{"from":"ralphtheninja[m]","message":"thx","timestamp":1501630142224}
{"from":"ralphtheninja[m]","message":"mafintosh: it's super useful with gists like this if you're a newcomer and want to wrap your head around use cases","timestamp":1501630340843}
{"from":"mafintosh","message":"ralphtheninja[m]: i'll keep that in mind :)","timestamp":1501630353978}
{"from":"ralphtheninja[m]","message":"at least for me it's more important to get into the thinking rather than what api a certain module has, if that makes sense","timestamp":1501630403943}
{"from":"ralphtheninja[m]","message":"mafintosh: try connect to my feed e0884680da7cbcbd53c5134972c87e7b73787ba92fb583a7ec4f7fe52d83a6ee","timestamp":1501631923826}
{"from":"mafintosh","message":"ralphtheninja[m]: one sec, keep it running","timestamp":1501632055326}
{"from":"ralphtheninja[m]","message":"yep it's up","timestamp":1501632260708}
{"from":"ralphtheninja[m]","message":"15 coins atm","timestamp":1501632263504}
{"from":"ralphtheninja[m]","message":"I'm on american time now lol","timestamp":1501632281718}
{"from":"mafintosh","message":"ralphtheninja[m]: try cloning it on a server now","timestamp":1501632427195}
{"from":"mafintosh","message":"I wanna see if I can clone it on my phone","timestamp":1501632441844}
{"from":"mafintosh","message":"And i cannot seem to hole punch atm :)","timestamp":1501632458546}
{"from":"jhand","message":"ralphtheninja[m]: i connected =)","timestamp":1501632504786}
{"from":"mafintosh","message":"ralphtheninja[m]: no wait it totally works","timestamp":1501632508568}
{"from":"mafintosh","message":"Haha","timestamp":1501632511567}
{"from":"mafintosh","message":"Had a typo","timestamp":1501632517625}
{"from":"mafintosh","message":"ralphtheninja[m]: i'm mining your coins on my phone","timestamp":1501632534947}
{"from":"mafintosh","message":"ralphtheninja[m]: so good","timestamp":1501632601575}
{"from":"jhand","message":"mafintosh: whole new meaning of remote debugging","timestamp":1501632755760}
{"from":"ralphtheninja[m]","message":"mafintosh: looooool","timestamp":1501632769472}
{"from":"ralphtheninja[m]","message":"jhand: wait .. think you're onto something here","timestamp":1501632801635}
{"from":"jhand","message":"I blame mafintosh","timestamp":1501632834510}
{"from":"ralphtheninja[m]","message":"hehe","timestamp":1501632839476}
{"from":"ralphtheninja[m]","message":"just think of it .. you have a mobile app (or any app rather) writing log data to a hypercore","timestamp":1501632877977}
{"from":"ralphtheninja[m]","message":"then as a developer you could easily \"attach\" to that log to see why things aren't working","timestamp":1501632901837}
{"from":"jhand","message":"so mafintosh seems like we can just plug this into the dat cli... you just pass --remote <key> or something and uses that state","timestamp":1501632907211}
{"from":"jhand","message":"ralphtheninja[m]: ya exactly =)","timestamp":1501632918143}
{"from":"jhand","message":"ralphtheninja[m]: its neat in neat-log because you just need the \"state\" object, not the whole output","timestamp":1501632975754}
{"from":"ralphtheninja[m]","message":"haven't look at neat-log .. will check it out","timestamp":1501633022700}
{"from":"ralphtheninja[m]","message":"well I've encountered it ;) but never used","timestamp":1501633035355}
{"from":"jhand","message":"ralphtheninja[m]: I think Im the only one thats used it so you'll be early adopter =)","timestamp":1501633055076}
{"from":"ralphtheninja[m]","message":"perfect, I like to gamble :D","timestamp":1501633076051}
{"from":"mafintosh","message":"It's a really great module","timestamp":1501633101311}
{"from":"mafintosh","message":"One of my new favs","timestamp":1501633108136}
{"from":"jhand","message":"mafintosh: oh the state in dat cli may not be serializable. it has stuff like the archive instance in it.","timestamp":1501633215532}
{"from":"jhand","message":"forgot that part","timestamp":1501633217773}
{"from":"ralphtheninja[m]","message":"btw, if there's anything someone needs help with, let me know","timestamp":1501633224262}
{"from":"ralphtheninja[m]","message":"it could be reading/writing docs, writing tests, modules, rubber ducking","timestamp":1501633278260}
{"from":"ralphtheninja[m]","message":"or mining coins :)","timestamp":1501633296466}
{"from":"mafintosh","message":"jhand: yea you'd wanna transform that","timestamp":1501633327908}
{"from":"mafintosh","message":"ralphtheninja[m]: hyperdrive needs more tests","timestamp":1501633341660}
{"from":"mafintosh","message":"ralphtheninja[m]: and writing modules on top is always helpful if you have an idea :)","timestamp":1501633374146}
{"from":"ralphtheninja[m]","message":"mafintosh: check","timestamp":1501633469636}
{"from":"mafintosh","message":"ralphtheninja[m]: hyperdrive prob needs more docs as well","timestamp":1501633527386}
{"from":"ralphtheninja[m]","message":"jhand: would be nice to get rid of the output variable .. can't you just send the result of the view function into output inside neat-log?","timestamp":1501633743840}
{"from":"jhand","message":"ralphtheninja[m]: yea that too, but may be useful for debugging to have more information. Then you can recreate any \"view\" without having the user change stuff.","timestamp":1501633783185}
{"from":"ralphtheninja[m]","message":"jhand: yep","timestamp":1501633864430}
{"from":"ralphtheninja[m]","message":"mafintosh: mind adding me as collab?","timestamp":1501633907955}
{"from":"ralphtheninja[m]","message":"jhand: it's really interesting what has happened to the front end the past year or two","timestamp":1501633949411}
{"from":"ralphtheninja[m]","message":"feels like I'm lightyears behind on front-end stuff these days :)","timestamp":1501634000706}
{"from":"ralphtheninja[m]","message":"would be nice to polish that a bit and also learn more about how dat and beaker works","timestamp":1501634032912}
{"from":"ralphtheninja[m]","message":"I think writing more apps for beaker could be very useful","timestamp":1501634053272}
{"from":"jhand","message":"ralphtheninja[m]: ya i felt the same way with front end too. But after diving into choo and tachyons I can do most of what i need without spending 200 hours learning react.","timestamp":1501634081283}
{"from":"ralphtheninja[m]","message":"react is just terrible","timestamp":1501634110038}
{"from":"ralphtheninja[m]","message":"and react native","timestamp":1501634112843}
{"from":"jhand","message":"i think its good for a specific type of application where there is lots of data and lots of hands working on it (e.g. facebook)","timestamp":1501634162166}
{"from":"jhand","message":"But I spend a lot of time with students & volunteers trying to make small react apps and it makes me sad","timestamp":1501634190273}
{"from":"ralphtheninja[m]","message":"aye, it prob makes huge sense for facebook and not necessarily for everyone else","timestamp":1501634277702}
{"from":"ralphtheninja[m]","message":"61 coins mined! :)","timestamp":1501634618041}
{"from":"mafintosh","message":"ralphtheninja[m]: sure, which repo?","timestamp":1501634745058}
{"from":"ralphtheninja[m]","message":"mafintosh: hyperdrive","timestamp":1501634898745}
{"from":"dat-gitter","message":"(serapath) do you guys mine real bitcoins?","timestamp":1501634946232}
{"from":"mafintosh","message":"@serapath haha no, just a silly gist i wrote","timestamp":1501634996250}
{"from":"ralphtheninja[m]","message":"serapath nah, we're just joking around .. I used to but not anymore","timestamp":1501634999720}
{"from":"dat-gitter","message":"(serapath) i don't but the speed in which you mention \"new coins mined\" sounded very interesting xD","timestamp":1501635085203}
{"from":"mafintosh","message":"ralphtheninja[m]: done","timestamp":1501635431891}
{"from":"ralphtheninja[m]","message":"mange tak (many thanks)","timestamp":1501635490656}
{"from":"mafintosh","message":"Selv tak (you're welcome)","timestamp":1501635599627}
{"from":"bret","message":"ralphtheninja[m]: don't worry about missing out on react, all you are missing out on is terribly scoped projects with questionable plugin architectures","timestamp":1501636068479}
{"from":"ralphtheninja[m]","message":"bret: I'm not :)","timestamp":1501636137404}
{"from":"ralphtheninja[m]","message":"mafintosh: `this.ready` in `hyperdrive`, is it meant to be part of the api?","timestamp":1501636758451}
{"from":"ralphtheninja[m]","message":"I'm thinking we should make it internal since we already have events for 'ready' and 'error'","timestamp":1501636791440}
{"from":"ralphtheninja[m]","message":"since this.ready(cb) potentially can callback with an error, the name is a bit confusing","timestamp":1501636820229}
{"from":"dat-gitter","message":"(scriptjs) @mafintosh thanks on the hyperdb question","timestamp":1501638328848}
{"from":"mafintosh","message":"ralphtheninja[m]: it used to be public but i deprecated it in favor of the event :)","timestamp":1501639619507}
{"from":"mafintosh","message":"So its undocumented","timestamp":1501639631845}
{"from":"ralphtheninja[m]","message":"mafintosh: gotcha","timestamp":1501642084830}
{"from":"ralphtheninja[m]","message":"mafintosh: how is it deprecated?","timestamp":1501642249132}
{"from":"ralphtheninja[m]","message":"aah now I understand how `this.ready` works inside, since the thunk caches the return value","timestamp":1501642364103}
{"from":"ralphtheninja[m]","message":"does it callback sync with the cached value or on nextTick?","timestamp":1501642404122}
{"from":"ralphtheninja[m]","message":"dumb question, of course it's on nextTick :)","timestamp":1501642691396}
{"from":"ralphtheninja[m]","message":"mafintosh: aha, so older versions of node don't support passing parameters directly to `process.nextTick`, didn't know this","timestamp":1501642774795}
{"from":"mafintosh","message":"ralphtheninja[m]: ya but that's only in 0.12 so we can drop support for that","timestamp":1501643645196}
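The `this.ready` pattern discussed above (a thunk that runs the async open once, caches the result, and always calls back on a later tick) can be sketched in a few lines. This is a generic illustration of the thunky-style pattern, not hyperdrive's actual implementation; the `thunky` helper and its internals here are hypothetical.

```javascript
// Sketch of a thunky-style ready(cb): the first caller triggers the async
// open; every caller (including later ones) gets the cached error/result,
// always asynchronously via process.nextTick.
function thunky (work) {
  let err = null
  let done = false
  let pending = null

  return function ready (cb) {
    if (done) return process.nextTick(cb, err) // cached, but still async
    if (pending) return pending.push(cb) // open already in flight
    pending = [cb]
    work(function (e) {
      done = true
      err = e || null
      const cbs = pending
      pending = null
      for (const fn of cbs) process.nextTick(fn, err)
    })
  }
}

// Usage: simulate an archive's internal open step.
const ready = thunky(function (cb) {
  setTimeout(() => cb(null), 10) // pretend to open storage
})
ready((err) => console.log('first caller, err =', err))
ready((err) => console.log('second caller, err =', err))
```

Note that `work` runs at most once no matter how many times `ready` is called, which is exactly why the cached-callback behavior matters for a public `ready(cb)` API.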
{"from":"barnie","message":"hi all, I just asked this on SO: https://stackoverflow.com/questions/45459909/approaches-to-running-nodejs-on-android","timestamp":1501674111143}
{"from":"barnie","message":"with mafintosh node-on-android in it","timestamp":1501675059753}
{"from":"barnie","message":"afraid it may be too broad for SO, so upvoting may help keep it open (PS. permalink: https://stackoverflow.com/q/45459909/8295283)","timestamp":1501676828108}
{"from":"barnie","message":"2nd downvote :(","timestamp":1501677193661}
{"from":"barnie","message":"1 upvote by ralphtheninjs :)","timestamp":1501677604732}
{"from":"barnie","message":"ralphtheninja ^","timestamp":1501677627933}
{"from":"dat-gitter","message":"(green-coder) Hi. Would a DAT be appropriate to store a live chat log ?","timestamp":1501678196208}
{"from":"pfrazee","message":"@green-coder there's actually a demo of that which jhand made...","timestamp":1501678247329}
{"from":"pfrazee","message":"http://dat-chat.netlify.com/ there it is","timestamp":1501678265931}
{"from":"dat-gitter","message":"(green-coder) hell yes !","timestamp":1501678297947}
{"from":"dat-gitter","message":"(green-coder) It took about 1 sec for the update to show up in the demo, that's highly acceptable. Nice.","timestamp":1501678357137}
{"from":"pfrazee","message":":D","timestamp":1501678424334}
{"from":"dat-gitter","message":"(green-coder) it is possible to clone that DAT on my computer to see how it behave ? where is its url ?","timestamp":1501678517083}
{"from":"dat-gitter","message":"(green-coder) Nevermind, I found it. https://mafintosh.github.io/hyperirc-www/#4e397d94d0f5df0e2268b2b7b23948b6dddfca66f91c2d452f404202e6d0f626","timestamp":1501678880490}
{"from":"dat-gitter","message":"(green-coder) (Test .....)","timestamp":1501679173970}
{"from":"ralphtheninja[m]","message":"testing 1 2 3 (sorry for spam)","timestamp":1501679839105}
{"from":"aldebrn","message":"Hallo, I'd like to play with hypercore a bit more so I thought to try and get my own copy of #dat logs on hypercore (via hyperirc: https://github.com/mafintosh/hyperirc#mirrored-irc-channels). Is it sufficient to do something like this?: https://gist.github.com/fasiha/aff50afecb531b2035939965416aebd3 I ask because, when I run that in a node REPL, a directory, `my-first-dataset` is created, but it has an","timestamp":1501683983731}
{"from":"aldebrn","message":"empty `data` file inside, and `feed.peers` has been empty for a few minutes now (but the logs show up right away in my browser)","timestamp":1501683989777}
{"from":"pfrazee","message":"aldebrn: hypercore doesnt have the networking stack included. Dat is all very modular","timestamp":1501685798106}
{"from":"pfrazee","message":"aldebrn: give me a moment to find the code youll need...","timestamp":1501685823124}
{"from":"pfrazee","message":"aldebrn: https://github.com/datproject/dat-node/blob/master/lib/network.js","timestamp":1501685985865}
{"from":"pfrazee","message":"actually I can find an easier example... 1 sec","timestamp":1501686045903}
{"from":"pfrazee","message":"because that's missing some code","timestamp":1501686056145}
{"from":"pfrazee","message":"aldebrn: here we go https://github.com/mafintosh/hyperirc/blob/master/index.js","timestamp":1501686089768}
{"from":"aldebrn","message":"Aha, I see pfrazee thanks! That was a super-clarifying explanation","timestamp":1501686139549}
{"from":"pfrazee","message":"aldebrn: good deal :) LMK if you have other questions","timestamp":1501686167543}
{"from":"aldebrn","message":"I will leave you alone to hack on injestdb, it is like what I was building in a totally ad-hoc-half-ass way :D","timestamp":1501686194097}
{"from":"aldebrn","message":"I mean, I was building something totally junky, and reading the injestdb docs I'm like \"Ahhh that's exactly what I neeeeeeed!\"","timestamp":1501686235610}
{"from":"confiks","message":"If you'll excuse me rambling: I recognize that at-rest encryption is not really a primary goal of Dat, but I've been wondering at what level of the implementation it could exist.","timestamp":1501686370669}
{"from":"confiks","message":"Of course it could at the filesystem level; something like an EncFS folder inside Dat.","timestamp":1501686388244}
{"from":"confiks","message":"But I now see that hyperdrive uses the append-tree package, which then continues to uses the codecs package to use some encoding for the tree `put`.","timestamp":1501686486924}
{"from":"confiks","message":"That looks like place where the contents could be AES encrypted, possibly in counter mode based on te position in the tree.","timestamp":1501686567615}
{"from":"mafintosh","message":"confiks: you should write a storage provider that is encrypted","timestamp":1501686589723}
{"from":"aldebrn","message":"mafintosh: just to make sure I understood that, with an encrypted storage provider, *peers* could still look at your dat data, right? Encrypted storage provider is just for at-rest","timestamp":1501686788054}
{"from":"mafintosh","message":"aldebrn: ya you'd derive a key from the dat key and use that","timestamp":1501686852447}
{"from":"ralphtheninja[m]","message":"mafintosh: lol 1097 coins mined","timestamp":1501686866374}
{"from":"confiks","message":"I'm afraid I wasn't precise enough when I said at-rest encrypt. I actually mean something like 'oblivious storage', where the host doesn't possess the key.","timestamp":1501686869306}
{"from":"ralphtheninja[m]","message":"forgot to close it","timestamp":1501686871978}
{"from":"confiks","message":"mafintosh: I presumed that if the entire storage layer is encrypted, and the peer doesn't possess the key, the peers couldn't execute the hypercore-protocol.","timestamp":1501686891907}
{"from":"confiks","message":"It seems undesirable to completely encrypt the metadata of the filesystem.","timestamp":1501686920778}
{"from":"mafintosh","message":"confiks: ah","timestamp":1501686935777}
{"from":"mafintosh","message":"confiks: i wanna hack on something like that soon actually","timestamp":1501686952879}
{"from":"confiks","message":"Does encryption inside the append-tree package's `put` and `get` look like a good place? The names for tree traversal of course also would have to be encrypted.","timestamp":1501687081592}
{"from":"pfrazee","message":"aldebrn: glad to hear that! I'm excited to get Injest production ready","timestamp":1501687190854}
{"from":"barnie","message":"hi, could someone plz upvote, so it doesn't get closed: https://stackoverflow.com/questions/45459909/compiling-nodejs-as-native-library-on-android","timestamp":1501687394958}
{"from":"pfrazee","message":"barnie: weird why the downvotes?","timestamp":1501687465003}
{"from":"barnie","message":"don't know, there are downvote trolls or something. other similar questions have 30+ upvotes","timestamp":1501687496106}
{"from":"barnie","message":"added a comment for them, but that is down the page..","timestamp":1501687523416}
{"from":"barnie","message":"thx, for upvote <3","timestamp":1501687585257}
{"from":"barnie","message":"well...it is a broad question if you want to answer it in one go :)","timestamp":1501687676604}
{"from":"barnie","message":"i have to admit","timestamp":1501687689006}
{"from":"pfrazee","message":"https://groups.google.com/forum/#!topic/beaker-browser/rScUgpsyH-0 share your dat projects with motherboard","timestamp":1501688319883}
{"from":"pfrazee","message":"(and also beware of trolls)","timestamp":1501688337510}
{"from":"cblgh","message":"btw it's inadvisable to update a dat from two different computers right?","timestamp":1501688930025}
{"from":"aldebrn","message":"cblgh: mafintosh can give you the details but yes, that's inadvisable, but multiwriter support is coming (soon)","timestamp":1501689019873}
{"from":"ralphtheninja[m]","message":"multi write will be awesome","timestamp":1501689110215}
{"from":"cblgh","message":"ya figured, just wasn't 100% sure what the consequences would be","timestamp":1501689267904}
{"from":"cblgh","message":"haven't tried it yet","timestamp":1501689270627}
{"from":"cblgh","message":"maybe i should","timestamp":1501689272392}
{"from":"aldebrn","message":"cblgh: the way it's been explained to me, the protocol will detect this and (in very vague terms) mark the entire feed as \"bad\" or \"compromised\". The specifics I don't yet understand","timestamp":1501689603597}
{"from":"barnie","message":"obensour1: that was unclear to me too, could be a nice addition to the docs","timestamp":1501689792149}
{"from":"barnie","message":"oh sorry it was for aldebrn:","timestamp":1501689810200}
{"from":"pfrazee","message":"aldebrn: barnie: it's an informal system at the moment","timestamp":1501689833495}
{"from":"pfrazee","message":"basically every node has a copy of the archive history log","timestamp":1501689851441}
{"from":"pfrazee","message":"(every node that cares about the archive)","timestamp":1501689881757}
{"from":"pfrazee","message":"if you issue two messages with the same sequence number, then some nodes will receive message A, and some will receive message B","timestamp":1501689893964}
{"from":"pfrazee","message":"if you received message A, then you'll reject message B and any messages that follow the message B log","timestamp":1501689942688}
{"from":"pfrazee","message":"and vice versa","timestamp":1501689949446}
{"from":"barnie","message":"aha","timestamp":1501689961534}
{"from":"aldebrn","message":"netsplit :P","timestamp":1501690061966}
{"from":"aldebrn","message":"Ok, so it's not like the nodes will come to a consensus to stop replicating or anything complicated like that, just, netsplit.","timestamp":1501690119975}
{"from":"aldebrn","message":"Bitcoin style","timestamp":1501690138274}
{"from":"pfrazee","message":"aldebrn: yes and there's no mechanism for reconciliation at this point","timestamp":1501690189941}
{"from":"barnie","message":"planned?","timestamp":1501690252258}
{"from":"pfrazee","message":"it doesnt cost anything to add entries to the history, so unlike btc we can't just go with the longest chain","timestamp":1501690253788}
{"from":"pfrazee","message":"TBH Im not sure. I dont have any ideas yet","timestamp":1501690269133}
{"from":"barnie","message":"looks it could be complex","timestamp":1501690279008}
{"from":"pfrazee","message":"yeah","timestamp":1501690285187}
{"from":"barnie","message":"thx!","timestamp":1501690373932}
{"from":"pfrazee","message":"np","timestamp":1501690378616}
{"from":"pfrazee","message":"new hashbase FP ui https://hashbase.io/","timestamp":1501690389924}
{"from":"aldebrn","message":"And I'm sure I'm jumping the gun asking this but multiwrite. I seem to have closed the tab where Joe was commenting on how it'd work, but when you create a dat (or possibly at any time), you specify the different keys that can be legitimately used to write to it?","timestamp":1501690409269}
{"from":"aldebrn","message":"And from the above discussion presumably there's a mechanism to order them (I wouldn't be surprised to see a CouchDB-like `revs` approach)?","timestamp":1501690468885}
{"from":"pfrazee","message":"aldebrn: yep, similar to `revs`. I dont know the details of how the keys are set yet but that's roughly it","timestamp":1501690491484}
{"from":"barnie","message":"jhand: could you look at https://github.com/datproject/awesome-dat/pull/4 ?","timestamp":1501690521800}
{"from":"aldebrn","message":"I can't think of any reason why a writable \"collaborator\" can't then *remove* another writable collaborator then?","timestamp":1501690565152}
{"from":"barnie","message":"jhand: and example https://github.com/aschrijver/awesome-dat/blob/fresh/awesome/readme.md","timestamp":1501690579745}
{"from":"barnie","message":"spend a lot of time there","timestamp":1501690621688}
{"from":"aldebrn","message":"pfrazee: +1 on the hashbase look! Looks slick","timestamp":1501690638388}
{"from":"pfrazee","message":"aldebrn: thanks!","timestamp":1501690648309}
{"from":"pfrazee","message":"we're still tweaking it but I think taravancil did a great job with it","timestamp":1501690661106}
{"from":"barnie","message":"yep, it is indeed slick, nice minimalistic design","timestamp":1501690676525}
{"from":"barnie","message":"i would not add feature at about/features, but in the main section","timestamp":1501690737409}
{"from":"aldebrn","message":"barnie: this list of dat projects is blowing my mind. I knew you JavaScripties liked to hypermodularize but I didn't realize how prolific y'all've been, wow, so seeing all these projects in one place is super-helpful and interesting","timestamp":1501690787104}
{"from":"beardicus","message":"looks nice pfrazee. the hashbase logo looks a bit blurry on a non-retina monitor w/ mac/chrome.","timestamp":1501690790610}
{"from":"pfrazee","message":"beardicus: good to know","timestamp":1501690805259}
{"from":"barnie","message":"i am not a JavaScripty :D I am from Java.","timestamp":1501690810978}
{"from":"barnie","message":"That's why i made the list, dat ecosystem dazzled me","timestamp":1501690826489}
{"from":"barnie","message":"maybe you'll also find something here: https://github.com/datproject/discussions/issues/58","timestamp":1501690884348}
{"from":"barnie","message":"trying to get a discussion going","timestamp":1501690893141}
{"from":"beardicus","message":"why are all the \"from the community\" features listed as 3.7mb? is that right?","timestamp":1501690943855}
{"from":"barnie","message":"where do you mean? in the awesome list?","timestamp":1501690999086}
{"from":"barnie","message":"oh sorry","timestamp":1501691014437}
{"from":"barnie","message":"cross-talk","timestamp":1501691018416}
{"from":"pfrazee","message":"beardicus: that's a bug, thanks for pointing it out","timestamp":1501691122798}
{"from":"barnie","message":"pfrazee: why put features in about/features. 1st thing I would like to know, if only to get a feel of what I am seeing (otherwise I have only the tagline to go for)","timestamp":1501691221564}
{"from":"taravancil","message":"thanks beardicus. that sort of thing is tough to spot when you've been looking at the page for as long as i have :)","timestamp":1501691233619}
{"from":"pfrazee","message":"barnie: you think we should put those features in the FP?","timestamp":1501691259670}
{"from":"barnie","message":"its not a 2-page big block, but it gives more clarity on what hashbase does","timestamp":1501691286888}
{"from":"beardicus","message":"no prob. i should probably move into QA and test engineering... i like finding bugs and breaking things. :)","timestamp":1501691288567}
{"from":"pfrazee","message":"yeah","timestamp":1501691294315}
{"from":"aldebrn","message":"Question about https://github.com/datproject/docs/blob/master/papers/dat-paper.md#replication-example about bat.jpg and cat.jpg: the example assumes both files are chunked into three 64KB chunks. My question is: if a file is less than 64 KB, will its third chunk be padded with empties, so chunk boundaries align with file boundaries? Or will the leftover space in a chunk always be used with the initial parts","timestamp":1501691331903}
{"from":"taravancil","message":"barnie that's not a bad idea. i don't think the features list how it's formatted on /about will work","timestamp":1501691334500}
{"from":"aldebrn","message":"of the next file?","timestamp":1501691338455}
{"from":"cblgh","message":"pfrazee: something wrong with the archive-footer in firefox https://cblgh.org/i/2017-08/1864gHw.png","timestamp":1501691344420}
{"from":"taravancil","message":"but we can consider reworking it for the front page","timestamp":1501691345975}
{"from":"taravancil","message":"cblgh we're actually removing the archive footer right now","timestamp":1501691361557}
{"from":"taravancil","message":"but thanks for the heads up","timestamp":1501691374955}
{"from":"cblgh","message":"ah aight!","timestamp":1501691375814}
{"from":"pfrazee","message":"it's caused us nothing but pain for the past 10 minutes!","timestamp":1501691387785}
{"from":"barnie","message":"when i first hit a new product page, the tagline should be grabbing. that's what makes me want to scroll down. then some small very on-topic feature listed would be great","timestamp":1501691389161}
{"from":"cblgh","message":"rascal","timestamp":1501691400462}
{"from":"pfrazee","message":"barnie: yah I think that's a good point","timestamp":1501691400651}
{"from":"pfrazee","message":"(thanks all the hashbase FP feedback is A+)","timestamp":1501691415959}
{"from":"cblgh","message":"otherwise it looks goooood","timestamp":1501691419705}
{"from":"cblgh","message":"gj taravancil pfrazee","timestamp":1501691422227}
{"from":"pfrazee","message":"thanks, it's all taravancil","timestamp":1501691428221}
{"from":"cblgh","message":"super clean","timestamp":1501691430804}
{"from":"barnie","message":"another thing: put pricing on top as well, unless that's only for fun :)","timestamp":1501691466783}
{"from":"cblgh","message":"lol i love the beaker browser only error message","timestamp":1501691468022}
{"from":"pfrazee","message":"cblgh: photos app?","timestamp":1501691492588}
{"from":"cblgh","message":"pfrazee: yup!","timestamp":1501691495559}
{"from":"pfrazee","message":"barnie: ok!","timestamp":1501691500390}
{"from":"pfrazee","message":"cblgh: ya :D","timestamp":1501691505318}
{"from":"cblgh","message":"taravancil / pfrazee: btw how do y'all finance beaker browser/rest of the cool stuff you are doing?","timestamp":1501691585211}
{"from":"barnie","message":"maybe you should even have the pricing option on the landing page. it takes only couple inches and makes clear that you pay for storage (not the code which is wholly OSS)","timestamp":1501691589801}
{"from":"barnie","message":"makes it look friendlier, less commercial (cool landing page --> then the pricing surprise)","timestamp":1501691676158}
{"from":"pfrazee","message":"cblgh: two ways, hashbase and open collective (https://opencollective.com/beaker)","timestamp":1501691705878}
{"from":"aldebrn","message":"taravancil: pfrazee: it's surprising to me that the three Featured Apps don't work at all in regular browsers. But I imagine if most visitors are familiar with Dat they'll understand what to do.","timestamp":1501691737954}
{"from":"pfrazee","message":"barnie: yeah. After our current set of sprints we're going to spend some time focusing on the self-deployability of hashbase too. We'll work on the pricing story then","timestamp":1501691749729}
{"from":"pfrazee","message":"aldebrn: noted. I suppose we should alert people to that fact eh","timestamp":1501691767725}
{"from":"barnie","message":"any way, I wouldn't call the page 'pricing' in the url, because you get there from getting started, so its a 'gettingstarted' page, where you choose your option (even friendlier)","timestamp":1501691818679}
{"from":"taravancil","message":"aldebrn: yeah, we notify the user for the photos and pastedat app, we'll do that for the rss one too","timestamp":1501691860335}
{"from":"cblgh","message":"pfrazee: guess you contract on the side for general living expenses?","timestamp":1501691959174}
{"from":"pfrazee","message":"cblgh: yeah, about to take on another contract now in fact","timestamp":1501691995441}
{"from":"cblgh","message":"gl!","timestamp":1501692075603}
{"from":"pfrazee","message":"thanks!","timestamp":1501692090283}
{"from":"barnie","message":"pfrazee: I would completely rephrase your feature list","timestamp":1501693123021}
{"from":"pfrazee","message":"barnie: open to ideas, wdyt we should do?","timestamp":1501693152333}
{"from":"barnie","message":"well, as it is now looks like you are only targeting existing dat community","timestamp":1501693175454}
{"from":"barnie","message":"features way too technical","timestamp":1501693181115}
{"from":"barnie","message":"and not the essentials maybe","timestamp":1501693188654}
{"from":"pfrazee","message":"yeah","timestamp":1501693215027}
{"from":"barnie","message":"also if i would read the full 'about' page and not knowing anything about dat or p2p or decentralized computing, the page would be completely non-sensical to me, raising a lot of questions","timestamp":1501693299472}
{"from":"barnie","message":"where is my data? how is it stored? is it secure? etc","timestamp":1501693349433}
{"from":"pfrazee","message":"yeah that makes sense","timestamp":1501693385571}
{"from":"barnie","message":"what is this \"super peer\" that is more reliable?","timestamp":1501693387262}
{"from":"barnie","message":"you need a complete outsider mindset when writing that page","timestamp":1501693411542}
{"from":"barnie","message":"sorry for my critical remarks, but I have the look of a product owner, besides a dev ;)","timestamp":1501693495583}
{"from":"pfrazee","message":"no, we appreciate it","timestamp":1501693518450}
{"from":"barnie","message":"i also have a bit of a marketing background that comes gurgling up when I think of these kind of improvements","timestamp":1501693617652}
{"from":"pfrazee","message":"yeah","timestamp":1501693645618}
{"from":"barnie","message":"can't suppress it","timestamp":1501693667119}
{"from":"barnie","message":"having said that, non-profit initiatives like dat project could use some more marketing... its not only a dirty word :)","timestamp":1501693751874}
{"from":"barnie","message":"strategic marketing","timestamp":1501693783033}
{"from":"ungoldman","message":"ogd: made an experimental electron-repl program based off your node-repl program this morning -- works with simple files but falls down when those files require other files relatively. think maybe overloading require.extensions is breaking electron's module resolution or something","timestamp":1501696184512}
{"from":"ungoldman","message":"anyway it was fun to play around with https://github.com/hypermodules/electron-repl","timestamp":1501696190002}
{"from":"pfrazee","message":"nice","timestamp":1501696422350}
{"from":"obensource","message":"ungoldman: rad","timestamp":1501696591867}
{"from":"ungoldman","message":"actually bret I think the problem with electron-repl falling down is the renderer thread","timestamp":1501696883654}
{"from":"ungoldman","message":"anything you run in main is fine afaict","timestamp":1501696892787}
{"from":"ungoldman","message":"relative requires just don't seem to work in renderer for some reason","timestamp":1501696915258}
{"from":"creationix","message":"mafintosh: is your wasm implementation of blake2b faster than the pure JS one out there?","timestamp":1501697785255}
{"from":"creationix","message":"oh heh, found this https://github.com/mafintosh/blake2b-benchmark","timestamp":1501697838903}
{"from":"creationix","message":"oh yes, much faster. Thanks for making this","timestamp":1501698770705}
{"from":"ogd","message":"ungoldman: oh interesting","timestamp":1501699042177}
{"from":"ogd","message":"ungoldman: you probably discovered some weird electron bug or something","timestamp":1501699129044}
{"from":"mafintosh","message":"creationix: 64bit int make a *huge* diff :)","timestamp":1501699351081}
{"from":"barnie","message":"yeah, received an answer from NodeBase creator: https://stackoverflow.com/questions/45459909/compiling-nodejs-as-native-library-on-android","timestamp":1501700180784}
{"from":"pfrazee","message":"https://twitter.com/BeakerBrowser/status/892823458995941376","timestamp":1501700745317}
{"from":"ogd","message":"taravancil: pfrazee im working on a google drive dat storage module that would be good for o/","timestamp":1501700898104}
{"from":"pfrazee","message":"ogd: oh interesting","timestamp":1501701223106}
{"from":"cblgh","message":"lol pure javascript benchmark vs wasm is ridiculous","timestamp":1501704086730}
{"from":"mafintosh","message":"its not really a fair fight tho","timestamp":1501704281722}
{"from":"mafintosh","message":"the hash functions rely heavily on 64bit instructions","timestamp":1501704293256}
{"from":"mafintosh","message":"but so cool we can finally do that from js","timestamp":1501704301718}
{"from":"mafintosh","message":"cblgh: i'm writing tests for the hyperdb release candidate right now","timestamp":1501704348170}
{"from":"mafintosh","message":"only has .get/.put but they should be stable","timestamp":1501704358711}
{"from":"mafintosh","message":"i'm at 80 tests already","timestamp":1501704362850}
{"from":"cblgh","message":"yeah, agree about the js part, is wasm supported on mobiles?","timestamp":1501704414136}
{"from":"cblgh","message":"also niceeee","timestamp":1501704421599}
{"from":"cblgh","message":"i went to a friend's place to debug hyperdungeon a bit and noticed i broke it pretty bad with my peer-network solution for keysharing lol","timestamp":1501704460291}
{"from":"cblgh","message":"also his network is incredibly secured, dmz'd to hell and back, so i think that might have had something to do with it","timestamp":1501704493966}
{"from":"cblgh","message":"mafintosh: any sneak peeks on what's going to be in the rc? :~","timestamp":1501704513575}
{"from":"mafintosh","message":"cblgh: yuh, wasm is in safari now","timestamp":1501704534256}
{"from":"mafintosh","message":"or at least the beta one","timestamp":1501704539928}
{"from":"cblgh","message":"rad","timestamp":1501704541220}
{"from":"mafintosh","message":"cblgh: its a lot faster when cached","timestamp":1501704564984}
{"from":"mafintosh","message":"and most of your gist should be implemented","timestamp":1501704575710}
{"from":"mafintosh","message":"and much, much more stable so far","timestamp":1501704582741}
{"from":"mafintosh","message":"i'm sure there is still plenty of bugs","timestamp":1501704589677}
{"from":"mafintosh","message":"plan is, rc out, then add .del and .createReadStream","timestamp":1501704602311}
{"from":"cblgh","message":"oh cool","timestamp":1501704667980}
{"from":"cblgh","message":"wasn't sure if you saw my gist","timestamp":1501704673873}
{"from":"cblgh","message":"and to stability i say: <3","timestamp":1501704694525}
{"from":"creationix","message":"no service-worker in safari yet though :(","timestamp":1501704708212}
{"from":"creationix","message":"and html5 streams seem to be chrome-only at the moment","timestamp":1501704724616}
{"from":"creationix","message":"with service workers and streaming, we could stream large video files in browsers using custom storage and syncing (like dat, for example)","timestamp":1501704785854}
{"from":"creationix","message":"though there is still the problem browsers don't speak utp. I wish webrtc data channel wasn't such a beast","timestamp":1501704857720}
{"from":"mafintosh","message":"same","timestamp":1501704990852}
{"from":"mafintosh","message":"i wonder if safari will ever get service workers","timestamp":1501704999303}
{"from":"creationix","message":"yep, just tried my app on safari, no wasm yet","timestamp":1501705004692}
{"from":"mafintosh","message":"ah ok, its in beta then","timestamp":1501705016931}
{"from":"creationix","message":"I guess I should include the pure-js version of blake2b as fallback","timestamp":1501705017067}
{"from":"mafintosh","message":"they announced at the last event they did","timestamp":1501705026517}
{"from":"mafintosh","message":"wasm + webrtc","timestamp":1501705030344}
{"from":"creationix","message":"fun","timestamp":1501705033949}
{"from":"creationix","message":"https://trac.webkit.org/wiki/FiveYearPlanFall2015","timestamp":1501705103168}
{"from":"mafintosh","message":"https://usercontent.irccloud-cdn.com/file/fZkMlYmT/Screen%20Shot%202017-08-02%20at%2013.28.40.png","timestamp":1501705742614}
{"from":"mafintosh","message":"will clean it up and push it to github","timestamp":1501705753532}
{"from":"creationix","message":"\\o/","timestamp":1501706011706}
{"from":"karissa","message":"mafintosh: :O nice","timestamp":1501706048539}
{"from":"cblgh","message":"mafintosh: lol 42 tests in 23 minutes","timestamp":1501706431805}
{"from":"cblgh","message":"you beast","timestamp":1501706433152}
{"from":"mafintosh","message":"cblgh: i love writing tests","timestamp":1501706449196}
{"from":"cblgh","message":"mafintosh: why?","timestamp":1501706455702}
{"from":"mafintosh","message":"cblgh: cause i normally deal with head scratching decentralised algorithms","timestamp":1501706474910}
{"from":"mafintosh","message":"writing tests just puts me on autopilot :D","timestamp":1501706486859}
{"from":"mafintosh","message":"pure meditation","timestamp":1501706492853}
{"from":"cblgh","message":":3","timestamp":1501706507021}
{"from":"dat-gitter","message":"(lukeburns) any service worker + dat projects out there?","timestamp":1501706717260}
{"from":"pfrazee","message":"creationix: ^","timestamp":1501706755435}
{"from":"creationix","message":"@lukeburns: I'm trying to make one","timestamp":1501706781850}
{"from":"creationix","message":"at the moment, it's not real dat, but just a simple system that chops up large files into a merkle tree similar to dat's internal structure","timestamp":1501706809149}
{"from":"creationix","message":"but I've got it rendering via service workers so I can stream large media files to the html5 video player","timestamp":1501706833437}
{"from":"dat-gitter","message":"(lukeburns) creationix: on github?","timestamp":1501706865081}
{"from":"creationix","message":"not yet, sorry. Still internal prototype","timestamp":1501706906721}
{"from":"dat-gitter","message":"(lukeburns) nw","timestamp":1501706912063}
{"from":"dat-gitter","message":"(lukeburns) what are you using service workers for?","timestamp":1501706955729}
{"from":"dat-gitter","message":"(lukeburns) creationix: i was thinking along the lines of peer-assisted content delivery, but it sounds like you're doing something different","timestamp":1501707074153}
{"from":"creationix","message":"service workers act as a local http server","timestamp":1501707098115}
{"from":"creationix","message":"they take the chunks out of indexedDB and combine them to respond to http requests","timestamp":1501707114630}
{"from":"creationix","message":"200 and 206 responses for http range requests","timestamp":1501707121703}
{"from":"creationix","message":"I don't want to ever have to load large files into memory","timestamp":1501707143757}
{"from":"creationix","message":"also want to be able to swarm chunks from multiple peers or sources","timestamp":1501707158423}
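The 200/206 range serving creationix describes comes down to parsing the `Range` header and computing byte offsets. A minimal sketch of that math, not code from his prototype; `parseRange` is a hypothetical helper name:

```javascript
// Hypothetical helper: parse an HTTP Range header ("bytes=start-end") into
// concrete byte offsets a service worker could use for a 206 response.
// Returns null for absent/unsatisfiable ranges (caller falls back to 200).
function parseRange (header, totalLength) {
  const m = /^bytes=(\d*)-(\d*)$/.exec(header || '')
  if (!m || (m[1] === '' && m[2] === '')) return null
  let start, end
  if (m[1] === '') {
    // suffix range: "bytes=-N" means the last N bytes
    start = totalLength - Number(m[2])
    end = totalLength - 1
  } else {
    start = Number(m[1])
    end = m[2] === '' ? totalLength - 1 : Math.min(Number(m[2]), totalLength - 1)
  }
  if (start < 0 || start > end) return null
  return { start, end, contentLength: end - start + 1 }
}
```

The service worker would then slice the requested bytes out of its IndexedDB chunks and reply with status 206 and a matching `Content-Range` header.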
{"from":"dat-gitter","message":"(lukeburns) creationix: is there a reason you're not using hyperdrive?","timestamp":1501707428839}
{"from":"creationix","message":"this is simpler","timestamp":1501707493270}
{"from":"creationix","message":"I tried to dig into hyperdrive and see how it works at a low level, but got lost in the code","timestamp":1501707510234}
{"from":"creationix","message":"the public API isn't fine-grained enough for the kind of streaming I'm doing","timestamp":1501707529678}
{"from":"substack","message":"what about hypercore using createReadStream() and createWriteStream()?","timestamp":1501707681274}
{"from":"dat-gitter","message":"(lukeburns) creationix: so your goal is streaming videos over swarms by intercepting video requests with service workers?","timestamp":1501707750556}
{"from":"creationix","message":"substack: that might work if it supports ranges","timestamp":1501707775578}
{"from":"creationix","message":"also I'm not using browserify so the dat ecosystem is harder to consume","timestamp":1501707798799}
{"from":"substack","message":"createReadStream() takes a start and end index","timestamp":1501707830633}
{"from":"substack","message":"I think createWriteStream() also takes an offset if the feed isn't finalized","timestamp":1501707860472}
{"from":"dat-gitter","message":"(lukeburns) creationix: also if you needed the fs structure, you could do the same with hyperdrive to stream files in the archive -- e.g. archive.createReadStream(name, { start, end })","timestamp":1501708027188}
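A ranged read like `createReadStream(name, { start, end })` only needs the chunks that overlap the byte range. A sketch of that mapping, assuming fixed-size chunks for simplicity (hypercore blocks can vary in size, so this is an illustration, not hypercore internals):

```javascript
// Given a byte range [start, end] over a file stored as fixed-size chunks,
// return the chunk indices that must be fetched to serve the range.
// Assumes every chunk is exactly blockSize bytes (a simplification).
function blocksForRange (start, end, blockSize) {
  const first = Math.floor(start / blockSize)
  const last = Math.floor(end / blockSize)
  const blocks = []
  for (let i = first; i <= last; i++) blocks.push(i)
  return blocks
}
```

This is why sparse streaming works well for video: seeking to the middle of a large file touches only a handful of chunks, never the whole file.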
{"from":"karissa","message":"jhand: ralphtheninja[m] made your changes to the selective sync pr, good suggestions!","timestamp":1501708641253}
{"from":"ralphtheninja[m]","message":"karissa++","timestamp":1501709307832}
{"from":"todrobbins","message":"Anybody familiar with this? https://medium.com/@WebReflection/hyperhtml-is-killing-it-d19119ea7d22","timestamp":1501710117477}
{"from":"todrobbins","message":"*Anybody here","timestamp":1501710124961}
{"from":"yoshuawuyts","message":"toddself: yeah, seen it","timestamp":1501711141456}
{"from":"toddself","message":"o hai different todd :-D","timestamp":1501711166422}
{"from":"yoshuawuyts","message":"Oops sorry, meant todrobbins","timestamp":1501711175836}
{"from":"yoshuawuyts","message":"toddself: darn autocomplete","timestamp":1501711187774}
{"from":"toddself","message":"lol yeah i f that up in chat all the time!","timestamp":1501711202136}
{"from":"ralphtheninja[m]","message":"it has hyper in the name, so must be awesome :D","timestamp":1501711332165}
{"from":"todrobbins","message":"LOL","timestamp":1501712387059}
{"from":"todrobbins","message":"cool","timestamp":1501712388364}
{"from":"mafintosh","message":"Whats the diff between hyperhtml and bel?","timestamp":1501712627266}
{"from":"yoshuawuyts","message":"mafintosh: hyperhtml was built later, think they used to use setInnerHTML heavily or smth","timestamp":1501712792793}
{"from":"mafintosh","message":"It says it works in old IEs tho haha","timestamp":1501712841193}
{"from":"yoshuawuyts","message":"lmao","timestamp":1501712912485}
{"from":"barnie","message":"another person directed to dat :) https://stackoverflow.com/questions/45471023/p2p-video-audio-chat-on-android-using-webview/45476871#45476871","timestamp":1501743405703}
{"from":"barnie","message":"cool app about trust: http://ncase.me/trust/ (also on github)","timestamp":1501766469964}
{"from":"pfrazee","message":"creationix: hey so Im finally debugging your connectivity issue","timestamp":1501775963553}
{"from":"pfrazee","message":"creationix: remind me, did you build your beaker from master or use the prebuild?","timestamp":1501775989137}
{"from":"creationix","message":"prebuild had the most trouble","timestamp":1501776005640}
{"from":"pfrazee","message":"ok","timestamp":1501776009801}
{"from":"creationix","message":"also fwiw, I can't `dat share` on my laptop and clone it from remote machines","timestamp":1501776027320}
{"from":"pfrazee","message":"99% sure the issue is that you failed to connect to the publicbits.org trackers","timestamp":1501776031190}
{"from":"pfrazee","message":"so I gotta figure out why that might have failed","timestamp":1501776054593}
{"from":"pfrazee","message":"mafintosh: ping","timestamp":1501777183055}
{"from":"creationix","message":"but if I add another peer on my lan (a linux box exposed to the internet), then remote peers can find my dat","timestamp":1501777953612}
{"from":"creationix","message":"the linux box is in my router's DMZ so all ports are open","timestamp":1501777966064}
{"from":"pfrazee","message":"creationix: it's possible your linux box is acting as a proxy","timestamp":1501777982765}
{"from":"creationix","message":"I'm sure it is. I see it download bytes from my laptop and upload them to the remote server","timestamp":1501778002170}
{"from":"creationix","message":"my laptop only ever connects to the linux box as peer","timestamp":1501778012227}
{"from":"creationix","message":"network is Google Wifi mesh network","timestamp":1501778039228}
{"from":"creationix","message":"only thing I've changed from default is enable upnp and port forward to my linux box","timestamp":1501778062618}
{"from":"pfrazee","message":"Im adding some extra debug()s to sort out why the dns tracker isnt kicking in","timestamp":1501778141291}
{"from":"pfrazee","message":"Im also going to add a way to capture the DEBUG logs automatically within the beaker ui","timestamp":1501778161016}
{"from":"cblgh","message":"is this connectivity issue beaker only or dat in general?","timestamp":1501778372382}
{"from":"pfrazee","message":"it appears to be beaker only","timestamp":1501778422360}
{"from":"mafintosh","message":"pfrazee: whats up?","timestamp":1501778750671}
{"from":"pfrazee","message":"mafintosh: nm I thought I found a bug but I didnt. Filed PR for more debug()s if you get a chance to publish","timestamp":1501778777815}
{"from":"mafintosh","message":"Ah ok","timestamp":1501778806744}
{"from":"mafintosh","message":"Driving towards civilization now. Will take a look","timestamp":1501778825758}
{"from":"pfrazee","message":"cool thanks","timestamp":1501778920097}
{"from":"creationix","message":"pfrazee: well, I have two issues. One is beaker and the other is CLI dat","timestamp":1501783524298}
{"from":"pfrazee","message":"creationix: just made grabbing the logs simpler https://github.com/beakerbrowser/beaker/pull/633","timestamp":1501784741832}
{"from":"creationix","message":"nice","timestamp":1501784812461}
{"from":"pfrazee","message":"creationix: soon as my PRs land that add more logging relevant to your issue, I'm going to ask you to try again from master","timestamp":1501785010825}
{"from":"creationix","message":"ok","timestamp":1501785065665}
{"from":"creationix","message":"though it was harder to reproduce from master","timestamp":1501785072787}
{"from":"pfrazee","message":"if that ends up being the case, I could produce a build for you, or we could wait for the next release","timestamp":1501785177523}
{"from":"creationix","message":"btw, this is how I update beaker from source https://gist.github.com/creationix/f247e4f7c9916904395e1171eb4478c6","timestamp":1501785218413}
{"from":"creationix","message":"not sure the `rm package-lock.json app/package-lock.json` line is a good idea though","timestamp":1501785234821}
{"from":"pfrazee","message":"probably doesnt matter that much tbh","timestamp":1501785270090}
{"from":"creationix","message":"yep, beaker from master connects to remote peers just fine at the moment","timestamp":1501785282265}
{"from":"jhand","message":"pfrazee: now just make it so you can share that page via dat =)","timestamp":1501785303220}
{"from":"pfrazee","message":"jhand: hah that'd be good except it's most useful when dat aint workin","timestamp":1501785325683}
{"from":"mafintosh","message":"jhand: haha, got you hooked on remote debugging","timestamp":1501785325968}
{"from":"jhand","message":"oh right","timestamp":1501785339311}
{"from":"mafintosh","message":"pfrazee: would still be useful","timestamp":1501785351464}
{"from":"pfrazee","message":"eh, someday later","timestamp":1501785380887}
{"from":"mafintosh","message":"just sync debug dats to a central endpoint","timestamp":1501785382140}
{"from":"jhand","message":"mafintosh: is there a way in hypercore to tell how many blocks you've requested to download?","timestamp":1501785384135}
{"from":"jhand","message":"this._selections I guess?","timestamp":1501785409295}
{"from":"mafintosh","message":"jhand: ohhh, good question. like how many inflight requests for blocks you have?","timestamp":1501785413978}
{"from":"jhand","message":"mafintosh: ya for the sparse downloading, so we can show progress/stats","timestamp":1501785426121}
{"from":"jhand","message":"does this make a big difference? https://github.com/mafintosh/hypercore/blob/master/index.js#L107","timestamp":1501785504303}
{"from":"jhand","message":"(lazy loading ./lib/replicate","timestamp":1501785516466}
{"from":"mafintosh","message":"jhand: yea actually","timestamp":1501785524824}
{"from":"mafintosh","message":"for use cases where you don't replicate obviously","timestamp":1501785538765}
{"from":"jhand","message":"huh cool.","timestamp":1501785544037}
{"from":"jhand","message":"right","timestamp":1501785544656}
{"from":"jhand","message":"I guess it made a big difference on cli stuff too","timestamp":1501785554677}
{"from":"jhand","message":"used some module that took like 5 seconds for --help the other day and it made me want to not use it. glad you told me to fix that =)","timestamp":1501785579951}
{"from":"mafintosh","message":"jhand: there is some tool that tells you what is heavy to load","timestamp":1501785602310}
{"from":"mafintosh","message":"can't remember right now what it is","timestamp":1501785607825}
{"from":"mafintosh","message":"maybe ogd or yoshuawuyts knows","timestamp":1501785613127}
{"from":"mafintosh","message":"jhand: each peer object has an inflightRequests array, https://github.com/mafintosh/hypercore/blob/master/lib/replicate.js#L53","timestamp":1501785636844}
{"from":"ogd","message":"https://github.com/maxogden/require-times","timestamp":1501785642964}
{"from":"jhand","message":"ya ive seen a few that check dep install times so I guess thats similar","timestamp":1501785645552}
{"from":"jhand","message":"oh nice","timestamp":1501785653055}
{"from":"mafintosh","message":"jhand: we can add a getter on the feed instance that adds the length of all those together","timestamp":1501785666759}
{"from":"jhand","message":"mafintosh: ah so thats all active requests. but I still want to know the total size of the requested downloads.","timestamp":1501785705512}
{"from":"creationix","message":"pfrazee: I'm thinking that instead of using service workers in beaker, I should use the dat API directly. So then I just need to re-implement the dat API in other browsers using service workers. I can setup servers that listen on websockets and give my https sites the ability to serve DATs through them?","timestamp":1501785767850}
{"from":"pfrazee","message":"creationix: yeah, I was talking about that with jondashkyle yesterday","timestamp":1501785790121}
{"from":"creationix","message":"because all I need is the ability to write a file in the browser and then read it back in the browser (and sync it with other machines)","timestamp":1501785792204}
{"from":"mafintosh","message":"jhand: like all selections basically","timestamp":1501785814511}
{"from":"mafintosh","message":"makes sense","timestamp":1501785815846}
{"from":"pfrazee","message":"creationix: I've been thinking about a dat-rpc interface in general, and that could be used over websockets","timestamp":1501785820061}
{"from":"jhand","message":"mafintosh: right","timestamp":1501785825239}
{"from":"mafintosh","message":"jhand: i don't think there is a public api for that, but usecase makes sense and it should be easy","timestamp":1501785830778}
{"from":"mafintosh","message":"jhand: so open an issue or pr for it","timestamp":1501785840326}
{"from":"jhand","message":"pfrazee: cool! if you do that then we can make it so you can send messages via chat.datproject.org","timestamp":1501785852180}
{"from":"mafintosh","message":"might be as easy as making ._selections -> .selections","timestamp":1501785855305}
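The getter mafintosh suggests is a simple sum over peers. A sketch against a mock feed object (the real hypercore peer objects carry an `inflightRequests` array, as linked above; the `feed` shape here is a stand-in for illustration):

```javascript
// Total inflight block requests across all connected peers.
// `feed` is a mock with the same shape as hypercore's peers array.
function totalInflight (feed) {
  let total = 0
  for (const peer of feed.peers) total += peer.inflightRequests.length
  return total
}
```

Exposing this as a getter would let the CLI show live download stats for sparse/selective syncs.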
{"from":"pfrazee","message":"jhand: yeah totally","timestamp":1501785858448}
{"from":"creationix","message":"pfrazee: any notes on this. I might just prototype it","timestamp":1501785860068}
{"from":"jhand","message":"mafintosh: ya I'll take a look","timestamp":1501785864936}
{"from":"pfrazee","message":"creationix: no notes, but the interface I'll want to match will be https://github.com/beakerbrowser/node-dat-archive","timestamp":1501785892464}
{"from":"creationix","message":"ahh yes, I saw that","timestamp":1501785915418}
{"from":"creationix","message":"so who owns the dat? The https site or the server?","timestamp":1501785930870}
{"from":"creationix","message":"I could see use cases for both","timestamp":1501785937833}
{"from":"pfrazee","message":"yeah I think you need boh","timestamp":1501785944351}
{"from":"pfrazee","message":"both","timestamp":1501785946053}
{"from":"creationix","message":"(but if the server owns the dat, it needs auth on the rpc)","timestamp":1501785949012}
{"from":"pfrazee","message":"yeah it does","timestamp":1501785955127}
{"from":"pfrazee","message":"the 2 usecases I have are: 1) fallback for old browsers, and 2) collaboration without using multiwriter","timestamp":1501785973375}
{"from":"creationix","message":"which my hardware could easily do (assuming the browser can reach it)","timestamp":1501785973480}
{"from":"pfrazee","message":"jondashkyle: hey try uploading to the lily account again on hashbase","timestamp":1501786379835}
{"from":"jondashkyle","message":"pfrazee: that's on my her machine, so will try syncing this eve!","timestamp":1501786588550}
{"from":"pfrazee","message":"jondashkyle: ok cool","timestamp":1501786597504}
{"from":"jondashkyle","message":"pfrazee: looks like it fixed it for IA though! https://informational-affairs-jkm.hashbase.io/","timestamp":1501786861254}
{"from":"pfrazee","message":"jondashkyle: ok cool","timestamp":1501786895294}
{"from":"yoshuawuyts","message":"mafintosh jhand: sorry, don't know it; it's https://github.com/hughsk/disc for frontend tho","timestamp":1501787041225}
{"from":"yoshuawuyts","message":"guess you could use it for Node code too, as it only resolves deps - no need to actually use the browserify bundle","timestamp":1501787066753}
{"from":"yoshuawuyts","message":"mafintosh: fixed a lil nit in ansi-diff-stream :D https://github.com/mafintosh/ansi-diff-stream/pull/3/files","timestamp":1501789569458}
{"from":"mafintosh","message":"yoshuawuyts: i'm feeling lazy. mind removing 0.10/0.12 from travis in that pr so it doesn't fail?","timestamp":1501789709120}
{"from":"yoshuawuyts","message":"mafintosh: on it","timestamp":1501789716956}
{"from":"yoshuawuyts","message":"\\o/","timestamp":1501789786029}
{"from":"mafintosh","message":"yoshuawuyts: reminds of this gist i wrote the other day, https://gist.github.com/mafintosh/59007a69bddebe9c59a2943780fb77bc","timestamp":1501789848043}
{"from":"mafintosh","message":"yoshuawuyts: for doing interactive inputs with ansi diff stream","timestamp":1501789858478}
{"from":"mafintosh","message":"need to put that in a module","timestamp":1501789864902}
{"from":"yoshuawuyts","message":"mafintosh: ohhh, nice","timestamp":1501789874486}
{"from":"yoshuawuyts","message":"mafintosh: might actually just use that for my module :D","timestamp":1501789884374}
{"from":"mafintosh","message":"yoshuawuyts: hmm, i think your pr broke something","timestamp":1501789919443}
{"from":"mafintosh","message":"running the example is messing things up","timestamp":1501789928034}
{"from":"yoshuawuyts","message":"mafintosh: oh no, what broke?","timestamp":1501789929833}
{"from":"mafintosh","message":"yoshuawuyts: try running example.js in the repo","timestamp":1501789963080}
{"from":"yoshuawuyts","message":"https://usercontent.irccloud-cdn.com/file/aSeOdMls/Screen%20Shot%202017-08-03%20at%2021.53.06.png","timestamp":1501790005700}
{"from":"yoshuawuyts","message":"isn't this what it's supposed to look like?","timestamp":1501790011463}
{"from":"mafintosh","message":"yoshuawuyts: no see the first line is moving up","timestamp":1501790026678}
{"from":"mafintosh","message":"and the random data one is duplicated","timestamp":1501790035782}
{"from":"mafintosh","message":"try reverting your fix and you'll see the correct output","timestamp":1501790050406}
{"from":"yoshuawuyts","message":"mafintosh: same output :/","timestamp":1501790083837}
{"from":"yoshuawuyts","message":"https://usercontent.irccloud-cdn.com/file/I4Mv8aii/Screen%20Shot%202017-08-03%20at%2021.55.04.png","timestamp":1501790123577}
{"from":"mafintosh","message":"yoshuawuyts: thats different than the first image","timestamp":1501790136458}
{"from":"mafintosh","message":"yoshuawuyts: see the line above \"random data\" in the first one","timestamp":1501790151390}
{"from":"yoshuawuyts","message":"ah!","timestamp":1501790169937}
{"from":"mafintosh","message":"yoshuawuyts: my guess is that the differ assumes every line to contain a newline","timestamp":1501790195963}
{"from":"mafintosh","message":"and therefore moved the cursor up too far","timestamp":1501790203811}
{"from":"mafintosh","message":"yoshuawuyts: cool :)","timestamp":1501790401076}
{"from":"mafintosh","message":"i like the fix tho, so would be cool to fix that bug","timestamp":1501790414436}
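The off-by-one mafintosh suspects is easy to show in isolation: a naive `split('\n').length` over-counts terminal rows whenever the output ends with a newline. A sketch of the discrepancy (pure helpers for illustration, not ansi-diff-stream's actual code):

```javascript
// Correct row count: a trailing newline does not add a visible row.
function countRows (str) {
  const lines = str.split('\n')
  if (lines[lines.length - 1] === '') lines.pop()
  return lines.length
}

// Naive count a differ might use, implicitly assuming every line ends
// with '\n'; this disagrees with countRows by one for trailing newlines,
// which would make the cursor move up one row too far.
function naiveCountRows (str) {
  return str.split('\n').length
}
```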
{"from":"yoshuawuyts","message":":D","timestamp":1501790422159}
{"from":"yoshuawuyts","message":"mafintosh: oh, do you mind if I s/Buffer()/Buffer.from()/ - standard is filling half my screen with warnings haha","timestamp":1501790442699}
{"from":"yoshuawuyts","message":"~~ scope creep ~~","timestamp":1501790464039}
{"from":"mafintosh","message":"npm test","timestamp":1501790687978}
{"from":"mafintosh","message":"uses old standard","timestamp":1501790690990}
{"from":"mafintosh","message":"muhahaha","timestamp":1501790692445}
{"from":"yoshuawuyts","message":"lmao","timestamp":1501790695981}
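The standard warnings yoshuawuyts mentions come from the deprecated `new Buffer()` constructor; modern Node replaces it with `Buffer.from` and `Buffer.alloc`:

```javascript
// Replacements for the deprecated Buffer() constructor:
const b1 = Buffer.from('hello')       // from a string
const b2 = Buffer.from([0x68, 0x69])  // from raw bytes
const b3 = Buffer.alloc(4)            // zero-filled, replaces new Buffer(4)
```

`Buffer.alloc` is zero-filled by design, avoiding the uninitialized-memory exposure that made `new Buffer(size)` unsafe.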
{"from":"mafintosh","message":"yoshuawuyts: but you can change it if you want, once you fix the bug","timestamp":1501791532884}
{"from":"mafintosh","message":"#motivation","timestamp":1501791540362}
{"from":"yoshuawuyts","message":"lmao","timestamp":1501791542237}
{"from":"yoshuawuyts","message":"yeah, was about to say: been at it for 30 mins now - kinda feeling it might not be worth it haha","timestamp":1501791560044}
{"from":"mafintosh","message":"yoshuawuyts: make an issue for it","timestamp":1501791579763}
{"from":"mafintosh","message":"yoshuawuyts: then i'll revert and take a stab at it at some point :)","timestamp":1501791592752}
{"from":"yoshuawuyts","message":"mafintosh: :D","timestamp":1501791601944}
{"from":"yoshuawuyts","message":"mafintosh: https://github.com/mafintosh/ansi-diff-stream/issues/4","timestamp":1501791663496}
{"from":"yoshuawuyts","message":"sorry I couldn't get it to work","timestamp":1501791678054}
{"from":"mafintosh","message":"haha, no worries","timestamp":1501791790823}
{"from":"mafintosh","message":"i think it might be non trivial in my code base honestly","timestamp":1501791799692}
{"from":"mafintosh","message":"and its hard to test","timestamp":1501791804121}
{"from":"mafintosh","message":"yoshuawuyts: need to turn my gist into a module tho","timestamp":1501791814826}
{"from":"mafintosh","message":"like prompt-live or something","timestamp":1501791833360}
{"from":"yoshuawuyts","message":"mafintosh: would be neat","timestamp":1501791908073}
{"from":"yoshuawuyts","message":"https://usercontent.irccloud-cdn.com/file/wYcjIM7l/Screen%20Shot%202017-08-03%20at%2022.25.35.png","timestamp":1501791956206}
{"from":"yoshuawuyts","message":"^ WIP screenshot from new bankai CLI I'm building","timestamp":1501791964308}
{"from":"yoshuawuyts","message":"want to have live keys to navigate between the different files, and drop down for more details","timestamp":1501791995517}
{"from":"yoshuawuyts","message":"prompt-live would be neat for that :D","timestamp":1501792006714}
{"from":"mafintosh","message":"oh perfect","timestamp":1501792047122}
{"from":"mafintosh","message":"i need to figure out if you can draw the cursor yourself somehow","timestamp":1501792059433}
{"from":"mafintosh","message":"i guess you can use ansi to move it","timestamp":1501792070474}
{"from":"pfrazee","message":"yoshuawuyts: that's hot","timestamp":1501792193057}
{"from":"yoshuawuyts","message":"mafintosh: yeah that'd be cool C:","timestamp":1501792319641}
{"from":"yoshuawuyts","message":"pfrazee: :D yay, glad you like it - collaborating with louisc and shibacomputer","timestamp":1501792329915}
{"from":"pfrazee","message":"nice","timestamp":1501792350930}
{"from":"karissa","message":"jhand: ok selective-sync pr updated with your suggestions, and now it works with pull too. i added a little 'warnings' ui that persists even after exiting. wondering what you think","timestamp":1501793333144}
{"from":"jhand","message":"karissa: awesome!! i'll check it out","timestamp":1501793360151}
{"from":"karissa","message":"jhand: i think the only thing now is being able to tell the user when it's done syncing.","timestamp":1501793360396}
{"from":"jhand","message":"karissa: right. opened issue for us to start tracking that https://github.com/datproject/dat-node/issues/166","timestamp":1501793384809}
{"from":"dat-gitter","message":"(scriptjs) @karissa Good work on this. It’s going to be great to have.","timestamp":1501793472156}
{"from":"karissa","message":"scriptjs thanks!","timestamp":1501793538662}
{"from":"karissa","message":"jhand: i am gonna squash","timestamp":1501793544091}
{"from":"dat-gitter","message":"(scriptjs) Has anyone compiled a 32bit node for android?","timestamp":1501793838799}
{"from":"mafintosh","message":"@scriptjs no but it shouldn't be hard","timestamp":1501793936423}
{"from":"mafintosh","message":"If you have a 32 bit phone or can figure out how to cross compile","timestamp":1501793966820}
{"from":"louisc","message":"yoshuawuyts: nice! 😍","timestamp":1501794057111}
{"from":"dat-gitter","message":"(scriptjs) @mafintosh Yup. I saw the write up. Will try this later tonight.","timestamp":1501794185231}
{"from":"bret","message":"yoshuawuyts: are you all cowering in Berlin?","timestamp":1501794787533}
{"from":"yoshuawuyts","message":"bret: everybody's moving here :D","timestamp":1501796326009}
{"from":"bret","message":"mafintosh: I keep getting empty responses from http://node-modules.com/search?q=querystring when I'm logged in","timestamp":1501802421641}
{"from":"ogd","message":"looking into putting http://help.nla.gov.au/trove/building-with-trove/api on dat","timestamp":1501814327501}
{"from":"ogd","message":"bret: have you tried restarting your router","timestamp":1501814729798}
{"from":"bret","message":"ogd: no why?","timestamp":1501814775882}
{"from":"ogd","message":"bret: lol i was joking","timestamp":1501814790495}
{"from":"bret","message":"oh I said something before that","timestamp":1501816259974}
{"from":"bret","message":"😬","timestamp":1501816265619}
{"from":"dat-gitter","message":"(e-e-e) ogd: I have been having similar thoughts about trove. Was planning on implementing it in dat-library to fetch optional metadata associated with texts. What are your thoughts on transferring it to a dat?","timestamp":1501821017208}
{"from":"dat-gitter","message":"(e-e-e) I have used trove a lot in the past - it is an amazing resource.","timestamp":1501821058265}
{"from":"shibacomputer","message":"I really want to sink my teeth into some code today but the weather is sehr gut today ;_;","timestamp":1501829566709}
{"from":"TheLink","message":"Oy https://mobile.twitter.com/ebidel/status/893250174083923969","timestamp":1501837641485}
{"from":"barnie","message":"dat team, i created an issue on improving documentation procedure: https://github.com/datproject/discussions/issues/73","timestamp":1501839563451}
{"from":"barnie","message":"latest/greatest awesome-dat PR: https://github.com/aschrijver/awesome-dat/blob/fresh/awesome/readme.md","timestamp":1501840163679}
{"from":"ralphtheninja[m]","message":"greetings from holland","timestamp":1501840233507}
{"from":"barnie","message":"hee, i am also from holland. Nog net goedemorgen!","timestamp":1501840737453}
{"from":"ralphtheninja[m]","message":"I'm not from holland, I'm just here :)","timestamp":1501841343479}
{"from":"barnie","message":"aha, amsterdam?","timestamp":1501841376053}
{"from":"ralphtheninja[m]","message":"nijkerk","timestamp":1501841442510}
{"from":"barnie","message":"cool. enjoy your stay, man!!","timestamp":1501841455728}
{"from":"ralphtheninja[m]","message":"thanks! I am enjoying it :)","timestamp":1501841485252}
{"from":"ralphtheninja[m]","message":"lots of wind and clouds but not cold","timestamp":1501841497890}
{"from":"barnie","message":"nijkerk vs amsterdam, no pot, red-lights, sex museum and wild hordes of tourists :D","timestamp":1501841522670}
{"from":"barnie","message":"well, maybe pot","timestamp":1501841537251}
{"from":"ralphtheninja[m]","message":"we're about 3000 hackers here","timestamp":1501841568865}
{"from":"barnie","message":"cooool, shao2017, forgot about it. jeez, 5 days is it?","timestamp":1501841645724}
{"from":"ralphtheninja[m]","message":"yep!","timestamp":1501841655809}
{"from":"barnie","message":"interesting stuff going on i saw on the sessions list","timestamp":1501841671788}
{"from":"ralphtheninja[m]","message":"not really about dat, sorry about off topic","timestamp":1501841677610}
{"from":"ralphtheninja[m]","message":"indeed, a lot of things to discover .. will try to promote dat at the event","timestamp":1501841714065}
{"from":"barnie","message":"yeah!!","timestamp":1501841721851}
{"from":"ralphtheninja[m]","message":"people often copy data between each other, using usb sticks / hard drives and what not, can use just dat instead","timestamp":1501841756230}
{"from":"barnie","message":"yea, could create a perfect app for such conferences, once stuff runs on android","timestamp":1501841797382}
{"from":"barnie","message":"perfectly in theme","timestamp":1501841815396}
{"from":"barnie","message":"test the security in the process, he he","timestamp":1501841857242}
{"from":"ralphtheninja[m]","message":"hehe aye :)","timestamp":1501842162375}
{"from":"barnie","message":"could scan QR codes from paper, leading to small beaker browser interactive apps","timestamp":1501842201331}
{"from":"barnie","message":"besides just automatically streaming in the sessions","timestamp":1501842222799}
{"from":"barnie","message":"so any participant can create their own presentation, print out a bunch of QR cards and spread them around","timestamp":1501842357234}
{"from":"ralphtheninja[m]","message":"can also use the badges for that, they have wifi and bluetooth","timestamp":1501842414215}
{"from":"ralphtheninja[m]","message":"also run dat on the badges would be nice (micropython)","timestamp":1501842424734}
{"from":"barnie","message":"can have nice branding on cards, themes (magic, mysterious)","timestamp":1501842428679}
{"from":"barnie","message":"yes","timestamp":1501842432580}
{"from":"barnie","message":"cool","timestamp":1501842439743}
{"from":"ralphtheninja[m]","message":"barnie's marketing brain started again :P","timestamp":1501842491542}
{"from":"barnie","message":"i'll stop here, ha ha","timestamp":1501842501829}
{"from":"barnie","message":"i gotta go. enjoy your hacking!","timestamp":1501842647541}
{"from":"ralphtheninja[m]","message":"cc mafintosh","timestamp":1501842737363}
{"from":"ralphtheninja[m]","message":"I heard someone put gameboy on it yesterday","timestamp":1501842817389}
{"from":"ralphtheninja[m]","message":"ok, enough off topic :)","timestamp":1501842821592}
{"from":"barnie","message":"ralphtheninja[m]: wow, great badges!","timestamp":1501845830155}
{"from":"pfrazee_","message":"ogd: hey could you merge and publish https://github.com/maxogden/discovery-channel/pull/16 for me today? I want to put out a new beaker today to do some debugging","timestamp":1501858672095}
{"from":"pfrazee_","message":"mafintosh: same story with https://github.com/mafintosh/dns-discovery/pull/8","timestamp":1501858679235}
{"from":"mafintosh","message":"pfrazee_: will do after i get my laptop to work again","timestamp":1501861166797}
{"from":"pfrazee_","message":"mafintosh: thanks","timestamp":1501861179114}
{"from":"pfrazee_","message":"did it break?","timestamp":1501861182757}
{"from":"mafintosh","message":"pfrazee_: it's drying out after i spilled some milk in it hehe","timestamp":1501861195435}
{"from":"pfrazee_","message":"haha","timestamp":1501861201569}
{"from":"barnie","message":"jhand: hi! i improved the awesome per your feedback: https://github.com/aschrijver/awesome-dat/blob/fresh/awesome/readme.md","timestamp":1501862504039}
{"from":"barnie","message":"and a proposal for handling docs https://github.com/datproject/discussions/issues/73","timestamp":1501862784201}
{"from":"ogd","message":"@e-e-e: i found http://api.trove.nla.gov.au/result?q=lastupdated:[*%20TO%20*]&zone=all&encoding=json which is the only way i can find to get all metadata in one paginated query, was gonna start downloading that","timestamp":1501865709742}
{"from":"pfrazee_","message":"ogd: thanks for the merge","timestamp":1501865951464}
{"from":"cblgh","message":"api.trove is a good subdomain","timestamp":1501866447362}
{"from":"cel","message":"fg","timestamp":1501868046487}
{"from":"jhand","message":"karissa: awesome to see the video preview working, nice! https://master.datproject.org/dat://2e56732218ae0792f026ae9769e7de72a83449963952b771dbacde22ee58cb8e/contents/selective-sync-demo.mp4","timestamp":1501870806124}
{"from":"jhand","message":"dat cli v13.8.0 released! Adds selective syncing and some basic key management: https://github.com/datproject/dat/releases/tag/v13.8.0","timestamp":1501871739967}
{"from":"karissa","message":"jhand: makes me think it'd be nice to have a way to do this in one command","timestamp":1501872120200}
{"from":"jhand","message":"karissa: the selective sync?","timestamp":1501872137867}
{"from":"karissa","message":"yeah","timestamp":1501872214469}
{"from":"karissa","message":"jhand: maybe it should be in the clone","timestamp":1501872219442}
{"from":"karissa","message":"too","timestamp":1501872227516}
{"from":"jhand","message":"karissa: ya that'd be cool. we can do that along with `dat clone dat://<key>/sub-dir`.","timestamp":1501872277691}
{"from":"karissa","message":"yeah","timestamp":1501872284037}
{"from":"karissa","message":"the people from the sky survey wanted to be able to specify multiple paths in the subset","timestamp":1501872304596}
{"from":"jhand","message":"yea makes sense.","timestamp":1501872337509}
{"from":"pfrazee_","message":"jhand: nice. We just released beaker 0.7.5","timestamp":1501872738052}
{"from":"jhand","message":"pfrazee_: yay release day!","timestamp":1501872745441}
{"from":"pfrazee_","message":"jhand: huzzah!","timestamp":1501872775438}
{"from":"mafintosh","message":"https://usercontent.irccloud-cdn.com/file/vQdBGvuw/irccloudcapture898422029.jpg","timestamp":1501872784929}
{"from":"mafintosh","message":"yoshuawuyts: o/ after a fatal milk accident my laptop looks like yours now","timestamp":1501872797768}
{"from":"bret","message":"doh","timestamp":1501873138037}
{"from":"bret","message":"you spilled milk on your keyboard?","timestamp":1501873159045}
{"from":"mafintosh","message":"milk tea ya","timestamp":1501873361879}
{"from":"mafintosh","message":"it still kinda works tho","timestamp":1501873366104}
{"from":"mafintosh","message":"you just have to punch the keys","timestamp":1501873376594}
{"from":"mafintosh","message":"i just ran the hypercore benchmark on noffle's xps 13 and it was 3x my macbook","timestamp":1501873507868}
{"from":"mafintosh","message":"perf wise","timestamp":1501873512709}
{"from":"mafintosh","message":"so i'm thinking about buying that one as a replacement","timestamp":1501873532228}
{"from":"bret","message":"mafintosh: I recommend the Kabby lake MacBook pros","timestamp":1501875732545}
{"from":"bret","message":"you can try mine out if you want","timestamp":1501875740809}
{"from":"mafintosh","message":"bret: the pro is too big for me","timestamp":1501875748355}
{"from":"bret","message":"13 inch","timestamp":1501875753747}
{"from":"mafintosh","message":":)","timestamp":1501875753853}
{"from":"bret","message":"its really small","timestamp":1501875762187}
{"from":"mafintosh","message":"hmm ok","timestamp":1501875782562}
{"from":"mafintosh","message":"(i'll prob get a linux one)","timestamp":1501875790859}
{"from":"mafintosh","message":"noffle has convinced me","timestamp":1501875797063}
{"from":"bret","message":"the new MacBook pro is like the MacBook basically","timestamp":1501875800037}
{"from":"bret","message":"mafintosh: run arch!","timestamp":1501875813257}
{"from":"bret","message":"you're gonna hate the desktop environment","timestamp":1501875828596}
{"from":"mafintosh","message":"i only use the terminal and a browser anyway","timestamp":1501875832853}
{"from":"mafintosh","message":"and the fully spec'ed xps is 300$ cheaper than the 12'' macbook","timestamp":1501875866374}
{"from":"bret","message":"all the 1337 hackers use https://i3wm.org/screenshots/ but I gnome was okay","timestamp":1501875892497}
{"from":"bret","message":"just way buggier than OS X ime","timestamp":1501875905157}
{"from":"bret","message":"I thought*","timestamp":1501875914121}
{"from":"bret","message":"I just can't live without my http://osxdaily.com/2010/03/04/compare-two-files-with-filemerge/ or https://rowanj.github.io/gitx/","timestamp":1501875952628}
{"from":"creationix","message":"have you all heard about chronicle? Sounds a lot like dat, but from the PHP world https://paragonie.com/blog/2017/07/chronicle-will-make-you-question-need-for-blockchain-technology","timestamp":1501876502706}
{"from":"creationix","message":"slightly different use case too","timestamp":1501876513787}
{"from":"pfrazee_","message":"creationix: that's a lot like what Im suggesting with the node smart contract vm","timestamp":1501878801242}
{"from":"creationix","message":"their \"blakechain\" is different than hypercore's sideways tree. It's a lot simpler, but I don't think you can get random access *and* verify hashes without first verifying all parent nodes.","timestamp":1501878869743}
{"from":"creationix","message":"but it sounds like they only intend to store metadata in the chain and keep the bulk of data externally","timestamp":1501878902698}
{"from":"pfrazee_","message":"creationix: correct","timestamp":1501879000419}
{"from":"creationix","message":"so with hypercore, I need to fetch up to log(n) nodes to verify the hash of any given random node?","timestamp":1501879142314}
{"from":"mafintosh","message":"*up to","timestamp":1501879153787}
{"from":"creationix","message":"but most those are not leaf nodes and thus very small?","timestamp":1501879154189}
{"from":"mafintosh","message":"they are all small","timestamp":1501879162558}
{"from":"mafintosh","message":"just a hash + size for each","timestamp":1501879168028}
{"from":"mafintosh","message":"so up to 40 bytes each basically","timestamp":1501879181980}
{"from":"pfrazee_","message":"Im going to write up that smart contract PoC this weekend","timestamp":1501879194708}
{"from":"creationix","message":"pfrazee_: cool, I'm designing stuff today, will be prototyping next week","timestamp":1501879221516}
{"from":"pfrazee_","message":"creationix: cool","timestamp":1501879230019}
{"from":"pfrazee_","message":"I'm in a meeting right now for a 4-6 week 20/hr week contract Im taking :|","timestamp":1501879245232}
{"from":"creationix","message":"I've got a greenfield project to work on in the next month, mostly file sharing and syncing in browsers","timestamp":1501879248691}
{"from":"pfrazee_","message":"looking forward to being capitalized","timestamp":1501879283661}
{"from":"creationix","message":"mafintosh: there is enough detail in the dat white-paper to reimplement hypercore now right? And the only crypto primitives I need are blake2b and ed25519 signatures?","timestamp":1501879332904}
{"from":"mafintosh","message":"creationix: i'd say so yea","timestamp":1501879372157}
{"from":"mafintosh","message":"plus asking questions here","timestamp":1501879379281}
{"from":"creationix","message":"I really want a luvit server that can speak and verify hypercore","timestamp":1501879383410}
{"from":"creationix","message":"are you back in Europe time zone?","timestamp":1501879389851}
{"from":"creationix","message":"no utp or dht for now, I'll just manually maintain websocket connections to known servers","timestamp":1501879422516}
{"from":"mafintosh","message":"creationix: i'm in the us","timestamp":1501879429192}
{"from":"mafintosh","message":"for the next 1-2 weeks","timestamp":1501879434210}
{"from":"mafintosh","message":"ffs, cannot buy a xps in the states as a company expense","timestamp":1501879452986}
{"from":"mafintosh","message":"cause dell only deals with us credit cards","timestamp":1501879460975}
{"from":"creationix","message":"basically I want to upload files to a browser, chunk and hash them there (basically create the hypercore log manually or use browserify dat) and then sync to other browsers using a public server as relay","timestamp":1501879471359}
{"from":"mafintosh","message":"creationix: sounds good","timestamp":1501879531466}
{"from":"creationix","message":"mafintosh: :(. I'd sell you mine, but not sure that's going to be easier","timestamp":1501879542300}
{"from":"mafintosh","message":"creationix: so you wanna use hypercore from lua basically?","timestamp":1501879542406}
{"from":"creationix","message":"eventually I want a lua server that can act as a dat peer and act as a relay for browsers","timestamp":1501879565965}
{"from":"creationix","message":"but at a network level, it only needs to know hypercore, not hyperdrive if I understand it right","timestamp":1501879584961}
{"from":"mafintosh","message":"correct","timestamp":1501879592089}
{"from":"creationix","message":"also I want a smart-contract thing like pfrazee_ in lua","timestamp":1501879611079}
{"from":"mafintosh","message":"and yes you only need ed25519 and blake2b","timestamp":1501879619165}
{"from":"pfrazee_","message":"I mean, creationix Im not against a reimplementation, but I do have to ask why","timestamp":1501879641869}
{"from":"mafintosh","message":"creationix: most of the time consuming parts of the protocol (impl wise) are optimisations","timestamp":1501879652687}
{"from":"mafintosh","message":"creationix: which you can add later","timestamp":1501879657373}
{"from":"pfrazee_","message":"why not just use the js stuff? what's the upside of a lua version?","timestamp":1501879659714}
{"from":"creationix","message":"node is pretty heavy on our hardware","timestamp":1501879668908}
{"from":"creationix","message":"V8 was not designed for embedded","timestamp":1501879675509}
{"from":"creationix","message":"luajit uses 20x less memory and starts up instantly","timestamp":1501879687176}
{"from":"creationix","message":"node's repl takes *seconds* to start up sometimes on arm","timestamp":1501879699077}
{"from":"creationix","message":"also I like coroutines","timestamp":1501879710196}
{"from":"creationix","message":":P","timestamp":1501879712093}
{"from":"pfrazee_","message":"any chance you could do your implementation in C?","timestamp":1501879733701}
{"from":"creationix","message":"maybe later after I understand it better","timestamp":1501879799806}
{"from":"mafintosh","message":"i wish i had time to do a c impl :)","timestamp":1501879984821}
{"from":"mafintosh","message":"sounds like a lot of fun","timestamp":1501879989478}
{"from":"creationix","message":"https://usercontent.irccloud-cdn.com/file/6Ef6QwXe/typo%20in%20whitepaper%3F","timestamp":1501880078441}
{"from":"creationix","message":"missing 7 and hash for 9?^","timestamp":1501880097086}
{"from":"mafintosh","message":"creationix: hash 7 is the parent of 3 and 11","timestamp":1501880220250}
{"from":"mafintosh","message":"creationix: so not created yet","timestamp":1501880227236}
{"from":"creationix","message":"so the nodes aren't created in order? How does that work","timestamp":1501880247437}
{"from":"mafintosh","message":"9 is 8 + 10 (looks like a typo)","timestamp":1501880249101}
{"from":"ogd","message":"oops","timestamp":1501880254071}
{"from":"mafintosh","message":"creationix: the number is just their id","timestamp":1501880273098}
{"from":"ogd","message":"i merely transcribed mafintosh's stream of consciousness and therefore typos cannot be attributed to me :D","timestamp":1501880276033}
{"from":"creationix","message":"lol","timestamp":1501880282854}
{"from":"mafintosh","message":"creationix: all even ones are leafs","timestamp":1501880285745}
{"from":"creationix","message":"mafintosh: ok, so they aren't written in order, that's just their logical id. Makes sense","timestamp":1501880307159}
{"from":"mafintosh","message":"creationix: the odd ones are parents that are filled in as their children become available","timestamp":1501880309862}
{"from":"mafintosh","message":"creationix: yea exactly. and they are all fixed sizes (leaf data is stored else where)","timestamp":1501880326962}
{"from":"creationix","message":"I see","timestamp":1501880349190}
{"from":"mafintosh","message":"creationix: so writing them to a file is easy (offset = 40 * index)","timestamp":1501880350506}
{"from":"creationix","message":"oh, so you just leave holes in the file for not-yet created nodes then?","timestamp":1501880367701}
{"from":"mafintosh","message":"yea","timestamp":1501880372020}
{"from":"mafintosh","message":"all modern file systems have sparse files so the overhead for this is small","timestamp":1501880390353}
{"from":"mafintosh","message":"and makes the impl a lot easier","timestamp":1501880395689}
{"from":"mafintosh","message":"creationix: here is my js impl for the id scheme, https://github.com/mafintosh/flat-tree","timestamp":1501880436863}
{"from":"mafintosh","message":"you basically just count trailing 1s in binary notation to get the depth","timestamp":1501880457148}
{"from":"creationix","message":"yep, I've looked at that before","timestamp":1501880469858}
{"from":"mafintosh","message":"and the rest of the number is the breadth","timestamp":1501880471959}
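A tiny sketch of the id scheme just described, plus the "up to log(n)" verification path it enables. Helper names here are made up for illustration; the real module is mafintosh/flat-tree:

```javascript
// depth = number of trailing 1-bits in the id; the remaining high
// bits are the breadth (flat-tree calls it the offset) at that depth.
function depth (id) {
  let d = 0
  while (id & 1) { d++; id >>= 1 }
  return d
}

function breadth (id) {
  return id >> (depth(id) + 1)
}

function toId (d, o) {
  return (2 * o + 1) * (1 << d) - 1 // inverse of depth/breadth
}

// Sibling ids needed to verify one leaf against a signed root: walk
// up, collecting the "uncle" at each level. This is where the
// "up to log(n) nodes" figure mentioned earlier comes from.
function unclePath (leaf, rootDepth) {
  const ids = []
  let d = depth(leaf)
  let o = breadth(leaf)
  while (d < rootDepth) {
    ids.push(toId(d, o ^ 1))
    o >>= 1
    d++
  }
  return ids
}

console.log(depth(4), depth(3)) // 0 2: even ids are leaves, 3 is a depth-2 root
console.log(unclePath(4, 2))    // [ 6, 1 ]: verify leaf 4 against root 3
```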
{"from":"creationix","message":"where do signatures live?","timestamp":1501880508533}
{"from":"creationix","message":"I guess I should keep reading till I get to the sleep section","timestamp":1501880519205}
{"from":"mafintosh","message":"creationix: is a different file","timestamp":1501880677327}
{"from":"creationix","message":"too bad browsers don't have good random-access binary storage","timestamp":1501880702863}
{"from":"mafintosh","message":"creationix: so any leaf node can have a signature attached to it","timestamp":1501880703121}
{"from":"mafintosh","message":"creationix: ya, i'm lobbying for that :)","timestamp":1501880720064}
{"from":"mafintosh","message":"and all signatures are also fixed size","timestamp":1501880737819}
{"from":"creationix","message":"now I don't *have* to use sleep format as long as my network protocol is the same","timestamp":1501880740601}
{"from":"mafintosh","message":"also true","timestamp":1501880747046}
{"from":"mafintosh","message":"but the format is straight forward to implment","timestamp":1501880763740}
{"from":"creationix","message":"yep, and a SLEEP client using HTTP range requests doesn't look too hard","timestamp":1501880794694}
{"from":"mafintosh","message":"except for the bitfield probably (could be simpler i think)","timestamp":1501880795067}
{"from":"mafintosh","message":"not at all","timestamp":1501880802015}
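The range-request idea is easy to sketch. Assuming the SLEEP layout from the dat paper (a 32-byte file header followed by fixed 40-byte tree nodes, i.e. 32-byte hash + 8-byte length each), the Range header for one node would be:

```javascript
// Byte range of tree node `index` in a SLEEP tree file served over
// HTTP: skip the 32-byte file header, then 40 bytes per node.
function treeNodeRange (index) {
  const start = 32 + 40 * index
  const end = start + 40 - 1 // HTTP Range end is inclusive
  return 'bytes=' + start + '-' + end
}

console.log(treeNodeRange(0)) // bytes=32-71
console.log(treeNodeRange(3)) // bytes=152-191
```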
{"from":"creationix","message":"so currently dat has two hypercore streams, but isn't the new design based on hyperdb just one stream?","timestamp":1501880901103}
{"from":"mafintosh","message":"one per writer yes","timestamp":1501880923338}
{"from":"mafintosh","message":"but that would be one of the streams in dat","timestamp":1501880931967}
{"from":"mafintosh","message":"so one (a hyperdb) that references file content stored in the otherone","timestamp":1501880945832}
{"from":"mafintosh","message":"so you can replicate the hyperdb completely without getting the file content","timestamp":1501880964425}
{"from":"creationix","message":"hmm","timestamp":1501881119029}
{"from":"creationix","message":"so in the wire protocol, the varint length header includes the 1 byte for the type and channel header?","timestamp":1501881203245}
{"from":"creationix","message":"(assuming the 5 bits encodes as 1 byte in the varint encoding)","timestamp":1501881222399}
{"from":"mafintosh","message":"yea","timestamp":1501881236503}
{"from":"mafintosh","message":"a wire message looks like this: <varint-length>(<varint-header><binary-payload>)","timestamp":1501881294296}
{"from":"mafintosh","message":"the varint-length is the combined length of the header and payload","timestamp":1501881309107}
{"from":"creationix","message":"and blake2b hashes are always 32-bytes right?","timestamp":1501881311113}
{"from":"mafintosh","message":"ya","timestamp":1501881314796}
{"from":"creationix","message":"cool","timestamp":1501881320003}
{"from":"mafintosh","message":"the protocol could easily be made to work with any other hash size as long as it is fixed though","timestamp":1501881344767}
{"from":"mafintosh","message":"but for now we just support 32byte blake2b","timestamp":1501881352547}
{"from":"creationix","message":"this doesn't look too bad. I've implemented much nastier protocols (like git packfile streams!)","timestamp":1501881371001}
{"from":"mafintosh","message":"<varint-header> is channel-id << 4 | payload-type","timestamp":1501881371194}
{"from":"mafintosh","message":"so payload-type is a 4 bit number","timestamp":1501881387315}
{"from":"creationix","message":"yep, got that part","timestamp":1501881393615}
{"from":"mafintosh","message":":)","timestamp":1501881397832}
{"from":"mafintosh","message":"creationix: oh! we use xsalsa20 for encrypting the wire stream","timestamp":1501881420556}
{"from":"mafintosh","message":"forgot about that part","timestamp":1501881425012}
{"from":"mafintosh","message":"in the crypto","timestamp":1501881428691}
{"from":"mafintosh","message":"all of those are in sodium","timestamp":1501881442797}
{"from":"creationix","message":"in git packfiles, each object has a length header with a weird non-standard varint encoding. But the catch is the length is the *uncompressed* length and the data is included inline as raw deflate!","timestamp":1501881444655}
{"from":"mafintosh","message":"haha","timestamp":1501881456298}
{"from":"mafintosh","message":"but ... why??","timestamp":1501881459844}
{"from":"creationix","message":"so the only way to know how much to read is to run the stream through a inflate state machine","timestamp":1501881460855}
{"from":"creationix","message":"so where does the xsalsa20 encryption fit into the stream?","timestamp":1501881802942}
{"from":"creationix","message":"Surely there is some handshake first for the key exchange","timestamp":1501881818636}
{"from":"creationix","message":"or not perhaps, we do have the dat key itself","timestamp":1501881839793}
{"from":"creationix","message":"I see the nonce in the Feed message, but nowhere does it document how the encryption works","timestamp":1501881927692}
{"from":"mafintosh","message":"oh maybe that was added later sorry","timestamp":1501882064517}
{"from":"mafintosh","message":"note to self to add that","timestamp":1501882068206}
{"from":"mafintosh","message":"creationix: the first message sent is the nonce","timestamp":1501882079966}
{"from":"mafintosh","message":"creationix: all data after is encrypted using xsalsa20 with the dat key as key and the nonce as nonce","timestamp":1501882118046}
{"from":"creationix","message":"cool, I figured it was something like that","timestamp":1501882130917}
{"from":"creationix","message":"how does xsalsa20 work with unordered udp packets? I guess it's not a stream cipher?","timestamp":1501882157778}
{"from":"creationix","message":"or do you need an ordered transport?","timestamp":1501882172352}
{"from":"creationix","message":"yep, a stream cipher. But utp is ordered and reliable so I guess it works","timestamp":1501882295861}
{"from":"mafintosh","message":"yea its a stream cipher","timestamp":1501882409926}
{"from":"mafintosh","message":"the protocol assumes an ordered stream","timestamp":1501882422418}
{"from":"creationix","message":"ok, so if I connect manually to a dat peer using utp, I can start speaking this protocol?","timestamp":1501882479096}
{"from":"mafintosh","message":"yup","timestamp":1501882495811}
{"from":"mafintosh","message":"or utp","timestamp":1501882497262}
{"from":"mafintosh","message":"tcp","timestamp":1501882499699}
{"from":"creationix","message":"I wonder how I'll get the port. I'd rather avoid the DHT for now","timestamp":1501882515167}
{"from":"creationix","message":"mdns maybe?","timestamp":1501882518033}
{"from":"creationix","message":"I can test over loopback or on a lan","timestamp":1501882531632}
{"from":"mafintosh","message":"creationix: it tries to bind to 3282 first","timestamp":1501882563703}
{"from":"mafintosh","message":"so if you have nothing running on that you should be able to localhost:3282","timestamp":1501882576458}
{"from":"creationix","message":"I see, that should work for testing","timestamp":1501882586358}
{"from":"creationix","message":"and it listens on both TCP and UDP (UTP)?","timestamp":1501882597367}
{"from":"noffle","message":"mafintosh, Q: with hyperdb, will replication require a full round-trip on each hypercore feed? or is there some way to batch them together?","timestamp":1501882651889}
{"from":"noffle","message":"a hyperdb with 100 hypercore feeds hopefuly won't need 100+ RTs","timestamp":1501882675552}
{"from":"mafintosh","message":"noffle: they are done in parallel","timestamp":1501882675987}
{"from":"noffle","message":"N connections?","timestamp":1501882707698}
{"from":"mafintosh","message":"noffle: the protocol uses the same stream","timestamp":1501882720663}
{"from":"mafintosh","message":"noffle: and technically you only need a round trip for each fork! :)","timestamp":1501882762267}
{"from":"mafintosh","message":"but the current one does one for each log as that was easier to impl","timestamp":1501882783077}
{"from":"mafintosh","message":"noffle: so version 1 does 1 rt to get the head of all hypercores","timestamp":1501882870937}
{"from":"mafintosh","message":"similar to ssb actually","timestamp":1501882878511}
{"from":"mafintosh","message":"but it has data to do it hyperlog style","timestamp":1501882888165}
{"from":"mafintosh","message":"where it only has to fetch the latest updated","timestamp":1501882900082}
{"from":"mafintosh","message":"which is much much better for 10000s of peers","timestamp":1501882909379}
{"from":"creationix","message":"mafintosh: does this stream speak the network wire protocol? https://github.com/mafintosh/hypercore#var-stream--feedreplicateoptions","timestamp":1501884519941}
{"from":"mafintosh","message":"creationix: yes indeed","timestamp":1501884599033}
{"from":"mafintosh","message":"creationix: you can disable encryption if you want to add that later","timestamp":1501884632041}
{"from":"mafintosh","message":"by setting {encrypt: false}","timestamp":1501884639846}
{"from":"ralphtheninja[m]","message":"mafintosh: going linux? :) can really recommend looking at tmux and/or awesome window manager","timestamp":1501884644562}
{"from":"mafintosh","message":"ralphtheninja[m]: yea if i can ever buy a laptop haha","timestamp":1501884659998}
{"from":"mafintosh","message":"it is non trivial","timestamp":1501884665766}
{"from":"jhand","message":"Friday news dump! https://blog.datproject.org/2017/08/04/recently/","timestamp":1501885079766}
{"from":"ralphtheninja[m]","message":"mafintosh: which computer are you looking at?","timestamp":1501885109629}
{"from":"mafintosh","message":"ralphtheninja[m]: the dell xps 13","timestamp":1501885124626}
{"from":"ralphtheninja[m]","message":"I didn't understand the reference","timestamp":1501885125750}
{"from":"mafintosh","message":"ralphtheninja[m]: i tried to buy one on dells website but i can't since i don't have us credit card","timestamp":1501885147924}
{"from":"mafintosh","message":"ralphtheninja[m]: and i need an invoice to my cph address (for tax) and they force me to put in 'state' as well","timestamp":1501885175153}
{"from":"mafintosh","message":"i'll prob just buy it in denmark","timestamp":1501885189659}
{"from":"mafintosh","message":"cool laptop tho","timestamp":1501885194657}
{"from":"ralphtheninja[m]","message":"I don't want to mess with your choice but if you are optimizing for battery time you might want to consider a thinkpad","timestamp":1501885195950}
{"from":"ralphtheninja[m]","message":"with double batteries, the one I use have an internal battery and you can optionally plug in one more, and you can have more of those external and hot swap them on the computer","timestamp":1501885249406}
{"from":"mafintosh","message":"ralphtheninja[m]: the xps supports usbc batteries","timestamp":1501885366381}
{"from":"mafintosh","message":"ralphtheninja[m]: and i can charge my phone with them as well :)","timestamp":1501885385532}
{"from":"mafintosh","message":"usbc all the way","timestamp":1501885393068}
{"from":"mafintosh","message":"ogd: https://github.com/mafintosh/hypercore/pull/112 <-- noffle's got you covered","timestamp":1501885402122}
{"from":"noffle","message":"mafintosh: can close https://github.com/mafintosh/hypercore/issues/111","timestamp":1501885449767}
{"from":"mafintosh","message":"done","timestamp":1501885461130}
{"from":"ralphtheninja[m]","message":"mafintosh: aah nice","timestamp":1501885490205}
{"from":"creationix","message":"mafintosh: so a simple proxy is to run node on the server, stream the protocol without encryption over wss to a browser","timestamp":1501885570322}
{"from":"ogd","message":"noffle: lol i literally asked for that method yesterday","timestamp":1501885581492}
{"from":"noffle","message":"ogd: I've been watching for low-hanging fruit to learn the codebase better","timestamp":1501885596437}
{"from":"noffle","message":"yours sounded easy enough","timestamp":1501885602173}
{"from":"mafintosh","message":"its very helpful, thanks","timestamp":1501885611085}
{"from":"mafintosh","message":"noffle: only major downside is that it's not on 6.6.6 anymore","timestamp":1501885636873}
{"from":"noffle","message":"make 60 more major revs and you'll be good again","timestamp":1501885663076}
{"from":"noffle","message":"66.6.6","timestamp":1501885667575}
{"from":"noffle","message":"lots and lots of tiny breaking changes, just to stay stable on 6s","timestamp":1501885686866}
{"from":"noffle","message":"you could write a bot","timestamp":1501885692154}
{"from":"noffle","message":":)","timestamp":1501885699009}
{"from":"jondashkyle","message":"lol","timestamp":1501885868501}
{"from":"karissa","message":"jhand: thinking it'd be nice to have some file i can add to a github repo that would include the key and the subset of files to select (or even multiple dats). then run one command to get the data","timestamp":1501886009527}
{"from":"jhand","message":"karissa: ooh ya. would having the dat.json + .datdownload work?","timestamp":1501886061459}
{"from":"jhand","message":"or the .dat folder","timestamp":1501886085768}
{"from":"mafintosh","message":"noffle: at our pace we'll get there soon enough hehe","timestamp":1501886093071}
{"from":"mafintosh","message":"creationix: that approach sounds straight forward yes","timestamp":1501886106049}
{"from":"karissa","message":"jhand: they'd still have to do multiple commands i think","timestamp":1501886144092}
{"from":"creationix","message":"mafintosh: do both sides send the initial message with the nonce?","timestamp":1501886149708}
{"from":"mafintosh","message":"creationix: yea","timestamp":1501886157380}
{"from":"creationix","message":"so peers, not client/server","timestamp":1501886165325}
{"from":"mafintosh","message":"yup","timestamp":1501886174857}
{"from":"creationix","message":"cool","timestamp":1501886177641}
{"from":"creationix","message":"mafintosh: what's the encoding for the varint?","timestamp":1501886278372}
{"from":"mafintosh","message":"creationix: it's protobuf style","timestamp":1501886290227}
{"from":"jhand","message":"karissa: yea, I think we could have `dat sync` read the dat.json. What were you thinking though for what the user would run?","timestamp":1501886300605}
{"from":"creationix","message":"cool, so probably this one https://developers.google.com/protocol-buffers/docs/encoding#varints","timestamp":1501886303007}
{"from":"mafintosh","message":"https://www.sigmainfy.com/blog/protocol-buffers-encoding-and-message-structure.html","timestamp":1501886319684}
{"from":"mafintosh","message":"ya","timestamp":1501886320826}
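For reference, a minimal decoder for that varint format (little-endian base-128; the high bit of each byte marks continuation). The function name is made up for the sketch:

```javascript
// Decode a protobuf-style varint starting at `start`; returns the
// value and the number of bytes consumed.
function decodeVarint (buf, start = 0) {
  let value = 0
  let shift = 0
  let i = start
  while (true) {
    const b = buf[i++]
    value += (b & 127) * 2 ** shift // multiply, not shift: avoids 32-bit overflow
    if ((b & 128) === 0) break
    shift += 7
  }
  return { value, bytes: i - start }
}

// 300 = 0b10_0101100 encodes as [0xac, 0x02]
console.log(decodeVarint(Buffer.from([0xac, 0x02]))) // { value: 300, bytes: 2 }
```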
{"from":"karissa","message":"jhand: i just want to git clone <git repo> && dat clone and be up to date","timestamp":1501886353380}
{"from":"jhand","message":"karissa: yea so we could read dat.json (or another file) to get key and then read .datdownload to figure out what to download","timestamp":1501886397637}
{"from":"jhand","message":"ogd: you write a google cloud storage thing for dat yet? https://cloud.google.com/storage/docs/public-datasets/nexrad","timestamp":1501886415270}
{"from":"ogd","message":"jhand: at one point....","timestamp":1501886506235}
{"from":"ogd","message":"jhand: https://github.com/maxogden/google-cloud-storage","timestamp":1501886536381}
{"from":"ralphtheninja[m]","message":"a quick off topic thing, if any of you own any bitcoin you also now have the same amount of bitcoin cash, just fyi","timestamp":1501886537955}
{"from":"jhand","message":"haha nice","timestamp":1501886551716}
{"from":"ralphtheninja[m]","message":"and given that you have control over your private keys","timestamp":1501886568795}
{"from":"ogd","message":"mafintosh: we should revisit this, https://github.com/mafintosh/hyperdrive/issues/113, i have more use cases","timestamp":1501886907970}
{"from":"ogd","message":"mafintosh: i kinda asked you about it the other day i think too","timestamp":1501886919408}
{"from":"bret","message":"you guys can get around to it tomorrow","timestamp":1501886963043}
{"from":"bret","message":"thats a good feature btw","timestamp":1501887000182}
{"from":"bret","message":"bittorrent sync calls it selective sync","timestamp":1501887008926}
{"from":"ogd","message":"lazy trees> happy trees","timestamp":1501887011296}
{"from":"ogd","message":"bret: its diff than selective sync, which we already support","timestamp":1501887017278}
{"from":"bret","message":"oh is ee","timestamp":1501887038184}
{"from":"bret","message":"I get it","timestamp":1501887056490}
{"from":"bret","message":"thats neat actually","timestamp":1501887061255}
{"from":"bret","message":"you can hyperdrive like crazy and only work when it matters","timestamp":1501887074922}
{"from":"bret","message":"that would be handy for creating lots of small hyperdrives","timestamp":1501887088700}
{"from":"ogd","message":"yea lets you dat-ify something without downloading it up front","timestamp":1501887091308}
{"from":"ogd","message":"pfrazee: actually i have a use case for your ethereum thing","timestamp":1501887114874}
{"from":"bret","message":"I'm thinking about selling my etherium","timestamp":1501887131415}
{"from":"ogd","message":"pfrazee: say i write a dat that claims to use some code to download data from NASA and redistributes it","timestamp":1501887132829}
{"from":"ogd","message":"pfrazee: but if i was malicious i would change climate change values before redistributing","timestamp":1501887155654}
{"from":"ogd","message":"pfrazee: but if i could prove all code run on my dat was provably the same code that was posted to github it would alleviate trust issues","timestamp":1501887179022}
{"from":"ogd","message":"pfrazee: still open to MITM, and ultimately NASA should publish the dat as the primary provider, but this would be a middle ground","timestamp":1501887194899}
{"from":"ogd","message":"pfrazee: can you ethereumify a node program?","timestamp":1501887206477}
{"from":"ogd","message":"pfrazee: theres prob a browserify transform for that by now right","timestamp":1501887225021}
{"from":"jhand","message":"karissa: kinda works https://github.com/joehand/dat-clone-sparse-test","timestamp":1501887232296}
{"from":"pfrazee","message":"ogd: it's a large enough improvement on trust over plain REST that I think it's a compelling usecase","timestamp":1501887252655}
{"from":"jhand","message":"karissa: need to run dat sync twice for some reason","timestamp":1501887267268}
{"from":"karissa","message":"hm","timestamp":1501887273736}
{"from":"ogd","message":"pfrazee: whats REST have to do with it","timestamp":1501887273897}
{"from":"karissa","message":"cool. so .dat/metadata.key means it doesn't need a dat.json","timestamp":1501887295389}
{"from":"pfrazee","message":"ogd: \"REST\" as in, any normal service","timestamp":1501887296277}
{"from":"jhand","message":"karissa: but just need the .dat/metadata.key which is cool, except its hard to publish without publishign all the .dat stuff","timestamp":1501887305574}
{"from":"karissa","message":"yeah i was going to say..","timestamp":1501887312023}
{"from":"ogd","message":"pfrazee: ah. yea so ethereum adds the ability to trust that someone else executed some code without tampering with the code right?","timestamp":1501887317651}
{"from":"karissa","message":"i kinda want like a .yml file","timestamp":1501887319692}
{"from":"jhand","message":"karissa: we talked about going back to .dat/metadata/key but thats not much easier","timestamp":1501887321634}
{"from":"jhand","message":"karissa: ya thats the other issue, its not human readable","timestamp":1501887338023}
{"from":"karissa","message":"i think it should be human readable","timestamp":1501887349627}
{"from":"jhand","message":"right","timestamp":1501887358860}
{"from":"karissa","message":"also too easy to commit the entire .dat folder","timestamp":1501887359944}
{"from":"karissa","message":"should put .dat in .gitignore","timestamp":1501887364496}
{"from":"jhand","message":"right","timestamp":1501887370701}
{"from":"pfrazee","message":"ogd: yes. The code is executed in parallel by every other node in the system, and (AFAIK) any node can receive a transaction.","timestamp":1501887380455}
{"from":"pfrazee","message":"ogd: very high automated auditing","timestamp":1501887392883}
{"from":"ogd","message":"pfrazee: so unless theres some other method of proving you ran some code, assuming ethereum is the MVP for doing that, then i think its an interesting use case","timestamp":1501887422628}
{"from":"ogd","message":"pfrazee: but i guess i could just DNS spoof nasa on my local box and return fake data lol","timestamp":1501887449392}
{"from":"ogd","message":"pfrazee: hard to get trust without the original provider publishing hashes/signatures","timestamp":1501887459623}
{"from":"pfrazee","message":"ogd: yes, and that's a weakness of ethereum and my project","timestamp":1501887465566}
{"from":"pfrazee","message":"ogd: the \"oracle\" problem","timestamp":1501887479296}
{"from":"ogd","message":"java?","timestamp":1501887485142}
{"from":"pfrazee","message":"ogd: hah","timestamp":1501887490022}
{"from":"ralphtheninja[m]","message":"jhand: I get an error, posting issue","timestamp":1501887490880}
{"from":"ogd","message":"heheheh","timestamp":1501887493141}
{"from":"jhand","message":"ralphtheninja[m]: ya is it this? https://github.com/datproject/dat/issues/838","timestamp":1501887503187}
{"from":"pfrazee","message":"ogd: in recordrun (which is what I'm calling my PoC) you can add an auditor as the client","timestamp":1501887540703}
{"from":"ogd","message":"pfrazee: ah so you have a pool of independent executors?","timestamp":1501887567648}
{"from":"ogd","message":"pfrazee: yea that would be good","timestamp":1501887571019}
{"from":"pfrazee","message":"ogd: yeah, computers which replay the log to check the output state, and to archive the log to stop future tampering","timestamp":1501887587327}
{"from":"ralphtheninja[m]","message":"jhand: yes, identical error","timestamp":1501887630593}
{"from":"pfrazee","message":"ogd: but I'll need a way to model the oracles","timestamp":1501887663180}
{"from":"ogd","message":"pfrazee: i prob dont need this level of trust (quorum of independent trusted verifiers) for my data archiving projects but its fun to think about","timestamp":1501887673227}
{"from":"pfrazee","message":"ogd: yeah and it's fun that it could be easy to do","timestamp":1501887691457}
{"from":"jhand","message":"ralphtheninja[m]: update dat & try again =)","timestamp":1501887758024}
{"from":"ralphtheninja[m]","message":"k","timestamp":1501887766074}
{"from":"jhand","message":"still have to run it twice which is weird","timestamp":1501887787705}
{"from":"ralphtheninja[m]","message":"$ dat sync","timestamp":1501887923306}
{"from":"ralphtheninja[m]","message":"Dat successfully created in empty mode. Download files using pull or sync.","timestamp":1501887924007}
{"from":"jhand","message":"ralphtheninja[m]: ya run it again and you should get a few pics","timestamp":1501887942201}
{"from":"ralphtheninja[m]","message":"hanging on second sync","timestamp":1501887962562}
{"from":"ralphtheninja[m]","message":"but getting the files .. maybe just taking lon","timestamp":1501887990610}
{"from":"ralphtheninja[m]","message":"long","timestamp":1501887992132}
{"from":"jhand","message":"ralphtheninja[m]: if the download progress is 0 then its done, still need to add progress","timestamp":1501888032711}
{"from":"jhand","message":"er download speed","timestamp":1501888037249}
{"from":"ralphtheninja[m]","message":"oh it's the DEBUG env var again ..","timestamp":1501888044727}
{"from":"jhand","message":"gah","timestamp":1501888050961}
{"from":"substack","message":"pfrazee: nice article about that","timestamp":1501888055429}
{"from":"substack","message":"woops, scrollback","timestamp":1501888061444}
{"from":"substack","message":"\"that\" meaning the oracle problem of ethereum \"smart\" contracts","timestamp":1501888084858}
{"from":"pfrazee","message":"substack: thanks","timestamp":1501888105197}
{"from":"ralphtheninja[m]","message":"jhand: aye need two attempts","timestamp":1501888275172}
{"from":"jhand","message":"think that may be a simple fix. But I'm gonna walk home now","timestamp":1501888375200}
{"from":"jhand","message":"karissa: have you tried selective sync with an archive that gets updated? think https://github.com/datproject/dat/blob/master/src/lib/archive.js#L74-L76 may be a problem for when archives live sync","timestamp":1501888472072}
{"from":"karissa","message":"¡hm","timestamp":1501889233903}
{"from":"jhand","message":"If we only run that block on opts.empty maybe that'd be enough.","timestamp":1501889612731}
{"from":"jhand","message":"Oh that may make it so you can clone sparse right away to like we were talking about","timestamp":1501889657323}
{"from":"dat-gitter","message":"(e-e-e) odg: ah - thats a hard one, as the data you will get back is not ordered as a series of changes. So the question is how to transform that data into something usable in the form of a dat. I am stumped on that - as ideally it would be nice to build a log of all the data and its mutations over time.","timestamp":1501893823573}
{"from":"dat-gitter","message":"(e-e-e) odg","timestamp":1501893847898}
{"from":"dat-gitter","message":"(sdockray) @e-e-e do you know anyone who works for trove? someone recently said to me they'd put me in touch with the tech person there, but now i forget who it was (doh)","timestamp":1501895055082}
{"from":"dat-gitter","message":"(e-e-e) ah no. I knew someone who knew someone","timestamp":1501895974254}
{"from":"dat-gitter","message":"(e-e-e) but I think that person moved on","timestamp":1501895984311}
{"from":"dat-gitter","message":"(e-e-e) they are pretty responsive I have heard - if you try reaching them on twitter.","timestamp":1501896020150}
{"from":"dat-gitter","message":"(sdockray) i'll try and wrack my brain... someone definitely had \"the\" connection... i hate when i dont write things down! it would be the easiest way to figure out how to sell them on - or facilitate export for - dat, but then again tweets sometimes move mountains :)","timestamp":1501896127971}
{"from":"dat-gitter","message":"(e-e-e) true! :smile:","timestamp":1501896159739}
{"from":"mafintosh","message":"i have succesfully repaired my macbook","timestamp":1501896728229}
{"from":"mafintosh","message":"compressed air + drying it out did the trick","timestamp":1501896739218}
{"from":"dat-gitter","message":"(e-e-e) mafintosh: :+1:","timestamp":1501896748092}
{"from":"ogd","message":"sdockray i think i can get all their stuff out of the API","timestamp":1501901687995}
{"from":"ogd","message":"sdockray but obv if they wanted to produce the dat themselves it would probably be easier :)","timestamp":1501901697863}
{"from":"dat-gitter","message":"(e-e-e) ogd: how do you see a potential trove dat being used? Or are you thinking of it mostly as a backup archive if trove ever goes down?","timestamp":1501902375671}
{"from":"ogd","message":"e-e-e my use case is to give researchers bulk access to the full metadata so they dont have to scrape it","timestamp":1501902425506}
{"from":"ogd","message":"e-e-e if someone was interested in a backup this would probably help too","timestamp":1501902450915}
{"from":"dat-gitter","message":"(e-e-e) cool. the api in that case will only provide you some of the info - in the past I wrote a scrapper to get the bibliographic data out of the html pages too - i will see if I can did up the code - it might be useful for you","timestamp":1501902571418}
{"from":"dat-gitter","message":"(e-e-e) excuse the tabbed js and poor strucutre - it was before i knew better","timestamp":1501902761394}
{"from":"dat-gitter","message":"(e-e-e) https://github.com/e-e-e/frontyard-library-catalogue/blob/master/models/library.js","timestamp":1501902761555}
{"from":"dat-gitter","message":"(e-e-e) the first few functions are used to get info from the static html","timestamp":1501902835177}
{"from":"mafintosh","message":"creationix: https://webkit.org/status/#specification-service-workers","timestamp":1501915352752}
{"from":"creationix","message":"mafintosh: where in your code does it use xchacha20 via sodium. I've got my lua code decoding the prototbufs, but after the first message it's encrypted as expected","timestamp":1501938504819}
{"from":"creationix","message":"(I just manually implemented the subset of protobuf needed instead on pulling in a dependency)","timestamp":1501938525372}
{"from":"creationix","message":"I also found some pretty good sodium bindings for luajit that use the ffi. https://github.com/daurnimator/luasodium","timestamp":1501938565953}
{"from":"creationix","message":"found part of it here https://github.com/mafintosh/hypercore-protocol/blob/master/index.js#L118-L124","timestamp":1501938898631}
{"from":"mafintosh","message":"pfrazee: around?","timestamp":1501956907081}
{"from":"pfrazee","message":"mafintosh: yep, what's up","timestamp":1501956916773}
{"from":"mafintosh","message":"creationix: all the crypto encryption is in that file","timestamp":1501956930925}
{"from":"mafintosh","message":"pfrazee: i could use some feedback on the multi writer protocol","timestamp":1501956944349}
{"from":"mafintosh","message":"pfrazee: have time today?","timestamp":1501956949346}
{"from":"pfrazee","message":"mafintosh: yeah I can spare a few minutes","timestamp":1501956955535}
{"from":"mafintosh","message":"pfrazee: okay thanks, will ping you later the","timestamp":1501956969375}
{"from":"mafintosh","message":"n","timestamp":1501956971239}
{"from":"pfrazee","message":"mafintosh: sounds good","timestamp":1501956975581}
{"from":"mafintosh","message":"creationix: https://github.com/mafintosh/hypercore-protocol/blob/master/index.js#L232 (this is run on every send data packet)","timestamp":1501957002925}
{"from":"mafintosh","message":"after the nonce","timestamp":1501957005518}
{"from":"mafintosh","message":"creationix: the same is ran on all incoming traffic, https://github.com/mafintosh/hypercore-protocol/blob/master/index.js#L133","timestamp":1501957028245}
{"from":"bret","message":"mafintosh: did you want an iPhone 5s for node on mobile research?","timestamp":1501965119691}
{"from":"cblgh","message":"5s dat stuff would be so ace","timestamp":1501969123496}
{"from":"cblgh","message":"pfrazee: how do i add titles & descriptions to stuff hosted at hashbase?","timestamp":1501969228598}
{"from":"cblgh","message":"right now all of my tests are regarded as untitled","timestamp":1501969242726}
{"from":"cblgh","message":"guessing it's some kind of .file?","timestamp":1501969253312}
{"from":"substack","message":"with `dat init` ?","timestamp":1501969262622}
{"from":"pfrazee","message":"cblgh: yeah it's looking for a dat.json","timestamp":1501969277488}
{"from":"pfrazee","message":"{\"title\":\"foo\",\"description\":\"bar\"}","timestamp":1501969288399}
{"from":"cblgh","message":"substack: oh no i am using hyperdrive and using hashbase as a peer","timestamp":1501969300410}
{"from":"cblgh","message":"pfrazee: ah thanks","timestamp":1501969304438}
{"from":"creationix","message":"mafintosh: hmm, my libsodium bindings are closer to the C interface. Any idea how to use this? https://github.com/daurnimator/luasodium/blob/master/sodium/crypto_stream.lua#L58-L68","timestamp":1501969897871}
{"from":"creationix","message":"I don't have an `xor` function that creates an object with an `update` method. I get have an `xor` function with lots of args (some out args)","timestamp":1501969944451}
{"from":"mafintosh","message":"creationix: we use crypto_stream_xsalsa20_xor_ic under the hood in our bindigns for it","timestamp":1501969997023}
{"from":"mafintosh","message":"https://github.com/sodium-friends/sodium-native/blob/master/src/crypto_stream_xor_wrap.cc#L36","timestamp":1501970006317}
{"from":"mafintosh","message":"each xsalsa block is 64 bytes","timestamp":1501970033717}
{"from":"creationix","message":"so it's just a stream of random-looking stuff that you xor with the stream of data right?","timestamp":1501970085146}
{"from":"mafintosh","message":"ya","timestamp":1501970187763}
{"from":"creationix","message":"I wonder if I can use this as-is or do I need to send a PR to these bindings","timestamp":1501970211264}
{"from":"cblgh","message":"dat.json worked great","timestamp":1501970221023}
{"from":"creationix","message":"I just don't know this API very well, so it's hard to follow","timestamp":1501970223557}
{"from":"mafintosh","message":"the sodium?","timestamp":1501970258366}
{"from":"cblgh","message":"pfrazee / taravancil: description class on hashbase might want to break lines / restrict the width","timestamp":1501970275855}
{"from":"cblgh","message":"i initially wrote a long enough description that it started stretching hashbase to its sides ehe","timestamp":1501970302907}
{"from":"cblgh","message":"nothing broke, it just squashed recent activity together a bit","timestamp":1501970324076}
{"from":"creationix","message":"mafintosh: yeah, the crypto_stream in particular.","timestamp":1501971156119}
{"from":"mafintosh","message":"creationix: yea you might have to expose the _ic from the bindings to do it unless they already expose some sort of actual stream","timestamp":1501971443844}
{"from":"TheGillies","message":"dat share is not minding my .datignore file","timestamp":1501974034382}
{"from":"TheGillies","message":"what is the format for ignoring folders?","timestamp":1501974044889}
{"from":"TheGillies","message":"It's trying to upload my node_modules","timestamp":1501974053996}
{"from":"ralphtheninja[m]","message":"https://program.sha2017.org/events/199.html","timestamp":1501978087330}
{"from":"pfrazee","message":"nodevms demo https://youtu.be/Qp38HOYcqW4 cc mafintosh","timestamp":1501984136797}
{"from":"noffle","message":"mafintosh: how are things going on hyperdb?","timestamp":1501984597735}
{"from":"mafintosh","message":"noffle: will put up the pr today :)","timestamp":1501988823785}
{"from":"mafintosh","message":"It's been happening in my private repo","timestamp":1501988846757}
{"from":"TheGillies","message":"Why does hashbase have a content security policy on the response header?","timestamp":1502000593735}
{"from":"TheGillies","message":"It's killing my javascript includes","timestamp":1502000601441}
{"from":"dat-gitter","message":"(lukeburns) TheGillies: https://github.com/beakerbrowser/beaker/issues/552","timestamp":1502017733999}
{"from":"dat-gitter","message":"(lukeburns) actually, now seeing that csp was relaxed in beaker","timestamp":1502017953720}
{"from":"yoshuawuyts","message":"is there like a recommended way of truncating sha256 hashes? Doing it to create unique file names, no important crypto implications","timestamp":1502025213296}
{"from":"TheLink","message":"is there some gui app for youtube-dl?","timestamp":1502034908684}
{"from":"dat-gitter","message":"(matrixbot) `paul90` TheGillies: beakerbrowser/hashbase#43 will probably be the fix for your CSP issue.","timestamp":1502037502830}
{"from":"pfrazee","message":"https://github.com/pfrazee/libvms VMS api","timestamp":1502039117956}
{"from":"cblgh","message":"pfrazee: love the nodevms demo","timestamp":1502042665065}
{"from":"cblgh","message":"especially how you can inspect using beaker","timestamp":1502042679083}
{"from":"cblgh","message":"just like all the tools coming together","timestamp":1502042684712}
{"from":"cblgh","message":"so good","timestamp":1502042685918}
{"from":"pfrazee","message":"cblgh: yeah that is neat. we need hypercore rendering in Beaker too","timestamp":1502042789089}
{"from":"pfrazee","message":"I'm going to work on the VMS swarm today","timestamp":1502042836271}
{"from":"TheGillies","message":"Hrm in beaker browser kodo.fairuse.org doesn't say theres a dat site available","timestamp":1502056111216}
{"from":"TheGillies","message":"even though i have a well known dat file and a dat.json","timestamp":1502056126855}
{"from":"pfrazee","message":"TheGillies: let me take a look","timestamp":1502056850590}
{"from":"pfrazee","message":"TheGillies: oh you need to use HTTPS for the well-known dat file to work","timestamp":1502056865303}
{"from":"TheGillies","message":"ah yes I just realized that","timestamp":1502056889993}
{"from":"pfrazee","message":"TheGillies: sorry, I know that's a PITA","timestamp":1502056893526}
{"from":"TheGillies","message":"no worries","timestamp":1502056901404}
{"from":"TheGillies","message":"gives me a reason to get https on my domain","timestamp":1502057092662}
{"from":"pfrazee","message":":)","timestamp":1502057440204}
{"from":"TheGillies","message":"it works","timestamp":1502057525164}
{"from":"TheGillies","message":"yayayay","timestamp":1502057527079}
{"from":"pfrazee","message":"sweet","timestamp":1502057618410}
{"from":"mafintosh","message":"https://github.com/mafintosh/hyperdb/pull/1","timestamp":1502059168419}
{"from":"mafintosh","message":"pfrazee, ralphtheninja[m] o/","timestamp":1502059179520}
{"from":"mafintosh","message":"noffle: o/","timestamp":1502059181567}
{"from":"mafintosh","message":"it using the json storage still","timestamp":1502059194622}
{"from":"mafintosh","message":"workingn on that now","timestamp":1502059197073}
{"from":"TheGillies","message":"does adding a hash to library in beaker auto share with peers?","timestamp":1502064427924}
{"from":"noffle","message":"great","timestamp":1502081698079}
{"from":"noffle","message":"mafintosh: https://github.com/mafintosh/hyperdb/pull/3","timestamp":1502081700257}
{"from":"noffle","message":"nice; almost 10x faster than what's on master, for insertions","timestamp":1502081839068}
{"from":"mafintosh","message":"Code is simpler too","timestamp":1502082330980}
{"from":"noffle","message":"what are the differences?","timestamp":1502083130617}
{"from":"aaaaaaaaa____","message":"pfrazee (or anyone who has been trying the beaker tutorials) - over the past couple months, i've probably tried the basic tutorials 3 or 4 times and always give up because i get no console output... is there something obvious i'm not getting here? https://usercontent.irccloud-cdn.com/file/3Ibkwtk8/Screenshot%202017-08-07%2016.30.21.png","timestamp":1502087468475}
{"from":"ralphtheninja[m]","message":"mafintosh: so hyperdb is completely relying on hypercore?","timestamp":1502106287345}
{"from":"ralphtheninja[m]","message":"built on top of hypercore*","timestamp":1502106300009}
{"from":"yoshuawuyts","message":"ralphtheninja[m]: yeah","timestamp":1502106855283}
{"from":"barnie","message":"anybody worked with http://eventstore.js.org/ ? would be cool to use with dat for local data storage","timestamp":1502111976891}
{"from":"millette","message":"In terms of disk space and bandwidth, is it preferable to have 1000's of little json files (adding new ones every minute or so) or is one large file that you update (every minute or so) better?","timestamp":1502122921082}
{"from":"millette","message":"... in a dat archive","timestamp":1502122951029}
{"from":"pfrazee","message":"millette: the former","timestamp":1502123277531}
{"from":"pfrazee","message":"unlike git, dat does not currently do any form of deduplication of its history","timestamp":1502123294987}
{"from":"pfrazee","message":"(at this time)","timestamp":1502123302108}
{"from":"millette","message":"ok, that's what I thought - and my current choice","timestamp":1502123315922}
{"from":"pfrazee","message":"yeah","timestamp":1502123322519}
{"from":"barnie","message":"jhand: hi, you there?","timestamp":1502125404113}
{"from":"jhand","message":"yes","timestamp":1502125446475}
{"from":"barnie","message":"i updated the awesome-dat PR.","timestamp":1502125465993}
{"from":"barnie","message":"most your comments are addressed, and i created a proposal for working with docs","timestamp":1502125488732}
{"from":"barnie","message":"proposal is here: https://github.com/datproject/discussions/issues/73","timestamp":1502125531466}
{"from":"jhand","message":"ya i saw it via github notification, thanks. haven't had time to respond","timestamp":1502125571896}
{"from":"pfrazee","message":"barnie: thanks again for doing that","timestamp":1502125593109}
{"from":"barnie","message":"if you could look at the docs proposal and give your opinion. I still have time to help with the changes","timestamp":1502125605460}
{"from":"jhand","message":"In general I think we are hesitant for auto generated docs stuff. I tried it in dat-node and wasn't happy with JSdoc, need to remove those comments.","timestamp":1502125645450}
{"from":"barnie","message":"do the boring stuff, so you can code ;)","timestamp":1502125646178}
{"from":"barnie","message":"the separate readme as the doc is also not working very well","timestamp":1502125676384}
{"from":"jhand","message":"I've not seen a good example that is easy to read","timestamp":1502125681029}
{"from":"barnie","message":"you mean of an auto-generated JSDoc?","timestamp":1502125696243}
{"from":"jhand","message":"And easy to maintain for people who don't do JSDoc all the time","timestamp":1502125701181}
{"from":"jhand","message":"Yes","timestamp":1502125702882}
{"from":"barnie","message":"there are alternatives besides jsdoc.","timestamp":1502125722602}
{"from":"barnie","message":"the gist of the proposal is having the docs inline with the code they refer to","timestamp":1502125739471}
{"from":"barnie","message":"jsdoc is just the most used one currently, i think","timestamp":1502125777744}
{"from":"jondashkyle","message":"barnie: yea i personally find inline docs unhelpful, and if anything a distraction when becoming familiar and working with a codebase.","timestamp":1502126194637}
{"from":"jondashkyle","message":"much prefer docs authored for readability and have a nice cadence and flow from intro to extensive. i think choo is a super solid example of that: https://github.com/choojs/choo","timestamp":1502126271484}
{"from":"barnie","message":"i personally find reading jsdoc comments helpful when working in the code","timestamp":1502126274091}
{"from":"jondashkyle","message":"¯\\_(ツ)_/¯","timestamp":1502126310373}
{"from":"barnie","message":"problem is that api docs now are often outdated and require system-icons in dat-awesome for the aggregation","timestamp":1502126351360}
{"from":"barnie","message":"jhand: instead of parsing the documented modules from awesome-dat README it could also parse from a separate file","timestamp":1502126435196}
{"from":"pfrazee","message":"barnie: yeah but docs are docs. If they fall behind in the readme, arent they just as likely to fall behind in the code?","timestamp":1502126593263}
{"from":"barnie","message":"well, its an concession, if we don't want inline jsdoc or similar, then keep the procedure as-is but do not derive the list of project README's to parse from the awesome-dat but from some other location","timestamp":1502126662236}
{"from":"barnie","message":"parse --> i mean aggregate","timestamp":1502126683884}
{"from":"barnie","message":"the docs project does this in the package.json","timestamp":1502126712834}
{"from":"barnie","message":"https://github.com/datproject/docs/blob/master/package.json#L9","timestamp":1502126759081}
{"from":"pfrazee","message":"ogd: just reviewed the multiwriter diff on the docs, so far looking good","timestamp":1502126782253}
{"from":"barnie","message":"pfrazee: in https://github.com/datproject/discussions/issues/73 i described a number of pro's/con's. inline comments go in one flow with both coding and reviewing and the text is right above the code","timestamp":1502127021905}
{"from":"barnie","message":"instead of updating a README after the fact of coding and separate file checking + review process (plus lookups)","timestamp":1502127068931}
{"from":"jhand","message":"barnie: not really understanding why the parsing of awesome-dat for modules to bring into docs is an issue. We can add it as a file to docs too, doesn't matter to me. Feel free to PR that.","timestamp":1502127375919}
{"from":"barnie","message":"the only issue is that there should be strange icons that awesome readers do not understand. they are system-markers for the docs project, and they are ugly-looking :)","timestamp":1502127456940}
{"from":"jhand","message":"I'd agree with pfrazee. Its not an issue of *how* docs are being written. More than they are falling behind. I'd think we're less likely to keep inline docs that require special syntax updated.","timestamp":1502127473909}
{"from":"barnie","message":"i can understand. i thought jsdocs where quite common. i am used to javadocs myself.","timestamp":1502127534050}
{"from":"barnie","message":"maybe there is an inline markdown commenter..","timestamp":1502127558790}
{"from":"jhand","message":"Also jondashkyle has a point that inline docs make working with the code harder.","timestamp":1502127604704}
{"from":"jhand","message":"barnie: I agree we need to improve our docs situation. I doubt moving everything to inline docs will be effective in doing that, knowing how things our modularized and how we work.","timestamp":1502127649705}
{"from":"barnie","message":"yea, no problem","timestamp":1502127671491}
{"from":"jhand","message":"re: how we pull in the readme from awesome-dat, I am happy with anything that works =).","timestamp":1502127715165}
{"from":"barnie","message":"yes, that will solve the not-so-awesome icons at least :D","timestamp":1502127758116}
{"from":"barnie","message":"i will PR something i think","timestamp":1502127788543}
{"from":"karissa","message":"Jsdocs are great, I use them on my projects. They're industry standard as far as I can tell","timestamp":1502130001265}
{"from":"karissa","message":"it's nice because you're reminded to update the docs when you update the code cause they are next to each other","timestamp":1502130078922}
{"from":"mafintosh","message":"whatever works for the individual hacker","timestamp":1502130273505}
{"from":"mafintosh","message":"thats my mantra","timestamp":1502130277257}
{"from":"mafintosh","message":"karissa: did you buy a linux laptop? i'm getting the dell xps most likely","timestamp":1502130320882}
{"from":"karissa","message":"mafintosh: no I haven't","timestamp":1502130437470}
{"from":"creationix","message":"mafintosh: I've got the xps13 btw","timestamp":1502130922240}
{"from":"creationix","message":"the older one with the buggy USB-C though. Make sure to get the newer one","timestamp":1502130949871}
{"from":"creationix","message":"or xps15 if you want power","timestamp":1502130956906}
{"from":"mafintosh","message":"creationix: getting the newest xps 13","timestamp":1502130990144}
{"from":"mafintosh","message":"creationix: i'm hoping they fixed that issue","timestamp":1502131006035}
{"from":"mafintosh","message":"usbc is essential to me :)","timestamp":1502131012221}
{"from":"creationix","message":"yeah, I have the XPS 13 (9350, Late 2015) and it's buggy when using the dell dock","timestamp":1502131039067}
{"from":"creationix","message":"but the XPS 13 (9360, Late 2016) has a newer chipset with more mature usb-c","timestamp":1502131062616}
{"from":"creationix","message":"when I was looking for a new laptop, my shortlist was:","timestamp":1502131489845}
{"from":"creationix","message":"- https://www.razerzone.com/gaming-systems/razer-blade-stealth","timestamp":1502131489950}
{"from":"creationix","message":"- latest XPS15","timestamp":1502131490125}
{"from":"creationix","message":"- one of the lenovo X1 laptops","timestamp":1502131490229}
{"from":"ungoldman","message":"\"razer blade stealth\" sounds like it would sneak up on me shiv me","timestamp":1502131838466}
{"from":"ungoldman","message":"*and","timestamp":1502131843238}
{"from":"creationix","message":"the xps probably has the best linux support","timestamp":1502131880170}
{"from":"bret","message":"TSA Agent: Sir, is this your Razor Blade Stealth?","timestamp":1502132475495}
{"from":"pfrazee","message":"haha","timestamp":1502132486709}
{"from":"pfrazee","message":"where's this guy from, Denmark? Take him away!","timestamp":1502132496444}
{"from":"bret","message":"Don't let him near any Rød grød, it gives super powers","timestamp":1502132558498}
{"from":"bret","message":"Props to razor for doing nice custom hardware design","timestamp":1502132595772}
{"from":"mafintosh","message":"noffle: thanks for the pr to my pr","timestamp":1502133524714}
{"from":"barnie","message":"mafintosh: on jsdocs 'whatever works for the individual hacker' works for the individual hacker, but not best for a consistent dat ecosystem :)","timestamp":1502134120696}
{"from":"mafintosh","message":"446 hyperdb tests and counting","timestamp":1502134309582}
{"from":"creationix","message":"I'm starting to question my plan to expose the dat sync protocol directly to browsers","timestamp":1502134600554}
{"from":"creationix","message":"maybe a rpc channel where dat lives in the server would work better","timestamp":1502134618563}
{"from":"creationix","message":"mafintosh: I was able to use the node libraries to setup a proxy that exposes a dat without encryption. (connect to network, create tcp server and replicate to tcp socket without encryption)","timestamp":1502134693183}
{"from":"mafintosh","message":"creationix: nice","timestamp":1502134719854}
{"from":"creationix","message":"and now I've decoded the messages in lua, but am unsure what to so with it","timestamp":1502134727122}
{"from":"creationix","message":"I want both my server and browser client to have sparse options by default and sync data on-demand","timestamp":1502134756077}
{"from":"creationix","message":"but I don't think the browser acting as a peer can tell the server what it (the server) should sync","timestamp":1502134772709}
{"from":"creationix","message":"but the hypercore JS api does allow selecting syncing and prioritizing certain chunks","timestamp":1502134793419}
{"from":"mafintosh","message":"creationix: me and jhand just talked about that the other day","timestamp":1502134899286}
{"from":"mafintosh","message":"creationix: and a way to do it would be to listen for request events on the server connection (peer -> server) and then if the server doesn't have the block it would the fetch it","timestamp":1502134938046}
{"from":"mafintosh","message":"*then","timestamp":1502134945994}
{"from":"mafintosh","message":"from the network","timestamp":1502134948065}
{"from":"creationix","message":"so the server would just need custom logic","timestamp":1502135173231}
{"from":"creationix","message":"if it gets a want for a block it doesn't have, to somehow tell the client it will get it asap","timestamp":1502135189817}
{"from":"creationix","message":"I did notice the protocol times out pretty fast if you ignore a want","timestamp":1502135212615}
{"from":"creationix","message":"or maybe I didn't complete the handshake","timestamp":1502135221584}
{"from":"creationix","message":"something makes it timeout","timestamp":1502135230705}
{"from":"creationix","message":"btw, this is how proxy repositories in my lit package management system work","timestamp":1502135308398}
{"from":"creationix","message":"it's like npm for luvit, but it's core is a git compatible merkle tree","timestamp":1502135327466}
{"from":"creationix","message":"if you request an object or hash from the proxy server that it doesn't have, it requests it from upstream and caches it locally and responds","timestamp":1502135352798}
{"from":"creationix","message":"so setting up a private repository that falls-back to the public repo is literally just `lit serve` and pointing your client to the local server","timestamp":1502135379832}
{"from":"jhand","message":"mafintosh: taravancil sweet sxsw panel! I worked with Mikel a bit in my last project too.","timestamp":1502136695311}
{"from":"taravancil","message":"yeah i'm excited! fingers crossed it's chosen","timestamp":1502138279444}
{"from":"taravancil","message":"mafintosh i actually didn't know you were on the panel too. nice!","timestamp":1502138329482}
{"from":"mafintosh","message":"Oh yea they emailed me","timestamp":1502138353391}
{"from":"mafintosh","message":"Looked like fun and an excuse to go hangout with you and paul","timestamp":1502138377755}
{"from":"ralphtheninja[m]","message":"dat clone dat://0a604eaaf36eb22fc60c537e0c45cff1cd8c535600d640afbe6cae653d9110a5","timestamp":1502147679778}
{"from":"ralphtheninja[m]","message":"cc mafintosh","timestamp":1502147683277}
{"from":"mafintosh","message":"surprise dat","timestamp":1502147717176}
{"from":"mafintosh","message":"ralphtheninja[m]: that's some great flying","timestamp":1502147898827}
{"from":"ogd","message":"bret: https://github.com/egoroof/browser-id3-writer","timestamp":1502148284711}
{"from":"ralphtheninja[m]","message":"mafintosh: the guy sits in a chair with goggles on him, small cam installed on the drone","timestamp":1502148526414}
{"from":"mafintosh","message":"ralphtheninja[m]: whoa nice, have you tried it?","timestamp":1502149935824}
{"from":"ralphtheninja[m]","message":"nope, not yet","timestamp":1502160266860}
{"from":"TheGillies","message":"mafintosh: u from pdx or just visiting?","timestamp":1502160418767}
{"from":"lgierth","message":"hey is there a way to donate by wire transfer too?","timestamp":1502160618499}
{"from":"lgierth","message":"(based in .de)","timestamp":1502160675193}
{"from":"TheGillies","message":"ralphtheninja: I love that someone can't just share a hash and have it link to random drone footage","timestamp":1502160783112}
{"from":"TheGillies","message":"can just*","timestamp":1502160789423}
{"from":"TheGillies","message":"do you think the MPAA will ever honeypot dat archives?","timestamp":1502160924737}
{"from":"mafintosh","message":"TheGillies: you can also read the dat if you have the dat key so harder for 3rd parties to troll authors in general","timestamp":1502161148256}
{"from":"mafintosh","message":"TheGillies: and I'm from Copenhagen but hangout with ogd here a couple of times a year","timestamp":1502161176068}
{"from":"mafintosh","message":"TheGillies: we are sitting at the foodcarts where max bought a burrito for 16 BTC once","timestamp":1502161233108}
{"from":"mafintosh","message":"jhand: i'm importign *all* dois into hyperdb now","timestamp":1502166733695}
{"from":"barnie","message":"ralphtheninja: i edited 2 dat-related tags on SO that are now pending, probably you should be the reviewer (as tag creator). i can't make any more edits :(","timestamp":1502168225214}
{"from":"TheGillies","message":"Is there a tool to dat mirror a website?","timestamp":1502168404663}
{"from":"TheGillies","message":"Or so i have to find a third party tool and sat share the download folder?","timestamp":1502168420066}
{"from":"TheGillies","message":"dat share*","timestamp":1502168424186}
{"from":"mafintosh","message":"i think jhand made one","timestamp":1502168459353}
{"from":"mafintosh","message":"but unsure","timestamp":1502168461603}
{"from":"mafintosh","message":"wget'ing it and the dat'ing it would work tho","timestamp":1502168475010}
{"from":"dat-gitter","message":"(sdockray) how many dois are there?","timestamp":1502168506759}
{"from":"TheGillies","message":"TIL wget downloads websites","timestamp":1502168511928}
{"from":"mafintosh","message":"@sdockray 81 mio","timestamp":1502168615951}
{"from":"lgierth","message":"wget --mirror --convert-links --no-parent rocks","timestamp":1502168617225}
{"from":"mafintosh","message":"TheGillies: wget is one of those unix tools that can do anything","timestamp":1502168646936}
{"from":"TheGillies","message":"https://beaker-kodo.hashbase.io/","timestamp":1502168818779}
{"from":"TheGillies","message":"custom ripped site","timestamp":1502168822981}
{"from":"TheGillies","message":"woot woot","timestamp":1502168824382}
{"from":"mafintosh","message":"nice","timestamp":1502168891919}
{"from":"mafintosh","message":"its a dat -> http -> dat -> http","timestamp":1502169464289}
{"from":"mafintosh","message":"you went full meta","timestamp":1502169468443}
{"from":"TheGillies","message":"full meta jacket","timestamp":1502169582406}
{"from":"ogd","message":"tyler what cryptocurrencies should i buy","timestamp":1502169618724}
{"from":"TheGillies","message":"if you are an accredited investor buy filecoin","timestamp":1502169634923}
{"from":"TheGillies","message":"I would actually take out some leverage on bitcoin and short it","timestamp":1502169690493}
{"from":"TheGillies","message":"whale make the market go up and down so they can make money on both sides","timestamp":1502169732597}
{"from":"TheGillies","message":"whales*","timestamp":1502169736113}
{"from":"TheGillies","message":"if you bought $1000 dollars worth of btc and waited until price sunk to $2,857 per bitcoin you would make $450 profit on a short","timestamp":1502170820803}
{"from":"TheGillies","message":"minus interest","timestamp":1502170858063}
{"from":"mafintosh","message":"11 mio entries in the hyperdb now and counting","timestamp":1502175054071}
{"from":"mafintosh","message":"cblgh: o/ (using rc1)","timestamp":1502175062350}
{"from":"cblgh","message":"mafintosh: ohhhhhhhh","timestamp":1502176904981}
{"from":"cblgh","message":"mafintosh: looking forward to playing around it it in hyperdungeon","timestamp":1502176966306}
{"from":"cblgh","message":"with*","timestamp":1502176998505}
{"from":"cblgh","message":"this week i'm finishing up an electron app that uses hyperdrive for a decentralized social network me and a friend made","timestamp":1502177028584}
{"from":"cblgh","message":"https://github.com/Rotonde/Specs","timestamp":1502177043602}
{"from":"cblgh","message":"mafintosh: any way to add peers to a running hyperdb instance?","timestamp":1502177489533}
{"from":"mafintosh","message":"cblgh: ya thats in rc1 (adding the final touches for that tmw)","timestamp":1502177514530}
{"from":"cblgh","message":"oh shit no way","timestamp":1502177541216}
{"from":"cblgh","message":"mafintosh: is discovery also solved?","timestamp":1502177566653}
{"from":"cblgh","message":"i.e. hyperdiscovery but for hyperdb somehow","timestamp":1502177579104}
{"from":"mafintosh","message":"cblgh: ya thats gonna be compatible as well","timestamp":1502177675313}
{"from":"cblgh","message":"wow","timestamp":1502177678857}
{"from":"mafintosh","message":"unsure if that'll get in there tmw, but not tricky at all","timestamp":1502177689694}
{"from":"cblgh","message":"christmas come early","timestamp":1502177693535}
{"from":"mafintosh","message":"cblgh: it's a lot faster as well, really keen to see how fast i can do lookups in this 81mio db i'm making","timestamp":1502177730159}
{"from":"cblgh","message":"damn that's big","timestamp":1502177753272}
{"from":"cblgh","message":"how are you constructing it?","timestamp":1502177756967}
{"from":"cblgh","message":"mafintosh: also good job lol","timestamp":1502177790099}
{"from":"mafintosh","message":"cblgh: i have a csv file containingn all science DOIs","timestamp":1502177794520}
{"from":"mafintosh","message":"i'm streamingn that into it (parsing it first using csv-parser)","timestamp":1502177812259}
{"from":"cblgh","message":"ah neat","timestamp":1502178063297}
{"from":"cblgh","message":"mafintosh: and splitting it up between many hypercores?","timestamp":1502178075100}
{"from":"mafintosh","message":"cblgh: its a hypercore per writer. I'm just writing it all on one machine so only one is needed","timestamp":1502178259824}
{"from":"mafintosh","message":"cblgh: but if imported it on two it wouls use two!","timestamp":1502178279450}
{"from":"cblgh","message":"gotcha","timestamp":1502178368088}
{"from":"cblgh","message":"wasn't sure if diff hypercores would affect performance","timestamp":1502178386657}
{"from":"mafintosh","message":"but lookup perf is the same no matter how many hypercores are used","timestamp":1502178390276}
{"from":"mafintosh","message":"and write perf is the same","timestamp":1502178400464}
{"from":"mafintosh","message":"there is a tiny storage over (one extra varint per hypercore)","timestamp":1502178422426}
{"from":"mafintosh","message":"now 13.5 mio","timestamp":1502179308430}
{"from":"dat-gitter","message":"(sdockray) ~ 1.5 days to import?","timestamp":1502184646230}
{"from":"mafintosh","message":"@sdockray it's importing around 1000/s so less than a day for 81 mio","timestamp":1502186529926}
{"from":"yoshuawuyts","message":"mafintosh: https://github.com/nodejs/node-addon-api/pull/103 mcollina just pointed me to this","timestamp":1502186703145}
{"from":"yoshuawuyts","message":"mafintosh: never compile again (apart from windows... for now)","timestamp":1502186724325}
{"from":"dat-gitter","message":"(sdockray) mafintosh: nice!!! also curious to see how the other way (iterator) fares","timestamp":1502186799589}
{"from":"lgierth","message":"hey hey","timestamp":1502206516779}
{"from":"lgierth","message":"are there more donation options than just cc?","timestamp":1502206523839}
{"from":"lgierth","message":"my cc company and i have a troubled history","timestamp":1502206534094}
{"from":"pfrazee","message":"lgierth: I think I saw that open collective takes BtC","timestamp":1502207713984}
{"from":"pfrazee","message":"https://opencollective.com/beaker","timestamp":1502207726524}
{"from":"pfrazee","message":"omg I thought this was #beaker","timestamp":1502207743026}
{"from":"pfrazee","message":"embarrassed","timestamp":1502207752108}
{"from":"pfrazee","message":"ogd or karissa ^","timestamp":1502207766463}
{"from":"barnie","message":"jhand: hi! can you look at https://github.com/datproject/docs/pull/79 netlify says 'failed' but i see no logs","timestamp":1502207783862}
{"from":"karissa","message":"https://www.npmjs.com/package/pkg","timestamp":1502211586518}
{"from":"ogd","message":"karissa: doesn't do native modules :(","timestamp":1502212998536}
{"from":"barnie","message":"karissa: ogd: shall i add the list of documentation projects to the docs readme instead of separate file? (see: https://github.com/datproject/docs/pull/79) Probably better, will make the PR not fail too.","timestamp":1502213497679}
{"from":"ogd","message":"barnie: yea having them in the readme makes sense to me","timestamp":1502213590503}
{"from":"barnie","message":"i make them 2 PR's then, the README first, so the build does not fail","timestamp":1502213630804}
{"from":"jhand","message":"ogd, karissa: I just tried pkg and think it worked, they fixed the bug I was having before. You just need to bundle the native modules when we distribute","timestamp":1502213958725}
{"from":"karissa","message":"woah cool","timestamp":1502213970901}
{"from":"ogd","message":"jhand: ooooo","timestamp":1502213980184}
{"from":"jhand","message":"it'd be great if someone could figure out the scripts for doing the bundles and what not on travis","timestamp":1502213993394}
{"from":"jhand","message":"some on https://github.com/resin-io/etcher but couldn't quite figure out what they're doing","timestamp":1502214013584}
{"from":"jhand","message":"barnie: added logs to your PR. not sure what was wrong but you may be able to see more if you build locally","timestamp":1502214106879}
{"from":"barnie","message":"the error was the package.json referring to a file that was not yet on master, but part of my pr","timestamp":1502214149172}
{"from":"barnie","message":"but i just added the list of documented projects to the Docs readme, and will have package.json point to there","timestamp":1502214183568}
{"from":"barnie","message":"and make 2 pr's","timestamp":1502214188597}
{"from":"jhand","message":"sweet talk title karissa =) https://2017.nodeconf.com.ar/karissa-mckelvey.html","timestamp":1502214603831}
{"from":"karissa","message":"thanks!","timestamp":1502214655171}
{"from":"barnie","message":"jhand: if you could merge https://github.com/datproject/docs/pull/79 then i'll directly follow up with the package.json change","timestamp":1502214921671}
{"from":"cblgh","message":"yaa my dat x electron thing can upload media files now","timestamp":1502215012638}
{"from":"cblgh","message":"pfrazee: beakerbrowser is pretty essential for testing stuff like this easily, thx a lot for making it","timestamp":1502215049959}
{"from":"pfrazee","message":"cblgh: sure thing","timestamp":1502215154364}
{"from":"cblgh","message":"felt really cool to do an update in my app, and see it automatically update in beaker","timestamp":1502215210735}
{"from":"cblgh","message":"like woahh","timestamp":1502215213715}
{"from":"cblgh","message":"the p2p web is real","timestamp":1502215217959}
{"from":"pfrazee","message":":D","timestamp":1502215231193}
{"from":"taravancil","message":"i love that feeling","timestamp":1502215259870}
{"from":"barnie","message":"Could anyone with 5k+ stackoverflow reputation please review my dat-project tag info changes? I can't make any edits now on SO..","timestamp":1502216637441}
{"from":"barnie","message":"https://stackoverflow.com/review/suggested-edits/16958059","timestamp":1502216649543}
{"from":"barnie","message":"https://stackoverflow.com/review/suggested-edits/16958060","timestamp":1502216663778}
{"from":"barnie","message":"https://stackoverflow.com/review/suggested-edits/16957433","timestamp":1502216673563}
{"from":"barnie","message":"https://stackoverflow.com/review/suggested-edits/16957434","timestamp":1502216686042}
{"from":"bret","message":"My irc points don't exchange on stackexchange 😫","timestamp":1502216819448}
{"from":"barnie","message":"thx, for trying anyway :)","timestamp":1502216852019}
{"from":"mafintosh","message":"karissa: ya awesome talk :)","timestamp":1502217874827}
{"from":"ogd","message":"gates foundation data hosting guidelines gates foundation data guidlines","timestamp":1502218114611}
{"from":"ogd","message":"oops","timestamp":1502218116407}
{"from":"ogd","message":"https://gatesopenresearch.s3.amazonaws.com/resources/Data_Guidelines.pdf","timestamp":1502218118831}
{"from":"barnie","message":"dat project team, any of you, please. 5 min. for SO tag info review + couple of minutes for awesome-dat and docs PR's? i want to get this from my plate","timestamp":1502218200469}
{"from":"ogd","message":"I don't have a stackoverflow account","timestamp":1502218221153}
{"from":"barnie","message":"the PR's are also fine :)","timestamp":1502218232976}
{"from":"pfrazee","message":"I think ralphtheninja[m] is our resident SO power user","timestamp":1502218252316}
{"from":"barnie","message":"ya, already asked, but he is at sha2017","timestamp":1502218274787}
{"from":"jhand","message":"barnie: I'll look at the awesome page this afternoon. I need to spend a bit of time on it before merging still and have a few other things to get done first.","timestamp":1502218503105}
{"from":"barnie","message":"jhand: thanks. but the links are all up-to-date and list much more complete. you could also do edits in follow-up commits","timestamp":1502218556909}
{"from":"jhand","message":"barnie: does a few hours really make a difference? It'll be easier for me to review via PR","timestamp":1502218581297}
{"from":"barnie","message":"no problem. i overread you would look later today.. oops :)","timestamp":1502218611014}
{"from":"ogd","message":"interesting file upload cli https://github.com/mcrapet/plowshare","timestamp":1502225127231}
{"from":"creationix","message":"jhand: what would be a good way to run hyperdrive in a browser. I don't want webrtc or dht","timestamp":1502226613464}
{"from":"creationix","message":"I just want to be able to websocket to a server and speak the replication protocol over that, but have the dat API locally in the browser","timestamp":1502226631251}
{"from":"jhand","message":"creationix: I've been using websocket stream to do that.","timestamp":1502226655263}
{"from":"creationix","message":"so just browserify hyperdrive and hook it to a websocket stream?","timestamp":1502226678560}
{"from":"jhand","message":"creationix: https://github.com/joehand/trimet-map/blob/master/client/map.js","timestamp":1502226683930}
{"from":"jhand","message":"creationix: yep!","timestamp":1502226688866}
{"from":"jhand","message":"creationix: and use the random-access-memory storage, i think the only option in browser for now","timestamp":1502226718105}
{"from":"creationix","message":"is there a module for storing data in indexedDB?","timestamp":1502226722397}
{"from":"creationix","message":"I really need it persistent so the browser can read from it's cache when offline","timestamp":1502226742435}
{"from":"jhand","message":"creationix: ah hmm. not that I know of but let me look","timestamp":1502226760892}
{"from":"jhand","message":"you could read the archive and store it separately, but not ideal","timestamp":1502226777251}
{"from":"creationix","message":"I might be able to write one. The random-access part is tricky, but as I understand, the offsets are fixed","timestamp":1502226780811}
{"from":"creationix","message":"I could just make the offsets part of the key","timestamp":1502226803946}
{"from":"cblgh","message":"mafintosh: lol torrent-mount is amazing","timestamp":1502226808389}
{"from":"jhand","message":"creationix: ya here is the base implementation: https://github.com/juliangruber/abstract-random-access","timestamp":1502226817794}
{"from":"creationix","message":"so it would be like `metadata.data.0.43234` or something (range from 0 to 43234)","timestamp":1502226888414}
{"from":"jhand","message":"creationix: oh https://www.npmjs.com/package/random-access-idb","timestamp":1502226904905}
{"from":"creationix","message":"looks like he supports moving boundaries. I can probaby optimize a lot more since I know the ranges are fixed","timestamp":1502226975603}
{"from":"ralphtheninja[m]","message":"Just landed. Something I should look at? cc barnie","timestamp":1502226977068}
{"from":"creationix","message":"well, time to use browserify I guess (never really used it much since I hate the overhead of simulating node APIs in browsers)","timestamp":1502227017281}
{"from":"jhand","message":"yea not sure there is an easy way around it with hyperdrive right now.","timestamp":1502227095654}
{"from":"mafintosh","message":"cblgh: i wrote to do a demo where i live install Ubuntu from a torrent without downloading it first","timestamp":1502227170937}
{"from":"cblgh","message":"mafintosh: lol that's great","timestamp":1502227210107}
{"from":"mafintosh","message":"cblgh: we need dat mount","timestamp":1502227287035}
{"from":"cblgh","message":"hell yeah","timestamp":1502227342308}
{"from":"creationix","message":":( Browserify bundle for just a couple modules is already 600kb","timestamp":1502227490926}
{"from":"creationix","message":"(not to mention the 34MB in node_modules)","timestamp":1502227506008}
{"from":"creationix","message":"well, get it working first, optimize later, right?","timestamp":1502227534427}
{"from":"substack","message":"I have a random-access-idb module but I haven't got it working with hyperdb yet https://npmjs.com/package/random-access-idb","timestamp":1502227633196}
{"from":"ogd","message":"mafintosh: https://github.com/orcproject/protocol/blob/master/protocol.md","timestamp":1502227897996}
{"from":"mafintosh","message":"substack: i fixed that . length issue","timestamp":1502228065260}
{"from":"mafintosh","message":"substack: is there another bug?","timestamp":1502228079634}
{"from":"substack","message":"still doesn't work","timestamp":1502228573650}
{"from":"substack","message":"and I'm completely stumped about why","timestamp":1502228582531}
{"from":"substack","message":"heaps of tests, all of them pass","timestamp":1502228608985}
{"from":"substack","message":"dat://72b9cc9c97c0b09425385f2af9851d33695861be3215e60e8ac89c8d4ad2d7a9/getput-idb-hyperdb-example.js","timestamp":1502228849906}
{"from":"substack","message":"going to start using dat as a pastebin","timestamp":1502228858258}
{"from":"creationix","message":"substack: how do I convert a Uint8Array into a Buffer that my browserified code expects?","timestamp":1502229511904}
{"from":"creationix","message":"I got a Uint8Array from IDB and hypercore is trying to call node methods on it","timestamp":1502229524875}
{"from":"substack","message":"Buffers are augmented Uint8Arrays","timestamp":1502229682129}
{"from":"substack","message":"in browser land","timestamp":1502229685015}
{"from":"substack","message":"you can also do Buffer(u8array)","timestamp":1502229722950}
{"from":"mafintosh","message":"substack: oh okay, i'll take a look","timestamp":1502229890076}
{"from":"creationix","message":"mafintosh: substack. I just wrote this. It seems to work great for persistent storage in the browser https://gist.github.com/creationix/c415b7583f4a71237fd216517960939c","timestamp":1502232058132}
{"from":"creationix","message":"takes advantage of the fact that hypercore always asks for the same ranges and never overlaps them.","timestamp":1502232076129}
{"from":"mafintosh","message":"creationix: nice","timestamp":1502232118420}
{"from":"creationix","message":"Running the hyperdrive README example in browser. https://usercontent.irccloud-cdn.com/file/oiI1EpbX/hyperdrive-in-idb.png","timestamp":1502232193915}
{"from":"creationix","message":"mafintosh: is bitfield's second value really 3k large? Seems odd","timestamp":1502232232615}
{"from":"mafintosh","message":"creationix: it always writes the bitfield to disk in 3kb pages","timestamp":1502232305548}
{"from":"creationix","message":"then it's working","timestamp":1502232317048}
{"from":"creationix","message":":)","timestamp":1502232318838}
{"from":"ogd","message":"blahah: i cant get the crossref-cli to stream multiple pages... it just does the first page","timestamp":1502235588416}
{"from":"ogd","message":"blahah: i wrote a thing instead https://gist.github.com/maxogden/95b16e2bcd4c295d77ccfd6b5615370f","timestamp":1502236879460}
{"from":"dat-gitter","message":"(e-e-e) jhand: where you asking about auto building/packaging of electron apps by travis, earlier?","timestamp":1502237318738}
{"from":"jhand","message":"@e-e-e auto building yes, but for our CLI =) https://github.com/datproject/dat/issues/842","timestamp":1502237363016}
{"from":"dat-gitter","message":"(e-e-e) I will have a quick look - see if any of my recent experience could help.","timestamp":1502237396044}
{"from":"jhand","message":"sweet!","timestamp":1502237496785}
{"from":"pfrazee","message":"libvms 2.0.0 should be stable https://github.com/pfrazee/libvms","timestamp":1502241825943}
{"from":"creationix","message":"I got sparse hyperdrives syncing to the browser over websocket","timestamp":1502244126480}
{"from":"creationix","message":"and persisting to IDB","timestamp":1502244131208}
{"from":"creationix","message":"how do I watch the progress of the sync. I see events on hypercore, but how do I get `feed` from `archive`","timestamp":1502244153042}
{"from":"ogd","message":"mafintosh: | sort | uniq -c | sort -rn","timestamp":1502244190367}
{"from":"creationix","message":"ahh, `archive.metadata` is the feed I want for the sparse sync. Seems it syncs the entire metadata up front?","timestamp":1502244270213}
{"from":"mafintosh","message":"creationix: its not synced up front but sparse mode only applies for content atm","timestamp":1502244334355}
{"from":"mafintosh","message":"but we should change that","timestamp":1502244338720}
{"from":"mafintosh","message":"pr welcome, should be easy","timestamp":1502244343506}
{"from":"mafintosh","message":"(just a flag)","timestamp":1502244348186}
{"from":"creationix","message":"so both could be sparse, nice","timestamp":1502244355263}
{"from":"creationix","message":"mafintosh: something like this? https://github.com/mafintosh/hyperdrive/pull/180","timestamp":1502244581592}
{"from":"mafintosh","message":"the .latest thing is not needed for metadata but cool","timestamp":1502244629010}
{"from":"creationix","message":"ok, wasn't sure if it was just an alias for `sparse`. I'll take it out if it's not relevent","timestamp":1502244667186}
{"from":"mafintosh","message":"creationix: cool, thanks. i'll release this later today (just ran out of power)","timestamp":1502244891653}
{"from":"creationix","message":"no worries, I'm testing now in my project to see if it helps","timestamp":1502244907436}
{"from":"creationix","message":"hmm, `archive.content` is null right after `archive#ready` event. But `archive.metadata` is a feed","timestamp":1502245148893}
{"from":"mafintosh","message":"Yea content is populated lazily","timestamp":1502245203261}
{"from":"mafintosh","message":"Cause it needs to replicate first","timestamp":1502245215971}
{"from":"mafintosh","message":"creationix: on('content') is emitted tho when it is set","timestamp":1502245256430}
{"from":"creationix","message":"thanks","timestamp":1502245271383}
{"from":"TheGillies","message":"Am I not supposed to share keys across computers?","timestamp":1502247063456}
{"from":"TheGillies","message":"I'm unclear on this","timestamp":1502247075368}
{"from":"creationix","message":"TheGillies: no, if multiple writers diverge, bad things happen","timestamp":1502248741805}
{"from":"creationix","message":"though the upcoming hyperdb seems to help with this","timestamp":1502248753180}
{"from":"mafintosh","message":"Keys are per machine ya. Support for this is coming","timestamp":1502251926897}
{"from":"jondashkyle","message":"mafintosh: super excited for this !","timestamp":1502252534005}
{"from":"TheGillies","message":"oh","timestamp":1502252995339}
{"from":"TheGillies","message":"can't wait then","timestamp":1502253000235}
{"from":"TheGillies","message":"I have my website reverse proxied to my dat","timestamp":1502253021260}
{"from":"TheGillies","message":"so would be nice to publish from multiple computers","timestamp":1502253040949}
{"from":"karissa","message":"you can do dat keys import and dat keys export but it's experimental","timestamp":1502254082978}
{"from":"yoshuawuyts","message":"ralphtheninja[m]: ping","timestamp":1502285933487}
{"from":"ralphtheninja[m]","message":"yoshuawuyts: pong","timestamp":1502286034227}
{"from":"yoshuawuyts","message":"ralphtheninja[m]: have you ever tried creating shared memory between Node processes?","timestamp":1502286070633}
{"from":"ralphtheninja[m]","message":"yoshuawuyts: nope, but sounds cool :)","timestamp":1502286088357}
{"from":"ralphtheninja[m]","message":"memory mapped files?","timestamp":1502286107270}
{"from":"yoshuawuyts","message":"ralphtheninja[m]: haha, yeah I'm doing a lil science project","timestamp":1502286117781}
{"from":"yoshuawuyts","message":"ralphtheninja[m]: was thinking more like: a shared JS object","timestamp":1502286139387}
{"from":"yoshuawuyts","message":"not sure if it's a really bad idea, or a great idea","timestamp":1502286177086}
{"from":"ralphtheninja[m]","message":"it can be both :)","timestamp":1502286191541}
{"from":"yoshuawuyts","message":":D","timestamp":1502286209920}
{"from":"ralphtheninja[m]","message":"exploring is never a bad idea","timestamp":1502286238641}
{"from":"yoshuawuyts","message":"yeah, v true","timestamp":1502286269174}
{"from":"ralphtheninja[m]","message":"yoshuawuyts: what would the api look like?","timestamp":1502286294106}
{"from":"yoshuawuyts","message":"`var sharedObject = require('shared-object')()`","timestamp":1502286327534}
{"from":"yoshuawuyts","message":"and then you can mutate it","timestamp":1502286334740}
{"from":"yoshuawuyts","message":"probably pass it a unique ID in the constructor so you can connect two objects","timestamp":1502286348004}
{"from":"yoshuawuyts","message":"well, connect 'em between processes","timestamp":1502286358127}
{"from":"yoshuawuyts","message":"might race super hard - but that's part of the tradeoff","timestamp":1502286374102}
{"from":"ralphtheninja[m]","message":"`hyperobject` :)","timestamp":1502286386719}
{"from":"yoshuawuyts","message":"`hyper-ohno-bject`","timestamp":1502286418593}
{"from":"ralphtheninja[m]","message":"lol","timestamp":1502286435928}
{"from":"cblgh","message":"oh yeah ralphtheninja[m] someone wanted a dat SO change to go through and apparently you're stacked with SO points, so mentioning it :~","timestamp":1502288483347}
{"from":"cblgh","message":"20:24:09 < barnie> https://stackoverflow.com/review/suggested-edits/16958059","timestamp":1502288499677}
{"from":"cblgh","message":"20:24:23 < barnie> https://stackoverflow.com/review/suggested-edits/16958060","timestamp":1502288499782}
{"from":"cblgh","message":"20:24:33 < barnie> https://stackoverflow.com/review/suggested-edits/16957433","timestamp":1502288499783}
{"from":"cblgh","message":"20:24:45 < barnie> https://stackoverflow.com/review/suggested-edits/16957434","timestamp":1502288500576}
{"from":"ralphtheninja[m]","message":"approved the first one, seems the other was already handled by others (even one rejected which I dont understand)","timestamp":1502289362618}
{"from":"barnie","message":"ralphtheninja[m] : thanks. yea saw one was downvoted, np.","timestamp":1502290044782}
{"from":"barnie","message":"thanks cblgh:","timestamp":1502290143136}
{"from":"cblgh","message":":3","timestamp":1502292468089}
{"from":"bret","message":"I want that hyperdb readme","timestamp":1502297303767}
{"from":"ogd","message":"bret: i added some hyperdb info to https://github.com/datproject/docs/blob/master/papers/dat-paper.md","timestamp":1502301955703}
{"from":"ogd","message":"well, multiwriter","timestamp":1502301964956}
{"from":"bret","message":"TY 🙏","timestamp":1502301972903}
{"from":"bret","message":"https://github.com/datproject/docs/blob/master/papers/dat-paper.md#5-multi-writer","timestamp":1502302012975}
{"from":"bret","message":"nice","timestamp":1502302013762}
{"from":"mafintosh","message":"ralphtheninja[m]:","timestamp":1502302026007}
{"from":"cblgh","message":"mafintosh: ogd ohhhh cool, so hyperdb will rather be seen as a single dat archive key now?","timestamp":1502302762309}
{"from":"cblgh","message":"going from the multiwriter docs above","timestamp":1502302769428}
{"from":"cblgh","message":"also since bob has write access, does that mean that he can add others to the repository?","timestamp":1502302787618}
{"from":"cblgh","message":"i.e. repository access rights have potential to become viral, which i think is a really interesting idea","timestamp":1502302812861}
{"from":"creationix","message":"ogd: is there anything in dat that you wish you could change now that you've learned more and used it, but can't because it would be too much of a breaking change?","timestamp":1502302919561}
{"from":"creationix","message":"also, is the markdown `dat-paper.md` always up to date? Is the PDF version built from this?","timestamp":1502303531739}
{"from":"mafintosh","message":"cblgh: there will be two permissions, writer and owner","timestamp":1502303868439}
{"from":"mafintosh","message":"owner can add new owner and writers","timestamp":1502303874742}
{"from":"mafintosh","message":"writers can only write","timestamp":1502303878347}
{"from":"cblgh","message":"mafintosh: o","timestamp":1502303885055}
{"from":"cblgh","message":"mafintosh: can i designate owners easily?","timestamp":1502303897684}
{"from":"mafintosh","message":"creationix: we are super happy with it as is and would want to change anything","timestamp":1502303916469}
{"from":"mafintosh","message":"cblgh: yea using a js api (there will be a cli flow as well)","timestamp":1502303940715}
{"from":"creationix","message":"mafintosh: *wouldn't* want to change anything?","timestamp":1502303954503}
{"from":"mafintosh","message":"creationix: *wouldnt haha","timestamp":1502303962799}
{"from":"mafintosh","message":"sorry my n key is sticky","timestamp":1502303968832}
{"from":"creationix","message":"I understand, half my keys are sticky (2012 MBP) *and* I'm a little dyslexic","timestamp":1502304015122}
{"from":"cblgh","message":"mafintosh: alright nice","timestamp":1502304015228}
{"from":"mafintosh","message":"the paper reflects 3 years of work where we went through multiple breaking changes","timestamp":1502304027781}
{"from":"cblgh","message":"mafintosh: these points apply to hyperdb as-is as well?","timestamp":1502304028843}
{"from":"mafintosh","message":"cblgh: ya","timestamp":1502304033714}
{"from":"cblgh","message":"<3","timestamp":1502304036278}
{"from":"creationix","message":"I think the dat design will work well for my needs, but I keep hitting issues with the implementation that make it not feasible. I just want to make sure everyone is happy with the design before I dive into a faithful reimplementation or fix my issues in the current code.","timestamp":1502304145318}
{"from":"creationix","message":"Every time I go back to my own design, it keeps converging with dat's design. I think it's a sign I should just use dat :P","timestamp":1502304232612}
{"from":"karissa","message":"creationix: we have gone through that exercise with 3 or 4 full versions of dat :P","timestamp":1502304349849}
{"from":"karissa","message":"creationix: what's your use case?","timestamp":1502304354703}
{"from":"creationix","message":"I need a super tiny JS library in the browser for syncing subsets of datasets between various machines. I need to support slow and unreliable internet. Also I have server devices with fairly large storage, but still crappy internet and sitting behind NATs and dynamic IPs","timestamp":1502304453973}
{"from":"ralphtheninja","message":"mafintosh: yep?","timestamp":1502304510702}
{"from":"creationix","message":"I need key management so that various devices can be authorized to user identities which in turn manage dat keys for read-only and read-write access","timestamp":1502304511909}
{"from":"creationix","message":"karissa: the tiny library requirement in particular is hard. The existing code is only usable with browserify and any basic library starts at 500kb","timestamp":1502304576384}
{"from":"creationix","message":"but also I keep hitting bugs in the way I use it (always as sparse as possible, store feeds only, not real files, etc)","timestamp":1502304608356}
{"from":"karissa","message":" your use case is quite advanced!","timestamp":1502304663890}
{"from":"karissa","message":"cool","timestamp":1502304665221}
{"from":"creationix","message":"(for comparison, my design's implementation is currently at 65k not minified or optimized for size at all)","timestamp":1502304683578}
{"from":"creationix","message":"I worry the 500kb baseline for JS will hurt my customers (cheap android phones with 2g internet)","timestamp":1502304789192}
{"from":"karissa","message":"yeah.","timestamp":1502304811276}
{"from":"karissa","message":"makes sense","timestamp":1502304817610}
{"from":"creationix","message":"that said, I'm pretty good at implementing binary protocols and crypto stuff as tiny libraries. It just takes time","timestamp":1502304847787}
{"from":"creationix","message":"mafintosh: hyperdb replaces append-tree right?","timestamp":1502305820759}
{"from":"mafintosh","message":"creationix: exactly","timestamp":1502305922694}
{"from":"creationix","message":"and that's using the HAMT internally if I remember correctly","timestamp":1502305967920}
{"from":"mafintosh","message":"Correct","timestamp":1502305997617}
{"from":"substack","message":"perhaps hypercore internals could be split out into pieces that don't depend on buffer or stream","timestamp":1502306118050}
{"from":"creationix","message":"substack: that would help a lot, node streams in particular seems heavy.","timestamp":1502306176010}
{"from":"creationix","message":"(plus I just plain don't like them)","timestamp":1502306182210}
{"from":"mafintosh","message":"ralphtheninja[m]: was just gonna say thank you for all your pr help so far","timestamp":1502306201665}
{"from":"mafintosh","message":"Its more helpful thab you think","timestamp":1502306205015}
{"from":"TheGillies","message":"I have a love hate relationship with javascript","timestamp":1502306616918}
{"from":"TheGillies","message":"like, the ones closest to us, hurt us the most","timestamp":1502306649024}
{"from":"noffle","message":"mafintosh: yo where are you and andrew at today?","timestamp":1502306844351}
{"from":"noffle","message":"nm just saw your tweet :)","timestamp":1502306870525}
{"from":"mafintosh","message":"noffle: just replied on Twitter","timestamp":1502306879385}
{"from":"mafintosh","message":"noffle: omw to arbor","timestamp":1502306890588}
{"from":"mafintosh","message":"andrewosh is swinging by later today. He had some stuff he had to do first","timestamp":1502306990758}
{"from":"mafintosh","message":"karissa: o/ (omw to Oakland)","timestamp":1502307496139}
{"from":"TheGillies","message":"By looking at swarm is it possible to see what dats exist?","timestamp":1502307843792}
{"from":"bret","message":"TheGillies: good question","timestamp":1502307996333}
{"from":"creationix","message":"what is `mio` and `DOI`?","timestamp":1502308304218}
{"from":"mafintosh","message":"creationix: million (danish abbreviation)","timestamp":1502308477523}
{"from":"mafintosh","message":"DOI is a scientific paper/dataset identifier","timestamp":1502308503220}
{"from":"creationix","message":"so like a row of a csv or something?","timestamp":1502308524364}
{"from":"ralphtheninja","message":"mafintosh: cool, happy to help :)","timestamp":1502308674034}
{"from":"ralphtheninja","message":"if I do `dat .` inside a folder, is that the same as doing `dat sync .`?","timestamp":1502309787737}
{"from":"jhand","message":"ralphtheninja: yes mostly. it'll also create a new dat if one doesn't exist (sync doesn't)","timestamp":1502309845512}
{"from":"ralphtheninja[m]","message":"jhand: thx","timestamp":1502311213285}
{"from":"ralphtheninja[m]","message":"mafintosh: https://events.ccc.de/2017/08/09/34c3-presale/","timestamp":1502311224503}
{"from":"ogd","message":"mafintosh: https://www.npmjs.com/package/s2-geometry","timestamp":1502321034782}
{"from":"ogd","message":"creationix: haha you're a maintainer on that. would that be the best way to do 2d spatial queries on top of hyperdb?","timestamp":1502321077782}
{"from":"dat-gitter","message":"(e-e-e) just opened a small pr to fix typo’s I noticed up on while reading the dat-paper https://github.com/datproject/docs/pull/82","timestamp":1502324211271}
{"from":"dat-gitter","message":"(corntoole) Where could find links about the rust implementation of dat?","timestamp":1502326522519}
{"from":"ralphtheninja[m]","message":"is there a rust implementation of dat?","timestamp":1502326622348}
{"from":"dat-gitter","message":"(corntoole) I've heard mention of people in the community working on a rust implementation.","timestamp":1502327353680}
{"from":"TheGillies","message":"once dat has multi-writers I'm gonna try using it as a syncthing replacement","timestamp":1502331641402}
{"from":"TheGillies","message":"syncthing cli kinda sucks","timestamp":1502331689126}
{"from":"mafintosh","message":"juul: i'm at sudoroom. you around?","timestamp":1502332079794}
{"from":"juul","message":"mafintosh: yeah i'm on my way from downtown. be there in 20 mjnues","timestamp":1502332137829}
{"from":"ogd","message":"corntoole we dont have any rust code but we have been porting our c++ dependencies to web assembly","timestamp":1502332455850}
{"from":"dat-gitter","message":"(TimothyStiles) Does anyone know if there is a fast, synchronous way to check if a string could potentially be a valid dat key?","timestamp":1502332563359}
{"from":"ogd","message":"itd have to either start with dat:// or be 64 chars long","timestamp":1502332624206}
{"from":"dat-gitter","message":"(corntoole) ogd: would the wasm port of the C++ dependencies be to make dat easier run in the browser without browserify?","timestamp":1502334175659}
{"from":"ogd","message":"corntoole we use the wasm to make it possible to run dat in the browser *with* browserify, thats why browserify -r hyperdrive works today using the wasm code","timestamp":1502334217808}
{"from":"ogd","message":"thanks to browserify transforms on our wasm modules","timestamp":1502334226075}
{"from":"dat-gitter","message":"(corntoole) If one were to start porting the dat protocol stack, is hypercore-protocol a good place to start or somewhere else?","timestamp":1502335060385}
{"from":"bret","message":"mafintosh: ogd: what do you like using for presentation slides these days?","timestamp":1502335111633}
{"from":"dat-gitter","message":"(TimothyStiles) ogd: can a key be less than 64 characters long if it begins with dat:// ?","timestamp":1502335206010}
{"from":"ogd","message":"corntoole youd wanna start porting the hypercore tests, ya. mafintosh might have a better idea of the ordering","timestamp":1502335383471}
{"from":"ogd","message":"timothystiles if it doesnt start with dat, and it isnt 64 long, it definitely isnt a dat key. if it starts with dat:// then it would be 70 long","timestamp":1502335422092}
{"from":"mafintosh","message":"@corntoole what are you porting it to?","timestamp":1502335431727}
{"from":"ogd","message":"timothystiles although in beaker i think they do dat://coolname.com and do dns resolution","timestamp":1502335445896}
{"from":"ogd","message":"or like .well-known resolution over https","timestamp":1502335453909}
{"from":"mafintosh","message":"oh rust, i ported our flat-tree stuff to rust for fun a while ago, https://github.com/mafintosh/flat-tree-rs","timestamp":1502335467134}
{"from":"ogd","message":"oops forgot about that","timestamp":1502335536007}
{"from":"TheGillies","message":"I think dat cli works with well known name resolution","timestamp":1502335840903}
{"from":"TheGillies","message":"coulda swore i used it the other day","timestamp":1502335846530}
{"from":"mafintosh","message":"oh yea that works now thanks to jhandn","timestamp":1502335878497}
{"from":"dat-gitter","message":"(corntoole) @mafintosh a C or C++ library. I'm actually pretty open to which language, but something native that opens the path to other language bindings. I'm just working in UML right now, trying to understand the protocol and the javascript implementation.","timestamp":1502335912229}
{"from":"mafintosh","message":"@corntool nice, super interested in a c impl","timestamp":1502336769290}
{"from":"mafintosh","message":"i'd be happy to contribute to that","timestamp":1502336775933}
{"from":"mafintosh","message":"looking at hypercore and hypercore-protocol is a good place to start","timestamp":1502336806568}
{"from":"jhand","message":"@TimothyStiles I think dat-encoding module will verify most links (not resolve) if you still looking for something","timestamp":1502338836486}
{"from":"karissa","message":"TimothyStiles check out dat-encoding","timestamp":1502339056843}
{"from":"dat-gitter","message":"(e-e-e) has anyone played with using aws s3 as storage for dat? Or is it a crazy idea because of the amount of get requests that might be needed.","timestamp":1502353372097}
{"from":"dat-gitter","message":"(paologf) Hi everyone,","timestamp":1502359809076}
{"from":"dat-gitter","message":"(paologf) I'm starting using Dat but I have some issue running hypercored.","timestamp":1502359809181}
{"from":"dat-gitter","message":"(paologf) I have two computer (OSX) on the same network and sync works fine till I use `share` and `clone` from terminal or in the app.","timestamp":1502359809287}
{"from":"dat-gitter","message":"(paologf) [full message: https://gitter.im/datproject/discussions?at=598c3100bc464729746d991c]","timestamp":1502359809287}
{"from":"yoshuawuyts","message":"pfrazee: alright, maybs I might be coming around on async/await","timestamp":1502363053032}
{"from":"yoshuawuyts","message":"pfrazee: with some proper hacking, and used in isolated environments it might be reasonable for application / glue code","timestamp":1502363079905}
{"from":"dat-gitter","message":"(sdockray) 1","timestamp":1502363080419}
{"from":"yoshuawuyts","message":"https://usercontent.irccloud-cdn.com/file/XHwIFCRQ/Screen%20Shot%202017-08-10%20at%2013.03.44.png","timestamp":1502363096325}
{"from":"ralphtheninja[m]","message":"yoshuawuyts: I've been wondering how error handling works with async/await","timestamp":1502363445050}
{"from":"ralphtheninja[m]","message":"is that ^ the preferred way to do it?","timestamp":1502363459777}
{"from":"yoshuawuyts","message":"ralphtheninja[m]: lmao, it should be - the common route for most people is to try/catch everywhere","timestamp":1502363478884}
{"from":"ralphtheninja[m]","message":"ugh","timestamp":1502363521432}
{"from":"ralphtheninja[m]","message":"so you get try/catch-hell instead of callback-hell :D","timestamp":1502363574805}
{"from":"yoshuawuyts","message":"ralphtheninja[m]: ya; in practice it also means few people show how to handle errors in examples, etc.","timestamp":1502363581109}
{"from":"ralphtheninja[m]","message":"aye","timestamp":1502363594545}
{"from":"dat-gitter","message":"(e-e-e) it depends on context too though","timestamp":1502363603473}
{"from":"dat-gitter","message":"(e-e-e) if your in an express app its gold","timestamp":1502363612104}
{"from":"dat-gitter","message":"(e-e-e) as any errors will be caught by middle ware","timestamp":1502363638696}
{"from":"dat-gitter","message":"(e-e-e) so no need for try catch","timestamp":1502363647117}
{"from":"dat-gitter","message":"(e-e-e) promises on the otherhand, in express, force .catch(next) everywhere.","timestamp":1502363714965}
{"from":"yoshuawuyts","message":"e-e-e in practice it means that you can't differentiate between expected / unexpected errors - no way to log at the right level, etc.","timestamp":1502363715789}
{"from":"dat-gitter","message":"(e-e-e) not sure I agree","timestamp":1502363741217}
{"from":"yoshuawuyts","message":"e-e-e pretty sure that won't make for a great debugging experience :/","timestamp":1502363743407}
{"from":"dat-gitter","message":"(e-e-e) stack trace","timestamp":1502363744652}
{"from":"yoshuawuyts","message":"e-e-e stack traces don't propagate intent, at best they show a code path - but unless you have long traces even that is a bare minimum","timestamp":1502363778398}
{"from":"dat-gitter","message":"(e-e-e) yes yes - but smart placement of error handling should deal with that. You dont need try catch everywhere.","timestamp":1502363827972}
{"from":"yoshuawuyts","message":"ralphtheninja[m]: https://www.npmjs.com/package/asde is a lib that allows callbacks to syntax I posted ^","timestamp":1502363833281}
{"from":"ralphtheninja[m]","message":"yoshuawuyts: aah right, I've seen it before but forgot about it","timestamp":1502363872152}
{"from":"dat-gitter","message":"(e-e-e) yoshuawuyts: I am also not totally sold on await btw. just in some contexts its super useful.","timestamp":1502363880838}
{"from":"ralphtheninja[m]","message":"nice to see paolo is still around :)","timestamp":1502363888474}
{"from":"yoshuawuyts","message":"ralphtheninja[m]: yeah; he and juliangruber are teaming up again ^__^","timestamp":1502363976812}
{"from":"yoshuawuyts","message":"ralphtheninja[m]: if I got my facts right, Voltra is using SSB under the hood - sounds like a fun project to hack on","timestamp":1502364027714}
{"from":"ralphtheninja[m]","message":"def","timestamp":1502364227846}
{"from":"ralphtheninja[m]","message":"asde is a neat module","timestamp":1502364235662}
{"from":"yoshuawuyts","message":"ralphtheninja[m]: if :: bind syntax ever lands you could do `await fs.stat::asde('/tmp')` or rename it to `async` for `await fs.state::async('/tmp')`","timestamp":1502364337620}
{"from":"yoshuawuyts","message":"*fs.stat","timestamp":1502364345968}
{"from":"ralphtheninja","message":"yoshuawuyts: not famililar with the :: bind syntax","timestamp":1502367371309}
{"from":"yoshuawuyts","message":"ralphtheninja: it's so you can call a method as if it was a method on the prototype. E.g. arr::forEach() uses an external method named forEach, but like as if it was part of the prototype","timestamp":1502367511198}
{"from":"yoshuawuyts","message":"ralphtheninja: its one of those additions that if they'd landed on top of ES3, all array methods wouldn't have needed to be part of ES5","timestamp":1502367567829}
{"from":"yoshuawuyts","message":"If only","timestamp":1502367615510}
{"from":"ralphtheninja","message":"gotcha","timestamp":1502367669895}
{"from":"ralphtheninja","message":"yoshuawuyts: and I take it you still have access to the methods that are already on the prototyp","timestamp":1502367690303}
{"from":"ralphtheninja","message":"+e","timestamp":1502367693298}
{"from":"yoshuawuyts","message":"ralphtheninja: yep","timestamp":1502367696588}
{"from":"yoshuawuyts","message":"ralphtheninja: for perf reasons it'd have been super neat to get non-copy variants of .map, .reduce, etc. - this would allow defining and using them in userland","timestamp":1502367746667}
{"from":"ralphtheninja[m]","message":"yoshuawuyts: I found a nice es6 resource https://ponyfoo.com/articles/tagged/es6-in-depth","timestamp":1502369113043}
{"from":"pfrazee","message":"yoshuawuyts: join the bieber side!","timestamp":1502369515388}
{"from":"ralphtheninja[m]","message":"hehe","timestamp":1502369591903}
{"from":"blahah","message":"jhand you around?","timestamp":1502370881809}
{"from":"yoshuawuyts","message":"pfrazee: hey, was thinking - now that we got `create-choo-app`; it'd be neat if we could have a `npm run deploy` that drops it directly on hashbase","timestamp":1502373357120}
{"from":"yoshuawuyts","message":"pfrazee: what would be involved in making that happen?","timestamp":1502373364026}
{"from":"pfrazee","message":"yoshuawuyts: hmm. I'm guessing that would mean, make/update the dat, then post to hashbase and wait for the upload, right?","timestamp":1502373395308}
{"from":"yoshuawuyts","message":"pfrazee: yeah, sounds like it!","timestamp":1502373439834}
{"from":"pfrazee","message":"yoshuawuyts: yeah I think that'd be doable, I've been wanting better integration with Beaker too","timestamp":1502373478063}
{"from":"yoshuawuyts","message":"pfrazee: would be really neat. Would mean going from 0 to P2P websites would take no effort","timestamp":1502373533043}
{"from":"pfrazee","message":"yoshuawuyts: I think it's probably doable now, the web APIs are pretty well specced https://github.com/beakerbrowser/hashbase/blob/master/docs/webapis.md","timestamp":1502373537661}
{"from":"yoshuawuyts","message":"pfrazee: nice! Are you planning to release a CLI tool for this too? / Does one exist already?","timestamp":1502373664659}
{"from":"yoshuawuyts","message":"Basically trying to figure out a way to make this happen with as little work from me required haha","timestamp":1502373689318}
{"from":"pfrazee","message":"yoshuawuyts: that's not on my current todo list but I bet I could throw one together, you want me to put that on the todo list?","timestamp":1502373690894}
{"from":"pfrazee","message":"haha yeah","timestamp":1502373694607}
{"from":"yoshuawuyts","message":"Yeah, that'd be rad!","timestamp":1502373725295}
{"from":"pfrazee","message":"ok I'll add it to my list","timestamp":1502373758118}
{"from":"yoshuawuyts","message":"Woot!","timestamp":1502373763324}
{"from":"yoshuawuyts","message":"pfrazee: another one for your TODO list: twitter cards on link shares :P https://twitter.com/yoshuawuyts/status/895648213230772225","timestamp":1502374179212}
{"from":"yoshuawuyts","message":"pfrazee: we'll be adding those to bankai soon too; but it just makes it so much more easier to spot when sharing on social media haha","timestamp":1502374229336}
{"from":"pfrazee","message":"yeah that's weird, we're usually pretty good about that","timestamp":1502374274345}
{"from":"yoshuawuyts","message":"odd","timestamp":1502374389704}
{"from":"pfrazee","message":"yeah we just forgot it for this","timestamp":1502374490354}
{"from":"creationix","message":"ogd: sorry, no idea. He just added me as a maintainer","timestamp":1502376923426}
{"from":"yoshuawuyts","message":"pfrazee: oh, another question - does hashbase detect file extensions for compression? e.g. if bundle.js.gz exists, will it be served instead of bundle.js ?","timestamp":1502377538660}
{"from":"pfrazee","message":"yoshuawuyts: no","timestamp":1502377553146}
{"from":"yoshuawuyts","message":"pfrazee: does it compress on the fly?","timestamp":1502377565997}
{"from":"pfrazee","message":"yoshuawuyts: yeah IIRC we use compression over the wire","timestamp":1502377577142}
{"from":"yoshuawuyts","message":"pfrazee: ok cool! - will let that slide in bankai then for a bit","timestamp":1502377597413}
{"from":"pfrazee","message":"cool","timestamp":1502377618485}
{"from":"yoshuawuyts","message":"thanks for the quick replies :D","timestamp":1502377632795}
{"from":"pfrazee","message":"sure thing","timestamp":1502377669221}
{"from":"TheGillies","message":"I was trying to tell someone about dat \"Hey have you heard about dat protocol?\"; \"Which one\"; \"dat\"; \"huh\"","timestamp":1502377888290}
{"from":"pfrazee","message":"hah","timestamp":1502378139397}
{"from":"jondashkyle","message":"TheGillies: i’ve gotten this too!","timestamp":1502378551682}
{"from":"jondashkyle","message":"“dat p2p web”","timestamp":1502378560220}
{"from":"jondashkyle","message":"hahaha","timestamp":1502378561338}
{"from":"mafintosh","message":":)","timestamp":1502379161215}
{"from":"ralphtheninja","message":"explained dat to one at the camp and he got it instantly \"aah, so it's a dropbox\"","timestamp":1502380516097}
{"from":"ogd","message":"lol","timestamp":1502385122552}
{"from":"ogd","message":"TheGillies: lol","timestamp":1502385133278}
{"from":"jhand","message":"karissa: have you spent any time debugging the connection stuff on our docker boxes? Looking at https://github.com/datproject/dat/issues/841","timestamp":1502385249168}
{"from":"ogd","message":"jhand: is the data collection page you were workign on deployed anywhere i can preview","timestamp":1502385656075}
{"from":"jhand","message":"yes =)","timestamp":1502385682397}
{"from":"ogd","message":"jhand: mafintosh maybe we're binding to the wrong address for docker? https://github.com/datproject/dat/issues/841 e.g. like localhost vs 127.0.0.1 or something? i dunno","timestamp":1502385756318}
{"from":"karissa","message":"jhand: https://github.com/datproject/dat/pull/846","timestamp":1502386324168}
{"from":"ogd","message":"jhand: datacite dois by domain frequency","timestamp":1502387860225}
{"from":"ogd","message":"https://www.irccloud.com/pastebin/xPRiZKR3/","timestamp":1502387864388}
{"from":"ogd","message":"jhand: crossref dois by domain frequency","timestamp":1502387871540}
{"from":"ogd","message":"https://www.irccloud.com/pastebin/Sr0ZjxHs/","timestamp":1502387875360}
{"from":"ogd","message":"jhand: (crossref dois with type:dataset). about 1.5 million on crossref, about 8 million on datacite","timestamp":1502387894220}
{"from":"ralphtheninja[m]","message":"ogd: what does dois mean?","timestamp":1502388109591}
{"from":"ogd","message":"from doi.org","timestamp":1502388123290}
{"from":"ogd","message":"its an ID used to cite scholarly works","timestamp":1502388130678}
{"from":"ogd","message":"similar to DNS or short urls","timestamp":1502388136424}
{"from":"ralphtheninja[m]","message":"ah ok, thx","timestamp":1502388161621}
{"from":"jhand","message":"ogd: what is brillonline.com ?","timestamp":1502388478254}
{"from":"jhand","message":"website is down lol","timestamp":1502388494673}
{"from":"dat-gitter","message":"(scriptjs) @mafintosh: Do you have a document that describes trie in the way you have implemented it for hyperdb?","timestamp":1502388626357}
{"from":"mafintosh","message":"@scriptjs there is gonna be a paper","timestamp":1502388646003}
{"from":"dat-gitter","message":"(scriptjs) @mafintosh thanks, was there an original paper that you used in its creation?","timestamp":1502388691864}
{"from":"dat-gitter","message":"(scriptjs) to implement it","timestamp":1502388723687}
{"from":"mafintosh","message":"@scriptjs it's inspired from somem existing papers, adapted for a distributed system :)","timestamp":1502388921926}
{"from":"mafintosh","message":"@scriptjs https://idea.popcount.org/2012-07-25-introduction-to-hamt/","timestamp":1502388978622}
{"from":"mafintosh","message":"that has a good overview over the technique :)","timestamp":1502388990344}
{"from":"dat-gitter","message":"(scriptjs) super thanks, I was reading about suffix tree, a type of trie that can be used for full text search as well","timestamp":1502389057593}
{"from":"mafintosh","message":"tries are pretty neat","timestamp":1502389252885}
{"from":"dat-gitter","message":"(scriptjs) yeah for sure","timestamp":1502389278975}
{"from":"dat-gitter","message":"(scriptjs) @mafintosh are you going furhter with apis for hyperdb?","timestamp":1502389304508}
{"from":"mafintosh","message":"@scriptjs what do you mean?","timestamp":1502389323289}
{"from":"dat-gitter","message":"(scriptjs) I think you had some form of iteration. But also perhaps getAll, compaction, bulk insertion etc","timestamp":1502389483717}
{"from":"dat-gitter","message":"(scriptjs) @mafintosh Compaction be forking and rewriting for example","timestamp":1502389570727}
{"from":"mafintosh","message":"@scriptjs oh right. i'm adding iterator apis :)","timestamp":1502389599397}
{"from":"mafintosh","message":"and bulk insert/deletes","timestamp":1502389607672}
{"from":"mafintosh","message":"that are also transactional","timestamp":1502389612829}
{"from":"dat-gitter","message":"(scriptjs) oooooo. I like that","timestamp":1502389628695}
{"from":"mafintosh","message":"it is actually *auto* compacting","timestamp":1502389629705}
{"from":"dat-gitter","message":"(scriptjs) how does that work?","timestamp":1502389648852}
{"from":"mafintosh","message":"it can tell the underlying storage provider when a value is being unreferenced","timestamp":1502389684124}
{"from":"mafintosh","message":"and then the fs \"hole punches\" that disk page","timestamp":1502389696404}
{"from":"dat-gitter","message":"(scriptjs) this is much smarter than couch or pouch","timestamp":1502389719541}
{"from":"mafintosh","message":"on file systems that support hole punching","timestamp":1502389736554}
{"from":"mafintosh","message":"which is the new APFS, ext* and windows","timestamp":1502389750483}
{"from":"mafintosh","message":"that i know of","timestamp":1502389753943}
{"from":"dat-gitter","message":"(scriptjs) I’ve not done much with window yet with hypercore, hyperdrive. Not sure if there are windows people that have been using these yet","timestamp":1502389791866}
{"from":"dat-gitter","message":"(scriptjs) s/ windows","timestamp":1502389796794}
{"from":"dat-gitter","message":"(scriptjs) @mafintosh so is what you’ve got for a base api for hyperdb reasonably stable","timestamp":1502389860427}
{"from":"dat-gitter","message":"(scriptjs) I mean I can expect those apis to stay","timestamp":1502389880121}
{"from":"mafintosh","message":"@scriptjs the apis in the rc pr are stable","timestamp":1502389930266}
{"from":"mafintosh","message":"Once it is merged","timestamp":1502389937744}
{"from":"mafintosh","message":"Which most likely will be today","timestamp":1502389950344}
{"from":"dat-gitter","message":"(scriptjs) kk. so that is the 1.0.0 there?","timestamp":1502389964062}
{"from":"dat-gitter","message":"(scriptjs) branch","timestamp":1502389966964}
{"from":"millette","message":"darn it, I wanted it yesterday","timestamp":1502389970394}
{"from":"dat-gitter","message":"(scriptjs) hehe","timestamp":1502389974270}
{"from":"mafintosh","message":"Yea","timestamp":1502389975383}
{"from":"dat-gitter","message":"(scriptjs) kk, cool","timestamp":1502389991398}
{"from":"mafintosh","message":"millette: software estimates haha","timestamp":1502389997378}
{"from":"millette","message":"I should write an app to do those estimates, wonder how long that would take...","timestamp":1502390033305}
{"from":"mafintosh","message":"There might be storage format tweaks before I declare it LTS","timestamp":1502390036681}
{"from":"mafintosh","message":"But I don't expect any atm","timestamp":1502390048669}
{"from":"dat-gitter","message":"(scriptjs) worse case it would mean rewriting the data I guess","timestamp":1502390064368}
{"from":"dat-gitter","message":"(scriptjs) @mafintosh How did your run work with your 81m records","timestamp":1502390094886}
{"from":"mafintosh","message":"@scriptjs worked!!!","timestamp":1502390132387}
{"from":"mafintosh","message":"It can find any key in around 4ms","timestamp":1502390148284}
{"from":"dat-gitter","message":"(scriptjs) Sweet","timestamp":1502390158017}
{"from":"dat-gitter","message":"(scriptjs) That’s reallly awesome !","timestamp":1502390176400}
{"from":"dat-gitter","message":"(scriptjs) @mafintosh So realistically, one should be running a multifeed on each end. Is that you see things?","timestamp":1502390354109}
{"from":"mafintosh","message":"@scriptjs hyperdb embeds a multifeed - it is multi writer :)","timestamp":1502390397001}
{"from":"dat-gitter","message":"(scriptjs) kk, I am thinking of a scenario where I want other hypercores to feed together with it","timestamp":1502390510736}
{"from":"dat-gitter","message":"(scriptjs) @mafintosh So I have some things working in one direction and other in both directions","timestamp":1502390544445}
{"from":"dat-gitter","message":"(scriptjs) s/ others","timestamp":1502390558639}
{"from":"dat-gitter","message":"(scriptjs) s/ I’d like working. I will be experimenting with multifeed this week and more with hyperdb to see what can be done","timestamp":1502391009154}
{"from":"creationix","message":"mafintosh: what do you think about maintaining ES6 module versions of your libraries? https://github.com/mafintosh/flat-tree/pull/3","timestamp":1502391352768}
{"from":"creationix","message":"I started with an easy one that has no dependencies.","timestamp":1502391391993}
{"from":"mafintosh","message":"creationix: supporting two files with the same code sounds like unneeded work to me","timestamp":1502391855905}
{"from":"creationix","message":"yep, though one could build the other","timestamp":1502391967685}
{"from":"bret","message":"why on earth did es modules do it this way","timestamp":1502393065345}
{"from":"ogd","message":"creationix: pretty unlikely we'll switch off node modules till es6 modules work by default with all major run times we support without transpiling","timestamp":1502395038795}
{"from":"creationix","message":"fair enough","timestamp":1502395250431}
{"from":"pfrazee","message":"https://twitter.com/pfrazee/status/895748571718340608","timestamp":1502398296186}
{"from":"pfrazee","message":"^ mafintosh ogd karissa jhand","timestamp":1502398617602}
{"from":"karissa","message":"pfrazee: oh interesting","timestamp":1502400655103}
{"from":"jhand","message":"mafintosh: tried to fix that indexing bug but now I remember why we hadn't done it before, complicated!","timestamp":1502401526993}
{"from":"mafintosh","message":"jhand: haha","timestamp":1502401539983}
{"from":"mafintosh","message":"sorry","timestamp":1502401542480}
{"from":"jhand","message":"Ya seems like there is just no straightforward way to add it to the write APIs, since its also used on `.clear()`. I think maybe the `archive.defaults({indexing: false})` may be the solution for now.","timestamp":1502401717510}
{"from":"mafintosh","message":"jhand: remind me again why this is complicated :)","timestamp":1502401789478}
{"from":"mafintosh","message":"i remember it was, just not why","timestamp":1502401797602}
{"from":"jhand","message":"well we wanted to do it as an option on `.writeFile()` etc. but that means we have to support passing opts though in modules like mirror-folder (or any module that writes via hyperdrive). so it makes more sense to have it as a global option.","timestamp":1502401874043}
{"from":"jhand","message":"But if its a global option, you can't switch back and forth easily for each write","timestamp":1502401885160}
{"from":"jhand","message":"unless we implement that...","timestamp":1502401907263}
{"from":"mafintosh","message":"jhand: ah right, cause we want mirror-folder to use indexing: true","timestamp":1502401935938}
{"from":"mafintosh","message":"but everyone else to not","timestamp":1502401941654}
{"from":"mafintosh","message":"right?","timestamp":1502401943825}
{"from":"jhand","message":"mafintosh: *sometimes* we want mirror-folder to use indexing true","timestamp":1502401961910}
{"from":"jhand","message":"if src === dest","timestamp":1502401966702}
{"from":"mafintosh","message":"right yea okay","timestamp":1502401974477}
{"from":"mafintosh","message":"but never indexing true outside basically right?","timestamp":1502401985317}
{"from":"mafintosh","message":"jhand: the .defaults thing is the way to go","timestamp":1502402005564}
{"from":"jhand","message":"ya ok. and it isn't too bad if indexing: false is default. Just not sure what that means for the deletes","timestamp":1502402027991}
{"from":"jhand","message":"mafintosh: cool I'll start implementing that.","timestamp":1502402048593}
{"from":"jhand","message":"mafintosh: was trying to understand what it meant if indexing was false for archive.metadata","timestamp":1502402061769}
{"from":"mafintosh","message":"jhand: sweet, hopefully that is easy to add","timestamp":1502402063482}
{"from":"jhand","message":"or true. I don't understand how it works on the archive metadata","timestamp":1502402081931}
{"from":"mafintosh","message":"jhand: archive.metadata should always run with indexing: false","timestamp":1502402089430}
{"from":"mafintosh","message":"basically never touch that one","timestamp":1502402095364}
{"from":"mafintosh","message":"only the content one","timestamp":1502402097755}
{"from":"jhand","message":"mafintosh: ok thats not the case right now","timestamp":1502402100173}
{"from":"mafintosh","message":"you sure?","timestamp":1502402106826}
{"from":"jhand","message":"https://github.com/mafintosh/hyperdrive/blob/master/index.js#L677","timestamp":1502402125033}
{"from":"jhand","message":"ah wait those are content opts","timestamp":1502402142058}
{"from":"jhand","message":"nm https://github.com/mafintosh/hyperdrive/blob/master/index.js#L42","timestamp":1502402164358}
{"from":"mafintosh","message":"cool","timestamp":1502402172271}
{"from":"jhand","message":"was confusing because I saw those opts in two places","timestamp":1502402180637}
{"from":"jhand","message":"https://github.com/mafintosh/hyperdrive/blob/master/index.js#L722","timestamp":1502402197210}
{"from":"jhand","message":"mafintosh: we may want to put those in one place, looks like `opts.maxRequests` is getting passed through in one but not the other","timestamp":1502402281009}
{"from":"mafintosh","message":"jhand: oooohhhhhhhh","timestamp":1502402454230}
{"from":"mafintosh","message":"jhand: yea please do","timestamp":1502402457982}
{"from":"jhand","message":"mafintosh: will do, opened reminder issue. can you add me to the repo?","timestamp":1502402512960}
{"from":"mafintosh","message":"jhand: hyperdrive?","timestamp":1502402525789}
{"from":"jhand","message":"mafintosh: yea","timestamp":1502402530129}
{"from":"mafintosh","message":"jhand: done","timestamp":1502402644321}
{"from":"jhand","message":"thx","timestamp":1502402654409}
{"from":"ralphtheninja[m]","message":"jhand: are you working on https://github.com/mafintosh/hyperdrive/issues/181 ?","timestamp":1502404627555}
{"from":"jhand","message":"ralphtheninja[m]: not yet, looking at indexing stuff.","timestamp":1502404651302}
{"from":"ralphtheninja[m]","message":"I could give it a go otherwise","timestamp":1502404655743}
{"from":"jhand","message":"ralphtheninja[m]: all yours!","timestamp":1502404664401}
{"from":"ralphtheninja[m]","message":"sweet!","timestamp":1502404670536}
{"from":"ralphtheninja[m]","message":"jhand: I have some questions on this since I don't know the exact details, e.g. why is `keyPair` generated in different ways?","timestamp":1502405614820}
{"from":"ralphtheninja[m]","message":"and the public key is `index.content` in one case and `keyPair.publicKey` in the other","timestamp":1502405720984}
{"from":"jhand","message":"ralphtheninja[m]: you referring to the `self.metadata.writable && ` in the first one?","timestamp":1502405726285}
{"from":"jhand","message":"oh where is that?","timestamp":1502405733525}
{"from":"ralphtheninja[m]","message":"aye","timestamp":1502405734942}
{"from":"ralphtheninja[m]","message":"saec","timestamp":1502405743494}
{"from":"ralphtheninja[m]","message":"sec","timestamp":1502405745026}
{"from":"ralphtheninja[m]","message":"jhand: https://github.com/mafintosh/hyperdrive/blob/master/index.js#L680","timestamp":1502405765354}
{"from":"jhand","message":"ralphtheninja[m]: oh I see, in hypercore() init","timestamp":1502405769062}
{"from":"jhand","message":"yea","timestamp":1502405771438}
{"from":"jhand","message":"ralphtheninja[m]: in the first instance, the archive.metadata(0) is written already, it just needs to be read.","timestamp":1502405827127}
{"from":"jhand","message":"second one it isn't written yet, so archive.content needs to be created first","timestamp":1502405842039}
{"from":"jhand","message":"to get the key","timestamp":1502405845854}
{"from":"jhand","message":"then its written https://github.com/mafintosh/hyperdrive/blob/master/index.js#L728","timestamp":1502405858723}
{"from":"ralphtheninja[m]","message":"ok, I'll write something and we'll see in the PR if it needs to be tweaked","timestamp":1502405996581}
{"from":"jhand","message":"sounds good","timestamp":1502406030151}
{"from":"mafintosh","message":"... it is written","timestamp":1502406169579}
{"from":"mafintosh","message":"creationix: forgot about your pr, now merged and about to be released","timestamp":1502406288214}
{"from":"blahah","message":"jhand: I've been using neat-log the last few days, super nice :)","timestamp":1502414122384}
{"from":"jhand","message":"blahah: awesome!","timestamp":1502414442409}
{"from":"blahah","message":"jhand: so cool to have choo feels in the cli","timestamp":1502414463226}
{"from":"blahah","message":"and paves the way for modules that serve both","timestamp":1502414473662}
{"from":"blahah","message":"jhand: have you had any trouble with slow updating? I find that if my process is eating close to 100% cpu the renders don't happen","timestamp":1502414522179}
{"from":"jhand","message":"blahah: haven't really noticed, my computer may not be slow enough :|","timestamp":1502414917539}
{"from":"blahah","message":"haha, fair","timestamp":1502414931728}
{"from":"jhand","message":"Need to get my pi plugged back in","timestamp":1502414985982}
{"from":"karissa","message":"yeah neat-log rules","timestamp":1502424766590}
{"from":"cblgh","message":"any technical hurdles to dat desktop coming to windows?","timestamp":1502452361105}
{"from":"cblgh","message":"wondering if i'll have the same problems with the hyperdrive+choo+electron app i've been working on","timestamp":1502452395966}
{"from":"yoshuawuyts","message":"cblgh: we were missing certain prebuilds previously - not sure if they've been added since","timestamp":1502452430593}
{"from":"cblgh","message":"yoshuawuyts: ah ok","timestamp":1502452504334}
{"from":"cblgh","message":"no technical impasse other than maybe the drudgery of generating those prebuilds?","timestamp":1502452522489}
{"from":"yoshuawuyts","message":"cblgh: best to check the issue tracker of dat-node","timestamp":1502452572690}
{"from":"cblgh","message":"yoshuawuyts: will do","timestamp":1502452611189}
{"from":"noffle","message":"saw the url for this today and was hoping for somekind of crazy flumedb hypercore thing, but alas (https://github.com/dominictarr/hypermore)","timestamp":1502464870770}
{"from":"pfrazee","message":"flumecore - the latest hipster lumberjack style","timestamp":1502465229044}
{"from":"millette","message":"hypermore? there's medecine for that :-)","timestamp":1502465233818}
{"from":"noffle","message":"unrelated: https://twitter.com/FischaelaMeer/status/894570174778572800","timestamp":1502466412712}
{"from":"ralphtheninja[m]","message":"lol","timestamp":1502468333864}
{"from":"creationix","message":"hmm, dat doesn't like dangling symlinks. It's just crashes importing files","timestamp":1502476413536}
{"from":"creationix","message":"it would be nice if it could store symlinks as symbolic links (like git does)","timestamp":1502476432829}
{"from":"bret","message":"whats a dangling symlink again?","timestamp":1502483780720}
{"from":"millette","message":"a symlink to nowhere (file was deleted or never existed)","timestamp":1502483836571}
{"from":"millette","message":"the 404 of the filesystems","timestamp":1502483859395}
{"from":"jhand","message":"creationix: hmm ya. right now we set opts.dereference true here https://github.com/mafintosh/mirror-folder/blob/master/index.js#L50. I wasn't really sure what would be expected behavior but crashing is no good","timestamp":1502484672934}
{"from":"creationix","message":"it just silently stops importing files","timestamp":1502484724613}
{"from":"creationix","message":"the mirror goes to the first dangling symlink","timestamp":1502484732362}
{"from":"mafintosh","message":"There might be code missing for symlinks in dat storage","timestamp":1502484758820}
{"from":"mafintosh","message":"The metadata support it","timestamp":1502484770860}
{"from":"bret","message":"sounds like a good test","timestamp":1502485528135}
{"from":"mafintosh","message":"jhand: just tried to package cli using the pkg module and it seemed to work :)","timestamp":1502496053664}
{"from":"mafintosh","message":"the binary is big (55mb) but that can be trimmed way down","timestamp":1502496069372}
{"from":"mafintosh","message":"i think its mostly because its including native modules for ALL platforms in the binary (it only needs one)","timestamp":1502496095263}
{"from":"ogd","message":"browserify dat-node is only 2mb","timestamp":1502496210310}
{"from":"ogd","message":"so ya prob all the native base64 strings","timestamp":1502496216649}
{"from":"jhand","message":"mafintosh: I tried running dat doctor and said utp couldn't be found","timestamp":1502496304284}
{"from":"mafintosh","message":"ah smart way of testing that","timestamp":1502496314843}
{"from":"jhand","message":"maybe I just had them in the wrong place?","timestamp":1502496318432}
{"from":"mafintosh","message":"jhand: oh it didn't load utp for me as well","timestamp":1502496347659}
{"from":"mafintosh","message":"jhand: got it working!","timestamp":1502496560463}
{"from":"jhand","message":"sweet how?","timestamp":1502496591443}
{"from":"mafintosh","message":"jhand: magic flag in package.json","timestamp":1502496604452}
{"from":"mafintosh","message":"will put up a pr in progress","timestamp":1502496631606}
{"from":"jhand","message":"oh didn't know there was a magic flag.","timestamp":1502496632724}
{"from":"mafintosh","message":"jhand: https://github.com/datproject/dat/pull/849","timestamp":1502496896263}
{"from":"jhand","message":"mafintosh: cool, do you know how to set this up on travis so we can distribute it automagically?","timestamp":1502497025963}
{"from":"mafintosh","message":"jhand: running `pkg .` builds it for all platforms","timestamp":1502497103090}
{"from":"mafintosh","message":"so we can do it manually for now","timestamp":1502497110261}
{"from":"jhand","message":"mafintosh: all this look okay? https://github.com/datproject/dat/pull/849/commits/1e474600df8ccdf83d150dd41cd4a3ac55cd057e","timestamp":1502497801949}
{"from":"jhand","message":"i'll look to see how desktop is doing build stuff on travis to see if we can copy it","timestamp":1502497874148}
{"from":"jhand","message":"its interesting that now-cli installs the pkg'd version when you do npm install","timestamp":1502497990096}
{"from":"mafintosh","message":"jhand: ya nice","timestamp":1502498733567}
{"from":"mafintosh","message":"jhand: we should put them in a dat","timestamp":1502499251499}
{"from":"jhand","message":"definitely","timestamp":1502499263832}
{"from":"karissa","message":"so ez","timestamp":1502499444630}
{"from":"karissa","message":"Nice","timestamp":1502499458660}
{"from":"ralphtheninja[m]","message":"didn't know travis had a deploy section in .travis.yml","timestamp":1502499684369}
{"from":"jhand","message":"anyone know where to see releases that aren't tagged? https://github.com/datproject/dat/pull/849","timestamp":1502499734963}
{"from":"jhand","message":"Not sure if that actually worked.","timestamp":1502499741152}
{"from":"ralphtheninja[m]","message":"how do you install packaged apps?","timestamp":1502499745463}
{"from":"jhand","message":"I guess it'd just put it in the releases thing, so maybe it didn't work","timestamp":1502499759029}
{"from":"ralphtheninja[m]","message":"it's almost what prebuild does right now, creates a release and attaches binaries to it","timestamp":1502499852617}
{"from":"ralphtheninja[m]","message":"as in uploads binaries, not the packaging :)","timestamp":1502499873055}
{"from":"jhand","message":"ralphtheninja[m]: yea you can download it and run the binary `./dat-macosx doctor`. But not sure how to \"install\" it, mafintosh?","timestamp":1502499898541}
{"from":"ralphtheninja[m]","message":"prob just download and make sure it ends up in `PATH` I guess","timestamp":1502499946699}
{"from":"jhand","message":"oh it defaults to master only, so it wouldn't have built that one.","timestamp":1502500067110}
{"from":"mafintosh","message":"jhand, ralphtheninja[m] you install it by putting it in your bin ya","timestamp":1502500546282}
{"from":"jhand","message":"ah ya I was curious if there was a way we can do that when people download it","timestamp":1502500733102}
{"from":"mafintosh","message":"jhand: you need an installer for that","timestamp":1502501340541}
{"from":"jhand","message":"yay https://github.com/datproject/dat/releases","timestamp":1502501484610}
{"from":"jhand","message":"er https://github.com/datproject/dat/releases/tag/untagged-ed64493acaf1fa4e0619","timestamp":1502501490106}
{"from":"jhand","message":"mafintosh: still looks like utp failed though, do we have to include those in the download somehow?","timestamp":1502501520422}
{"from":"jhand","message":"\"You have to deploy native addons used by your project to the same directory as the executable.\" I guess that's what that means","timestamp":1502501577979}
{"from":"jhand","message":"oh and we have to make sure travis only builds with the same node version","timestamp":1502501621186}
{"from":"mafintosh","message":"jhand: it worked for me","timestamp":1502501634984}
{"from":"mafintosh","message":"jhand: let me check your builds","timestamp":1502501639584}
{"from":"jhand","message":"mafintosh: yea it worked when I did it in the dat directory but not when i downloaded the build","timestamp":1502501661775}
{"from":"mafintosh","message":"jhand: oh it probably unpacked it differently on travis!","timestamp":1502501825410}
{"from":"mafintosh","message":"jhand: and then my node_modules/utp-native wasn't there (cause it would be nested)","timestamp":1502501850795}
{"from":"mafintosh","message":"jhand: you need to build it on latest npm","timestamp":1502501859048}
{"from":"jhand","message":"mafintosh: ah ok let me see. ya and we need the node version to match too, I think right now it builds on the first one to finish","timestamp":1502501878796}
{"from":"jhand","message":"i think it was npm v5","timestamp":1502502036600}
{"from":"jhand","message":"https://travis-ci.org/datproject/dat/jobs/263714701","timestamp":1502502039295}
{"from":"jhand","message":"mafintosh: yea, I think we need to distribute the native modules with the package. if you move the one packaged manually to another dir, it fails to load the utp.","timestamp":1502502415957}
{"from":"jhand","message":"anyways, got it building on only node 8. so thats the last bit to figure out. gonna head out to eat now","timestamp":1502502473317}
{"from":"mafintosh","message":"jhand: i'll play around with it","timestamp":1502502764866}
{"from":"ogd","message":"mafintosh: interesting discussion https://twitter.com/lotharrr/status/896230936622252032","timestamp":1502514468957}
{"from":"mafintosh","message":"ogd: oh interesting","timestamp":1502514695409}
{"from":"blahah","message":"mafintosh: ooh","timestamp":1502531098929}
{"from":"blahah","message":"mafintosh: I did some tests of routing speed with my bare connection, shadowsocks proxy, and openvpn UDP wrapper","timestamp":1502531301781}
{"from":"blahah","message":"I thought UDP would win but it's neck-and-neck with the bare connection","timestamp":1502531314777}
{"from":"blahah","message":"shadowsocks drops the ping by an order of magnitude though","timestamp":1502531327867}
{"from":"blahah","message":"ever seen UDP throttling?","timestamp":1502531353039}
{"from":"pfrazee","message":"ogd: hey do you know of any apps using level-js currently?","timestamp":1502646316807}
{"from":"ogd","message":"hmm no but i also haven't been paying attention","timestamp":1502647125472}
{"from":"pfrazee","message":"ogd: no worries I found one","timestamp":1502647140511}
{"from":"pfrazee","message":"Im going to refactor injest to use level","timestamp":1502647157018}
{"from":"ralphtheninja","message":"pfrazee: there's an issue on creating a promise api in levelup","timestamp":1502647396778}
{"from":"ralphtheninja","message":"pfrazee: might fit your async/await well","timestamp":1502647409834}
{"from":"pfrazee","message":"ralphtheninja: nice","timestamp":1502647452212}
{"from":"ralphtheninja","message":"never understood why people want an explicit promise api though, since you can always promisify nodeback style right?","timestamp":1502647485495}
{"from":"ralphtheninja","message":"haha nice comment","timestamp":1502647544618}
{"from":"pfrazee","message":"yeah for the most part","timestamp":1502647545917}
{"from":"pfrazee","message":":)","timestamp":1502647551294}
{"from":"ralphtheninja","message":"I'm trying to push v2 but have been traveling and not feeling well lately, so still not done yet","timestamp":1502647605060}
{"from":"barnie","message":"hi all! I created an in-depth overview of 3 viable nodejs on android options, and node shared library compile adventures: https://stackoverflow.com/a/45649995/8295283","timestamp":1502651280617}
{"from":"barnie","message":"one of the options is of course @mafintosh: node-on-android :)","timestamp":1502651353855}
{"from":"garbados","message":"hello! i'm trying to get a local instance of hypercloud up and running, but i'm getting EACCESS issues. does hypercloud really expect to run as root in order to listen on 0.0.0.0?","timestamp":1502663998308}
{"from":"garbados","message":"well, if the `letsencrypt:` config block is specified i suppose","timestamp":1502664090397}
{"from":"TheGillies","message":"turns out dat is a good rsync replacement","timestamp":1502665923368}
{"from":"barnie","message":"any of you guys have an opinion on realm db (https://realm.io/products/realm-mobile-database/)? works with node.js, RN, java and others. looks like ideal for on mobile","timestamp":1502691440961}
{"from":"barnie","message":"broke the link. https://realm.io/products/realm-mobile-database/","timestamp":1502691478977}
{"from":"dat-gitter","message":"(reekoheek) hello, first time dat encounter here... how bout multi-party writers to dat?","timestamp":1502714755175}
{"from":"yoshuawuyts","message":"reekoheek o/ welcome! - currently dat is single-writer only, but there's experimental support for multiple feeds in https://github.com/mafintosh/multifeed, and similar functionality is currently being developed in the hyperdb repo","timestamp":1502714921603}
{"from":"barnie","message":"i know of https://github.com/substack/hyperdrive-multiwriter but dont know if that fits your use case","timestamp":1502714953604}
{"from":"yoshuawuyts","message":"barnie: 8 months means it's probably not been updated for latest hyperdrive","timestamp":1502715266868}
{"from":"barnie","message":"ha, goes that fast, does it :D","timestamp":1502715292599}
{"from":"barnie","message":"this is a bit of a problem in dat ecosystem","timestamp":1502715316934}
{"from":"barnie","message":"btw, my musings on a dat-on-android application design: https://github.com/realm/realm-java/issues/5103","timestamp":1502715720676}
{"from":"barnie","message":"https://i.stack.imgur.com/qGMIw.png","timestamp":1502715723761}
{"from":"dat-gitter","message":"(reekoheek) thanks guys","timestamp":1502715773001}
{"from":"ogd","message":"mafintosh: \"However, in this case, a right-skewed tree is used to provide a progressive integrity proof.\" https://tools.ietf.org/html/draft-thomson-http-mice-02","timestamp":1502736976397}
{"from":"ralphtheninja[m]","message":"ogd: looks like hypercore :)","timestamp":1502737445517}
{"from":"ralphtheninja[m]","message":"or rather .. like a merkle tree :D","timestamp":1502737499737}
{"from":"jhand","message":"karissa: you see this project funded by OTF? https://equalit.ie/introducing-n1sec-a-protocol-for-distributed-multiparty-chat-encryption/","timestamp":1502739854723}
{"from":"jhand","message":"encrypted multi-party chat whitepaper: https://github.com/equalitie/np1sec/blob/master/doc/protocol.pdf","timestamp":1502739942951}
{"from":"mafintosh","message":"ogd: cool! i need to read that","timestamp":1502740779710}
{"from":"pfrazee","message":"injestdb has been refactored to use leveldb, so it's now usable in node https://github.com/beakerbrowser/injestdb/pull/1","timestamp":1502749501820}
{"from":"ralphtheninja[m]","message":"nice","timestamp":1502749802245}
{"from":"jondashkyle","message":"pfrazee: awesome","timestamp":1502751085933}
{"from":"ralphtheninja[m]","message":"I'm thinking I want to get back to frontend a bit but I'm so terribly far behind now, I'd like to do something simple with electron + choo","timestamp":1502760265673}
{"from":"ralphtheninja[m]","message":"any pointers on what to use for css these days?","timestamp":1502760282857}
{"from":"jhand","message":"ralphtheninja[m]: I've been liking tachyons, tachyons.io, but its a very different approach to css (I barely write any actual css when I use it).","timestamp":1502764914681}
{"from":"jhand","message":"ralphtheninja[m]: for example, did this: donate.datproject.org without any CSS, only the tacyons classes. https://github.com/datproject/donate-page/blob/master/client/","timestamp":1502764983657}
{"from":"ralphtheninja[m]","message":"jhand: cool, if I can get away with not using css I'd be a happy camper","timestamp":1502765213463}
{"from":"ralphtheninja[m]","message":"always hated css lol","timestamp":1502765226171}
{"from":"jhand","message":"ralphtheninja[m]: ya I love/hate it. But tachyons is definitely hard at first. IMO it's worth the hurdle but I think some don't like it even after getting used to it.","timestamp":1502765274375}
{"from":"ralphtheninja[m]","message":"jhand: this seems to be more for \"normal\" websites?","timestamp":1502765747568}
{"from":"ralphtheninja[m]","message":"I like it though, minimalistic","timestamp":1502765781275}
{"from":"jhand","message":"ralphtheninja[m]: haha not sure what normal is. but yes? we use it in Dat desktop too","timestamp":1502765971184}
{"from":"ralphtheninja[m]","message":"ok cool","timestamp":1502766010353}
{"from":"ralphtheninja[m]","message":"thanks for input :)","timestamp":1502766029273}
{"from":"karissa","message":"ralphtheninja[m]: for writing custom CSS I like using sheetify with choo. Also in dat desktop","timestamp":1502767795647}
{"from":"ralphtheninja[m]","message":"seems dat desktop is a good starting point","timestamp":1502767872602}
{"from":"ralphtheninja[m]","message":"<3","timestamp":1502767879649}
{"from":"jhand","message":"whoa an analytics tracker build on hypercore: https://github.com/vesparny/fair-analytics","timestamp":1502771691879}
{"from":"cblgh","message":"ralphtheninja[m]: you could totally get started using grid","timestamp":1502782138703}
{"from":"cblgh","message":"esp if you're getting back into things","timestamp":1502782147357}
{"from":"cblgh","message":"https://css-tricks.com/snippets/css/complete-guide-grid/","timestamp":1502782164023}
{"from":"cblgh","message":"i haven't played yet but i'm super excited","timestamp":1502782181669}
{"from":"cblgh","message":"and support is everywhere except the MS browsers","timestamp":1502782194293}
{"from":"cblgh","message":"also for a variant on the tachyons style of functional css someone in #choo linked me https://github.com/jongacnik/gr8","timestamp":1502782223823}
{"from":"cblgh","message":"same idea, different execution","timestamp":1502782259797}
{"from":"ralphtheninja","message":"cblgh: I'm targeting electron so not really worried about IE","timestamp":1502789698741}
{"from":"ralphtheninja","message":"cblgh: thx!","timestamp":1502789704918}
{"from":"yoshuawuyts","message":"ralphtheninja: oh oh oh, did you know that Electron 1.7 beta is available!","timestamp":1502789724551}
{"from":"yoshuawuyts","message":"ralphtheninja: just found out! Uses chrome 58 which has so much better devtools :D","timestamp":1502789743373}
{"from":"ralphtheninja","message":"yoshuawuyts: nope, didn't know :)","timestamp":1502789816107}
{"from":"cblgh","message":"oh nice","timestamp":1502791752138}
{"from":"cblgh","message":"i am so bad at using the devtools lol","timestamp":1502791776097}
{"from":"cblgh","message":"i basically use & abuse inspect element + tweak css w/ the style editor","timestamp":1502791801598}
{"from":"cblgh","message":"ralphtheninja: here's the electron app i'm working on if you wanna steal some stuff! https://github.com/cblgh/rotonde-choo/tree/electro-train","timestamp":1502791835334}
{"from":"ralphtheninja","message":"cblgh: cool","timestamp":1502791878378}
{"from":"cblgh","message":"feedback welcome if you find weird stuff","timestamp":1502791882456}
{"from":"ralphtheninja","message":"cblgh: what's rotonde?","timestamp":1502792132427}
{"from":"ralphtheninja","message":"I really <3 npmhub browser extension, so helpful","timestamp":1502792413162}
{"from":"cblgh","message":"ralphtheninja: a social network experiment!","timestamp":1502792460742}
{"from":"cblgh","message":"https://github.com/Rotonde / https://gist.github.com/cblgh/5392d9fb0ab4de27adcc07ad1321ae43#file-rotonde-resources-md","timestamp":1502792478984}
{"from":"cblgh","message":"from the specs Rotonde is a self-hosted, platform and programming language agnostic social media experiment. Its purpose is simply to share feeds of daily activity logs between its members. See this feed for an example.","timestamp":1502792539628}
{"from":"cblgh","message":"at it's fundamental level it's just an agreed upon json structure","timestamp":1502792572022}
{"from":"cblgh","message":"i've been having fun writing a client that would allow people w/o servers or know-how to still host their json files, using dat + hashbase","timestamp":1502792714373}
{"from":"ralphtheninja","message":"cblgh: ✔","timestamp":1502792761676}
{"from":"ralphtheninja","message":"sounds like ssb a bit","timestamp":1502792767944}
{"from":"cblgh","message":"yeah that was my first thought too","timestamp":1502792780017}
{"from":"cblgh","message":"just that ssb is pretty hard to interface with","timestamp":1502792792269}
{"from":"cblgh","message":"and one of the goals is for people to build their own clients","timestamp":1502792833110}
{"from":"cblgh","message":"but ya i use patchwork daily","timestamp":1502792846520}
{"from":"cblgh","message":"damn poop post keeps coming up when i'm having food","timestamp":1502792853888}
{"from":"cblgh","message":"oh wow npmhub looks kinda cool","timestamp":1502792912685}
{"from":"cblgh","message":"*gets it*","timestamp":1502792915443}
{"from":"ralphtheninja","message":";)","timestamp":1502792928407}
{"from":"cblgh","message":"my dependency list is crazy","timestamp":1502792950149}
{"from":"cblgh","message":"lol","timestamp":1502792950403}
{"from":"ralphtheninja","message":"it saves time and mouse clicks first of all","timestamp":1502792952442}
{"from":"cblgh","message":"ya, but also really nice for discoverability!","timestamp":1502792973023}
{"from":"ralphtheninja","message":"hehe always a fun experience to see what modules people are using","timestamp":1502792978606}
{"from":"ralphtheninja","message":"and like \"oh man, 20 more modules I haven't heard of\" :D","timestamp":1502792989721}
{"from":"cblgh","message":"hahaha","timestamp":1502793098181}
{"from":"jhand","message":"also love octolinker for discovering modules/reading code https://github.com/OctoLinker/browser-extension","timestamp":1502809678836}
{"from":"pfrazee","message":"octolinker and npmhub","timestamp":1502811469419}
{"from":"pfrazee","message":"love em","timestamp":1502811471036}
{"from":"creationix","message":"does hyperdrive support empty folders?","timestamp":1502811858850}
{"from":"creationix","message":"what exactly are the change events in the stream? I see PUT and DEL, but am having trouble finding others","timestamp":1502811883882}
{"from":"dat-gitter","message":"(mezerotm) @creationix [If I create an empty folder in an existing dat, it won't get synced to a client. According to @mafintosh this is a dat issue as hyperdrive supports empty folders.](https://github.com/datproject/dat/issues/801)","timestamp":1502811985841}
{"from":"dat-gitter","message":"(mezerotm) \"hyperdrive supports empty folders.\"","timestamp":1502812010266}
{"from":"mafintosh","message":"Ya they are reflected in the metadata but not on disk","timestamp":1502812170218}
{"from":"mafintosh","message":"We need to support that in dat-storage","timestamp":1502812197528}
{"from":"pfrazee","message":"creationix: I think it's just put and del","timestamp":1502812237565}
{"from":"creationix","message":"I wonder how you put a folder vs a file","timestamp":1502812250996}
{"from":"TheLink","message":"hey mafintosh, just putting the \"folder not getting deleted with http option\" issue back on the stack","timestamp":1502813815278}
{"from":"creationix","message":"I can't use dat yet in my product (need a milestone deployed this week and dat is too heavy for now). But I really want to keep my implementation close to dat's design so I can switch out later","timestamp":1502814983660}
{"from":"creationix","message":"pfrazee: do you understand hyperdrive more beyond that?","timestamp":1502815035219}
{"from":"pfrazee","message":"creationix: a bit, what are you wondering about?","timestamp":1502815365098}
{"from":"creationix","message":"I'm wondering how close my design is to hyperdrive's.","timestamp":1502815426238}
{"from":"creationix","message":"I'm assuming an append-only-log like hypercore as the store (though for now it's just an array)","timestamp":1502815445048}
{"from":"creationix","message":"currently, I have PUT and DEL with some metadata for file puts . And indexes into previous versions for each segment of the path","timestamp":1502815476954}
{"from":"creationix","message":"pfrazee: Here is 7 events for some FS in my design https://gist.github.com/creationix/c5bbad788c446fd459436dfc99c82b22","timestamp":1502815626407}
{"from":"pfrazee","message":"creationix: oh yeah I dont know hyperdrive well enough to answer that","timestamp":1502815703298}
{"from":"creationix","message":"the first set of numbers is the indexes to the root folders/files in the log. Then zero or more folder segments with list of entry indexes for that folder, then optionally file data for putting a file","timestamp":1502815707101}
{"from":"creationix","message":"I think this is similar at least. It does allow random-access of the tree by lazy loading only the entries you need","timestamp":1502815725974}
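The log-structured design creationix describes above can be sketched as a toy (illustrative only; `LogFS` and its fields are made-up names, not code from the gist): every PUT or DEL appends an entry carrying the log index of the previous entry for the same path, so one flat append-only log doubles as per-path version history.

```javascript
// Toy append-only-log filesystem in the spirit of the design above.
// A plain array stands in for hypercore. Each entry records the log
// index of the previous entry for the same path, threading a per-path
// linked list of versions through one flat log.
class LogFS {
  constructor () {
    this.log = []           // the append-only log
    this.latest = new Map() // path -> index of the newest entry
  }
  put (path, data) {
    const prev = this.latest.has(path) ? this.latest.get(path) : -1
    const index = this.log.push({ type: 'put', path, data, prev }) - 1
    this.latest.set(path, index)
    return index
  }
  del (path) {
    const prev = this.latest.has(path) ? this.latest.get(path) : -1
    const index = this.log.push({ type: 'del', path, prev }) - 1
    this.latest.set(path, index)
    return index
  }
  get (path) {
    const i = this.latest.get(path)
    if (i === undefined || this.log[i].type === 'del') return null
    return this.log[i].data
  }
  history (path) { // walk prev pointers back through older versions
    const out = []
    let i = this.latest.has(path) ? this.latest.get(path) : -1
    while (i !== -1) {
      out.push(this.log[i])
      i = this.log[i].prev
    }
    return out
  }
}
```

Walking the `prev` pointers gives random access to any older version without scanning the whole log.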
{"from":"pfrazee","message":"yes it's definitely similar","timestamp":1502815731185}
{"from":"creationix","message":"thanks.","timestamp":1502815754226}
{"from":"pfrazee","message":"sure, sorry I cant answer more","timestamp":1502815793769}
{"from":"creationix","message":"no worries :)","timestamp":1502815822833}
{"from":"pfrazee","message":"https://twitter.com/bitstein/status/897499002119217152 mad libs champion 2017","timestamp":1502816619633}
{"from":"creationix","message":"oh my","timestamp":1502816680610}
{"from":"ralphtheninja[m]","message":"sweet","timestamp":1502818647444}
{"from":"ungoldman","message":"well I do agree that civilization has been sick for a long time, but civ 6 is pretty bloated","timestamp":1502819740679}
{"from":"pfrazee","message":"yeah I'm concerned adding bitcoin will just make it pay-to-win","timestamp":1502819942398}
{"from":"ralphtheninja[m]","message":"I just think satellites are cool :)","timestamp":1502820266189}
{"from":"mafintosh","message":"TheLink: i havent forgotten you :)","timestamp":1502820387636}
{"from":"pfrazee","message":"ralphtheninja[m]: yeah that part is super cool","timestamp":1502820388641}
{"from":"TheLink","message":"\\o/","timestamp":1502820416131}
{"from":"ralphtheninja[m]","message":"mafintosh: node-modules.com having some issues","timestamp":1502821467004}
{"from":"ralphtheninja[m]","message":"I can get to the main page but query fails","timestamp":1502821480334}
{"from":"ogd","message":" jhand mafintosh and i specced out an idea for a 'lazy storage provider' for a dat that can wrap some remote data source and fetch files on demand. i was gonna take a stab at a google drive version","timestamp":1502829176937}
{"from":"mafintosh","message":"ralphtheninja[m]: dang","timestamp":1502829767194}
{"from":"jhand","message":"sweet","timestamp":1502829769301}
{"from":"mafintosh","message":"ungoldman: haha","timestamp":1502829770204}
{"from":"mafintosh","message":"ogd: yea, talked to andrewosh about this and it is doable","timestamp":1502829802983}
{"from":"ogd","message":"mafintosh: i forget what we ended up deciding though lol","timestamp":1502829849913}
{"from":"ogd","message":"mafintosh: was i gonna try to write hyperdrive stat objects into hyperdrive ahead of time?","timestamp":1502829866019}
{"from":"ogd","message":"mafintosh: without hashes","timestamp":1502829872817}
{"from":"ogd","message":"mafintosh: and then use a protocol extension to request a missing hash or something?","timestamp":1502829901028}
{"from":"mafintosh","message":"ogd: yea thats fine for v1","timestamp":1502829912863}
{"from":"mafintosh","message":"ogd: we were discussing the *prime* version that writes random access","timestamp":1502829927691}
{"from":"ogd","message":"mafintosh: ah right","timestamp":1502829954816}
{"from":"ogd","message":"mafintosh: do you have example code that writes hyperdrive stat objects?","timestamp":1502829961475}
{"from":"ogd","message":"mafintosh: or was the idea to write empty files? i forget","timestamp":1502830004228}
{"from":"jhand","message":"ogd: you talked about writing the link/id too?","timestamp":1502830182102}
{"from":"ogd","message":"jhand: oh yea","timestamp":1502830242883}
{"from":"ogd","message":"jhand: this is kinda like how git LFS works","timestamp":1502830251044}
{"from":"jhand","message":"oh nice","timestamp":1502830258015}
{"from":"ogd","message":"jhand: instead of writing a file it writes a metadata file with the filename","timestamp":1502830312130}
{"from":"ogd","message":"err url/hash i think","timestamp":1502830326526}
{"from":"ogd","message":"or something","timestamp":1502830328781}
{"from":"ogd","message":"but we should do the same, have a 'pointer file'","timestamp":1502830336127}
{"from":"mafintosh","message":"ogd: .writeFile should do the job for the initial version","timestamp":1502830502997}
{"from":"ogd","message":"mafintosh: what should the value be?","timestamp":1502830575414}
{"from":"mafintosh","message":"ogd: the extra metadata","timestamp":1502830617118}
{"from":"ogd","message":"mafintosh: alright i think i forgot what we decided for a v1, is the protocol extension included in the scheme youre thinking?","timestamp":1502830674629}
{"from":"mafintosh","message":"ogd: yea we can do that for v1 :)","timestamp":1502830694521}
{"from":"mafintosh","message":"the protocol extension support has shipped","timestamp":1502830712407}
{"from":"ogd","message":"mafintosh: i have a bunch of google drive file metadata objects, with ids and filenames and content types etc. i wanna write them into a hyperdrive such that when the hyperdrive receives a read request it can lazily fetch the file contents from google drive. so i can write all the json to hyperdrive using writeFile(filename, metadata)","timestamp":1502830731551}
{"from":"ogd","message":"mafintosh: then .readdir will return the correct filenames, but obviously the file lengths wont be accurate cause the contents are just metadata, not real files","timestamp":1502830762502}
{"from":"mafintosh","message":"ogd: you wanna lazy write the metadata first and then after the file content?","timestamp":1502830763617}
{"from":"ogd","message":"mafintosh: what do you think i should do first?","timestamp":1502830780228}
{"from":"ogd","message":"mafintosh: i know up front all the filenames and file sizes in my google drive","timestamp":1502830830930}
{"from":"ogd","message":"mafintosh: but i dont have any of the file contents yet","timestamp":1502830842357}
{"from":"mafintosh","message":"ogd: then write that to the hyperdrive as \"special\" files","timestamp":1502830848931}
{"from":"mafintosh","message":"where the content of the file is just the drive metadata","timestamp":1502830867637}
{"from":"mafintosh","message":"and have a way to resolve them on demand","timestamp":1502830875654}
{"from":"ogd","message":"mafintosh: so when i get the actual content i overwrite the metadata?","timestamp":1502830890868}
{"from":"mafintosh","message":"yea","timestamp":1502830895783}
{"from":"ogd","message":"mafintosh: how are you thinking the 'resolve on demand' part works? if you have peer A who has the read access to the google drive, and the hyperdrive filled with metadata, and peer B sends a bunch of requests for files, is it peer B's job to notice that they're metadata files and send a protocol extension asking to download it and respond when it's done downloading, then issue another request for the same file now that it's","timestamp":1502831028258}
{"from":"ogd","message":"populated?","timestamp":1502831028364}
{"from":"mafintosh","message":"ogd: yea exactly like that","timestamp":1502831069263}
{"from":"ogd","message":"mafintosh: alright cool ill try that. do you have example code for sending a protocol extension message","timestamp":1502831085239}
{"from":"pfrazee","message":"tara and I are doing some updating to how beaker works. I'm going to tweet some details in a sec, but would anybody be terribly put out if we removed the staging area?","timestamp":1502831147958}
{"from":"mafintosh","message":"ogd: https://github.com/mafintosh/hyperdb/blob/master/index.js#L84","timestamp":1502831153137}
{"from":"ogd","message":"mafintosh: thx","timestamp":1502831160569}
{"from":"mafintosh","message":"ogd: https://github.com/mafintosh/hyperdb/blob/master/index.js#L50-L76","timestamp":1502831162699}
{"from":"pfrazee","message":"from the API, that is","timestamp":1502831166061}
{"from":"mafintosh","message":"ogd: https://github.com/mafintosh/hyperdb/blob/master/index.js#L33-L43","timestamp":1502831179368}
{"from":"mafintosh","message":"ogd: last part is a bit hackish but works. just ping me if it's causing issues :)","timestamp":1502831198644}
{"from":"creationix","message":"pfrazee: I think it's fine. I havn't yet used the API","timestamp":1502831201680}
{"from":"pfrazee","message":"creationix: cool","timestamp":1502831211474}
{"from":"mafintosh","message":"pfrazee: where is my sweet p2p tshirt?","timestamp":1502831267713}
{"from":"pfrazee","message":"mafintosh: you must have missed out on the campaign!!","timestamp":1502831279292}
{"from":"mafintosh","message":"dang","timestamp":1502831378636}
{"from":"ogd","message":"mafintosh: one last thing, is the idea with the protocol extension approach that it's easier to ship now? cause in this scenario wouldn't it be easier for the peer A to just transparently handle the 'metadata file to real file' resolution step and not have to involve peer B at all? but i guess that would require internal changes to hyperdrive or something?","timestamp":1502831380484}
{"from":"mafintosh","message":"ogd: unsure how that would work","timestamp":1502831418247}
{"from":"ralphtheninja[m]","message":"mafintosh pfrazee actually it's 6 hours left of the t-shirt campaign","timestamp":1502831437824}
{"from":"ralphtheninja[m]","message":"just checked my order :)","timestamp":1502831447102}
{"from":"mafintosh","message":"ralphtheninja[m]: link","timestamp":1502831447208}
{"from":"ralphtheninja[m]","message":"sec","timestamp":1502831453274}
{"from":"ogd","message":"mafintosh: like if i could write a hyperdrive style stat entry into hyperdrive for a file with no hashes or chunks, but have a hook to \"fetch on missing\"","timestamp":1502831462798}
{"from":"pfrazee","message":"ralphtheninja[m]: it resets periodically, Im not sure there will be enough orders to ship","timestamp":1502831464635}
{"from":"ralphtheninja[m]","message":"https://teespring.com/beaker-p2p-web#pid=87&cid=2325&sid=front","timestamp":1502831466690}
{"from":"pfrazee","message":"it works by campaigns","timestamp":1502831472014}
{"from":"ralphtheninja[m]","message":"oh ok","timestamp":1502831479231}
{"from":"pfrazee","message":"I'll tweet that link again though","timestamp":1502831481975}
{"from":"ogd","message":"mafintosh: but to a remote peer the stat object would look normal, it would just take longer the first time that peer requests a chunk in a byte range under that stat","timestamp":1502831487390}
{"from":"mafintosh","message":"ralphtheninja[m]: thanks","timestamp":1502831494602}
{"from":"ralphtheninja[m]","message":"cheers .. my t-shirts have been idling in stockholm for 5 days .. hoping to get them soon","timestamp":1502831508170}
{"from":"mafintosh","message":"ogd: yea we don't have support for that.","timestamp":1502831530451}
{"from":"mafintosh","message":"ogd: it's non-trivial to do in general","timestamp":1502831539184}
{"from":"ogd","message":"mafintosh: cool just making sure, cause i think that would be a nicer api. but i can do the protocol extension thing just to ship it","timestamp":1502831561242}
{"from":"mafintosh","message":"ogd: the protocol extension is exactly there for these use cases","timestamp":1502831619292}
{"from":"mafintosh","message":"anything that is not an extension is super tricky to change","timestamp":1502831649826}
{"from":"mafintosh","message":"but try and make an mvp of it :)","timestamp":1502831679542}
{"from":"ogd","message":"mafintosh: yea no problem. id argue this kind of thing belongs in core in the long term. but i agree we need to prototype it and test it in the wild for a while","timestamp":1502831704900}
{"from":"mafintosh","message":"ogd: it should be a core extension instead","timestamp":1502831726546}
{"from":"mafintosh","message":"like a BEP","timestamp":1502831729423}
{"from":"mafintosh","message":"thats how you scale the protocol","timestamp":1502831738450}
{"from":"mafintosh","message":"in general the only thing not a *EP should be the core data transfer","timestamp":1502831758137}
{"from":"mafintosh","message":"ogd: i guess this would need to write the import request back to the source peer","timestamp":1502832021640}
{"from":"mafintosh","message":"hadn't really thought about that part","timestamp":1502832036472}
{"from":"ogd","message":"mafintosh: peer B has to ask peer B to download the file, then peer A has to tell peer B when its ready to download, right?","timestamp":1502832073662}
{"from":"ogd","message":"mafintosh: bah sorry messed up the letters","timestamp":1502832080400}
{"from":"jhand","message":"mafintosh: isn't that similar to the proxy idea we were talking about?","timestamp":1502832081697}
{"from":"mafintosh","message":"jhand: the proxy is easier cause we distribute bitfields","timestamp":1502832117147}
{"from":"mafintosh","message":"jhand: so all the distributed decision making is local","timestamp":1502832117253}
{"from":"jhand","message":"ah right","timestamp":1502832126058}
{"from":"mafintosh","message":"ogd: yea assuming peer A can write to the dat","timestamp":1502832146946}
{"from":"mafintosh","message":"that was the tricky part","timestamp":1502832157217}
{"from":"pfrazee","message":"ok here's everything we're planning to do RE dat in beaker https://twitter.com/pfrazee/status/897566790544478208","timestamp":1502832159811}
{"from":"mafintosh","message":"pfrazee: looks good","timestamp":1502832238341}
{"from":"mafintosh","message":"you learn a lot using your own apis :)","timestamp":1502832245508}
{"from":"ogd","message":"mafintosh: yea peer A is the writer. so it has to listen for \"extension download requests\" from remotes and respond with \"download complete\" basically","timestamp":1502832248705}
{"from":"pfrazee","message":"mafintosh: cool. Yes indeed.","timestamp":1502832261578}
{"from":"mafintosh","message":"ogd: what about the case where A isn't a writer","timestamp":1502832291897}
{"from":"ogd","message":"mafintosh: i guess the remote would respond with an error to the extension request","timestamp":1502832335344}
{"from":"mafintosh","message":"ogd: hmm yea. i need to think more about this","timestamp":1502832362555}
{"from":"ogd","message":"mafintosh: thats why i think it would be nicer if this was an internal detail on one dat, not exposed over the network","timestamp":1502832365011}
{"from":"mafintosh","message":"ogd: thats just another way to tell me to fix haha","timestamp":1502832383798}
{"from":"mafintosh","message":"*fix it","timestamp":1502832385464}
{"from":"ogd","message":"mafintosh: haha","timestamp":1502832387000}
{"from":"mafintosh","message":"ogd: oh actually ...","timestamp":1502832451486}
{"from":"mafintosh","message":"ogd: you don't need an extension right now","timestamp":1502832462014}
{"from":"mafintosh","message":"ogd: you'll get an upload event when someone downloads the metadata file on the writer","timestamp":1502832481569}
{"from":"mafintosh","message":".content.on('upload', index)","timestamp":1502832516581}
{"from":"mafintosh","message":".content.on('upload', index, metadataAsBuffer)","timestamp":1502832547427}
{"from":"mafintosh","message":"so you can just hook into that and then write the file","timestamp":1502832566079}
{"from":"mafintosh","message":"easy","timestamp":1502832572141}
{"from":"ogd","message":"mafintosh: thats a hyperdrive event that emits when the hyperdrive Stat object is sent?","timestamp":1502832616489}
{"from":"mafintosh","message":"ogd: when the file content is sent","timestamp":1502832636386}
{"from":"mafintosh","message":"but then you just check that the content is metadata content","timestamp":1502832650186}
{"from":"mafintosh","message":"metadata meaning your drive metadata","timestamp":1502832674375}
{"from":"ogd","message":"mafintosh: value is the entire hypercore entry buffer?","timestamp":1502832683576}
{"from":"mafintosh","message":"ogd: first chunk","timestamp":1502832693137}
{"from":"mafintosh","message":"ya entry","timestamp":1502832698904}
{"from":"mafintosh","message":"ya full entry","timestamp":1502832702799}
{"from":"mafintosh","message":"so if you do .writeFile('/foo', '{\"type\": \"googledriveplaceholder\", ...}')","timestamp":1502832738470}
{"from":"mafintosh","message":"it would be the last part as a buffer (assuming the json is less than 64kb)","timestamp":1502832763218}
{"from":"ogd","message":"mafintosh: so on the remote side, its implied that by requesting a file the first time, you then wait till that file updates and you can request it again to get the real data?","timestamp":1502832818548}
{"from":"mafintosh","message":"there is an error case where you crash while downloading the drive content and then the client wont request the metadata again cause they have already downloaded it","timestamp":1502832824452}
{"from":"mafintosh","message":"but you can worry about later","timestamp":1502832837202}
{"from":"ogd","message":"mafintosh: yea i was just gonna ask how errors would work","timestamp":1502832837813}
{"from":"ogd","message":"mafintosh: ok haha","timestamp":1502832844926}
{"from":"mafintosh","message":"ogd: we can make the client refetch metadata after a timeout","timestamp":1502832887288}
{"from":"mafintosh","message":"that is easy","timestamp":1502832889795}
{"from":"ogd","message":"mafintosh: ok cool ill start w/ this","timestamp":1502832973788}
{"from":"ogd","message":"its like a ghetto version of git smudge and clean filters","timestamp":1502833121930}
{"from":"bret","message":"https://media.giphy.com/media/3oriOcbtwCxGlnFZv2/giphy.gif","timestamp":1502833670151}
{"from":"creationix","message":"Implementing a hyperdrive inspired data structure gives me new respect for mafintosh. It's neat logic, but very tricky","timestamp":1502835571235}
{"from":"creationix","message":"https://gist.github.com/creationix/356c1f8d571f5837cc1960b51d4ef5b7","timestamp":1502835585369}
{"from":"ralphtheninja[m]","message":":)","timestamp":1502836094816}
{"from":"larpanet","message":"@f/6sQ6d2CMxRUhLpspgGIulDxDCwYD7DzFzPNr7u5AU=.ed25519:...to these networks? - #ipfs (ipfs bootstrap ...)- #bitcoin- #ethereum- electrum-*- discovery-swarm ( ... http://wx.larpa.net:8807/%25Mvouag%2FA%2BQBImAdwz9GmNUUvXKysgmvMvcKATLjxThM%3D.sha256","timestamp":1502837097789}
{"from":"ogd","message":"creationix: just gotta write lots of tests :)","timestamp":1502841781131}
{"from":"creationix","message":"finally, it works!","timestamp":1502866759501}
{"from":"creationix","message":"150 lines for a hyperdrive-like filesystem on top of plain log storage (currently a vanilla JS array, soon will be async enabled with read/write locks and read-only snapshots)","timestamp":1502866824286}
{"from":"barnie","message":"I don't have time to test further, but this looks very interesting for bundling: https://github.com/datproject/dat/issues/851","timestamp":1502882095448}
{"from":"ralphtheninja","message":"https://github.com/Level/level/issues/50","timestamp":1502897041661}
{"from":"ralphtheninja","message":"any thoughts?","timestamp":1502897048009}
{"from":"ralphtheninja","message":"creationix: cool!","timestamp":1502897818311}
{"from":"ralphtheninja","message":"ogd: mcollina suggested to move level.js to level org, not sure what you think about it","timestamp":1502898298468}
{"from":"pfrazee","message":"I've got one dev that depended on the DatArchive .diff / .commit / .revert APIs","timestamp":1502899224763}
{"from":"pfrazee","message":"(nothing concrete yet, but in repsonse) I'm considering removing staging UIs from the library, and then having staging be something that's toggled *on* for individual archives","timestamp":1502899279634}
{"from":"pfrazee","message":"not yet sure that's a serviceable idea though","timestamp":1502899725903}
{"from":"ogd","message":"ralphtheninja: yea sure","timestamp":1502907964903}
{"from":"ogd","message":"jhand: maybe relevant to your mapping class https://gist.github.com/maxogden/c6924fc82e573e851eb1e2ffa49d6416","timestamp":1502908001519}
{"from":"jhand","message":"lol nice","timestamp":1502908028336}
{"from":"ogd","message":"jhand: would be cool to webgl render that","timestamp":1502908078568}
{"from":"jhand","message":"we should get a dat -> geojson map thing going probably lol.","timestamp":1502908142485}
{"from":"ralphtheninja","message":"dat clone dat://a7ad024bd8ea202b45f246e59f61328ccf6d84ef98f42249495dd78507a9bd9f charlottesville","timestamp":1502916451934}
{"from":"ralphtheninja","message":"could have just pasted a youtube link, but where's the fun in that? :)","timestamp":1502916934365}
{"from":"mafintosh","message":"hyperdb 1.0.0-rc1 is out!! https://github.com/mafintosh/hyperdb","timestamp":1502921961040}
{"from":"mafintosh","message":"thanks ralphtheninja, noffle, emilbayes (+more!) for helping get it out","timestamp":1502921987921}
{"from":"noffle","message":"hyperdb hyper<3","timestamp":1502922072281}
{"from":"mafintosh","message":"noffle: i wanna add a cli module for it","timestamp":1502922625646}
{"from":"mafintosh","message":"noffle: to make testing and prototyping easier","timestamp":1502922638945}
{"from":"yoshuawuyts","message":"mafintosh: woot! You also got the hyperDB name ? :D","timestamp":1502922639602}
{"from":"mafintosh","message":"yoshuawuyts: yea of course","timestamp":1502922654699}
{"from":"yoshuawuyts","message":"Niceeee","timestamp":1502922661488}
{"from":"noffle","message":"mafintosh: I've been reading the src and am trying to understand how the 'heads' field in each node works. it seems to be a list of feed seq #s, but I'm not clear on what it's for. topological sorting?","timestamp":1502922698510}
{"from":"noffle","message":"the per-node trie is for doing prefix matches, right?","timestamp":1502922712945}
{"from":"mafintosh","message":"noffle: exactly","timestamp":1502922718532}
{"from":"mafintosh","message":"on both accounts","timestamp":1502922724058}
{"from":"mafintosh","message":"noffle: i renamed it to clock","timestamp":1502922743907}
{"from":"mafintosh","message":"noffle: cause it is just a vector indicating causality","timestamp":1502922758417}
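The point about the clock being a vector indicating causality can be illustrated with a toy conflict check (not hyperdb's actual code; `dominates` and `heads` are invented names): a write supersedes another only if its clock has seen at least as much from every writer; otherwise both writes survive as concurrent heads, which is why a get can return two values, as in the cli session later in this log.

```javascript
// Toy causality check in the spirit of hyperdb's clock vectors
// (illustrative, not the real implementation). A clock is an array of
// per-writer sequence numbers: clock[i] = highest seq seen from writer i.
function dominates (a, b) {
  // a supersedes b only if a has seen at least as much as b everywhere
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    if ((a[i] || 0) < (b[i] || 0)) return false
  }
  return true
}

// Keep only the writes for a key that no other write dominates:
// these are the concurrent "heads" a reader sees.
function heads (writes) {
  return writes.filter(w =>
    !writes.some(o => o !== w && dominates(o.clock, w.clock))
  )
}
```

A write made before a peer was authorized carries a clock that has not seen the other writer's entries, so neither value dominates and both are returned until a later write that has seen both merges them.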
{"from":"dat-gitter","message":"(serapath) @shama @yoshuawuyts i sometimes use the `onload` and `onunload` functionality. I was wondering if there is a good component to have a component based `onresize` too. I was also thinking, if there is an `onresize` event - maybe it sets the current size to `undefined` like in this w3c spec proposal i came across where they seem to think about it. If so - maybe the load/unload hooks could use it too. Did you try this before? Would that make sense?","timestamp":1502922766951}
{"from":"noffle","message":"mafintosh: I sent you a pr to your v1.0.0 branch yesterday, fyi","timestamp":1502922881578}
{"from":"noffle","message":"oh, just saw your response","timestamp":1502922888796}
{"from":"mafintosh","message":"noffle: ah dang, it's merged now","timestamp":1502922937601}
{"from":"mafintosh","message":"noffle: but send to master if still relevant","timestamp":1502922944569}
{"from":"noffle","message":"all right","timestamp":1502922953734}
{"from":"mafintosh","message":"the code is a bit messy but there are lots of tests","timestamp":1502922953974}
{"from":"pfrazee","message":"mafintosh: nice congrats","timestamp":1502923155263}
{"from":"ogd","message":"mafintosh: jhand worked on the remote file thing we talked about https://github.com/maxogden/dat-remote-storage","timestamp":1502925133091}
{"from":"ogd","message":"mafintosh: jhand question though, the tests exit prematurely. i thought having live: true on the replication streams would keep the process open, but its exiting on its own for some reason. can you look at test.js and see if anything looks wrong?","timestamp":1502925177849}
{"from":"ralphtheninja[m]","message":"mafintosh: it was nothing much, but happy to be of some use","timestamp":1502925280455}
{"from":"ralphtheninja[m]","message":"getting some really strange errors when trying to install dat-desktop","timestamp":1502925675413}
{"from":"ralphtheninja[m]","message":"and the error just doesn't make any sense, it seems like browserify fails to parse `elements/modal.js`","timestamp":1502925707551}
{"from":"ralphtheninja[m]","message":"mafintosh: should I make an issue on it?","timestamp":1502925724435}
{"from":"jhand","message":"ogd: got class stuff rest of night, i can look at it tomorrow if still not solved","timestamp":1502926091177}
{"from":"dat-gitter","message":"(lukeburns) yay hyperdb!","timestamp":1502927720793}
{"from":"dat-gitter","message":"(lukeburns) mafintosh: readme says db.local is writable. does this mean it is the hypercore where db.put stores things? If so, should one avoid writing to it directly?","timestamp":1502927965053}
{"from":"karissa","message":"ralphtheninja[m]: oh hm yeah, make an issue.","timestamp":1502928868661}
{"from":"lachenmayer","message":"+1 for hyperdb, very very awesome!!!","timestamp":1502931443308}
{"from":"lachenmayer","message":"getting a npm warning \"npm WARN deprecated crypto@0.0.3: This package is no longer supported. It's now a built-in Node module.\" when i install dat","timestamp":1502931650179}
{"from":"lachenmayer","message":"it's a dependency of dat-doctor https://github.com/joehand/dat-doctor/blob/master/package.json","timestamp":1502931672567}
{"from":"lachenmayer","message":"is that intentional?","timestamp":1502931675848}
{"from":"mafintosh","message":"@lukeburns yea you shouldn't write to that","timestamp":1502932284331}
{"from":"mafintosh","message":"i should make that more clear","timestamp":1502932295548}
{"from":"pfrazee","message":"mafintosh: scale of 1 to 10, how would you feel if I implemented union mounting in beaker's dat archives FS https://en.wikipedia.org/wiki/Union_mount","timestamp":1502935359520}
{"from":"pfrazee","message":"10 being totally for it, 1 being pretty hard against it","timestamp":1502935381084}
{"from":"mafintosh","message":"pfrazee: I'm supporting basically union mounts in hyperdb","timestamp":1502935420469}
{"from":"pfrazee","message":"I'm thinking of using that as a replacement to the staging API. You could mount an offline archive on top of an online archive, and the offline archive would become a defacto staging area","timestamp":1502935420767}
{"from":"pfrazee","message":"mafintosh: ok interesting, how's that working?","timestamp":1502935432225}
{"from":"mafintosh","message":"pfrazee: basically just another feed","timestamp":1502935588607}
{"from":"mafintosh","message":"that you dont replicate","timestamp":1502935600083}
{"from":"pfrazee","message":"mafintosh: right ok, that'll work fine","timestamp":1502935624501}
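The overlay idea discussed above — a local feed you don't replicate, layered over the shared archive so local edits behave like a staging area — boils down to a lookup that consults the top layer first. A minimal sketch (plain Maps stand in for the two layers; `UnionFS` and the whiteout handling are illustrative, not hyperdb's or beaker's implementation):

```javascript
// Minimal union-mount lookup: reads hit the writable overlay first and
// fall back to the read-only base; deletes are recorded as whiteouts so
// they shadow files that still exist in the base layer.
const WHITEOUT = Symbol('whiteout')

class UnionFS {
  constructor (base) {
    this.base = base         // shared, read-only layer (replicated)
    this.overlay = new Map() // local, unreplicated "staging" layer
  }
  write (path, data) { this.overlay.set(path, data) }
  unlink (path) { this.overlay.set(path, WHITEOUT) }
  read (path) {
    if (this.overlay.has(path)) {
      const v = this.overlay.get(path)
      return v === WHITEOUT ? null : v
    }
    return this.base.has(path) ? this.base.get(path) : null
  }
  revert () { this.overlay.clear() } // drop staged changes
}
```

Committing would mean replaying the overlay's entries into the base archive and clearing the overlay, which is what makes it a de facto staging area.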
{"from":"mafintosh","message":"pfrazee: you wanna test the db with me?","timestamp":1502940593028}
{"from":"ogd","message":"pfrazee: you should check what andrewosh is building re: union fs","timestamp":1502940724738}
{"from":"pfrazee","message":"ogd: you remember the name?","timestamp":1502940765612}
{"from":"pfrazee","message":"mafintosh: sure","timestamp":1502940767308}
{"from":"ogd","message":"pfrazee: @andrewosh in this channel, mebbe he'll post it","timestamp":1502940785884}
{"from":"mafintosh","message":"pfrazee: npm i -g hyperdb-cli","timestamp":1502940988095}
{"from":"pfrazee","message":"mafintosh: k","timestamp":1502941020720}
{"from":"pfrazee","message":"mafintosh: ok now what","timestamp":1502941142372}
{"from":"mafintosh","message":"pfrazee: hyperdb 586541d0f3c1d1fba4d5762733af31eb9fb71367c38187ab5c73f6240706f293","timestamp":1502941156592}
{"from":"mafintosh","message":"pfrazee: in some tmp folder","timestamp":1502941164257}
{"from":"mafintosh","message":"(creates a ./hyperdb folder)","timestamp":1502941171521}
{"from":"pfrazee","message":"mafintosh: ok running","timestamp":1502941188312}
{"from":"mafintosh","message":"pfrazee: and then get foo","timestamp":1502941192224}
{"from":"pfrazee","message":"is baz!","timestamp":1502941207751}
{"from":"mafintosh","message":"yuh","timestamp":1502941211968}
{"from":"mafintosh","message":"pfrazee: send me your local.key","timestamp":1502941219431}
{"from":"pfrazee","message":"mafintosh: d5965d98732abe39cb2ee1f06c59ff7e2f3581d515c18e52bfb066e9a131b958","timestamp":1502941236704}
{"from":"mafintosh","message":"pfrazee: authorized","timestamp":1502941258857}
{"from":"pfrazee","message":"mafintosh: ok cool","timestamp":1502941264103}
{"from":"mafintosh","message":"pfrazee: try put'ing something","timestamp":1502941265423}
{"from":"pfrazee","message":"I set foo","timestamp":1502941266422}
{"from":"mafintosh","message":"whassup","timestamp":1502941273394}
{"from":"pfrazee","message":"that's it!","timestamp":1502941278171}
{"from":"mafintosh","message":"if you get foo what do you get?","timestamp":1502941295310}
{"from":"mafintosh","message":"i get both values actually","timestamp":1502941301574}
{"from":"pfrazee","message":"baz whassup","timestamp":1502941306933}
{"from":"mafintosh","message":"yuh","timestamp":1502941311792}
{"from":"mafintosh","message":"prob cause you wrote it before i auth'ed you","timestamp":1502941323497}
{"from":"pfrazee","message":"yeah","timestamp":1502941331117}
{"from":"mafintosh","message":"pfrazee: try merging it","timestamp":1502941333466}
{"from":"pfrazee","message":"done","timestamp":1502941339554}
{"from":"mafintosh","message":"bar!","timestamp":1502941345788}
{"from":"pfrazee","message":"sweet!","timestamp":1502941354060}
{"from":"mafintosh","message":"pretty neat","timestamp":1502941364986}
{"from":"mafintosh","message":"pfrazee: get cool-module","timestamp":1502941534167}
{"from":"ogd","message":"mafintosh: im offended you did the first multiwriter test with paul. i thought we had something.... special","timestamp":1502942043801}
{"from":"pfrazee","message":"mafintosh: got it, thanks","timestamp":1502942070140}
{"from":"ogd","message":"ooo new hyperdb on npm","timestamp":1502942465209}
{"from":"mafintosh","message":"ogd: now with a readme","timestamp":1502942599119}
{"from":"dat-gitter","message":"(e-e-e) AMAZING!!!","timestamp":1502943471005}
{"from":"andrewosh","message":"pfrazee: ogd: was referring to this https://github.com/andrewosh/layerdrive (just threw together some docs, so sorry for potential shoddiness there). unclear how useful it'll be for you, since i'm using your scoped-fs internally as a staging area as well","timestamp":1502945314133}
{"from":"cblgh","message":"mafintosh: ohhhhhhhhhh nice hyperdb got merged","timestamp":1502960897949}
{"from":"cblgh","message":"and a hyperdb-cli!","timestamp":1502960901858}
{"from":"cblgh","message":"i had hacked together a cli in my mud lol","timestamp":1502960914124}
{"from":"cblgh","message":"so i could check what keys were when the state was completely fubar","timestamp":1502960925789}
{"from":"cblgh","message":"guess it's time to build my electron project and finish it up so that i can go back to hyperdb explorations :~~","timestamp":1502960954444}
{"from":"cblgh","message":"mafintosh: also congrats on release!!!","timestamp":1502960968610}
{"from":"pfrazee","message":"mafintosh: I'm your hype man https://twitter.com/pfrazee/status/898191869150146561","timestamp":1502980937208}
{"from":"mcollina","message":"mafintosh: do you plan to remove https://github.com/mafintosh/hyperdb/blob/master/index.js#L165-L169?","timestamp":1502985275116}
{"from":"mafintosh","message":"mcollina: yea there is a comment about that in the constructor","timestamp":1502985535018}
{"from":"mcollina","message":"mafintosh: awesome work.","timestamp":1502985564086}
{"from":"mcollina","message":":D","timestamp":1502985565736}
{"from":"mcollina","message":"I look forward to that!!!","timestamp":1502985570648}
{"from":"mafintosh","message":"mcollina: there is no lock on replication though","timestamp":1502986419824}
{"from":"mcollina","message":"whoaaaaa","timestamp":1502986451552}
{"from":"mcollina","message":":D","timestamp":1502986454314}
{"from":"mafintosh","message":".watch is about to land :)","timestamp":1502987190587}
{"from":"creationix","message":"mafintosh: is hyperdb-cli supposed to be stable yet? I can't get it to work","timestamp":1502987657280}
{"from":"mafintosh","message":"creationix: should work yea","timestamp":1502987742482}
{"from":"mafintosh","message":"creationix: it listens on stdin for commands","timestamp":1502987778681}
{"from":"creationix","message":"the first time I tried, I was able to set and get locally","timestamp":1502987808572}
{"from":"creationix","message":"but every time after that, I couldn't save anything","timestamp":1502987817263}
{"from":"creationix","message":"(even moved to a new empty directory)","timestamp":1502987825601}
{"from":"creationix","message":"never could get multi-writer working","timestamp":1502987832530}
{"from":"creationix","message":"I assume the second peer puts the key of the first in it's CLI","timestamp":1502987844181}
{"from":"creationix","message":"and then authorize the second peer's local key in the first","timestamp":1502987859610}
{"from":"creationix","message":"but it wasn't ever able to set it's own values (not even locally)","timestamp":1502987932575}
{"from":"creationix","message":"and every test I've ever done after that, set doesn't work at all","timestamp":1502987941917}
{"from":"mafintosh","message":"creationix: if you make a new one and write put hello world it doesn't work?","timestamp":1502987996810}
{"from":"creationix","message":"user error!","timestamp":1502988014216}
{"from":"creationix","message":"\"set\" is not \"put\"","timestamp":1502988022962}
{"from":"mafintosh","message":":)","timestamp":1502988028809}
{"from":"mafintosh","message":"should prob warn on that","timestamp":1502988040954}
{"from":"creationix","message":"well, when you add more error checking, warn on invalid commands :)","timestamp":1502988045074}
{"from":"pfrazee","message":"mafintosh: https://twitter.com/gratidue/status/898223068065222657","timestamp":1502988094728}
{"from":"mafintosh","message":"pfrazee: not atm, but will add more things like that in the future","timestamp":1502988157948}
{"from":"creationix","message":"yay, working better now. But shouldn't `get` show all values when conflicting? I'm just getting one of them (the secondary one even)","timestamp":1502988172732}
{"from":"mafintosh","message":"first will be the write-only perm","timestamp":1502988177552}
{"from":"pfrazee","message":"mafintosh: got it","timestamp":1502988178457}
{"from":"mafintosh","message":"creationix: you sure there was a confilct?","timestamp":1502988212413}
{"from":"creationix","message":"I `put name Tim` in one and `put name Bob` in another","timestamp":1502988225718}
{"from":"creationix","message":"whichever ran last is what shows for `get name`","timestamp":1502988232986}
{"from":"mafintosh","message":"while both are running?","timestamp":1502988241520}
{"from":"mafintosh","message":"cause then they'll have replicated and no conflict will happen","timestamp":1502988287120}
{"from":"creationix","message":"ahh, so it's only a conflict if they both write while offline","timestamp":1502988307340}
{"from":"mafintosh","message":"yea","timestamp":1502988317507}
{"from":"creationix","message":"interesting","timestamp":1502988324213}
{"from":"creationix","message":"well, then it's working as designed!","timestamp":1502988337717}
{"from":"creationix","message":"great work","timestamp":1502988343168}
{"from":"creationix","message":"btw, redis is `get` and `set`. That's why I keep typing `set` instead of `put`","timestamp":1502988442022}
{"from":"creationix","message":"maybe allow both in the CLI","timestamp":1502988464695}
{"from":"noffle_","message":"mafintosh: will hyperdb use multifeed, or are they incompatible in some way?","timestamp":1502988517087}
{"from":"noffle_","message":"I'm really interested in something like hyperdb /wo the vector clock and trie pieces","timestamp":1502988532360}
{"from":"noffle_","message":"ie the multi-writer stuff","timestamp":1502988540763}
{"from":"creationix","message":"https://gist.github.com/creationix/a67e5312b33d8ecbe78b343464b1d3d8","timestamp":1502988699146}
{"from":"mafintosh","message":"creationix: ya sounds good","timestamp":1502988709684}
{"from":"mafintosh","message":"noffle_: it will later on. had to inline the impl for now","timestamp":1502988743256}
{"from":"mafintosh","message":"cause that made it easier to ship","timestamp":1502988763079}
{"from":"noffle_","message":"cool","timestamp":1502988790544}
{"from":"creationix","message":"speaking of shipping, I need to get my MVP for this week done. Laterz...","timestamp":1502988820587}
{"from":"noffle_","message":"mafintosh: another question..","timestamp":1502988827363}
{"from":"noffle_","message":"each node entry stores a vector clock of seq #s","timestamp":1502988842942}
{"from":"noffle_","message":"and *somehow* each index # refers to a hypercore the hyperdb knows about","timestamp":1502988854631}
{"from":"noffle_","message":"how do you map an integer to a hypercore feed in a distributed system?","timestamp":1502988869735}
{"from":"noffle_","message":"i.e. why isn't it FEED-ID -> SEQ#","timestamp":1502988887390}
{"from":"mafintosh","message":"noffle_: the index in the array maps to the feed key in each writers .feeds array","timestamp":1502989365693}
{"from":"mafintosh","message":"noffle_: that they add everytime you update that array","timestamp":1502989377641}
{"from":"mafintosh","message":"noffle_: to add it everytime would bloat the index a lot (32b vs 1b per entry)","timestamp":1502989432993}
{"from":"noffle_","message":"mafintosh: couldn't there be the same feed id on separate machines that has a different index in that array though?","timestamp":1502992906601}
{"from":"noffle_","message":"maybe I'm not articulating it well","timestamp":1502992918904}
{"from":"mafintosh","message":"noffle_: no, cause the id is local to each feed","timestamp":1502992939648}
{"from":"noffle_","message":"mafintosh: something isn't clicking for me. I'll reread that part of the code again","timestamp":1502992986196}
{"from":"pfrazee","message":"only the owner feed assigns the index","timestamp":1502993496756}
{"from":"pfrazee","message":"so the indexes will be the same for everyone","timestamp":1502993515317}
{"from":"noffle_","message":"does the owner also specify the integer -> feedid mapping somewhere?","timestamp":1502993797209}
{"from":"pfrazee","message":"I would imagine so","timestamp":1502994078352}
{"from":"pfrazee","message":"I havent checked the code","timestamp":1502994082846}
{"from":"noffle_","message":"oh I see","timestamp":1502994431481}
{"from":"noffle_","message":"db.authorize adds a new entry for that feed","timestamp":1502994638586}
{"from":"mafintosh","message":"noffle: to it's local feed","timestamp":1502995359884}
{"from":"noffle","message":"mafintosh: how does this work when a hyperdb is re-opened? I don't see _writers being populated from the known feeds in the db","timestamp":1502995618863}
{"from":"cblgh","message":"mafintosh: so if i create a hyperdb and it has an archive key 1337K3Y","timestamp":1502996747001}
{"from":"cblgh","message":"mafintosh: when you want to join in on the writing party, do you (as a second, separate peer) do hyperdb(storage, 1337K3Y) ?","timestamp":1502996789329}
{"from":"cblgh","message":"(afterwhich you get a local key, tell me it and i authorize your writes; but the question pertains to if there is a single key to share around for a hyperdb instance)","timestamp":1502996825511}
{"from":"mafintosh","message":"noffle: see writer.js - they are lazily set when heads are read","timestamp":1502996867766}
{"from":"mafintosh","message":"cblgh: ya","timestamp":1502996908560}
{"from":"cblgh","message":"mafintosh: awesome","timestamp":1502996921700}
{"from":"mafintosh","message":"cblgh: the original key acts as the identifier of the db basically","timestamp":1502996937172}
{"from":"cblgh","message":"ya","timestamp":1502996946091}
{"from":"noffle","message":"mafintosh: ah, that's the magic of writer._update I see","timestamp":1502996962276}
{"from":"mafintosh","message":"cblgh: it also works with hyperdiscovery btw","timestamp":1502996966097}
{"from":"cblgh","message":"ohh??","timestamp":1502996973395}
{"from":"cblgh","message":"does that mean i can programmatically add peers?","timestamp":1502996993713}
{"from":"cblgh","message":"authorize*","timestamp":1502996998010}
{"from":"mafintosh","message":"cblgh: ya there is an api for that","timestamp":1502997173676}
{"from":"mafintosh","message":"noffle: the local feed aliasing was the most tricky part to get right","timestamp":1502997203769}
{"from":"cblgh","message":"niiiiiiiiiiiiiice","timestamp":1502997226515}
{"from":"noffle","message":"clever :)","timestamp":1502997250325}
{"from":"mafintosh","message":"noffle: so all the magic is in the _decodeMap and _encodeMap in the writer","timestamp":1502997912933}
{"from":"mafintosh","message":"noffle: those maps the local ids to determistic ones across all feeds","timestamp":1502997930511}
{"from":"noffle","message":"mafintosh: yeah this was the piece I was missing","timestamp":1502997958535}
{"from":"mafintosh","message":"noffle: i'd be interested in ideas/thoughts on how i can make the codebase clearer / more approachable","timestamp":1502999220371}
{"from":"mafintosh","message":"other than docs (which of course is super important)","timestamp":1502999237357}
{"from":"noffle","message":"mafintosh: I'll think on that","timestamp":1502999472031}
{"from":"mafintosh","message":"noffle: thanks :)","timestamp":1502999484027}
{"from":"mafintosh","message":"noffle: i wonder if there good way to put the .get/.put logic in seperate files","timestamp":1502999520478}
{"from":"dat-gitter","message":"(scriptjs) @mafintosh: congrats on hyperdb. I have a question on hyperdrive. Here you are setting version but you have identified the version property with a getter only. https://github.com/mafintosh/hyperdrive/blob/master/index.js#L83","timestamp":1503001531386}
{"from":"mafintosh","message":"@scriptjs nice catch. that's legacy","timestamp":1503002972956}
{"from":"dat-gitter","message":"(scriptjs) @mafintosh: right, so thinking there should just be a method to set the initial value that is then updated with update(). If so I will patch it.","timestamp":1503003191600}
{"from":"karissa","message":"mafintosh: does hyperdb have selective sync?","timestamp":1503004185495}
{"from":"mafintosh","message":"karissa: yea","timestamp":1503004348517}
{"from":"mafintosh","message":"there is sparse mode in thete","timestamp":1503004364795}
{"from":"dat-gitter","message":"(scriptjs) @mafintosh. fixed with https://github.com/mafintosh/hyperdrive/pull/185","timestamp":1503008544686}
{"from":"mafintosh","message":"ogd, jhand https://github.com/mafintosh/hyperdb#unwatch--dbwatchfolder-onchange (this is the main api needed for our bots thing :))","timestamp":1503025744983}
{"from":"TheLink","message":"https://yoric.github.io/post/binary-ast-newsletter-1/","timestamp":1503058801657}
{"from":"yoshuawuyts","message":"TheLink: :D :D","timestamp":1503059819198}
{"from":"pfrazee","message":"mafintosh: +1 the watch api!","timestamp":1503068917273}
{"from":"mafintosh","message":"pfrazee: have iterators working locally now :)","timestamp":1503072966933}
{"from":"mafintosh","message":"just need to clean it up","timestamp":1503072972168}
{"from":"pfrazee","message":"mafintosh: sweet, for streaming?","timestamp":1503072989100}
{"from":"mafintosh","message":"pfrazee: for efficiently listing a folder in hyperdb","timestamp":1503073011789}
{"from":"mafintosh","message":"pfrazee: as a stream","timestamp":1503073019782}
{"from":"pfrazee","message":"mafintosh: sweet","timestamp":1503073031078}
{"from":"jhand","message":"mafintosh: does the watch emit the new value too?","timestamp":1503073098672}
{"from":"jhand","message":"or the key","timestamp":1503073101057}
{"from":"mafintosh","message":"jhand: no cause there might have been 1000s of updates","timestamp":1503073126135}
{"from":"pfrazee","message":"jhand: we need to update datprotocol.com","timestamp":1503073128859}
{"from":"jhand","message":"ooh","timestamp":1503073131559}
{"from":"jhand","message":"pfrazee: ya I think I have a WIP pr","timestamp":1503073137268}
{"from":"mafintosh","message":"jhand: like you watch a folder, then start replicating and the remote is faaaaar ahead of you","timestamp":1503073155248}
{"from":"jhand","message":"pfrazee: er I merged it","timestamp":1503073155482}
{"from":"pfrazee","message":"jhand: ok cool. I was going to suggest redirecting to something that already exists, like the dat repo or awesome-dat or etc","timestamp":1503073182166}
{"from":"jhand","message":"pfrazee: ya i like the idea but still have the problem of too many places to update info","timestamp":1503073214866}
{"from":"pfrazee","message":"jhand: yeah","timestamp":1503073229928}
{"from":"pfrazee","message":"I'm 100% ok with ditching the old thing in favor of reducing places","timestamp":1503073242553}
{"from":"mafintosh","message":"jhand: but only the prefix you are watching have changed of course","timestamp":1503073250207}
{"from":"jhand","message":"mafintosh: yea so how were you thinking we'd use that for the bot?","timestamp":1503073291597}
{"from":"jhand","message":"didn't quite get it last night but mabye was too late to think about it","timestamp":1503073305580}
{"from":"mafintosh","message":"jhand: so a bot just watching the \"input\" folder","timestamp":1503073336246}
{"from":"mafintosh","message":"jhand: so fx a video conversion bot watches \"/videos\" and when that changes it lists \"/videos\" and converts any new videos that it hasn't already converted","timestamp":1503073400513}
{"from":"jhand","message":"mafintosh: ahh ok yea","timestamp":1503073451999}
{"from":"jhand","message":"mafintosh: so there will be a way to get all the values for each key in a folder?","timestamp":1503073565315}
{"from":"jhand","message":"oh i think you said that above","timestamp":1503073584650}
{"from":"cblgh","message":"bots + dats hell ya","timestamp":1503073795593}
{"from":"mafintosh","message":"jhand: yea thats what an iterator is","timestamp":1503073873411}
{"from":"pfrazee","message":"ogd: https://twitter.com/jimpick/status/898588727869489152","timestamp":1503075394970}
{"from":"ogd","message":"haha","timestamp":1503075454574}
{"from":"mafintosh","message":"10k reads/sec using iterators on my machine","timestamp":1503077544817}
{"from":"pfrazee","message":"nicee","timestamp":1503078134196}
{"from":"ogd","message":"mafintosh: any ideas on why this test wouldnt keep the process open? (process exits right after logging 'sync', i expected it to log 'sync' multiple times cause i'm writing files after the first 'sync')","timestamp":1503078202792}
{"from":"ogd","message":"mafintosh: does live: true block a process from unreffing?","timestamp":1503078245571}
{"from":"mafintosh","message":"ogd: which test?","timestamp":1503078386482}
{"from":"ogd","message":"mafintosh: oops","timestamp":1503078392750}
{"from":"ogd","message":"mafintosh: https://github.com/maxogden/dat-remote-storage/blob/master/test/test.js#L38","timestamp":1503078395048}
{"from":"mafintosh","message":"and no it shouldn't block","timestamp":1503078399795}
{"from":"ogd","message":"mafintosh: oh","timestamp":1503078425672}
{"from":"ogd","message":"mafintosh: basically im using the on 'upload' event you mentioned, and writing more files after i get an 'upload' event. so 'sync' emits once, then some more writes happen, then the process exits, but sync never emits a second time","timestamp":1503078469612}
{"from":"ogd","message":"mafintosh: you can run 'npm test' to see","timestamp":1503078474470}
{"from":"mafintosh","message":"ogd: ah maybe jhand knows if dat-node hangs / need shutdown","timestamp":1503078496188}
{"from":"mafintosh","message":"ogd: i'll run it later but just left my laptop","timestamp":1503078511281}
{"from":"jhand","message":"ogd: maybe try dat.close before t.end in the first test?","timestamp":1503078750717}
{"from":"ogd","message":"jhand: t.end is never called","timestamp":1503078766165}
{"from":"jhand","message":"ogd: ooh in the first one?","timestamp":1503078773514}
{"from":"ogd","message":"jhand: oh oops","timestamp":1503078782865}
{"from":"jhand","message":"dat.close closes the file watching, network, and archive","timestamp":1503078803219}
{"from":"jhand","message":"but the first two shouldn't matter","timestamp":1503078808343}
{"from":"jhand","message":"since you aren't using network or watching files","timestamp":1503078816001}
{"from":"ogd","message":"https://www.irccloud.com/pastebin/h0yrrfMu/","timestamp":1503078944430}
{"from":"ogd","message":"jhand: adding dat.close() before t.end() in first test o/","timestamp":1503078956925}
{"from":"jhand","message":"hmm","timestamp":1503079045730}
{"from":"ogd","message":"jhand: is it possible theres state between instances?","timestamp":1503079125797}
{"from":"jhand","message":"ogd: shouldn't be","timestamp":1503079206873}
{"from":"jhand","message":"ogd: the srcPath isn't used in the second test?","timestamp":1503079276594}
{"from":"ogd","message":"jhand: oh wait","timestamp":1503079296493}
{"from":"ogd","message":"jhand: i cant close first one, i want it to be open","timestamp":1503079301146}
{"from":"jhand","message":"I don't think I understand the tests","timestamp":1503079304356}
{"from":"ogd","message":"jhand: for 2nd test to use (testDat)","timestamp":1503079306695}
{"from":"jhand","message":"ogd: ah right","timestamp":1503079328123}
{"from":"ogd","message":"jhand: i need 2 dats to exist at the same time in the same process so i can replicate from src to dest","timestamp":1503079336144}
{"from":"ogd","message":"need to make more coffee","timestamp":1503079349341}
{"from":"acao","message":"curious about this project: https://github.com/devgeeks/echidna.js are devgeeks affiliated with y'all? i see there was a gh issue where they referenced dat so i thought i would ask","timestamp":1503079410674}
{"from":"ogd","message":"jhand: basically i write metadata files to src. then dest replicates. which causes 'upload' to emit, which causes index.js to overwrite the metadata files with real data. which should cause a 2nd 'sync' event, but instead the process exits","timestamp":1503079424875}
{"from":"ogd","message":"acao: i know devgeeks from phonegap community years ago","timestamp":1503079442912}
{"from":"acao","message":"@ogd nice, yeah this is a pretty neat little asymmetric encryption lib for pouchdb using tweetnacl","timestamp":1503079498265}
{"from":"jhand","message":"ogd: dest.on('end') isn't getting caleld for me","timestamp":1503079557181}
{"from":"jhand","message":"in second test","timestamp":1503079559946}
{"from":"ogd","message":"jhand: yea i know","timestamp":1503079569548}
{"from":"ogd","message":"jhand: 2nd test isnt done right now. but whats weird is how process exits without calling it. and also how process exits without calling sync a 2nd time","timestamp":1503079597119}
{"from":"jhand","message":"ogd: ooh misunderstood the problem. ok","timestamp":1503079617734}
{"from":"ogd","message":"jhand: what i expected to happen was the process would sit open forever, since live: true","timestamp":1503079707835}
{"from":"ogd","message":"jhand: and it would log 'sync' twice","timestamp":1503079733316}
{"from":"jhand","message":"ogd: not getting the second sync even if process stays open (just added `dat.joinNetwork()` to top test to force it open","timestamp":1503080056661}
{"from":"ogd","message":"jhand: hmmm","timestamp":1503080066842}
{"from":"ogd","message":"jhand: maybe sync doesnt work that way then","timestamp":1503080071798}
{"from":"jhand","message":"ogd: also you can remove that inner pump if you want, just set `{indexing: false}` on the dats","timestamp":1503080083539}
{"from":"ogd","message":"jhand: seems weird that live: true wouldnt keep the process open...","timestamp":1503080085260}
{"from":"jhand","message":"ogd: yea agree","timestamp":1503080091407}
{"from":"jhand","message":"ogd: this may help debug in the second test https://www.irccloud.com/pastebin/GsJ9jqaz/","timestamp":1503080220490}
{"from":"jhand","message":"seems like the sync should be firing again","timestamp":1503080265018}
{"from":"ogd","message":"jhand: are there docs for indexing: false?","timestamp":1503080300871}
{"from":"jhand","message":"ogd: not really. was working on fixing that yesterday. its broken right now","timestamp":1503080320178}
{"from":"jhand","message":"ogd: if indexing: false it writes to dat.path and archive at the same time.","timestamp":1503080353803}
{"from":"ogd","message":"jhand: ahh ok","timestamp":1503080391266}
{"from":"jhand","message":"https://www.irccloud.com/pastebin/3ZRa2JHz/","timestamp":1503080414319}
{"from":"jhand","message":"ogd: i think o/ is the same as what you had as long as you use `indexing: false` option on Dat()","timestamp":1503080443253}
{"from":"ogd","message":"jhand: so if indexing: false then if i create a dat with dat-node, then do dat.archive.writeFile('/foo', 'bar') will it create the real file in the folder?","timestamp":1503080450602}
{"from":"jhand","message":"ogd: yea","timestamp":1503080460837}
{"from":"ogd","message":"jhand: ok cool i was confused about that. cause i called writeFile and it didnt create the file","timestamp":1503080480925}
{"from":"jhand","message":"ogd: yea I gotta make that the default.","timestamp":1503080495375}
{"from":"jhand","message":"ogd: don't really see anything else for why the replication live isn't keeping it open","timestamp":1503080871331}
{"from":"ogd","message":"jhand: i prob need to make a hyperdrive only test case so mafintosh can fix it lol. or more likely tell me how im using it wrong","timestamp":1503080971902}
{"from":"jhand","message":"ogd: yea lol.","timestamp":1503080988265}
{"from":"jhand","message":"ogd: it doesn't fire the second sync if you replicate over network either","timestamp":1503080997561}
{"from":"ogd","message":"jhand: btw i have a sweet campsite for the eclipse","timestamp":1503081035190}
{"from":"jhand","message":"ogd: huh archive.on('sync') does but the content one doesn't","timestamp":1503081043645}
{"from":"ogd","message":"jhand: and i installed a solar panel on my camper so now i have no reason to rejoin society","timestamp":1503081053404}
{"from":"jhand","message":"ogd: hha nice, where is it?","timestamp":1503081061332}
{"from":"jhand","message":"society is overrated","timestamp":1503081069670}
{"from":"ogd","message":"true","timestamp":1503081072898}
{"from":"jhand","message":"ogd: is there history or something in the content? that may be why sync isn't firing for archive.content but is for archive","timestamp":1503081155756}
{"from":"jhand","message":"https://github.com/mafintosh/hyperdrive/blob/master/index.js#L140","timestamp":1503081160582}
{"from":"ogd","message":"jhand: when i go into dest and do dat log it shows the correct changes (two metadata writes and two real file writes)","timestamp":1503081194330}
{"from":"dat-gitter","message":"(hailincai) hi I am new for dat","timestamp":1503082778287}
{"from":"dat-gitter","message":"(hailincai) I try to sync the file between computers between office and home","timestamp":1503082796313}
{"from":"dat-gitter","message":"(hailincai) seems not working","timestamp":1503082799414}
{"from":"dat-gitter","message":"(hailincai) the computer at office is behind a firewall, so I think the dat port can't be accessed.","timestamp":1503082819178}
{"from":"dat-gitter","message":"(hailincai) any work around for this situation?","timestamp":1503082827463}
{"from":"ogd","message":"hailincai you can run 'dat doctor' to see if it gives you any specific failures","timestamp":1503082916245}
{"from":"ogd","message":"jhand: ok so heres my theory... live: true doesnt prevent process from closing. so its up to you to wrap it in e.g. a network server that causes the process to stay open","timestamp":1503083028385}
{"from":"jhand","message":"huh","timestamp":1503083104341}
{"from":"dat-gitter","message":"(hailincai) thanks","timestamp":1503083213962}
{"from":"dat-gitter","message":"(hailincai) when I run dat doctor","timestamp":1503083218759}
{"from":"dat-gitter","message":"(hailincai) the process just try to connect to some port behind firewall and never get pass through","timestamp":1503083235761}
{"from":"dat-gitter","message":"(hailincai) thanks anyway","timestamp":1503083261902}
{"from":"ogd","message":"hailincai if you have a corporate firewall you may need to open 3282. but most home router firewalls do not require a port to be opened","timestamp":1503083324268}
{"from":"dat-gitter","message":"(hailincai) @dat-bot thanks for your reply. I don't think the company will open the port 3282","timestamp":1503083361720}
{"from":"dat-gitter","message":"(hailincai) but still thanks for your help","timestamp":1503083373068}
{"from":"ogd","message":"no problem","timestamp":1503083390193}
{"from":"jhand","message":"@hailincai you may also try dat-test. We haven't integrated that into the main dat program yet.","timestamp":1503083554539}
{"from":"jhand","message":"https://www.irccloud.com/pastebin/4JRGTOpz/","timestamp":1503083577734}
{"from":"dat-gitter","message":"(hailincai) ok","timestamp":1503083593643}
{"from":"dat-gitter","message":"(hailincai) @dat-bot I will try the dat-test later","timestamp":1503083606848}
{"from":"dat-gitter","message":"(hailincai) dat-test show me dat works","timestamp":1503083732464}
{"from":"dat-gitter","message":"(hailincai) Downloading test dat from 2 peer(s) into memory","timestamp":1503083740522}
{"from":"dat-gitter","message":"(hailincai) [==================================================>] 100.0%","timestamp":1503083740553}
{"from":"dat-gitter","message":"(hailincai) Dat worked!","timestamp":1503083740553}
{"from":"dat-gitter","message":"(hailincai) what does this mean?","timestamp":1503083744537}
{"from":"jhand","message":"@hailincai was that with the `--websocket` or the plain one?","timestamp":1503083879820}
{"from":"dat-gitter","message":"(hailincai) plain one","timestamp":1503083919386}
{"from":"dat-gitter","message":"(hailincai) I don't add --websocket","timestamp":1503083923544}
{"from":"jhand","message":"@hailincai okay, I think that may mean we have a bug somewhere. Can you send the full output from `dat doctor`?","timestamp":1503083998728}
{"from":"dat-gitter","message":"(hailincai) ok","timestamp":1503084027626}
{"from":"dat-gitter","message":"(hailincai) @dat-bot I run this command at my office machine","timestamp":1503084403768}
{"from":"dat-gitter","message":"(hailincai) dat doctor -d .","timestamp":1503084404535}
{"from":"dat-gitter","message":"(hailincai) I saw all these info log","timestamp":1503084417972}
{"from":"dat-gitter","message":"(hailincai) [info] Trying to connect to 167.206.125.124:3282","timestamp":1503084418808}
{"from":"dat-gitter","message":"(hailincai) [info] Trying to connect to 167.206.125.124:3282","timestamp":1503084418838}
{"from":"dat-gitter","message":"(hailincai) [info] Trying to connect to 167.206.125.124:3282","timestamp":1503084418869}
{"from":"dat-gitter","message":"(hailincai) [full message: https://gitter.im/datproject/discussions?at=59973f82210ac2692099c934]","timestamp":1503084418869}
{"from":"jhand","message":"@ hailincai was there any output above the first \"trying to connect\"?","timestamp":1503084561583}
{"from":"dat-gitter","message":"(hailincai) dat doctor 6e9ada7f5d3b95f4ffce0138e0fac9717ae277a642ed5b1eba9311627f4f4b1e","timestamp":1503084589962}
{"from":"dat-gitter","message":"(hailincai)","timestamp":1503084590031}
{"from":"dat-gitter","message":"(hailincai) Waiting for incoming connections... (local port: 3282)","timestamp":1503084590061}
{"from":"dat-gitter","message":"(hailincai) @dat-bot these are the things before the trying to connect to","timestamp":1503084610016}
{"from":"dat-gitter","message":"(hailincai) 167.206.125.124 is my office public ip","timestamp":1503084655272}
{"from":"jhand","message":"@hailincai maybe try without the -d argument `dat doctor`. And send the full output, if possible. I need to see whats at the top.","timestamp":1503084661005}
{"from":"jhand","message":"@hailincai it should have something like:","timestamp":1503084701489}
{"from":"jhand","message":"https://www.irccloud.com/pastebin/BjRx0nE6/","timestamp":1503084706893}
{"from":"dat-gitter","message":"(hailincai) ok","timestamp":1503084721222}
{"from":"dat-gitter","message":"(hailincai) [info] Testing connection to public peer","timestamp":1503084799413}
{"from":"dat-gitter","message":"(hailincai) [error] FAIL - unable to load utp-native, utp connections will not work","timestamp":1503084799483}
{"from":"dat-gitter","message":"(hailincai) [info] TCP ONLY - success!","timestamp":1503084799513}
{"from":"dat-gitter","message":"(hailincai) [full message: https://gitter.im/datproject/discussions?at=599740ff162adb6d2e0d899a]","timestamp":1503084799514}
{"from":"dat-gitter","message":"(hailincai) here it is","timestamp":1503084801399}
{"from":"dat-gitter","message":"(hailincai) seems utp not work","timestamp":1503084806306}
{"from":"jhand","message":"@hailincai ah yea, that will make the peer to peer connections harder. We use utp to get around firewalls. What operating system are you on?","timestamp":1503084844845}
{"from":"dat-gitter","message":"(hailincai) under linux machine seems I can make the utp work","timestamp":1503084869191}
{"from":"dat-gitter","message":"(hailincai) but under windows 10, I get this utp fail","timestamp":1503084876676}
{"from":"dat-gitter","message":"(hailincai) so I should make the share folder under linux","timestamp":1503084888362}
{"from":"jhand","message":"yea we do not support utp on windows yet, so that'll make the peer to peer connections harder","timestamp":1503084893931}
{"from":"dat-gitter","message":"(hailincai) get it","timestamp":1503084903777}
{"from":"dat-gitter","message":"(hailincai) so the share folder I should put to linux box","timestamp":1503084910978}
{"from":"jhand","message":"that should help, yes","timestamp":1503084920028}
{"from":"dat-gitter","message":"(hailincai) then I will try again to download from home","timestamp":1503084921532}
{"from":"dat-gitter","message":"(hailincai) @dat-bot thanks for your help","timestamp":1503084932372}
{"from":"dat-gitter","message":"(hailincai) have a good weekend","timestamp":1503084936329}
{"from":"jhand","message":"no problem, hopefully it works! let us know if oyu have more questions.","timestamp":1503084944273}
{"from":"cblgh","message":"https://github.com/cblgh/rotonde-choo","timestamp":1503091173475}
{"from":"cblgh","message":"releasd a (super) alpha version of my electron + choo + dat thinger!","timestamp":1503091201723}
{"from":"cblgh","message":"released*","timestamp":1503091205027}
{"from":"cblgh","message":"makes use of hashbase too using https://github.com/cblgh/hyperotonde/","timestamp":1503091218300}
{"from":"cblgh","message":"readme is in a purely functional state atm","timestamp":1503092089351}
{"from":"cblgh","message":"but it's basically an app for a decentralized social network experiment","timestamp":1503092108852}
{"from":"cblgh","message":"with the intent to share daily activity logs, like \"released the first version of the rotonde app\"","timestamp":1503092128006}
{"from":"ralphtheninja[m]","message":"just got a ticket for nodeconf eu","timestamp":1503092306738}
{"from":"ralphtheninja[m]","message":"it has gotten more corporate and business y (my opinion) the past years, but decided I don't really care, just fun to meet people","timestamp":1503093457379}
{"from":"ralphtheninja[m]","message":"mafintosh: oh cool, crypto workshop!","timestamp":1503093818605}
{"from":"ralphtheninja[m]","message":"yoshuawuyts: and millenials workshop! ;)","timestamp":1503093829148}
{"from":"pfrazee","message":"jnerula0: nice","timestamp":1503094133784}
{"from":"mafintosh","message":"ralphtheninja[m]: ya but that's also node in general","timestamp":1503094706560}
{"from":"ralphtheninja[m]","message":"mafintosh: agreed","timestamp":1503094854900}
{"from":"ralphtheninja[m]","message":"there's room for multiple worlds :)","timestamp":1503094875884}
{"from":"ralphtheninja[m]","message":"my goal this year will be partying even harder + crypto","timestamp":1503094913835}
{"from":"dat-gitter","message":"(scriptjs) @mafintosh What is the showstopper for utp native for windows? What is left to get it working?","timestamp":1503145257297}
{"from":"ralphtheninja[m]","message":"mafintosh: node-modules.com query down again","timestamp":1503152631913}
{"from":"ralphtheninja[m]","message":"is something crashing or not restarting properly?","timestamp":1503152641698}
{"from":"ralphtheninja[m]","message":"sorry for off topic, should take this to PMs instead","timestamp":1503152687094}
{"from":"pfrazee","message":"did a JS-party podcast about dat/beaker https://twitter.com/JSPartyFM/status/898910683093487616","timestamp":1503155576009}
{"from":"cblgh","message":"i sometime's forget that novelty nicks exceed the channel i assumed it for","timestamp":1503157217637}
{"from":"cblgh","message":"sometimes* wtf","timestamp":1503157223849}
{"from":"cblgh","message":"pfrazee: cool! i'll have a listen :3","timestamp":1503157234960}
{"from":"pfrazee","message":"cblgh: :) skip like 30% to reach the Beaker bit","timestamp":1503157333684}
{"from":"cblgh","message":"thanks!","timestamp":1503157398997}
{"from":"cblgh","message":"btw you should see some new signups for hashbase by way of rotonde","timestamp":1503157412476}
{"from":"cblgh","message":"just like 2-5 so nothing major lol","timestamp":1503157425568}
{"from":"pfrazee","message":"cool","timestamp":1503157566599}
{"from":"bret","message":"ralphtheninja[m]: I haven't been able to get mafintosh's node-modules.com to work on a custom search since it came 😭","timestamp":1503176757364}
{"from":"bret","message":"pretty much flying blind","timestamp":1503176766485}
{"from":"bret","message":"I can","timestamp":1503176768089}
{"from":"bret","message":"I can't favor my in-group bias modules now","timestamp":1503176778152}
{"from":"yoshuawuyts","message":"https://mobile.twitter.com/watilde/status/899034036835831808","timestamp":1503182134225}
{"from":"karissa","message":"Ooo","timestamp":1503184008852}
{"from":"pfrazee","message":"nice","timestamp":1503184225849}
{"from":"dat-gitter","message":"(sdockray) pfrazee, i'm getting `Cannot POST /login` when trying to log in to hashbase.io","timestamp":1503202887131}
{"from":"dat-gitter","message":"(sdockray) hmmm it worked in firefox... the error was in safari and also reset button wasn't doing anything in safari.. but it all worked in firefox.. maybe my safari is just busted (every single medium page crashes safari for me)","timestamp":1503203824487}
{"from":"dat-gitter","message":"(e-e-e) mine too.","timestamp":1503203858519}
{"from":"nothingmuch","message":"when using random-access-idb, attempting to create a new hypercore, the line if (!self.key && self.live) { does not trigger the true case because a byte array of all 0's has been read out of the hitherto nonexistent key file in the storage","timestamp":1503217513017}
{"from":"nothingmuch","message":"is the bug in this case that r-a-i returned 32 0 bytes instead of an empty array or null? or is it that hypercore doesn't check the actual values of the key?","timestamp":1503217582669}
{"from":"nothingmuch","message":"(the observed behaviour from a user pov is that hypercore(require('random-access-idb')('foo')) initializes a hypercore whose key is public key is all nulls","timestamp":1503217656117}
{"from":"nothingmuch","message":"the indexeddb inspector only shows a 'data' file, and on subsequent loads the public key appears to be 16 bytes of something, and 16 bytes of 0s","timestamp":1503217776657}
{"from":"nothingmuch","message":"indeed, with random-access-memory, the openKey callback receives a cannot satisfy range error and undefined for the key, and the random-access-idb variant gets a null error and an array of 0 bytes","timestamp":1503219959679}
{"from":"nothingmuch","message":"oh... hmmm... seems to be because store.key = undefined","timestamp":1503220872301}
{"from":"nothingmuch","message":"https://github.com/substack/random-access-idb/blob/master/index.js#L38 # the way hypercore invokes this, as create(\"key\", opts) (in openKey), opts.name never gets set","timestamp":1503221103959}
{"from":"nothingmuch","message":"substack: I submitted two PRs for random-access-indexeddb as per the above, that fixes hypercore initialization in the browser for me","timestamp":1503225583065}
{"from":"nothingmuch","message":"I'm seeing \"premature close\" on the electron-webrtc side, and on the browser side protocol invokes destroy() almost immediately after connecting, but sometimes 1 block is synced between the two hypercore replicators. I tried looking at chrome://webrtc-internals but it doesn't say much apart from that it was disconnected","timestamp":1503238025825}
{"from":"nothingmuch","message":"how do I narrow it down between hypercore and the webrtc stuff? I'm kind of lost in between pump() callbacks","timestamp":1503238066092}
{"from":"pfrazee","message":"@sdocray @e-e-e hashbase crashed last night so that may have been what you were experiencing","timestamp":1503238840772}
{"from":"substack","message":"hey wow nothingmuch's patch fixes my old failing test yay","timestamp":1503256641707}
{"from":"substack","message":"ok great news everyone, hyperdb works in the browser now!","timestamp":1503258174802}
{"from":"dat-gitter","message":"(serapath) <3","timestamp":1503258365383}
{"from":"dat-gitter","message":"(serapath) yay","timestamp":1503258378506}
{"from":"pfrazee","message":"nice!","timestamp":1503258884200}
{"from":"mafintosh","message":"substack: that's awesome","timestamp":1503261030903}
{"from":"nothingmuch","message":"substack: actually I haven't, I've run into another problem, wrt store.length... I have a fix which I dislike, since it allows for a race condition","timestamp":1503269363309}
{"from":"nothingmuch","message":"however, the way I've implemented the fix works with hypercore, since it writes a header to the bitfield file before reading it, and in that txn the self.length property can be updated","timestamp":1503269395124}
{"from":"nothingmuch","message":"(otherwise old hypercores' data are overwritten)","timestamp":1503269402727}
{"from":"mafintosh","message":"nothingmuch: what's the issue? nothing relies on . length anymore","timestamp":1503269506459}
{"from":"nothingmuch","message":"i create a hypercore in the browser on top of RAI, and append one key to it, then reload the session, and recreate the hypercore (storeSecretKey: true, overwrite: false, live: true), this restored hypercore thinks its empty because the tree returns an empty list of roots because the bitfield length is ostensibly 0 because RAI sets self.length to 0 (unless given a parameter) even though it should be 3328","timestamp":1503269667448}
{"from":"nothingmuch","message":"at least I *think* what's going on","timestamp":1503269687929}
{"from":"substack","message":"nothingmuch: thanks for the patches","timestamp":1503269694466}
{"from":"nothingmuch","message":"js is brutal, i thought debugging perl was hard ;-)","timestamp":1503269698984}
{"from":"substack","message":"I fixed the test up in the second one since the plan was off","timestamp":1503269712415}
{"from":"nothingmuch","message":"substack: the length one? I was actually going to cancel it given the new issues, it took me a while to trace it back to the length thing, I only now figured it out","timestamp":1503269746798}
{"from":"substack","message":"if you have an npm account I can add you as a maintainer so you can publish releases","timestamp":1503269758044}
{"from":"nothingmuch","message":"i'm not comfortable with that just yet, I've very little js mileage","timestamp":1503269786908}
{"from":"substack","message":"nothingmuch: no, in test/random.js the plan was off because the t.ifError()s were removed","timestamp":1503269796642}
{"from":"nothingmuch","message":"hmm i wonder how I missed that, I thought budo would shout at me","timestamp":1503269855237}
{"from":"substack","message":"the tests are wired up in a real browser so tape doesn't know when the event loop has unwound like in node","timestamp":1503269888931}
{"from":"substack","message":"if things are working you should see a final summary message","timestamp":1503269905678}
{"from":"nothingmuch","message":"ah","timestamp":1503269905832}
{"from":"substack","message":"anyhow, fantastic detective work on these","timestamp":1503269947290}
{"from":"substack","message":"I got a bit stuck trying to get things to work in hyperdb/hypercore","timestamp":1503269965026}
{"from":"nothingmuch","message":"cool, yeah it was quite a ride","timestamp":1503269989670}
{"from":"nothingmuch","message":"CPS & mutability all the way","timestamp":1503270005934}
{"from":"nothingmuch","message":"I have no idea how it runs the web, js leaves me like 3 neurons available for higher thought","timestamp":1503270028228}
{"from":"nothingmuch","message":"anyway, for this length i'm going to try to work on a failing test for the restoration, i think i can frame it in terms of random.js whereby rai.length equals ram.length, and reinitialize the storage mid way to ensure that idb based files are persisted","timestamp":1503270071844}
{"from":"nothingmuch","message":"https://gist.github.com/nothingmuch/f7110106054ea3894b1173c3cc67d0db","timestamp":1503270137755}
{"from":"nothingmuch","message":"this diff works for me right now (core.append no longer overwrites blocks, and the old blocks are rereadable upon reconstruction)","timestamp":1503270166913}
{"from":"nothingmuch","message":"but I think the test for that will have to wait until tomorrow","timestamp":1503270186398}
{"from":"nothingmuch","message":"i think the real solution to this would be to add a .ready handler to idb based files though, because this hack is very hypercore specific","timestamp":1503270278283}
{"from":"nothingmuch","message":"https://github.com/mafintosh/hypercore/blob/4e69deb6c67edd9422f25b6973f6fde82ee30e6c/lib/storage.js#L178-L180 # this is what makes it work","timestamp":1503270395922}
{"from":"nothingmuch","message":"when the 32 byte header is written, this has a chance to read the previously written length and from that point on the .length property is in sync with the DB, but before that first write there is a potential race condition","timestamp":1503270447346}
{"from":"substack","message":"nothingmuch: I would copy random.js and make a new file to test that","timestamp":1503270493742}
{"from":"nothingmuch","message":"sure thing, and it can be much shorter too, just write 1-2 blocks, close, reopen and verify they are still readable and that the length matches up","timestamp":1503270602198}
{"from":"nothingmuch","message":"but I do want to use random-access-memory for comparison much like random.js (this insstance will be preserved, obviously)","timestamp":1503270638675}
{"from":"nothingmuch","message":"anymoose, I'm going to head to bed. i'm signing off the web gateway, but I'll look for backlog on gitter so feel free to add anything","timestamp":1503270715705}
{"from":"ralphtheninja[m]","message":"anyone else having problems accessing github?","timestamp":1503321429207}
{"from":"nothingmuch","message":"substack, mafintosh: any thoughts on what's the right approach for the bitfield length issue with random-access-idb?","timestamp":1503321745034}
{"from":"mafintosh","message":"nothingmuch: i'm unsure what the issue is","timestamp":1503321899449}
{"from":"mafintosh","message":"nothingmuch: can you give me a quick overview?","timestamp":1503321909404}
{"from":"dat-gitter","message":"(e-e-e) ralphtheninja[m]: https://status.github.com/messages looks like there was a bit of stress on there server recently, and they are looking into it.","timestamp":1503322191106}
{"from":"ralphtheninja[m]","message":"thx, I can reach the status page at least :D","timestamp":1503322271433}
{"from":"dat-gitter","message":"(e-e-e) :)","timestamp":1503322575670}
{"from":"nothingmuch","message":"mafintosh: var rai = require('random-access-idb'); var core = hypercore(rai('foo'), { storeSecretKey: true, live: true }); core.on('ready', function() { core.append(\"foo\"); console.log(core.length) } // run twice in browser","timestamp":1503322971393}
{"from":"nothingmuch","message":"mafintosh: when the core storage instantiates the bitfield pseudo-file, that file has got a .length of 0, even when it already has contents","timestamp":1503323010677}
{"from":"nothingmuch","message":"I don't remember exactly what the dependency is, but because of the bitfield file having that supposed length, tree.roots returns an empty list","timestamp":1503323064321}
{"from":"nothingmuch","message":"so those .append calls will overwrite each other, and the hypercore doesn't actually grow, but gets overwritten by that example","timestamp":1503323085112}
{"from":"nothingmuch","message":"https://github.com/mafintosh/hypercore/blob/4e69deb6c67edd9422f25b6973f6fde82ee30e6c/lib/storage.js#L178-L180","timestamp":1503323102937}
{"from":"nothingmuch","message":"because of those lines, in conjunction with this patch: https://gist.github.com/nothingmuch/f7110106054ea3894b1173c3cc67d0db","timestamp":1503323124554}
{"from":"nothingmuch","message":"i can actually get it to work, because .length will be implicitly set by hypercore writing the header to the bitfield (which it always does)","timestamp":1503323150996}
{"from":"nothingmuch","message":"so I'm wondering if this patch is good enough, or if instead random-access-idb needs some sort of ready event, to ensure there are no race conditions when reading the .length property","timestamp":1503323191405}
{"from":"nothingmuch","message":"(without the patch, readAll will only reads the 32 header bytes, instead of the 3328 length bitfield, and therefore the append tree assumes the tree is empty, and signature/tree data will be overwritten when core.append happens again, even though the key is restored succcessfully)","timestamp":1503323286469}
{"from":"nothingmuch","message":"(btw this depends on 1.0.3, random-access-idb prior to that would substitute the file name with undefined so the different hypercore files overwrite each other, and all files are implicitly infinite length sparse files)","timestamp":1503323595390}
{"from":"mafintosh","message":"nothingmuch: why does it have length 0?","timestamp":1503324598125}
{"from":"mafintosh","message":"ahhh","timestamp":1503324640970}
{"from":"mafintosh","message":"cause the module sets that in the constructor","timestamp":1503324650842}
{"from":"mafintosh","message":"nothingmuch: have you tried simply removign this line? https://github.com/substack/random-access-idb/blob/master/index.js#L57","timestamp":1503324692933}
{"from":"nothingmuch","message":"mafintosh: yes, that's not enough... I even tried -1 because there was a branch that checked for that, I was hoping that would trigger some sort of polling logic, but it didn't work and I didn't trace through it","timestamp":1503324981892}
{"from":"nothingmuch","message":"mafintosh: it's a bug in random-access-idb, in that it destroys information about how much data was written into it, you can't actually know whether or not any block # is the final one","timestamp":1503325082596}
{"from":"nothingmuch","message":"and I think due to how indexeddb txns work out, you can't place a bound on the number of blocks in an arbitrary DB at the end of the constructor","timestamp":1503325114234}
{"from":"nothingmuch","message":"the easy fix is just that patch plus some testing, but I think that still leaves random-access-idb inherently flawed","timestamp":1503325172473}
{"from":"nothingmuch","message":"(but good enough for hypercore)","timestamp":1503325197376}
{"from":"nothingmuch","message":"the hard fix is API incompatible","timestamp":1503325212852}
{"from":"mafintosh","message":"nothingmuch: can't you just do a lt query to find the last block on open?","timestamp":1503325300729}
{"from":"nothingmuch","message":"mafintosh: doesn't that still require a txn?","timestamp":1503325476236}
{"from":"nothingmuch","message":"last block won't be enough unless there's a padding scheme or something, FWIW, but it's the same as storing a length entry","timestamp":1503325500986}
{"from":"mafintosh","message":"nothingmuch: i'm unsure why this is hard. maybe i should dive into it :)","timestamp":1503325667679}
{"from":"nothingmuch","message":"rai('foo'); // at this point in time .length may be set, but it's unreliable, to use .length you need a ready handler of some kind","timestamp":1503325703790}
{"from":"nothingmuch","message":"my patch fixes it so that .length becomes reliable after the first write transaction","timestamp":1503325714887}
{"from":"nothingmuch","message":"hypercore unconditionally writes a header before checking .length, so it's good enough for this scenario","timestamp":1503325733626}
{"from":"mafintosh","message":"nothingmuch: we have a ready handler","timestamp":1503325800142}
{"from":"nothingmuch","message":"hypercore does, but not random-access-idb","timestamp":1503325815874}
{"from":"mafintosh","message":"nothingmuch: if you impl .open(cb) that will be called before anything else","timestamp":1503325817566}
{"from":"mafintosh","message":"nothingmuch: rai.open(cb)","timestamp":1503325837212}
{"from":"nothingmuch","message":"hmm, i wonder how i missed that","timestamp":1503325874774}
{"from":"nothingmuch","message":"OK, so then perhaps the solution is to document .length is being unreliable before the open event","timestamp":1503325890598}
{"from":"nothingmuch","message":"and reading it unconditionally, before the open cb?","timestamp":1503325898507}
{"from":"nothingmuch","message":"(and for correctness sake, make hypercore utlize the open callback when checking the bitfield length?)","timestamp":1503325944332}
{"from":"mafintosh","message":"nothingmuch: hypercore *does* that already","timestamp":1503326010685}
{"from":"mafintosh","message":"all the other ones rely on nit","timestamp":1503326024480}
{"from":"mafintosh","message":"also you do not have to impl .length","timestamp":1503326035820}
{"from":"nothingmuch","message":"hmmm... I must have gotten my attempts to fix it wrong then","timestamp":1503326037610}
{"from":"nothingmuch","message":"(previously I tried to read .length at the beginning, but since I didn't notice the 'open' cb sequence I must have not sequenced it correctly)","timestamp":1503326069662}
{"from":"nothingmuch","message":"as for the dependence on .length being optional, do you mean because https://github.com/mafintosh/hypercore/blob/4e69deb6c67edd9422f25b6973f6fde82ee30e6c/lib/storage.js#L265 ?","timestamp":1503326120164}
{"from":"nothingmuch","message":"or do you mean the way that random-access-idb handles .length as an option shouldn't be necessary?","timestamp":1503326141016}
{"from":"mafintosh","message":"nothingmuch: i mean cause of that hypercore commit ya","timestamp":1503326154932}
{"from":"nothingmuch","message":"i did initially try removing .length and setting it to -1, but that didn't fix the issues","timestamp":1503326191997}
{"from":"mafintosh","message":"setting .length on open is the easiest way","timestamp":1503326231762}
{"from":"nothingmuch","message":"sorry I debugged all this after a night of no sleep, so I no longer remember how the root list depends on the bitfield","timestamp":1503326237741}
{"from":"mafintosh","message":"it has to read the last part of the bitfield to figure out the root list","timestamp":1503326265450}
{"from":"nothingmuch","message":"the length is specified in the header, right?","timestamp":1503326286181}
{"from":"mafintosh","message":"what header?","timestamp":1503326297615}
{"from":"nothingmuch","message":"i thought the initial 32 bytes of the bitfield file are a type & length tag","timestamp":1503326323293}
{"from":"mafintosh","message":".length is just the length of the file backing your storage in bytes","timestamp":1503326325739}
{"from":"nothingmuch","message":"sorry, different levels of length","timestamp":1503326340649}
{"from":"mafintosh","message":"yea","timestamp":1503326344164}
{"from":"mafintosh","message":"those are unrelated","timestamp":1503326348797}
{"from":"nothingmuch","message":"so rai's notion of .length can be fixed with the open thing, by deferring the cb passed to the constructor, and adding a txn to read the length in between","timestamp":1503326365738}
{"from":"nothingmuch","message":"and changing the documentation to say tha t.length is only valid after open","timestamp":1503326380445}
{"from":"nothingmuch","message":"(and i'm not sure what to do about .length being passed as a constructor argument if it doesn't match the stored one - truncate? take max?...)","timestamp":1503326400987}
{"from":"mafintosh","message":"yup or returning an error if you read after the length","timestamp":1503326415498}
{"from":"mafintosh","message":"and then *not* set .length in the constructor","timestamp":1503326429655}
{"from":"mafintosh","message":"that also works","timestamp":1503326431761}
{"from":"nothingmuch","message":"that didn't work for me","timestamp":1503326435409}
{"from":"nothingmuch","message":"the length satisfiability error was the first fix, that's already released","timestamp":1503326449353}
{"from":"nothingmuch","message":"that ensured that e.g. the secret key can be restored correctly","timestamp":1503326461476}
{"from":"nothingmuch","message":"(otherwise it would set the secret key to 0 bytes, because the read function did not error like random-access-memory did)","timestamp":1503326489728}
{"from":"nothingmuch","message":"but that fix wasn't enough to sort out the bitfield/tree conundrum, even with random-access-idb's .length property always hard coded to -1","timestamp":1503326517357}
{"from":"nothingmuch","message":"I think it hung and never delivered the .ready event, but I don't remember exactly what behaviour I saw","timestamp":1503326530123}
{"from":"mafintosh","message":"nothingmuch: does rai return blocks of 0 bytes for \"gabs\" ?","timestamp":1503326599478}
{"from":"mafintosh","message":"nothingmuch: ie if the only write you do is .write(1000, 'something') does read(0, 1000) return a buch of zeros?","timestamp":1503326628706}
{"from":"nothingmuch","message":"it used to return 0 bytes for any read request, so read(0, 1000) would return a bunch of zeros even if you didn't write 'something' at 1000","timestamp":1503326666598}
{"from":"nothingmuch","message":"after the patch read(0, 1000) still returns 0,","timestamp":1503326673934}
{"from":"nothingmuch","message":"but read(1100, 1) would error, instead of return a 0","timestamp":1503326684069}
{"from":"mafintosh","message":"it writes in blocks right?","timestamp":1503326761272}
{"from":"nothingmuch","message":"yes","timestamp":1503326766318}
{"from":"mafintosh","message":"seems the easiest impl then is just to write in blocks as now","timestamp":1503326786312}
{"from":"mafintosh","message":"on open load the last one","timestamp":1503326792183}
{"from":"mafintosh","message":"use that to set .lengnth","timestamp":1503326797805}
{"from":"nothingmuch","message":"loading the last one is not enough, because you don't know where the logical content ends","timestamp":1503326805691}
{"from":"mafintosh","message":"thats fine","timestamp":1503326812427}
{"from":"nothingmuch","message":"(could do that with a padding scheme like crypto stuff)","timestamp":1503326815718}
{"from":"nothingmuch","message":"well, my fix that's already comitted just stores an additional \"length\" property for each file that saves the logical length","timestamp":1503326833918}
{"from":"mafintosh","message":"as long as the .length is > the actual content everything is fine","timestamp":1503326848543}
{"from":"nothingmuch","message":"anyway, I'll make that load reliably before open","timestamp":1503326884666}
{"from":"nothingmuch","message":"that's technically what I meant by \"the hard fix\" only that it isn't hard, because the hard parts (refactoring, thinking about backwards compat) are already done for the most part ;-)","timestamp":1503326919027}
{"from":"nothingmuch","message":"do we still care that when .length was -1 it didn't work?","timestamp":1503326956243}
{"from":"nothingmuch","message":"(even when .read still errors when it's past the logical end of the file)","timestamp":1503326996916}
{"from":"nothingmuch","message":"in all probability that was just me confusing myself, but I could look into that if you want a straight answer","timestamp":1503327033927}
{"from":"mafintosh","message":"nothingmuch: dontn worry about backwards compat","timestamp":1503327420334}
{"from":"mafintosh","message":"we'll just major bump it","timestamp":1503327432640}
{"from":"nothingmuch","message":"I don't even think that's necessary, since .length wasn't ever set except by the constructor before, it couldn't have been used for anything, in terms of the API implied by it I would consider this a bugfix not an API change","timestamp":1503327572580}
{"from":"nothingmuch","message":"(i just had a mistaken notion that this would need an API change before you told me of the 'open' event, I looked for a 'ready' event and when I didn't find it I assumed that would need to be added)","timestamp":1503327604628}
{"from":"pfrazee","message":"we're adding bookmark sharing over dat in beaker (https://twitter.com/pfrazee/status/899647890695397377)","timestamp":1503327808999}
{"from":"pfrazee","message":"mafintosh: did my response to that convo with kyle mitchell make sense?","timestamp":1503327867829}
{"from":"nothingmuch","message":"i suppose .length as a constructor property should follow random-access-file (and the truncate option)?","timestamp":1503327880943}
{"from":"mafintosh","message":"pfrazee: ya","timestamp":1503327902522}
{"from":"pfrazee","message":"cool","timestamp":1503327920885}
{"from":"mafintosh","message":"nothingmuch: we don't use the truncate anymore","timestamp":1503327933337}
{"from":"mafintosh","message":"just .length, read, write, close, destroy","timestamp":1503327948398}
{"from":"mafintosh","message":"optionally del","timestamp":1503327951727}
{"from":"nothingmuch","message":"but when constructing a new rai object, when the length property has been specified, that should behave similarly to random-access-file, right?","timestamp":1503328102598}
{"from":"nothingmuch","message":"https://github.com/mafintosh/random-access-file/blob/master/index.js#L70 # does this need to be ported over","timestamp":1503328161702}
{"from":"mafintosh","message":"nothingmuch: no","timestamp":1503328189384}
{"from":"nothingmuch","message":"cool","timestamp":1503328193112}
{"from":"nothingmuch","message":"https://github.com/juliangruber/abstract-random-access/blob/master/index.js#L40 # can this function ever be called? this is a bit too nuanced for my js skills","timestamp":1503330714289}
{"from":"nothingmuch","message":"I see random-access-file reimplements that logic","timestamp":1503330726853}
{"from":"pfrazee","message":"nothingmuch: that can be called within a certain scope","timestamp":1503331151702}
{"from":"pfrazee","message":"nothingmuch: see line 37?","timestamp":1503331176505}
{"from":"pfrazee","message":"there's basically 2 things to know to understand that","timestamp":1503331193274}
{"from":"nothingmuch","message":"ah, right, I missed that reference","timestamp":1503331215096}
{"from":"pfrazee","message":"1. by default, every function has its own scope, so any variable you declare inside of a function will disappear outside the function. That's true of functions you declare inside functions too, as in this case","timestamp":1503331240200}
{"from":"pfrazee","message":"2. functions hoist","timestamp":1503331243311}
{"from":"nothingmuch","message":"can function scopes be accessed through reflection somehow?","timestamp":1503331329102}
{"from":"nothingmuch","message":"actually, nevermind, I won't ever need that and if I did I made a mistake, I don't actually care if it's possible given that this is just a normal function invocation","timestamp":1503331368899}
{"from":"pfrazee","message":"nothingmuch: to answer your question, no you cant reflect on scopes","timestamp":1503331389836}
{"from":"pfrazee","message":"actually that's not totally true","timestamp":1503331421572}
{"from":"nothingmuch","message":"well it is good to know that that's impossible, as it rules out all sorts of evil","timestamp":1503331424362}
{"from":"nothingmuch","message":"oh","timestamp":1503331425256}
{"from":"nothingmuch","message":"lulz","timestamp":1503331427056}
{"from":"pfrazee","message":"you can do `typeof foo`","timestamp":1503331431760}
{"from":"pfrazee","message":"and if `foo` is not in the scope, you get `undefined`","timestamp":1503331444162}
{"from":"pfrazee","message":"but you can't *enumerate* the scope","timestamp":1503331459021}
{"from":"pfrazee","message":"and yeah I agree, full reflection tools on the scope would probably lead to some ugly meta programming","timestamp":1503331487722}
{"from":"nothingmuch","message":"is there a string eval implicit in that?","timestamp":1503331489670}
{"from":"pfrazee","message":"... might be fun","timestamp":1503331492162}
{"from":"pfrazee","message":"nope","timestamp":1503331497099}
{"from":"nothingmuch","message":"oh right javascript allows compilation of open code","timestamp":1503331517072}
{"from":"nothingmuch","message":"brb need a shower ;-)","timestamp":1503331523477}
{"from":"pfrazee","message":"kk","timestamp":1503331526898}
{"from":"nothingmuch","message":"that was a joke about JS","timestamp":1503331533091}
{"from":"nothingmuch","message":"as for usefulness, at least from Perl I can vouch that it can be used for good, but once that door is open, mostly it's just another layer of paranoia that discriminating Perl hackers must be aware of... https://metacpan.org/pod/PadWalker","timestamp":1503331576844}
{"from":"nothingmuch","message":"is it generally considered acceptible to override private methods of abstract classes? In this case, random-access-idb would override the definition of _open provided by abstract-random-access","timestamp":1503332637995}
{"from":"pfrazee","message":"not sure about a general rule, mafintosh you know ^ ?","timestamp":1503332735878}
{"from":"mafintosh","message":"_open, _write, and _read are meant to be overwritten","timestamp":1503332766759}
{"from":"nothingmuch","message":"substack, mafintosh: https://github.com/substack/random-access-idb/pull/3","timestamp":1503335853124}
{"from":"larpanet","message":"@QlCTpvY7p9ty2yOFrv1WU1AE88aoQc4Y7wYal7PFc+w=.ed25519:## Let's have *Dat* conversation http://wx.larpa.net:8807/%25MLO5nkXbQu1sfQO81z5F3hhi4Aqo25%2BzZ3VMRdMJPVg%3D.sha256","timestamp":1503337530022}
{"from":"cblgh","message":"So what I'm pondering is: am I missing something fundamental here with the protocols? Could scuttleverse applications use Dat? Could we merge with Dat? Could we copy Dat's DHT ideas to make a scuttlebot plugin with similar functionality around feed ids? Am I babbling nonsense which has been discussed many times in the past?","timestamp":1503337785190}
{"from":"cblgh","message":"from the link","timestamp":1503337790596}
{"from":"mafintosh","message":"cblgh: the link doesnt load for me","timestamp":1503340876831}
{"from":"cblgh","message":"mafintosh: yeah it's really slow loading any external ssb links","timestamp":1503341309230}
{"from":"cblgh","message":"maybe it only loads for me because i have patchwork open=","timestamp":1503341315392}
{"from":"cblgh","message":"https://viewer.scuttlebot.io/%25MLO5nkXbQu1sfQO81z5F3hhi4Aqo25%2BzZ3VMRdMJPVg%3D.sha256","timestamp":1503341338170}
{"from":"cblgh","message":"that should load?","timestamp":1503341341557}
{"from":"mafintosh","message":"cblgh: it loaded onwo","timestamp":1503341351390}
{"from":"mafintosh","message":"cblgh: yea scuttlebutt can run on dat as far as i know","timestamp":1503341395106}
{"from":"mafintosh","message":"the main difference in protocols is that dat supports random access replicationn vs oldest->newest in scuttlebutt","timestamp":1503341428078}
{"from":"pfrazee","message":"cblgh: mafintosh: I'll respond on sbot as soon as my dataset syncs","timestamp":1503344949830}
{"from":"cblgh","message":"i'm just forwarding what staltz wrote","timestamp":1503344972927}
{"from":"cblgh","message":"i do think it would be very exciting though","timestamp":1503344995999}
{"from":"cblgh","message":"especially after having just finished walkaway by doctorow lol","timestamp":1503345056910}
{"from":"cblgh","message":"all those cool redundant nets got me excited","timestamp":1503345078145}
{"from":"dat-gitter","message":"(scriptjs) @mafintosh Can you tell me where the issues are to gain native utp support for windows","timestamp":1503345178362}
{"from":"mafintosh","message":"@scriptjs hey, i'm on my way to bed but ping me tmw and i'll point you in the right direction","timestamp":1503345244907}
{"from":"dat-gitter","message":"(scriptjs) @mafintosh kk thanks","timestamp":1503345260962}
{"from":"nothingmuch","message":"substack: ta!","timestamp":1503348334213}
{"from":"jhand","message":"anyone have a cool example out there of using hyperdrive + random-access-idb yet?","timestamp":1503363130024}
{"from":"dat-gitter","message":"(sdockray) does anyone know if hashbase.io is supposed to pick up `index.md` and render as index page.. right now i just see a directory listing but maybe i'm supposed to add something to `dat.json`?","timestamp":1503368140187}
{"from":"pfrazee","message":"@sdockray not yet supported https://github.com/beakerbrowser/hashbase/issues/15","timestamp":1503370182928}
{"from":"pfrazee","message":"sorry about that","timestamp":1503370188096}
{"from":"dat-gitter","message":"(sdockray) gotcha, thanks for the link to the issue pfrazee","timestamp":1503372538722}
{"from":"yoshuawuyts","message":"mafintosh: o/","timestamp":1503393689435}
{"from":"yoshuawuyts","message":"mafintosh: did you ever get around to allowing hyperdrives to be nested?","timestamp":1503393747455}
{"from":"yoshuawuyts","message":"mafintosh: actually, perhaps a slightly different question: is hyperdb meant to replace hyperdrive?","timestamp":1503393821532}
{"from":"yoshuawuyts","message":"oh, also what happened to the .list() and iterate methods?","timestamp":1503397697885}
{"from":"mafintosh","message":"yoshuawuyts: in progress","timestamp":1503403450417}
{"from":"mafintosh","message":"yoshuawuyts: wont replace hyperdrive. It'll be used internally to improve metadata perf","timestamp":1503403501581}
{"from":"yoshuawuyts","message":"mafintosh: woot!","timestamp":1503405542821}
{"from":"yoshuawuyts","message":"mafintosh: to both :D - hyperdb doing metadata for hyperdrive sounds very cool","timestamp":1503405563369}
{"from":"tomatopeel","message":"any good resources/links on using dat to store structured data (json, rdf etc.) ?","timestamp":1503408604718}
{"from":"tomatopeel","message":"is this even advisable in any case?","timestamp":1503408618228}
{"from":"tomatopeel","message":"hm found some promising stuff on the awesome list","timestamp":1503409926318}
{"from":"dat-gitter","message":"(scriptjs) @mafintosh Do you have a moment to discuss the windows native utp. I set up the tools yesterday. Would like to understand where you left off.","timestamp":1503410273209}
{"from":"dat-gitter","message":"(scriptjs) Not sure I can make progress but willing to try","timestamp":1503410329445}
{"from":"mafintosh","message":"@scriptjs hey","timestamp":1503410730082}
{"from":"mafintosh","message":"so there is a good test suite","timestamp":1503410744837}
{"from":"mafintosh","message":"and if you run that on windows you'll see which tests don't run","timestamp":1503410763057}
{"from":"tomatopeel","message":"Let's say I want to share an mp3 with dat on my laptop, and then I want to stream it in a native mobile app on my Android - is this currently possible?","timestamp":1503410957169}
{"from":"tomatopeel","message":"And is it possible to push something from, say my Android again, to my laptop running dat?","timestamp":1503410978651}
{"from":"tomatopeel","message":"(I'm just wondering about the UDP hole punching stuff from a native mobile app)","timestamp":1503410990648}
{"from":"tomatopeel","message":"ah I guess mobiles have websockets right...","timestamp":1503411162930}
{"from":"tomatopeel","message":"ah I found this https://blog.datproject.org/2017/08/04/recently/#nodeonandroid","timestamp":1503411488580}
{"from":"tomatopeel","message":"also we have all these user personas here: https://github.com/datproject/datproject.org/issues/124 was anything done with these, like mapping dat features or related projects to these personas and their uses?","timestamp":1503412514744}
{"from":"dat-gitter","message":"(scriptjs) @mafintosh ok I didn’t think it compiled originally for windows but will check later again.","timestamp":1503413708840}
{"from":"cblgh","message":"tomatopeel: you could also use http endpoints for dats, but prolly not optimal for your usecase","timestamp":1503414473853}
{"from":"cblgh","message":"at least wrt streaming from dat","timestamp":1503414478474}
{"from":"tomatopeel","message":"is it possible to share multiple different dat dirs with the .dat/ in them with a single dat cli process?","timestamp":1503414953816}
{"from":"tomatopeel","message":"is this dat cli supposed to be ran as a daemon I wonder...","timestamp":1503415029026}
{"from":"tomatopeel","message":"whoa, you can make it go recursive if you set up a directory with two directories that are dat directories, and then share the parent directory, and then clone that shared parent directory into a third directory in the directory...","timestamp":1503415159310}
{"from":"tomatopeel","message":"not sure how much time I had before filling my disk and screwing up my whole system xD","timestamp":1503415195851}
{"from":"dat-gitter","message":"(peterVG) Hello Dat-ers! Dat-ites? I'm a long time follower and fan of this project. Even after the recent Filecoin hype I still see DAT offering a number of advantages over IPFS (design choices, tech maturity, documentation). I finally have a practical personal project to take it for test drive with the intent to tinker & educate myself in preparation for implementation/pitch in pro projects (large research dataset","timestamp":1503416342402}
{"from":"dat-gitter","message":"upload/integrity; open archival hardware appliance). So I'm starting with a re-architect/build of our family/home media setup. Firstly, I am centralizing all our own digital content (personal photos/videos, family documents) & 3rd party content movies/music/show onto a shared NAS rather than having the master copies scattered over multiple (and changing/breaking/lost) devices. The entire stack should be 100% FOSS (with compatible","timestamp":1503416342433}
{"from":"dat-gitter","message":"licenses) so that the design can be re-used and scaled up cheaply and without restriction (whether by other home media users or in institutional digital repository implementations). I plan to connect the NAS to a Pi-powered Kodi media server for consuming the content on the living room TV. I'll also enable streaming from the NAS to mobile devices (may need separate streaming/transcoding server this). I intend to have a full-sync HD","timestamp":1503416342433}
{"from":"dat-gitter","message":"backup in the NAS as well as a sync to a geo-redundant cloud server under my control. This is where I'm looking to use DAT to replace both RAID and backup/cloud syncing functionality. So that's a long background to get to my question. I'm ramping up on FreeNAS (as the go-to NAS FOSS project) to put on my HDs and have been very interested in the discussions in that community on the use of ZFS for FreeNAS and the ability to take","timestamp":1503416342433}
{"from":"dat-gitter","message":"advantage of Error Correcting Code Memory (https://en.wikipedia.org/wiki/ECC_memory) which is obviously a robust way to meet the critical bitstream integrity requirements for systems preserving permanent archival materials. However, it turns out that enabling ECC support in a FreeNAS architecture is not trivial. I'm guessing in fact that there are FreeNAS users out there that assume they're getting ECC support built-in but it may","timestamp":1503416342560}
{"from":"dat-gitter","message":"not be enabled in their given combination of hardware/host OS, etc.. See http://www.hardwarecanucks.com/forum/hardware-canucks-reviews/75030-ecc-memory-amds-ryzen-deep-dive.html Furthermore, for this hardware-bound feature to meet digital preservation requirements, it should also be making all error-correction activity available in an easy to access to log (e.g. each time it makes a data correction on the users behalf). These","timestamp":1503416342591}
{"from":"dat-gitter","message":"limitations have made me reconsider ECC's benefit: (1) cost to purchase specific ECC supporting RAM & HD rather than re-purposed generic consumer hardware (2) technical complexity to configure & monitor correct ECC operation (3) dependence on specific hardware to implement the data integrity checking requirement that is mandatory for all digital preservation systems. If this requirement is met at the software level then it is","timestamp":1503416343559}
{"from":"dat-gitter","message":"portable to other hardware configurations which is pretty much a mandatory requirement for a successful FOSS digipres project (e.g. rather than building an opinionated stack/appliance). I think the final confirmation that I'm looking for to drop ECC hardware support from my design is that I'm pretty sure this feature (auto-correct bitflip/bitrot errors) is kind of baked-in to DAT as part of its core design (as its defining feature","timestamp":1503416343590}
{"from":"dat-gitter","message":"as a matter of fact:","timestamp":1503416344559}
{"from":"dat-gitter","message":"(peterVG) https://github.com/datproject/docs/blob/master/papers/dat-paper.md#21-content-integrity). So while ECC sounds very beneficial for a permanent archival system, it would appear that this is a feature overlap. The only difference is that the error detection wouldn't happen in memory/transit/transaction as with ECC but it would on the very next DAT read/write. As long as this is at least logged and routed to some type of","timestamp":1503416344590}
{"from":"dat-gitter","message":"(even manual) error correction process then this is acceptable for an archival preservation system (while not so much, of course, for a financial transactions server, BTC wallet app, etc).","timestamp":1503416345559}
{"from":"dat-gitter","message":"(peterVG) So I guess I'm looking for a confirmation on my current thinking/design before I make some hardware purchases. I'd welcome any feedback, advice, questions.","timestamp":1503416459758}
{"from":"dat-gitter","message":"(peterVG) i.e. whether DAT negates need for ECC. anyone use DAT for home media/cloud backup?","timestamp":1503416501135}
{"from":"dat-gitter","message":"(peterVG) any experience using DAT on FreeNAS?","timestamp":1503416529996}
{"from":"dat-gitter","message":"(peterVG) can I (shoud I?) install DAT directly on a block storage and forego FreeNAS altogether?","timestamp":1503416693306}
{"from":"dat-gitter","message":"(peterVG) [also apologies for forum-length post in chat room but this seems to be right channel for this question?]","timestamp":1503417028480}
{"from":"pfrazee","message":"@peterVG you could open an issue on https://github.com/datproject/dat, I think that's a pretty good place for it","timestamp":1503418355086}
{"from":"pfrazee","message":"I can't answer for certain but Dat's integrity checks are designed to ensure the network is honest -- not the disk","timestamp":1503418390834}
{"from":"pfrazee","message":"so it's possible there would be cases where it would check the data, write it to disk, and then not check the data again. As a result, ECC may still provide value","timestamp":1503418428569}
{"from":"pfrazee","message":"also - Dat as a protocol is getting mature, but the apps & tools that utilize it are still young, so it'll take some work to setup something like the NAS you want. It could be a really gratifying project, but you'd need to build it yourself I'm fairly sure","timestamp":1503418510073}
{"from":"jhand","message":"tomatopeel: you can use termux on android and that works with dat for now. I did streaming of video to vlc on android and that worked. Not sure if all the pieces are there using node-on-android yet (but may just work?).","timestamp":1503418590715}
{"from":"jhand","message":"@peterVG, sounds cool! I use dat as an rsync replacement sometimes but nothing doing regular backups yet. We also have some work doing at-home public data mirroring, which may be similar https://medium.com/@maxogden/have-fast-wi-fi-help-back-up-the-government-from-your-living-room-f0566d76b79d","timestamp":1503418947690}
{"from":"jhand","message":"AFAIK there is no way to force on-disk verification but I think that could be done manually or added. We are working with libraries who probably also have similar preservation & verification needs to ensure content isn't corrupted long-term.","timestamp":1503419048010}
{"from":"dat-gitter","message":"(peterVG) cool, thanks @pfrazee (love BeakerBrowser BTW!!). it's useful to hear where I stand (not square 1 but also not square 10).","timestamp":1503419057850}
{"from":"pfrazee","message":"@peterVG thanks! Yes I think that's right","timestamp":1503419115538}
{"from":"dat-gitter","message":"(peterVG) @jhand I had not seen that Medium article. pretty much the same track I'm on. I'll document as I go to help move the knowledge/implementation forward. will take Paul's suggestion and open a ticket, can include link to my own wiki/repo.","timestamp":1503419139711}
{"from":"dat-gitter","message":"(peterVG) @jhand dat you Max?","timestamp":1503419161439}
{"from":"jhand","message":"@peterVG awesome! Look forward to seeing what comes out of it.","timestamp":1503419169549}
{"from":"jhand","message":"Nah I'm Joe Hand, Max is ogd =).","timestamp":1503419186839}
{"from":"dat-gitter","message":"(peterVG) sorry :smile:","timestamp":1503419199135}
{"from":"dat-gitter","message":"(peterVG) saw \"we\" and Max's name as author on article","timestamp":1503419221605}
{"from":"jhand","message":"Ah ya. We as in Dat people lol =)","timestamp":1503419258040}
{"from":"dat-gitter","message":"(peterVG) the \"royal\" we. that's how it works in FOSS. like it.","timestamp":1503419276105}
{"from":"dat-gitter","message":"(peterVG) Max and I sat on a panel at Internet Archive last year. Your team culture is a big draw to this project for me.","timestamp":1503419350287}
{"from":"dat-gitter","message":"(peterVG) > ` jhand ` AFAIK there is no way to force on-disk verification but I think that could be done manually or added. We are working with libraries who probably also have similar preservation & verification needs to ensure content isn't corrupted long-term.","timestamp":1503420183775}
{"from":"dat-gitter","message":"(peterVG) Yes, this is the sector I work in (see http://vangarderen.net). Data integrity checking (at rest and in transport) is a mandatory requirement for digital repositories used by archives and libraries (and should be for any system trusted to manage permanent archival storage). After I dumped my thoughts in thread above, I've come to re-affirm that implementing this functionality at the software/repository level is more","timestamp":1503420183845}
{"from":"dat-gitter","message":"robust for a digital archiving system. If the digital collections are dependent on hardware for data integrity and encryption-at-rest then this increases technology lock-in and increases the risk of permanent data loss. Therefore, in the case of a system whose main design feature is long-term accessibility of authentic information objects, it makes more sense to implement these integrity checking/encryption features at the software","timestamp":1503420183876}
{"from":"dat-gitter","message":"level and not depend on specific hardware models/configurations. This allows for easier upgrade/migration of the entire digital repository onto new hardware devices. Of course, building it on DAT is then also making an opinionated technology commitment for the system. However, I don't think that's necessarily a bad thing at the software stack level (after building a FOSS preservation system that tried to be all things to all","timestamp":1503420183877}
{"from":"dat-gitter","message":"people).","timestamp":1503420183877}
{"from":"jhand","message":"@peterVG ah yes, and I should have clarified - \"but I think that could be done manually or added *through Dat*\". Since we already have all the verification code written, seems like we can make it verify via Dat at-rest, not just in transit.","timestamp":1503420345671}
{"from":"dat-gitter","message":"(peterVG) yes, that's my thought too. we have some outdated tools for this in the digipres community. see https://wiki.umiacs.umd.edu/adapt/index.php/Ace:Main","timestamp":1503420452252}
{"from":"dat-gitter","message":"(peterVG) but here's where making a choice to deploy specifically on Dat would give us a hand-up","timestamp":1503420479014}
{"from":"dat-gitter","message":"(peterVG) sounds like we're half-way there to implementing a data-integrity at-rest check feature","timestamp":1503420498011}
{"from":"dat-gitter","message":"(peterVG) which, frankly, should probably be a requirement for research dataset storage as well","timestamp":1503420518666}
{"from":"dat-gitter","message":"(peterVG) Also, I burst into this room in the middle of @tomatopeel 's discussion on streaming direct from Dat. That is obviously related to my own media server requirement. I remember Juan demo-ing direct mp4 streaming as the 'a-ha moment' in his early IPFS presentations. It would be cool to configure something similar for my setup and negate the need for a media-server altogether or maybe add a new plug-in to manage transcoding","timestamp":1503421659896}
{"from":"dat-gitter","message":"requirements per client device (could be outsourced to cloud servers on same Dat whitelist).","timestamp":1503421659969}
{"from":"dat-gitter","message":"(peterVG) anyway. thanks for letting me think aloud here. hopefully some others will stumble across this thread and jump in. in meanwhile I'll move activity over to a Git repo for those that are interested","timestamp":1503421733980}
{"from":"tomatopeel","message":"Do I understand correctly that Dat doesn't duplicate the files like IPFS does (into .ipfs)? Looking in .dat, it just seems to be metadata","timestamp":1503422039746}
{"from":"jhand","message":"tomatopeel: correct, everything is indexed in place.","timestamp":1503422083689}
{"from":"jhand","message":"tomatopeel: oh and on the daemon, the CLI doesn't have anything in it yet. But you could use either the desktop app or https://github.com/mafintosh/hypercored depending on your use case (hypercored doesn't store files as regular files).","timestamp":1503422205457}
{"from":"tomatopeel","message":"and if I'm sharing a directory whose hash is $SOME_HASH and I want to link somebody a subdirectory, they can just fetch them $SOME_HASH/subdirectory ... but only the dir your sharing gets a hash, you can't give them a hash that will resolve to some file in some dir you're sharing, right?","timestamp":1503422243781}
{"from":"tomatopeel","message":"jhand: thanks, ya my use case is just to run a single dat process but to somehow pass some specification on exactly what parts of my filesystem to share","timestamp":1503422282977}
{"from":"jhand","message":"tomatopeel: ya you can only resolve the whole directory for now.","timestamp":1503422347524}
{"from":"jhand","message":"tomatopeel: once hyperdb is ready that'll be a nice use case for it =)","timestamp":1503422375804}
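The behavior jhand describes — only the whole archive resolves, and the client picks out files under `$SOME_HASH/subdirectory` — can be sketched as a prefix filter over an archive's file listing. The plain array here is a hypothetical stand-in for the archive's real metadata feed.

```javascript
// Sketch: given a full archive listing, select the entries a link like
// dat://SOME_HASH/subdirectory would refer to. The listing is a plain
// array stand-in for Dat's real metadata (a hypercore feed).
function resolveSubpath (listing, subpath) {
  const prefix = subpath.replace(/\/$/, '') + '/'
  return listing.filter((p) => p === subpath || p.startsWith(prefix))
}

const listing = [
  '/photos/2017/cat.jpg',
  '/photos/2017/dog.jpg',
  '/docs/readme.md'
]
console.log(resolveSubpath(listing, '/photos/2017'))
// → [ '/photos/2017/cat.jpg', '/photos/2017/dog.jpg' ]
```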
{"from":"tomatopeel","message":"also, just thinking out loud, both ipfs and dat correctly advertise themselves as p2p distributed/decentralized stores (paraphrasing) which is true and everything, but the moment you share (or in ipfs add/pin) something, that data isn't actually going anywhere until you give somebody the hash and they grab your stuff...","timestamp":1503422445948}
{"from":"tomatopeel","message":"I guess the registries kinda solve this problem... there's something with social networks though... and storage...","timestamp":1503422554845}
{"from":"tomatopeel","message":"my problem: where do I back stuff up? I either use the \"cloud\" or I buy hardware that I have to take care of... both of these are expensive... so I want to use my local machine and others' local machines using something like dat/ipfs... but we can't just get everybody doing this for anybody (e.g. what filecoin's direction seems to be)... so if we used our social networks to build little microcommunities","timestamp":1503422625588}
{"from":"tomatopeel","message":"and did clever crypto stuff so we can just use those same communities as the networks that we do the p2p file distribution on... anyway, /rambledream","timestamp":1503422657074}
{"from":"tomatopeel","message":"dat's network is a single monolithic network though, right?","timestamp":1503422673081}
{"from":"tomatopeel","message":"jhand: and what is this hyperdb you speak of? :D","timestamp":1503423167762}
{"from":"tomatopeel","message":"can't find anything on gh","timestamp":1503423212857}
{"from":"tomatopeel","message":"oh, found this https://www.npmjs.com/package/hyperdb","timestamp":1503423231288}
{"from":"tomatopeel","message":"and on gh now https://github.com/mafintosh/hyperdb not sure why google was so bad on this","timestamp":1503423399628}
{"from":"ogd","message":"tomatopeel: i think ipfs strives to be more of a monolithic network than us, dat starts from the point of view that repositories are private and unlisted/undiscoverable publicly, not part of any network. when you go to share it, you expose your discovery key to a discovery network, in our case this means DHT + DNS. then anyone you share your discovery key