
@lisa
Created April 28, 2022 13:36
A native, in-app way to restore files would be far better than needing to navigate to the website and log in just to select files.
The workflow for creating a zip file on the website could be better: I totally get the 500GB limit, but I shouldn't be able to select more than 500GB of files only to be shown that error, and if I somehow do, the site should direct me to create a snapshot from those same files.
The Backblaze Downloader app requires logging in (with painful 2FA!) for each zip file. I should be able to select multiple zip files and snapshots at once. It's not really a Backblaze "downloader" if it can't download all the things.
Snapshots: this whole process is a goddamn mess. I've tried for a week to fetch snapshots (ranging from several at 500GB up to two at over 3.9TB) and it has simply not gone well, which sucks because each attempt costs real dollars.
I originally tried Cyberduck, as suggested on the Backblaze website, and that resulted in incomplete files with no way to resume. I then tried the b2 CLI 3.3.0 with "sync", which produced truncated files. I've also tried download-file-by-name; that works, but at the end the program sits in a bizarre loop doing something with no indication of what's going on (dtruss(1m) shows madvise and lseek syscalls iterating over something, maybe some verification pass?).
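For reference, these are roughly the invocations I've been using; the bucket and file names below are placeholders, not my real ones:

    # authorize once with an application key
    b2 authorize-account <applicationKeyId> <applicationKey>

    # first approach: mirror the whole bucket locally
    b2 sync b2://my-snapshot-bucket /Volumes/Restore/my-snapshot-bucket

    # second approach: fetch a single snapshot by name
    b2 download-file-by-name my-snapshot-bucket snapshot.zip /Volumes/Restore/snapshot.zip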
I'm now trying yet again to use b2 sync over the 10.5TB bucket, and I can see one 3.6TB file sitting at 3.4TB, incomplete. If sync is meant to support resuming, it doesn't seem to work: the old 100GB+ partial files were all truncated when this latest sync started.
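A sanity check I've started doing before each retry, sketched here on the assumption that the CLI's ls --long output is stable across 3.x versions: compare the sizes B2 reports against what's actually on disk.

    # remote sizes as B2 reports them
    b2 ls --long my-snapshot-bucket

    # local sizes for comparison
    ls -l /Volumes/Restore/my-snapshot-bucket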

lisa commented Apr 28, 2022
I forgot:

It would be really handy if there were a way to hand the chunking of files off to Backblaze, because from my point of view the number of chunks doesn't matter as long as there's a relatively painless way to grab and unpack them all. I don't particularly like making these massive files, but the painful UX really drives one to do it this way.
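In the meantime the chunking happens by hand on my side; a minimal sketch of that workaround, with the archive name and chunk size as examples only:

    # split one multi-TB archive into 100GB chunks
    split -b 100000m backup.tar backup.tar.part-

    # upload each chunk with the 3.x CLI
    for part in backup.tar.part-*; do
        b2 upload-file my-snapshot-bucket "$part" "$part"
    done

    # after downloading everything, reassemble in order
    cat backup.tar.part-* > backup.tar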
