I want to lazily process an input file, which contains tokens separated by whitespace.
The naive way is to let Elixir hand me lines, then split each line on whitespace:

def tokenize_lines(enum) do
  enum |> Stream.flat_map(&String.split/1)
end
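For context, a minimal usage sketch (the filename and the surrounding pipeline are illustrative assumptions, not part of the original):

```elixir
# Hypothetical usage: lazily stream lines from a file, then split each
# line into whitespace-separated tokens. "input.txt" is an assumed name.
# Because String.split/1 splits on any whitespace and discards it, the
# trailing newline on each line is handled for free.
"input.txt"
|> File.stream!()
|> Stream.flat_map(&String.split/1)
|> Enum.take(10)
```

Nothing is read from disk until `Enum.take/2` (or some other eager consumer) demands values, which is the point of building the pipeline from `Stream` functions.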
Break input data into chunks and use 'dmtxwrite' to create a DataMatrix 2D barcode for each chunk. Dump all these into an HTML page for easy printing. The size of the chunk depends on the target size of the barcodes; see

Intended for making hard (paper) copies of small but highly important data, like GPG keys, that can be retrieved faster and more reliably (one hopes) than
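The chunk-and-encode step might be sketched like this (the chunk size, filenames, and the availability check are my assumptions, not the original script's values; `dmtxwrite` is the encoder from libdmtx):

```shell
#!/bin/sh
# Sketch: split an input file into fixed-size chunks, then render each
# chunk as a DataMatrix barcode with dmtxwrite when it is installed.
printf 'example secret payload for chunking' > payload.bin

# 16-byte chunks are an arbitrary illustration; the real size would be
# chosen from the target barcode size.
split -b 16 payload.bin chunk_

for f in chunk_*; do
  if command -v dmtxwrite >/dev/null 2>&1; then
    dmtxwrite -o "$f.png" < "$f"   # one barcode image per chunk
  fi
done

# The PNGs would then be embedded in an HTML page for printing.
ls chunk_*
```

With a 35-byte payload and 16-byte chunks, `split` produces `chunk_aa`, `chunk_ab`, and a 3-byte `chunk_ac`.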
This is an unmodified selection of code I wrote for, and under contract to, AT&T, so AT&T holds all copyright.
AT&T released it to the public under the terms of the Apache 2.0 License (as described in the license headers in the source code) as part of a larger body of work.
I retrieved these copies from AT&T's public repositories on 1/9/2018, after I was no longer under contract to them. I am hosting them here in compliance with the terms of that license. The links below point to AT&T's public repositories.
This is a list of the common and divergent configuration options for django-storages' boto backends.
Created to cope with documentation that reads:
"Available are numerous settings. It should be especially noted the following:"
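As an illustration, here is a minimal Django settings fragment for the boto3 backend. The values are placeholders; only the setting names come from django-storages' documentation:

```python
# Django settings sketch for django-storages' s3boto3 backend.
# Values below are placeholders, not recommendations.
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"

AWS_ACCESS_KEY_ID = "..."        # often omitted so boto3's own credential lookup applies
AWS_SECRET_ACCESS_KEY = "..."
AWS_STORAGE_BUCKET_NAME = "my-bucket"
AWS_S3_REGION_NAME = "us-east-1"
AWS_QUERYSTRING_AUTH = False     # serve plain URLs instead of signed query strings
```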
I want some tools to help me manage my photos and videos, conforming to some of my particular requirements.
I've already begun tinkering with some of my own, but with every line of code I think, "Surely someone must have done this already." Have you?