Practicing with Node.js buffers & streams
// Node practice 09/25/18
/*
---------------------------Buffers --------------------------------------
Pure JavaScript is Unicode friendly, but it is not so for binary data.
While dealing with TCP streams or the file system, it's necessary to
handle octet streams. Node provides a Buffer class which provides instances
to store raw data similar to an array of integers but corresponds to a
raw memory allocation outside the V8 heap.
the Buffer class is a global class that can be accessed in an application
without importing the buffer module.
*/
/* Wikipedia
A bitstream (or bit stream), also known as binary sequence, which is a sequence of bits.
A bytestream is a sequence of bytes. Typically, each byte is a 8-bit quantity (octet),
and so the term octet stream is sometimes used interchangeably. An octet may be
encoded as a sequence of 8 bits in multiple different ways (see endianness) so
there is no unique and direct translation between bytestreams and bitstreams.
Bitstreams and bytestreams are used extensively in telecommunications and computing.
For example, synchronous bitstreams are carried by SONET,
and Transmission Control Protocol transports an asynchronous bytestream.
The Internet media type for an arbitrary bytestream is application/octet-stream.
Look up Internet Media Type, goes in detail about MIME, two-part identifier for file formats and format contents transmitted on the Internet.
*/
//-------------------------- Creating Buffers
// ex1. zero-filled Buffer of 10 octets (Buffer.alloc replaces the deprecated new Buffer(size))
var buf = Buffer.alloc(10);
console.log(buf); //<Buffer 00 00 00 00 00 00 00 00 00 00>
// ex2. Creating Buffer from given array
var buf2 = Buffer.from([10, 20, 30, 40, 50]);
console.log(buf2); // <Buffer 0a 14 1e 28 32>
// ex3. Buffer from a given string and optional encoding ('utf-8' is normally accepted as an alias of 'utf8')
// options: ascii, utf8, utf16le, ucs2, base64, hex
var buf3 = Buffer.from("Simply Easy Learning", "ascii");
console.log(buf3) // depends on option, <Buffer 53 69 6d 70 6c 79 20 45 61 73 79 20 4c 65 61 72 6e 69 6e 67>
// with base 64 <Buffer 4a 29 a9 97 21 1a b3 22 de 6a b9 e2 9e>
// with utf16le <Buffer 53 00 69 00 6d 00 70 00 6c 00 79 00 20 00 45 00 61 00 73 00 79 00 20 00 4c 00 65 00 61 00 72 00 6e 00 69 00 6e 00 67 00>
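// A quick sketch of the three modern (non-deprecated) creation APIs:
// Buffer.alloc zero-fills, Buffer.allocUnsafe skips the fill (faster, but may
// expose old memory, so fill it yourself), and Buffer.from copies its input.
var safe = Buffer.alloc(4);       // <Buffer 00 00 00 00>
var fast = Buffer.allocUnsafe(4); // contents are whatever was in memory
fast.fill(0);                     // zero it manually before use
var fromArr = Buffer.from([1, 2, 3]);
console.log(safe, fast, fromArr);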
//-------------------------- Writing to Buffers
// ex1
var buf4 = Buffer.alloc(10); // zero-filled Buffer of 10 octets
var len = buf4.write("Simply Easy Learning");
console.log("Octets written : "+ len); // Octets written : 10
// ex2
var buf5 = Buffer.alloc(256);
var len2 = buf5.write("Simply Easy Learning");
console.log("Octets written : "+ len2); // Octets written : 20 -- the string is 20 chars, so 236 octets are still free
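// buf.write also takes offset and length arguments, which is handy for
// packing several strings into one buffer; a small sketch (names made up):
var packed = Buffer.alloc(12);
packed.write("Hello");    // writes at offset 0
packed.write("World", 6); // writes starting at offset 6
console.log(packed.toString('utf8', 0, 5));  // Hello
console.log(packed.toString('utf8', 6, 11)); // World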
//-------------------------- Reading from Buffers
// ex1
var buffer1 = Buffer.alloc(26);
for (var i = 0; i < 26; i++) {
  buffer1[i] = i + 97; // 97 is 'a' in ASCII
}
// got a buffer array of numbers [97...122]
// buffers are raw data, which is why I can massage that data to my liking like down below
console.log(buffer1)// <Buffer 61 62 63 64 65 66 67 68 69 6a 6b 6c 6d 6e 6f 70 71 72 73 74 75 76 77 78 79 7a>
console.log( buffer1.toString('ascii')); // outputs: abcdefghijklmnopqrstuvwxyz
console.log( buffer1.toString('ascii',0,5)); // outputs: abcde
console.log( buffer1.toString('utf8',0,5)); // outputs: abcde
console.log( buffer1.toString(undefined,0,5)); // encoding defaults to 'utf8', outputs abcde
console.log( buffer1.toString('hex',0,5)); // outputs: 6162636465
console.log( buffer1.toString('base64')); // outputs: YWJjZGVmZ2hpamtsbW5vcHFyc3R1dnd4eXo=
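// A base64 string like the one above can be decoded back into the original
// bytes with Buffer.from(str, 'base64') -- a full encode/decode round trip:
var b64 = Buffer.from('abcde').toString('base64'); // YWJjZGU=
var back = Buffer.from(b64, 'base64');
console.log(back.toString()); // abcde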
//-------------------------- Convert Buffer to JSON
// ex1
var buffer2 = Buffer.alloc(5);
for (var i = 0; i < 5; i++) {
  buffer2[i] = i + 97;
}
console.log(buffer2.toString('utf8')) // abcde -- 5 letters as expected (utf8 is the default)
console.log(buffer2.toJSON()) // { type: 'Buffer', data: [ 97, 98, 99, 100, 101 ] }
//ex2
var bufTest = Buffer.from('Simple'); // will be 6 octets
var json = bufTest.toJSON(); // toJSON takes no arguments
console.log(json); // { type: 'Buffer', data: [ 83, 105, 109, 112, 108, 101 ] }
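// toJSON is what JSON.stringify uses under the hood, and the resulting
// { type, data } shape can be revived back into a Buffer with
// Buffer.from(obj.data):
var asJson = JSON.stringify(Buffer.from('Simple')); // '{"type":"Buffer","data":[83,...]}'
var parsed = JSON.parse(asJson);
var revived = Buffer.from(parsed.data);
console.log(revived.toString()); // Simple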
//-------------------------- Compare Buffers
// Good for indicating whether a Buffer comes before or after or is the same
// as the other Buffer in sort order
// ex1
var buffer101 = Buffer.from('ABC');
var buffer102 = Buffer.from('ABCD');
var result = buffer101.compare(buffer102);
console.log(result) // -1
if (result < 0) {
  console.log(buffer101 + " comes before " + buffer102); // true: ABC comes before ABCD
} else if (result == 0) {
  console.log(buffer101 + " is the same as " + buffer102); // false
} else {
  console.log(buffer101 + " comes after " + buffer102); // false
}
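// The static Buffer.compare(a, b) has the same -1/0/1 contract, which makes
// it usable directly as an Array.prototype.sort comparator:
var arr = [Buffer.from('BCD'), Buffer.from('ABCD'), Buffer.from('ABC')];
arr.sort(Buffer.compare);
console.log(arr.map(function (b) { return b.toString(); })); // [ 'ABC', 'ABCD', 'BCD' ]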
//-------------------------- Concatenate Buffers
// ex1. I like this one, piecing the buffers together
var buffer1111 = Buffer.from('TutorialsPoint ');
var buffer2111 = Buffer.from('Simply Easy Learning');
var buffer3111 = Buffer.concat([buffer1111, buffer2111]);
console.log("buffer3 content: " + buffer3111.toString());
// Result: buffer3 content: TutorialsPoint Simply Easy Learning
//-------------------------- Copy Buffers
// ex1.
var buffer1000 = Buffer.from('ABC');
// copy a buffer
var buffer2000 = Buffer.alloc(3); // target must be big enough for the copied octets
buffer1000.copy(buffer2000);
console.log("buffer2 content: " + buffer2000.toString()); // buffer2 content: ABC
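// copy also accepts targetStart/sourceStart/sourceEnd, so a range of one
// buffer can be spliced into the middle of another (names made up):
var vowels = Buffer.from('aeiou');
var row = Buffer.from('----------');
vowels.copy(row, 2, 1, 4); // copy 'eio' into row at offset 2
console.log(row.toString()); // --eio-----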
//-------------------------- Slice Buffers
// ex1.
var buffer1022 = Buffer.from('TutorialsPoint');
// slicing a buffer
var buffer2022 = buffer1022.slice(0, 9);
console.log("buffer2 content: " + buffer2022.toString());
// buffer2 content: Tutorials
// Note: this isn't resizing -- slice() returns a new Buffer that is a view onto the same memory; the original keeps its length.
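// Proof that a slice is a view rather than a copy: mutating the slice
// mutates the parent buffer too, since both reference the same memory.
var parent = Buffer.from('Tutorials');
var view = parent.slice(0, 3);
view[0] = 0x58; // 'X'
console.log(parent.toString()); // Xutorials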
//-------------------------- Buffer Length
// ex1.
var buffa = Buffer.from('TutorialsPoint');
//length of the buffer
console.log("buffer length: " + buffa.length); // buffer length: 14
// other METHODS Reference
console.log( Buffer.isBuffer(buffa) ) // true
console.log( Buffer.byteLength(buffa + 1) ) // 15 -- buffa + 1 coerces the buffer to the string "TutorialsPoint1"
console.log( Buffer.isEncoding(buffa) ) // false -- isEncoding expects an encoding name (e.g. 'utf8'), not a buffer
console.log( Buffer.isEncoding('utf8') ) // true
// and many others refer to docs
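// One more byteLength gotcha worth noting: a string's .length counts
// characters, while Buffer.byteLength counts encoded bytes -- they differ
// for multibyte characters:
var s = '€'; // one character, three bytes in utf8
console.log(s.length);                     // 1
console.log(Buffer.byteLength(s, 'utf8')); // 3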
// 09/25/18
const fs = require('fs');
var data = '';
// what are Streams?
// Streams are objects that let you read data from a source
// or write data to a destination in continuous fashion.
// In Node.js, there are four types of streams
// 1.Readable, 2.Writable,
// 3.Duplex (RW),
// 4.Transform (where output is computed based on input)
// Each type of Stream is an EventEmitter, common used events:
// data - fired when there is data to read
// end - fired when no more data is left to read
// error - fired when R/W have an error
// finish - fired when data has been flushed
// -------------------- Reading
// Create a readable stream -- needs an existing file; in this case I've created input.txt
var readerStream = fs.createReadStream('input.txt');
readerStream.setEncoding('UTF8');
// Handle stream events --> data, end, and error
readerStream.on('data', function(chunk) {
  data += chunk;
});
readerStream.on('end', function() {
  console.log('async log 1 : reading done');
});
readerStream.on('error', function(err) {
  console.log(err.stack);
});
console.log("sync log 1 : reading Program Ended");
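// Modern alternative to the 'data'/'end' listeners: readable streams are
// async iterable (Node 10+). This sketch writes its own temp file so it
// doesn't depend on input.txt (file name made up):
var os = require('os');
var path = require('path');
var tmpIn = path.join(os.tmpdir(), 'stream-practice.txt');
fs.writeFileSync(tmpIn, 'hello streams');
var collected = '';
(async function () {
  for await (const chunk of fs.createReadStream(tmpIn, 'utf8')) {
    collected += chunk;
  }
  console.log(collected); // hello streams
})();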
// -------------------- Writing
// doesn't need an existing file, because it'll create one
var data1 = 'Simply Easy Learning';
// Create a writable stream
var writerStream = fs.createWriteStream('output.txt');
// Write the data to stream with encoding
writerStream.write(data1 + " plus same string in base 64: ",'ascii');
writerStream.write(data1,'base64'); // careful: 'base64' here means the *input* string is treated as base64 and decoded to bytes -- to write base64 output, use Buffer.from(data1).toString('base64')
// Mark the end of file
writerStream.end();
// Handle stream events --> finish, and error
writerStream.on('finish', function() {
  console.log("async log 2 : Write completed.");
});
writerStream.on('error', function(err) {
  console.log(err.stack);
});
console.log("sync log 2: writing Program Ended");
// ------------------------ Piping the Streams
// Piping is a mechanism where we provide the output
// of one stream, as the input to another stream.
// normally used to take data from one stream and pass it straight to another
// Create a readable stream
var readerStream = fs.createReadStream('input.txt');
// Create a writable stream
var writerStream = fs.createWriteStream('output2.txt');
// Pipe the read and write operations
// read input.txt and write its data to output2.txt
readerStream.pipe(writerStream);
console.log("sync log 3: Piping Streams Program Ended");
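// pipe() alone doesn't forward errors between the streams in a chain; the
// stream module's pipeline() (Node 10+) wires up error handling and cleanup
// for the whole chain. This sketch writes its own source file so it runs
// standalone (file names made up):
var os = require('os');
var path = require('path');
var { pipeline } = require('stream');
var pipeSrc = path.join(os.tmpdir(), 'pipe-src.txt');
var pipeDst = path.join(os.tmpdir(), 'pipe-dst.txt');
fs.writeFileSync(pipeSrc, 'pipeline demo');
pipeline(fs.createReadStream(pipeSrc), fs.createWriteStream(pipeDst), function (err) {
  if (err) console.error('pipeline failed:', err);
  else console.log('pipeline succeeded');
});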
// ------------------------ Chaining the Streams
// normally used with piping operations.
// Now we'll use piping and chaining to first compress a
// file and then decompress the same.
var zlib = require('zlib');
fs.createReadStream('input.txt')
.pipe(zlib.createGzip())
.pipe(fs.createWriteStream('input.txt.gz'));
console.log("sync log 4 : File Compressed, Chaining the Streams.");
// hmmm not so fast! I just checked the size and the .gz is actually
// bigger: the original is 95 bytes and the gz is 97. That's expected for
// tiny files -- gzip adds a fixed header and trailer (roughly 18 bytes),
// so very small inputs can come out larger.
// Now let's try to decompress the same file using the following code --
// this crashed with "Error: unexpected end of file" (Gunzip.zlibOnError),
// because the decompress chain runs while the gzip chain above is still
// writing input.txt.gz (both pipes are asynchronous).
// Decompress input.txt.gz to input2.txt -- commented out: it must only run
// after the gzip stream above has finished, or it crashes as noted.
// fs.createReadStream('input.txt.gz')
//   .pipe(zlib.createGunzip())
//   .pipe(fs.createWriteStream('input2.txt'));
// console.log("File Decompressed.");
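// Fix for the crash above: the two chains race -- the gunzip starts reading
// the .gz file before the gzip chain has finished writing it. Waiting for
// the write stream's 'finish' event sequences compress-then-decompress
// correctly (separate .gz name here so it doesn't collide with the chain
// above; the existsSync guard just lets this sketch run standalone):
if (!fs.existsSync('input.txt')) fs.writeFileSync('input.txt', 'sample input');
var gzOut = fs.createWriteStream('input-fixed.txt.gz');
fs.createReadStream('input.txt').pipe(zlib.createGzip()).pipe(gzOut);
gzOut.on('finish', function () {
  fs.createReadStream('input-fixed.txt.gz')
    .pipe(zlib.createGunzip())
    .pipe(fs.createWriteStream('input2.txt'));
});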