
Networking-streams

servers and clients

Any networked computer can be a server. Any networked computer can be a client.

Data is packaged up and sent in little pieces called packets.

tcp vs udp

TCP - reliable transport: has ACKs
UDP - unreliable transport: used to stream audio or video, and for games

protocols

the language that computer programs use to speak to each other

  • HTTP
  • HTTPS
  • SMTP
  • IMAP, POP3
  • IRC
  • FTP
  • SSH - remote shell
  • SSL - low-level secure data transfer (used by HTTPS)

port

Each computer can have many services

A port is a number between 1 and 65535 that differentiates among the services on a system.

  • 21 - ftp
  • 22 - ssh
  • 25 - smtp
  • 80 - http

port and permissions

By default, systems can only listen on ports below 1024 as the root user

servers

A server is a role that a computer can play: any time a program listens for incoming connections, it is acting as a server.

clients

Clients are computer programs that connect to servers; they initiate the connection. Any computer can be a client!

peer to peer

Aside from servers and clients, there is a third role in computer networks: peer. In a peer-to-peer network we don't have clients and servers; every node is both.

netcat

netcat can create tcp and udp connections to servers. You can speak plain text with a server.

netcat server and client

nc -l 5000

then connect to your server in another terminal:

nc localhost 5000

This is a very simple example of how protocols work.

Note that nc handles only one connection at a time.

http

hypertext transfer protocol: it's how web servers and web browsers communicate. It's an easy protocol.

http verbs

  • GET - fetch a document
  • POST - submit a form
  • HEAD - fetch metadata about a document
  • PUT - upload a file

http headers

After the request line come the headers. Each header has a key followed by a colon followed by a value:

GET / HTTP/1.0
Host: google.com
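
Since http is plain text, you can make this request by hand with netcat; type a blank line after the headers to finish the request:

nc google.com 80
GET / HTTP/1.0
Host: google.com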

Node server

const http = require('http');
const parseform = require('body/any');

// parse any kind of form body (urlencoded or json) and log the fields
const server = http.createServer((req, res) => {
  parseform(req, res, (err, params) => {
    console.log(params);
    res.end('Ok\n');
  });
});

server.listen(5000);

http post

Forms in html are often delivered with a POST:

POST /form HTTP/1.1
Host: localhost
Content-Length: 51
Content-Type: application/x-www-form-urlencoded

title=watever&date=1421044443&body=beep%20boop%21

Parse the body

require('querystring').parse('title=watever&date=1421044443&body=beep%20boop%21')
// => { title: 'watever', date: '1421044443', body: 'beep boop!' }

curl

You can also send http requests with the curl command:

curl -s http://substack.net

It will print the content to stdout. If you want the headers, pass the -I argument and curl will return just the headers. The -s flag gets rid of the annoying progress output.

To use curl with other verbs, use -X to set the http verb:

curl -X POST http://localhost:5000 -d title=whatever

Use one -d argument for each value on the form:

curl -X POST http://localhost:5000 -d title=watever -d date=1421044443 -d body=beep%20boop%21

Use -H to set headers.

smtp

smtp is the protocol used to deliver email messages. Here we could send an email from trump@whitehouse.gov to substack@localhost.
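
A sketch of what that conversation could look like over netcat, assuming an smtp server on localhost port 25:

nc localhost 25
HELO localhost
MAIL FROM: <trump@whitehouse.gov>
RCPT TO: <substack@localhost>
DATA
Subject: beep

hello!
.
QUIT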

irc

irc is an ancient text-based chat protocol that is still very popular among programmers. Connect with nc irc.freenode.net 6667 and type:

nick wathevz
user wathevs whatevz irc.freenode.net :whathevz
join #cyberwizard
privmsg #cyberwizard :hack the planet!

text protocols

so far, we've seen a number of text protocols:

  • http
  • smtp
  • irc

These are nice protocols to implement because you can visually inspect the data going over the wire and type requests using these protocols yourself.

binary protocols

With binary protocols, you can't. A good example is ssh: to speak it, we need to use a program like ssh.

inspecting protocols

To inspect protocols we can use wireshark, tcpdump, or tshark.

tcpdump

sudo tcpdump -X

to see each packet with a hexadecimal representation of the data. You can filter to a single port, and -A prints the payload as ASCII instead of hex:

sudo tcpdump 'tcp port 80' -X
sudo tcpdump 'tcp port 80' -A

protocol links

These protocols are specified in RFCs: see the smtp rfc, the irc rfc, and the http rfc.

====

streams

Streams are a handy interface for shuffling data around, for example for compression or transformations.

origins

"We chould have some ways of connecting programs like garden hose--screw in another segment when it becomes necessary to massgae data in another way. This is the way of IO also." By Doug McIlroy. OCtober 11, 1964.

Why streams?

  • we can compose streaming abstractions
  • we can operate on data chunk by chunk

For example, to play a video we can operate on one piece of it at a time instead of loading the whole file first.

composition

Just like in unix, where we can pipe commands together, we can pipe stream abstractions together using .pipe(). A sketch (replace, filter, and linecount stand for user-defined transform streams):

fs.createReadStream('book.txt.gz')
	.pipe(zlib.createGunzip())
	.pipe(replace(/\s+/g, '\n'))
	.pipe(filter(/whale/i))
	.pipe(linecount(console.log))

fs

We can read a file and stream the file contents to stdout:

var fs = require('fs')
fs.createReadStream(process.argv[2])
	.pipe(process.stdout)

We can also transform the data as it passes through with through2:

var fs = require('fs')
var through = require('through2')

fs.createReadStream(process.argv[2])
	.pipe(through(write))
	.pipe(process.stdout)

function write (buf, enc, next) {
	next(null, buf.toString().toUpperCase())
}

In write, buf is the chunk as a Buffer, enc is its encoding, and next is a callback that asks for the next piece of data; you can pass transformed data forward with next(null, buf.toString().toUpperCase()).

We can pass a buffer or a string.

We can also read from standard input with

process.stdin

transform

You can use stream.Transform from node core:

var Transform = require('stream').Transform
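
A sketch of the core-API equivalent of the through example above, using the simplified constructor:

var Transform = require('stream').Transform

var upper = new Transform({
	transform: function (buf, enc, next) {
		next(null, buf.toString().toUpperCase())
	}
})

process.stdin.pipe(upper).pipe(process.stdout)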

through(write, end)

through takes 2 parameters: write and end. Both are optional:

  • function write (buf, enc, next) { this.push(buf); next() }
  • function end () { this.push(null) }

This means that through() with no arguments will pass everything written as input directly through to its output.
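
For example, this sketch is just a passthrough, like cat:

var through = require('through2')
process.stdin.pipe(through()).pipe(process.stdout)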

concat-stream

concat-stream buffers up all the data in the stream; for example, it can take everything from stdin and call back with a single buffer. That uses more memory, since the whole input is held at once.

var concat = require('concat-stream')

process.stdin.pipe(concat(function (body) {
	console.log(body.length)
}))

Here concat collects an http request body so it can be parsed:

var concat = require('concat-stream')
var http = require('http')
var qs = require('querystring')

var server = http.createServer(function (req, res) {
	req.pipe(concat({ encoding: 'string' }, function (body) {
		var params = qs.parse(body)
		console.log(params)
		res.end('ok\n')
	}))
})
server.listen(5000)

This version uses a transform to limit how much of the body it will read:

var concat = require('concat-stream')
var http = require('http')
var qs = require('querystring')
var through = require('through2')

var server = http.createServer(function (req, res) {
	req
		.pipe(counter())
		.pipe(concat({ encoding: 'string' }, onbody))

	function counter () {
		var size = 0
		return through(function (buf, enc, next) {
			size += buf.length
			if (size > 20) next(null, null) // stop reading after 20 bytes
			else next(null, buf)
		})
	}

	function onbody (body) {
		var params = qs.parse(body)
		console.log(params)
		res.end('ok\n')
	}
})
server.listen(5000)

Generally you don't have to worry about closing streams yourself; whether you need to depends on the details of the stream you are working with.

stream types

  • readable - produces data: you can pipe FROM it: readable.pipe(A)
  • writable - consumes data: you can pipe TO it: A.pipe(writable)
  • transform - consumes data, producing transformed data: A.pipe(transform).pipe(B)
  • duplex - consumes data separately from producing it, e.g. a bidirectional network protocol: A.pipe(duplex).pipe(A)

writable stream methods

  • .write(buf)
  • .end()
  • .end(buf)
  • .on('finish', function() {})

var fs = require('fs')
var w = fs.createWriteStream('cool.txt')

w.on('finish', function() {
	console.log('FINISHED')
})

w.write('hi\n')
w.write('wow\n')
w.end()

readable stream methods

  • stream.pipe()
  • stream.on('end', () => {})

You probably won't need to use these methods:

  • stream.read()
  • stream.on('readable', () => {})

var fs = require('fs')
var r = fs.createReadStream('cool.txt')
r.pipe(process.stdout)

readable paused mode

The default behavior, with automatic backpressure: the stream only produces data as fast as the consumer asks for it. In a through stream, calling next(null, null) stops producing data.

readable flowing mode

data is consumed as soon as chunks are available (no backpressure)

turn on flowing mode with

  • stream.resume()
  • stream.on('data', function (buf) {})
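
Attaching a 'data' listener is enough to switch a stream into flowing mode; a minimal sketch:

process.stdin.on('data', function (buf) {
	console.log('chunk: ' + buf.length + ' bytes')
})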

transform

readable + writable stream where:

input -> transform -> output

Place it between two other streams.

Duplex

Use a duplex stream when you have a separate readable stream and a separate writable stream and you need to glue them together.

a.pipe(stream).pipe(a)

This is an example of an echo server:

var net = require('net')
net.createServer(stream => {
	stream.pipe(stream)
}).listen(5000)

And here is a proxy for the echo server created above:

var net = require('net')
net.createServer(stream => {
	stream.pipe(net.connect(5000, 'localhost')).pipe(stream)
}).listen(5001)

vpn.js

var net = require('net')
var crypto = require('crypto')
var pw = 'abc123'

net.createServer(stream => {
	stream
		.pipe(crypto.createDecipher('aes192', pw))
		.pipe(net.connect(5000, 'localhost'))
		.pipe(crypto.createCipher('aes192', pw))
		.pipe(stream)
}).listen(5001)

vpn-client.js

var net = require('net')
var crypto = require('crypto')
var pw = 'abc123'


var stream = net.connect(5001, 'localhost')
process.stdin
	.pipe(crypto.createCipher('aes192', pw))
	.pipe(stream)
	.pipe(crypto.createDecipher('aes192', pw))
	.pipe(process.stdout)
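
To try it out: run the echo server on port 5000, then vpn.js, then vpn-client.js; whatever you type into the client travels encrypted over the port 5001 connection and comes back echoed.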

object streams

Normally you can only read and write buffers and strings with streams. However, if you initialize a stream in objectMode, you can use any kind of object (except null, which signals the end of the stream).

var through = require('through2')
var size = 0
process.stdin
	.pipe(through.obj(write1)) // same as through({ objectMode: true }, write1)
	.pipe(through.obj(write2, end))
	
function write1 (buf, enc, next) {
	next(null, { length: buf.length })
}

function write2 (obj, enc, next) {
	size += obj.length
	next()
}

function end () {
	console.log('size=', size)
}
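
Running it (assuming the sketch above is saved as size.js):

echo -n 'beep boop' | node size.js
size= 9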

core streams in node

  • fs.createReadStream()
  • fs.createWriteStream()
  • process.stdin, process.stdout, process.stderr
  • ps.stdin, ps.stdout, ps.stderr (streams on a spawned child process, as below)
  • net.connect(), tls.connect()
  • net.createServer(stream => {})
  • tls.createServer(obj, stream => {})

var spawn = require('child_process').spawn

var ps = spawn('grep', ['potato'])

ps.stdin.write('cheese\n')
ps.stdin.write('carrots\n')
ps.stdin.write('carrot potatoes\n')
ps.stdin.write('potato!\n')
ps.stdin.end()
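
Note that nothing prints yet: ps.stdout isn't piped anywhere. Add ps.stdout.pipe(process.stdout) to see the lines that grep matched.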

http core streams

// req: readable, res: writable
http.createServer((req, res) => {})
var req = http.request(opts, res => {})

A server that streams a file back for GET requests and echoes POST bodies to stdout:

var http = require('http')
var fs = require('fs')

var server = http.createServer((req, res) => {
	if (req.method === 'POST') {
		req.pipe(process.stdout)
		req.once('end', () => res.end('ok\n'))
	} else {
		res.setHeader('Content-Type', 'text/plain')
		fs.createReadStream('hello.txt')
			.pipe(res)
	}
})

server.listen(5000)

POST client

var http = require('http')
var req = http.request({ method: 'POST', host: 'localhost', port: 5000, path: '/' }, res => {
 console.log(res.statusCode)
 res.pipe(process.stdout)
})
req.end('HELLO\n')

GET client

var http = require('http')
var req = http.request({ method: 'GET', host: 'localhost', port: 5000, path: '/' }, res => {
 console.log(res.statusCode)
 res.pipe(process.stdout)
})
req.end()

crypto core streams

  • crypto.createCipher
  • crypto.createDecipher
  • crypto.createCipheriv
  • crypto.createDecipheriv

var createHash = require('crypto').createHash

process.stdin
  .pipe(createHash('sha512', { encoding: 'hex' }))
  .pipe(process.stdout)

Both of these commands print the same digest:

echo -n abcd | node hash.js; echo
echo -n abcd | shasum -a 512

zlib core streams

  • zlib.createGzip
  • zlib.createGunzip
  • zlib.createDeflate
  • zlib.createDeflateRaw

var zlib = require('zlib')

process.stdin
	.pipe(zlib.createGunzip())
	.pipe(process.stdout)

You can chain them, e.g. gunzip then hash:

var createGunzip = require('zlib').createGunzip
var createHash = require('crypto').createHash

process.stdin
	.pipe(createGunzip())
	.pipe(createHash('sha512', { encoding: 'hex' }))
	.pipe(process.stdout)

split2

split input on newlines

This program counts the number of lines of input, like wc -l

var split = require('split2')
var through = require('through2')

var count = 0
process.stdin
	.pipe(split())
	.pipe(through(write, end))
	
function write(buf, enc, next) {
	count++
	next()
}

function end(next) {
	console.log(count)
	next()
}

websocket-stream

streaming websockets in node and the browser

socket-server.js

var http = require('http')
var ecstatic = require('ecstatic')
var through = require('through2')

var server = http.createServer(ecstatic(__dirname + '/public'))
server.listen(5000)

var wsock = require('websocket-stream')
wsock.createServer({server: server}, function(stream) {
	// stream is a duplex stream
	stream
		.pipe(loud())
		.pipe(stream)
})

function loud() {
	return through(function (buf, enc, next) {
		next(null, buf.toString().toUpperCase())
	})
}

The browser side (bundled into the bundle.js that the page loads, e.g. with browserify):

var wsock = require('websocket-stream')
var through = require('through2')
var html = require('yo-yo')

var stream = wsock('ws://' + location.host)
var root = document.body.appendChild(document.createElement('div'))
var output = []
update()

stream.pipe(through((buf, enc, next) => {
	output.push(buf.toString())
	update()
	next()
}))

function update () {
	html.update(root, html`<div>
		<form onsubmit=${onsubmit}>
			<input type="text" name="msg">
		</form>
		<pre>${output.join('')}</pre>
	</div>`)
	
	function onsubmit(ev) {
		ev.preventDefault()
		stream.write(this.elements.msg.value + '\n')
		this.reset()
	}
}

The html page served from public/:

<body><script src="bundle.js"></script></body>

collect-stream

collect a stream's output into a single buffer; for object streams, collect the output into an array of objects

useful for unit tests
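
A sketch of how it could be used, assuming the collect(stream, cb) signature:

var collect = require('collect-stream')
var from = require('from2')

var messages = ['hello ', 'world\n', null]
var stream = from(function (size, next) {
	next(null, messages.shift())
})

// collect buffers the whole stream, then calls back once with the result
collect(stream, function (err, data) {
	if (err) return console.error(err)
	console.log(data.toString()) // hello world
})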

from2

create a readable stream with a pull function; passing null ends the stream:

var from = require('from2')
var messages = ['hello', 'world\n', null ]

from(function (size, next) {
	next(null, messages.shift())
}).pipe(process.stdout)

to2

create a writable stream with a write and flush function

var to = require('to2')
var split = require('split2')

// log the length of each line (the write function receives data, enc, next)
process.stdin.pipe(split()).pipe(to(function (buf, enc, next) {
	console.log(buf.length)
	next()
}))

duplexify

duplexify lets you define a stream where you have to do some setup first; maybe you have to mkdir before writing into that directory:

var duplexify = require('duplexify')
var mkdirp = require('mkdirp')
var fs = require('fs')

module.exports = function (name) {
	var d = duplexify()
	mkdirp('logs', function (err) {
		var w = fs.createWriteStream('logs/' + name + '.log')
		d.setWritable(w)
	})
	return d
}

Using it (a sketch; './api.js' is the module above):

var log = require('./api.js')

var stream = log('mylog') // 'mylog' is an arbitrary log name
var n = 0

var i = setInterval(() => {
	stream.write(Date.now() + '\n')
	if (n++ == 5) {
		clearInterval(i)
		stream.end()
	}
}, 100)

pump

If an error event has no listener attached, node will crash by default. pump handles error propagation across all the streams in a pipeline, so a stream error doesn't crash the node process or server.
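
A sketch of pump wiring up a pipeline with a single error callback:

var pump = require('pump')
var fs = require('fs')
var zlib = require('zlib')

// like chaining .pipe(), but errors from any stream land in the callback
pump(
	fs.createReadStream(process.argv[2]),
	zlib.createGunzip(),
	process.stdout,
	function (err) {
		if (err) console.error('pipeline failed:', err.message)
	}
)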

pumpify

pumpify does the same error handling, but instead of just handling errors it combines the pipeline into a single duplex stream.
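
A sketch: gunzip plus upper-casing packaged as one duplex stream:

var pumpify = require('pumpify')
var zlib = require('zlib')
var through = require('through2')

// write gzipped data in, read upper-cased text out
var stream = pumpify(
	zlib.createGunzip(),
	through(function (buf, enc, next) {
		next(null, buf.toString().toUpperCase())
	})
)

process.stdin.pipe(stream).pipe(process.stdout)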

end-of-stream

It's surprisingly difficult to know when a stream finishes; this module makes it simple:

var onend = require('end-of-stream')
var net = require('net')

var server = net.createServer(stream => {
	var iv = setInterval(() => stream.write(Date.now() + '\n'), 1000)
	onend(stream, () => clearInterval(iv))
})
server.listen(5000)

rpc-stream

call methods defined by a remote endpoint.

server

var net = require('net')
var rpc = require('rpc-stream')

net.createServer(stream => {
	stream.pipe(rpc({
		hello: function (name, cb) {
			cb(null, 'howdy ' + name)
		}
	})).pipe(stream)
}).listen(5000)

client

var net = require('net')
var rpc = require('rpc-stream')

var client = rpc()
client.pipe(net.connect(5000)).pipe(client)

var remote = client.wrap(['hello'])

remote.hello(process.env.USER, function(error, msg) {
	console.log(msg)
	client.end()
})

Sometimes this is useful.

multiplex

pack multiple streams into a single stream.

var net = require('net')
var multiplex = require('multiplex')
var rpc = require('rpc-stream')
var fs = require('fs')

net.createServer(stream => {
	var plex = multiplex()
	stream.pipe(plex).pipe(stream)
	var client = rpc({
		read: (name, cb) => {
			if (!/^\w+$/.test(name)) {
				return cb(new Error('File not allowed'))
			}
			
			var r = fs.createReadStream('files/' + name)
			r.on('error', cb)
			r.pipe(plex.createStream('file-' + name))
			cb(null)
		}
	})
	client.pipe(plex.createSharedStream('rpc')).pipe(client)
}).listen(5000)
var net = require('net')
var multiplex = require('multiplex')
var rpc = require('rpc-stream')

var plex = multiplex((stream, id) => {
	if (/^file-/.test(id)) {
		console.log('received: ' + id)
		stream.pipe(process.stdout)
	}
})

plex.pipe(net.connect(5000)).pipe(plex)

var client = rpc()
client.pipe(plex.createSharedStream('rpc')).pipe(client)

var remote = client.wrap(['read'])
remote.read(process.argv[2], err => {
	if (err) console.error(err)
})
