Many ideas were inspired by and taken from these documents:
Some major changes:
- Process instantiation and launch should require 2 separate steps. Separating the lifecycle allows us to avoid timing and race conditions when connecting event listeners, and also lets us differentiate async vs. sync behavior (see the sketch after this list).
- We should be able to attach a custom environment to each process. The environment functions we have at the top-level should be moved into a per-process API.
- stdout, stdin, and stderr should all represent 1-way pipes and be instantiable
- Event listeners for data should be per-pipe
- The exit event listener will remain on the process object
- Simple support of synchronous processes using process-as-a-function. Would return the entire process output as a join of stdout and stderr.
- PID and restart functions should be a part of every process. We need to have a "current process" object.
- terminate should be moved to a more generic signal-sending function, with a convenience "kill" that would send SIGINT to the process
- Pipes should be more transparently interconnected, using attach, join, and split functions
- If/when there is an error launching an asynchronous process, we currently have no way of communicating this failure to the developer. We need a new "launch error" event
- Decide on a simple IO duck-type, so we can re-implement it for other IO objects such as Files, SocketStreams, etc. later.
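A rough sketch of how these pieces fit together, using the API proposed below (illustrative only, not final syntax):
// Step 1: instantiate. Nothing runs yet, so event listeners can be attached without races.
var lister = Process.createProcess(['/bin/ls', '-l']);
lister.addEventListener(function(event, data) {
    if (event == Process.EXIT) {
        // data => exit code
    }
});
// Step 2: launch asynchronously...
lister.launch();
// ...or, for synchronous behavior, call a process object as a function and
// get the joined stdout/stderr output back directly:
var syncList = Process.createProcess(['/bin/ls', '-l']);
var output = syncList();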
API proposal:
createPipe(type)
- type => Process.PIPE_IN or Process.PIPE_OUT
- returns a Pipe object
- Example
var stdin = Process.createPipe(Process.PIPE_OUT);
createStandardPipes()
- return an object with all 3 standard pipes: stdin, stdout, stderr. stdin will be created with Process.PIPE_OUT; stdout and stderr will be created with Process.PIPE_IN
- Example
var pipes = Process.createStandardPipes(); // pipes.stdin, pipes.stdout, pipes.stderr
createProcess(options) or createProcess(args, [env, pipes|stdin, stdout, stderr])
- options (all except args are optional):
- args => array of arguments or a string (the executable is the first argument; no quoting should be needed; defaults to 0 additional args)
- env => object/hash of custom environment (default inherits this environment)
- stdin => stdin pipe (created automatically otherwise)
- stdout => stdout pipe (ditto)
- stderr => stderr pipe (ditto)
- pipes => a pipes object as returned by createStandardPipes
- args/env/pipes/stdin/stdout/stderr => same as above, but as inline args to the function instead of options
- Example
var env = {'PATH': '/home/marshall/bin'};
var args = ['/bin/ls', '-l'];
var pipes = Process.createStandardPipes();
var process = Process.createProcess(args, env, pipes);
// or
var process = Process.createProcess({
    args: args,
    env: env,
    stdin: pipes.stdin,
    stdout: pipes.stdout,
    stderr: pipes.stderr
});
getCurrentProcess()
- return the currently running process.
Process()
- not a constructor, but a usage semantic. When calling a process object as a function, the execution should be synchronous
- Example:
var listFiles = Process.createProcess(['ls', '-l']);
var files = listFiles();
getPID()
- return the process ID of this process
getPipes()
- return an object with stdin, stdout, and stderr pipes
getPipe(pipe)
- pipe => Process.STDIN, Process.STDERR, Process.STDOUT
- return the designated pipe
getEnvironment([name])
- return this process' environment as an object
- if name is provided, return only the value of that variable
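- Example (a sketch of using the current-process accessors together; return shapes follow the descriptions above)
var current = Process.getCurrentProcess();
var pid = current.getPID();                 // numeric process ID
var env = current.getEnvironment();         // full environment as an object/hash
var path = current.getEnvironment('PATH');  // value of a single variable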
addEventListener(listener)
- add an event listener to this process
- listener => function(event, data) {}
- event => Process.EXIT, Process.STARTED, Process.ERROR
- data => event-specific data. EXIT => process exit code, ERROR => error message
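- Example (sketch; attach listeners before launch() so no events are missed)
var p = Process.createProcess(['/bin/ls', '-l']);
p.addEventListener(function(event, data) {
    if (event == Process.STARTED) {
        // the process launched successfully
    } else if (event == Process.EXIT) {
        var exitCode = data;    // exit code
    } else if (event == Process.ERROR) {
        var message = data;     // error message
    }
});
p.launch();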
launch()
- launch this process asynchronously
restart([pipes,env])
- kill the currently running process (if it is running), and restart this process asynchronously
- this function should clone current pipes and environment, or accept new ones
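- Example (sketch; argument order follows restart([pipes, env]) above)
var p = Process.createProcess(['/bin/cat']);
p.launch();
// later: kill it (if still running) and relaunch with cloned pipes/environment
p.restart();
// or relaunch with a fresh set of pipes and a custom environment
var pipes = Process.createStandardPipes();
p.restart(pipes, {'PATH': '/usr/bin'});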
sendSignal(signal)
- send a signal to this process
- signal => Process.SIGINT, Process.SIGHUP, Process.SIGTERM (others? many will be OS-specific: throw an exception for unsupported signals)
terminate()
- send the appropriate terminate signal to the process (POSIX: SIGTERM, Win32: TerminateProcess)
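- Example (sketch; the exact set of supported signal constants is still open, hence the try/catch)
var p = Process.createProcess(['/bin/cat']);
p.launch();
try {
    p.sendSignal(Process.SIGHUP);
} catch (e) {
    // signal unsupported on this OS; fall back to the generic terminate
    p.terminate();
}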
OutputPipe (a Pipe created with Process.PIPE_OUT):
getType()
- returns Process.PIPE_OUT
close()
- close this pipe
isClosed()
- returns whether or not this pipe is closed
write(blob, [size])
- writes a blob to this pipe's stream
- blob => binary blob to write to this pipe
- size => number of bytes to write, uses the blob's size by default
- returns the number of bytes actually written
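- Example (a sketch of writing to a child's stdin; `blob` here is assumed to be a Blob obtained elsewhere, e.g. read from another pipe or a file)
var pipes = Process.createStandardPipes();
var p = Process.createProcess({ args: ['/bin/cat'], pipes: pipes });
p.launch();
function feed(blob) {
    var written = pipes.stdin.write(blob);  // write the whole blob
    // or: pipes.stdin.write(blob, 16);     // write only the first 16 bytes
    pipes.stdin.close();                    // close to signal EOF to the child
    return written;
}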
addEventListener(listener)
- add an event listener to this pipe
- listener => function(event, data) {}
- event => Process.PIPE_WRITE_READY, Process.PIPE_CLOSE
- data => (null?)
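- Example (sketch, assuming PIPE_WRITE_READY signals that the pipe can accept more data)
var stdin = Process.createPipe(Process.PIPE_OUT);
stdin.addEventListener(function(event, data) {
    if (event == Process.PIPE_WRITE_READY) {
        // safe to write more data without blocking
    } else if (event == Process.PIPE_CLOSE) {
        // the other end has gone away; stop writing
    }
});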
InputPipe (a Pipe created with Process.PIPE_IN):
getType()
- returns Process.PIPE_IN
close()
- close this pipe
isClosed()
- returns whether or not this pipe is closed
available()
- return the number of bytes available for reading (?)
read(bufsize=-1)
- read a blob from this Pipe
- bufsize => maximum number of bytes to read. if set to -1, will read all available bytes
- returns a Blob
- throws => Exception if EOF has occurred or the pipe is closed
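- Example (sketch of draining the output that is currently available)
var p = Process.createProcess(['/bin/ls', '-l']);
p.launch();
var out = p.getPipe(Process.STDOUT);
// drain whatever is currently readable, 4K at a time
while (!out.isClosed() && out.available() > 0) {
    var chunk = out.read(4096);  // returns a Blob of up to 4096 bytes
}
// out.read() with no argument (bufsize = -1) would read all available bytes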
join(inputPipe)
- joins the output of another pipe with this pipe's output. When a read() or Process.PIPE_READ_READY event occurs on this pipe, it will contain the joined output of the other pipe.
- inputPipe => another input pipe
- throws => Exception if inputPipe is closed
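- Example (sketch: merging stderr into stdout, similar to 2>&1 in a shell)
var pipes = Process.createStandardPipes();
pipes.stdout.join(pipes.stderr);  // reads on stdout now include stderr output
var p = Process.createProcess({ args: ['/usr/bin/make'], pipes: pipes });
p.launch();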
attach(io)
- automatically pipe this pipe's output into io's input. io must simply implement a write(blob, size) method (this would work for an OutputPipe, or in the future a File/Socket)
- io => an IO type, e.g. an OutputPipe
- throws => Exception if IO is closed
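- Example (sketch: wiring one process' output into another's input, like `ls | grep` in a shell)
var ls = Process.createProcess(['/bin/ls', '-l']);
var grep = Process.createProcess(['/usr/bin/grep', 'txt']);
// anything readable on ls' stdout is written into grep's stdin
ls.getPipe(Process.STDOUT).attach(grep.getPipe(Process.STDIN));
grep.launch();
ls.launch();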
split()
- split this pipe into 2 newly created pipes that will both receive identical data
- returns an array with 2 InputPipes
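- Example (sketch: one copy of stdout for display, one for logging)
var p = Process.createProcess(['/usr/bin/make']);
var copies = p.getPipe(Process.STDOUT).split();  // [InputPipe, InputPipe]
var forDisplay = copies[0];
var forLogging = copies[1];
// both copies will receive identical data once the process is launched
p.launch();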
addEventListener(listener)
- add an event listener to this pipe
- listener => function(event, data) {}
- event => Process.PIPE_READ_READY, Process.PIPE_CLOSE
- data => (null?)
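- Example (sketch: event-driven reads via PIPE_READ_READY; data is read from the pipe itself since the event data is still undecided)
var p = Process.createProcess(['/bin/ls', '-l']);
var out = p.getPipe(Process.STDOUT);
out.addEventListener(function(event, data) {
    if (event == Process.PIPE_READ_READY) {
        var chunk = out.read();  // read all available bytes as a Blob
    } else if (event == Process.PIPE_CLOSE) {
        // no more data will arrive
    }
});
p.launch();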