@slunski
Created August 4, 2016 07:50
So, obviously, TimToady suggested a separate thread pool for each thread in the process that changes its (G)UID :)
But compare the OSI layers with TCP/UDP/IP - the security layer was cut off, yet the result was much more sane...
Applets vs Flash? The security-sandbox concept was cut off... but performance was better. Flash was really just a semi-native
element embedded in HTML.
And what is a virtual machine? It's just a CPU simulator, and it sandboxes whatever runs inside it. Does Erlang have problems
with (G)UIDs? A problem like this is literally one implemented across three abstraction layers! It becomes either a) a constant
pain point, or b) a rewrite candidate... Code like that will have serious trouble surviving more than 5 years...
A virtual machine these days is expected to let you create as many lightweight "threads" as possible, tens of thousands of them...
Access to native threads can then be treated as runs-in-a-special-box - a subsystem, a container, or even a spawned process...
Lower abstractions shouldn't "leak" into the inside of the VM.
What is the real problem? Thread->start(...) is only lightly abstracted by reactive-like syntax. Why? Because we really want to
think in, and write, sequential code...
That is a mistake. Concurrency features should be based on ==> (feed) operators, i.e. objects each doing their own job, the
American way - Actors communicating through a magic ether (queues, schedulers, mailboxes, channels, ...).
And that "ether" is exactly where security features can be placed.
For concurrent work we need -*EXPLICIT*- "alive" and communicating parts - Actors/Drones - because *work* is the main purpose
of writing programs. Schedule or PLAN first, dividing the work into *stages*. Thinking in terms of the work to do ;)
We need to start programming with *parallel* sequences of operations - each Actor doing its own, but *different*, job - and this
simplifies concurrency: no global locks are needed, only the "pipelines" between stages. If the work is planned right...
It is, of course, a high-level semantic...
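As a concrete (and deliberately tiny) sketch of such a pipeline in Perl 6 - the stage names and the work they do are made up -
two Actors each do their own, different job, connected only by Channels:

    my $raw    = Channel.new;   # source  -> parser
    my $parsed = Channel.new;   # parser  -> reporter

    my $parser = start {
        for $raw.list -> $line {
            $parsed.send($line.uc);        # this stage's own, different job
        }
        $parsed.close;
    };

    my $reporter = start {
        say "report: $_" for $parsed.list; # consumes until the channel closes
    };

    $raw.send($_) for <one two three>;
    $raw.close;
    await $parser, $reporter;              # no global locks, only the pipeline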
2016.03.08, some IRC backlog:
21:16 < jnthn> Duh... I really need to write us a decent thread pool scheduler soon.
21:20 < jnthn> (That accounts for available hardware, throughput, queue length, etc.)
21:21 < jnthn> Just tracked a deadlock down to it not starting enough threads.
21:24 < jnthn> That's not really something to start on in the middle of the evening though. Heck, the current one was probably
put together quickly in the space of an evening. :)
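A rough sketch of the kind of deadlock described above, assuming a pool whose thread count is capped and the older
blocking-await behaviour (the cap and task count here are made up): every outer task occupies a pool thread while awaiting an
inner task, so if the pool cannot grow, the inner tasks never get scheduled.

    # e.g. RAKUDO_MAX_THREADS=2 perl6 this-sketch.p6   (cap chosen for illustration)
    my @outer = (1..4).map: {
        start {
            my $inner = start { 42 };   # needs a free pool thread of its own
            await $inner;               # blocks this pool thread while it waits
        }
    };
    await @outer;                       # may never complete if the pool can't grow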
21:28 < [Coke]> m: say «123 asdf»
21:28 < camelia> rakudo-moar 7ec824: OUTPUT«(123 asdf)␤»
21:29 < [Coke]> m: say «123 asdf».perl;
21:29 < camelia> rakudo-moar 7ec824: OUTPUT«(IntStr.new(123, "123"), "asdf")␤»
21:43 < lizmat> jnthn: re writing a decent thread pool scheduler, isn't it more important that "await" no longer blocks a
thread ?
22:09 < jnthn> lizmat: Well, that one comes with the "only in 6.d" yak shave, I think, and I'm pretty sure it's going to be
rather more fraught. They both need doing.
22:10 < lizmat> why would that need to be 6.d ? syntax wise nothing changes? and it only fixes issues, no
22:10 < lizmat> ?
22:11 < jnthn> It's a quite notable semantic change, though...:)
22:12 < jnthn> And we *will* perhaps break things
22:12 < lizmat> I guess I'm being dense, but what is the semantic change ?
22:12 < jnthn> `await` while holding a lock might have to become an error, for example
22:12 < lizmat> ah
22:12 < lizmat> hmmm....
22:12 < jnthn> Well, that your code can be running on a different thread after an await unless you arrange otherwise.
22:13 < jnthn> Which is fine for most things, but could be a bit of a surprise if you ain't expecting it.
22:13 < jnthn> (And yes, we'll need to arrange ways for people to opt out of that)
22:13 < jnthn> Dealings with native code may also be affected, for example.
22:14 < jnthn> Most of the time you can write code without really caring what OS thread it might be running on.
22:14 < jnthn> But sometimes the escape hatch matters.
22:16 < lizmat> gotcha :-)
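A tiny sketch of the semantic change being discussed, assuming the later non-blocking await behaviour: inside a start block,
the code before and after an await may end up on different pool threads (whether the ids actually differ depends on the
scheduler).

    await start {
        say "before await: thread {$*THREAD.id}";
        await Promise.in(1);
        say "after  await: thread {$*THREAD.id}";   # may be a different pool thread
    }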
22:19 < gfldex> lizmat: on linux the effective UID/GID is per thread
22:20 < lizmat> hmmm.... doesn't that actually mean that we can *NEVER* switch threads after an await ?
22:20 < lizmat> security wise ?
22:21 < gfldex> i would think so
22:22 < lizmat> or it would need to be running on a thread with the same UID/GID
22:22 < gfldex> or better before switching to a thread it has to check if UID matches
22:22 < TimToady> separate threadpool for each?
22:23 * lizmat wonder how that works on Win
22:26 < jnthn> Note that since a start block may run on any of the thread pool threads, and its continuation (.then(...)) on a
different one, you'd already have to take care of this.
22:27 < jnthn> As far as I've got it worked out, though, if you Thread.start({ ...seteuid... ... await ... }) then your await
will really be blocking that one thread.
22:27 < jnthn> That is, there'll have to be something in your dynamic scope saying "I can handle awaits smarter"
22:28 < jnthn> And the ThreadPoolScheduler will put that in place for you
22:28 < jnthn> (The default one you get, that is.)
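A sketch of the escape hatch jnthn describes, a dedicated Thread rather than a pool thread: with no smarter-await handler in
dynamic scope, the await really blocks that one thread, so the code after it stays on the thread whose credentials were
changed. The seteuid call is left elided here, just as in the quote; how it is bound and whether a given libc applies it per
thread is outside this sketch.

    my $t = Thread.start({
        # ...seteuid...            # per-thread credential change, elided as in the quote
        await Promise.in(1);       # blocks *this* dedicated thread, no thread hop
        # still on the same thread here, with the same credentials
    });
    $t.finish;                     # wait for the dedicated thread to complete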
22:29 < lizmat> isn't this something that libuv should be doing ?
22:29 < jnthn> No.
22:29 < jnthn> libuv is far more low level than this
22:29 < lizmat> oki
22:33 < masak> libuv handles "thread pool" but not "thread pool scheduler"?
22:35 < lizmat> masak: indeed
22:36 < lizmat> that is currently handled in src/core/ThreadPoolScheduler
22:36 < jnthn> libuv does have a thread pool, but it's somewhat special-case
22:38 < jnthn> It's used by libuv itself in various cases
22:38 < jnthn> http://docs.libuv.org/en/v1.x/threadpool.html if you're curious
22:41 < masak> thank you