Gist by @qmx, last active December 21, 2015 03:29
vertx meeting logs
[13:02:49] <purplefox> OK, let's get started
[13:02:54] <purplefox> Thanks everyone for coming
[13:02:55] karianna_ (~karianna@static.72.107.9.5.clients.your-server.de) joined the channel.
[13:03:10] <tigeba> hi all :)
[13:03:16] <karianna_> howdy
[13:03:21] <karianna_> sorry I'm late
[13:03:21] <purplefox> The idea of today's meeting is to discuss a bit of planning for Vert.x
[13:03:24] <purplefox> karianna_: np
[13:03:33] <purplefox> I've posted a list on the google group
[13:03:42] <purplefox> but anyone is free to bring anything else up
[13:03:55] <purplefox> I am particularly interested in feedback from people who have used Vert.x in the wild
[13:03:57] diega (~diega@186.136.108.51) joined the channel.
[13:04:00] <purplefox> so that we can improve it
[13:04:24] millrossjez (~millrossj@host86-152-6-68.range86-152.btcentralplus.com) joined the channel.
[13:04:34] <purplefox> OK first thing on the list is changes that affect the Vert.x main project
[13:04:37] <purplefox> That's core + platform
[13:04:42] <purplefox> as opposed to modules
[13:04:44] johnwa (~chatzilla@129.8.102.152) joined the channel.
[13:05:13] <purplefox> as most of you know, core is supposed to remain reasonably static and most new functionality is provided in the form of modules
[13:05:23] <purplefox> however we can make some changes and things can still be improved
[13:05:29] <purplefox> also there are a few things lacking currently
[13:05:37] <purplefox> that don't really make sense to put in modules
[13:05:54] <purplefox> ok first thing - UDP
[13:05:59] <purplefox> right now we have no UDP support
[13:06:08] <Narigo> hi there
[13:06:13] <purplefox> normanm you are pretty keen on this right?
[13:06:25] <normanm> purplefox, yes
[13:06:33] <lando23> tim do you mean udp for the outside world or as a transport for the event bus?
[13:06:38] galderz (~galder@redhat/jboss/galderz) left IRC. (Ping timeout: 256 seconds)
[13:06:40] <purplefox> UDP support in core
[13:06:42] <normanm> I know the netty code base very well so I think I could get this rolling pretty easily
[13:06:56] <purplefox> so users can write UDP enabled apps
[13:07:02] <purplefox> like they currently can do with http and tcp
[13:07:17] <purplefox> we would need this to have parity with node.js as well
[13:07:21] <lando23> gotcha
[13:07:24] <normanm> purplefox +1
[13:07:35] lucaz (~lucaz@186.23.182.178) joined the channel.
[13:07:46] <purplefox> ok so not much more to say on that
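For reference, core UDP support would presumably mirror the existing TCP/HTTP server contracts. Below is a minimal sketch of what such an API could look like; every name in it is an assumption made for illustration, since no API was agreed in the meeting:

```java
import org.vertx.java.core.Handler;
import org.vertx.java.core.buffer.Buffer;

// Hypothetical shape for a core UDP socket, styled after the existing
// NetServer/NetClient contracts. None of these names were settled here.
public interface DatagramSocket {
    DatagramSocket listen(int port, String host);                // bind and start receiving
    DatagramSocket dataHandler(Handler<DatagramPacket> handler); // invoked once per packet
    DatagramSocket send(String host, int port, Buffer data);     // fire-and-forget datagram
    void close();

    // A received packet: payload plus sender (UDP is connectionless,
    // so the sender address has to travel with the data).
    interface DatagramPacket {
        Buffer data();
        String senderHost();
        int senderPort();
    }
}
```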
[13:07:53] <purplefox> another one is async DNS
[13:08:02] <tigeba> it will cause some clients to not have parity I think
[13:08:13] <tigeba> since your javascript clients can't uniformly have udp support
[13:08:26] <purplefox> tigeba: you mean browser clients?
[13:08:33] <lando23> tigeba: irrelevant since browsers don't have tcp either
[13:08:56] <purplefox> when i say UDP support I don't mean support in browsers - that doesn't really make sense anyway
[13:08:59] <tigeba> yes
[13:09:26] <tigeba> just tossing that out there, thats all :)
[13:09:30] <purplefox> ok cool :)
[13:09:31] <normanm> purplefox, we have a GSOC student working on it (as part of Netty). The DNS codec etc. is done; there's just a bit more cleanup to do. So I think we could just wrap it.
[13:09:45] <normanm> purplefox, I wonder if we want to have it in core or just as module
[13:09:50] <purplefox> normanm: does DNS need to go into core?
[13:09:57] <purplefox> normanm: or could it be a module?
[13:10:12] <purplefox> again we would need this for node.js parity
[13:10:13] <normanm> purplefox, I think we could also do a module and use the EventBus
[13:10:40] <tigeba> is there anything in core that requires dns lookups?
[13:10:41] <lando23> sorry to get meta, and async dns sounds great, but is the primary motivator there performance? and if so, what kinds of performance improvements are you seeing with async dns normanm ?
[13:10:49] <normanm> maybe let us target module first and if we notice it makes more sense directly in the core we could "merge it"
[13:11:07] <normanm> lando23, the problem is the jdk only does blocking dns resolution
[13:11:21] <purplefox> normanm: i think it would need to go in core for nodyn node.js compat
[13:11:21] <normanm> so whenever you use InetAddress.getBy* you block the EventLoop
[13:11:31] <normanm> purplefox, can't they import a module ?
[13:11:42] galderz (~galder@redhat/jboss/galderz) joined the channel.
[13:11:44] <lanceball> normanm: I would prefer it in core, but a module may be fine as well
[13:11:57] <galderz> sorry, connection dropped
[13:11:58] <purplefox> normanm: i guess but it would be cleaner if nodyn only talks to the core APIs
[13:12:04] <normanm> lando23, also we could allow to lookup different kind of records… like PTR, MX whatever
[13:12:12] <normanm> purplefox, ok core would also be fine for me
[13:12:18] <zznate> much as I love me some nodejs compat, don't go adding to core solely to support such
[13:12:22] <purplefox> normanm: i don't see a problem with putting it in core
[13:12:31] <normanm> purplefox, me neither
[13:12:39] <lando23> ah i see. to avoid that every app would need its own "DNS verticle". sounds like a core addition
[13:12:43] <normanm> zznate, agree… but async dns is quite common need
[13:12:59] <purplefox> lanceball: well.. you can include a module without having to deploy anything
[13:13:17] <purplefox> lando23: sorry that was for you not lanceball (damned autocomplete!)
[13:13:18] <zznate> normanm you are closer to this problem than I am, so I will defer, as long as it's a general case and not to support a module feature
[13:13:36] <normanm> zznate, I think it is quite common… so core should be fine
[13:13:57] jack (4b41d650@gateway/web/freenode/ip.75.65.214.80) joined the channel.
[13:13:58] <purplefox> ok i don't think dns is particularly contentious so let's move on, we have lots to get through :)
[13:14:06] <normanm> purplefox, +1
[13:14:07] purplefox ticks dns box
[13:14:09] <lando23> dns lookup is more common than most people realize since it's used fairly often by 3rd party libs.
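To make the blocking problem concrete: the JDK only offers synchronous resolution, so any lookup performed on an event loop parks that thread and stalls every connection it serves. A minimal illustration using only the standard JDK API:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

public class BlockingDnsExample {
    public static void main(String[] args) throws UnknownHostException {
        // InetAddress.getByName() blocks the calling thread until the
        // resolver answers or times out. Run inside a Vert.x event loop,
        // this freezes every handler scheduled on that loop, which is
        // exactly why an async (Netty-based) resolver is wanted.
        InetAddress addr = InetAddress.getByName("example.com");
        System.out.println(addr.getHostAddress());
    }
}
```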
[13:14:21] ubiquitousthey (~hrobinson@rrcs-24-173-16-66.sw.biz.rr.com) left IRC. (Quit: ubiquitousthey)
[13:14:24] <purplefox> http compression - this has been called for a few times
[13:14:27] tenfourty is now known as tenfourty_afk
[13:14:29] jack is now known as Guest28041
[13:14:46] <ARMistice> But "blocking" DNS could be a problem ... I think
[13:14:47] <tigeba> yeah but we won't be able to fix 3rd party libs unless there is some real sneakiness going on
[13:14:54] <purplefox> i'm not sure how important it really is though
[13:15:26] <purplefox> anyone have any strong thoughts on http compression?
[13:15:28] <zznate> purplefox - compression is one of the 'check box' features. everyone else has it ...
[13:15:28] <normanm> purplefox, I think I could add it with maybe 3 hours work or so… so why not ?
[13:15:42] <ARMistice> I had a big multiuser application which failed because of that "blocking DNS" in Java
[13:16:07] <normanm> purplefox, we can have http compression without any extra deps..
[13:16:15] <normanm> purplefox, it's provided by netty anyway
[13:16:24] <normanm> just some extra setters or so on the server and client
[13:16:31] <purplefox> ok
[13:16:32] <normanm> and then move the netty handlers in the pipeline
[13:16:59] <normanm> purplefox, not sure if we also want to have "generic" compression on WriteStream and ReadStream
[13:17:04] <normanm> for that a bit more work would be needed
[13:17:14] <normanm> but for just http it is quite easy
[13:17:21] <normanm> I think just http would be fine for now
[13:17:24] <purplefox> i think just go for http for now
[13:17:27] <lando23> sounds like an easy win +1
[13:17:34] <Narigo> http compression, yay!
[13:17:34] <purplefox> if i'm tbh i think there are higher priorities
[13:17:52] <normanm> purplefox, agree… just http is fine
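Since the change amounts to "some extra setters" plus Netty's existing handlers, the user-facing surface might be no more than the sketch below. setCompressionSupported is an assumed option name chosen to match the existing fluent setters, not a committed API:

```java
import org.vertx.java.core.Handler;
import org.vertx.java.core.Vertx;
import org.vertx.java.core.http.HttpServerRequest;

public class CompressionSketch {
    public void start(Vertx vertx) {
        vertx.createHttpServer()
            // Assumed setter: under the hood it would just place Netty's
            // HttpContentCompressor into the pipeline, per normanm above.
            .setCompressionSupported(true)
            .requestHandler(new Handler<HttpServerRequest>() {
                public void handle(HttpServerRequest req) {
                    // gzip/deflate would be applied automatically when the
                    // client advertises Accept-Encoding
                    req.response().end("hello, possibly compressed");
                }
            })
            .listen(8080);
    }
}
```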
[13:18:09] <purplefox> Right... management. This is an important one
[13:18:12] normanm writes down UDP, http compress and DNS on his todo list
[13:18:29] <purplefox> right now we are lacking in any management monitoring kind of thing in core
[13:18:34] <rlmw> does the web server module not already implement http compression?
[13:18:50] <normanm> rlmw, I don't think so… also we want it for client and server
[13:18:51] ollins (~oliver.na@www.inventage.com) left IRC. (Remote host closed the connection)
[13:19:02] <normanm> purplefox, agree… we should expose stuff via JMX
[13:19:06] <purplefox> rlmw: i think it does some compression, but that's not via core
[13:19:27] normanm hates to write JMX code… worst API ever
[13:19:28] <zznate> purplefox i'll get on the hook for some of the JMX stuff (need it here)
[13:19:33] <tigeba> sigh jmx
[13:19:35] <tigeba> hehe
[13:19:37] <purplefox> lol JMX
[13:19:38] <normanm> zznate, :)
[13:19:46] <rlmw> purplefox: so you want it to be automatic for any Handler<HttpRequest> ?
[13:19:52] <zznate> devil you know and all....
[13:19:54] <normanm> zznate, you love to give yourself shit heh ;)
[13:19:55] <purplefox> there is some JMX stuff already there that pid did but it's not complete
[13:20:06] tenfourty_afk is now known as tenfourty
[13:20:12] <zznate> yep - just some thread pool counters, not much else
[13:20:17] <normanm> rlmw, yeah real HTTP compression
[13:20:19] <galderz> normanm, for DNS, there's adrian cole's denominator library...
[13:20:21] <karianna_> JMX in Java 7 isn't tooooooo terrible. OK it still is <sigh>
[13:20:23] <rlmw> normanm: ok, cool!
[13:20:25] <purplefox> zznate: yep it could definitely be improved
[13:20:39] <galderz> normanm, FYI: http://techblog.netflix.com/2013/03/denominator-multi-vendor-interface-for.html
[13:20:40] <normanm> galderz, we have it in netty already so we can stick it directly on the eventloop
[13:20:46] <galderz> normanm, adrian created jclouds in the past
[13:20:49] <normanm> galderz, but will have a look… thanks :)
[13:20:51] <normanm> galderz, I know ;)
[13:20:52] <zznate> purplefox - k. i'll create an issue or two with proposals
[13:21:04] <rlmw> it does seem like monitoring would be good for something that would go over the eventbus
[13:21:12] <zznate> ^yes
[13:21:14] <robbiev> +1 for JMX, especially for clustering/eventbus
[13:21:17] <rlmw> rather than using JMX
[13:21:36] <purplefox> i think JMX is part of the story for monitoring but not the whole story
[13:21:48] <purplefox> JMX should be just one way the information can be exposed
[13:21:57] <rlmw> +1 for that
[13:22:03] <purplefox> event bus would also be a good option
[13:22:06] <normanm> purplefox, +1… maybe we can just push it over the eventbus
[13:22:07] <lanceball> yes
[13:22:07] <diega> what will be the scope for the JMX support? will it be just for getting out information or also to modify configuration in runtime?
[13:22:08] <lando23> we've managed to patch something together with logging and a fancy appender
[13:22:11] <normanm> and have a jmx module which expose it
[13:24:12] <karianna_> Right, if the data is going over the event bus you could attach anything to it
[13:22:12] <zznate> well, you could go deeper and do Metrics API or similar
[13:22:31] <purplefox> zznate: +1
[13:22:52] <purplefox> we need to be careful not to kill performance with monitoring though
[13:23:06] <normanm> zznate, purplefox I think we should just push over the eventbus and have modules that expose it
[13:23:10] <zznate> purplefox - what is basic criteria for core dependency inclusion?
[13:23:20] <zznate> … normanm - good point
[13:23:27] <purplefox> zznate: you mean jar dependencies?
[13:23:31] <zznate> yeah
[13:23:33] <rlmw> I think being able to monitor something "Give me the number of events sent to address foo.bar every 5 minutes" is the message I'd like to be able to subscribe to
[13:23:45] <purplefox> criteria is - please don't have any! ;)
[13:23:49] <robbiev> maybe have a look at http://metrics.codahale.com/ as well
[13:23:55] <robbiev> (haven't used it)
[13:23:56] <purplefox> ah yes coda stuff
[13:23:57] <zznate> ha.
[13:24:00] <normanm> purplefox, lol
[13:24:09] <zznate> yeah - Metrics core is pretty minimal, iirc
[13:24:13] <normanm> the coda metrics stuff is quite nice
[13:24:13] <tigeba> robbiev: I'm using that currently and.. pushing them over the event bus :)
[13:24:21] <zznate> but that's handled by the "reporter" encapsulation
[13:24:29] <normanm> but still I would love to push it just on the eventbus
[13:24:34] <normanm> and have modules to expose it
[13:24:35] <lucaz> it could be a good idea to create a metrics module that exposes the internal information onto the event bus and a jmx module that consumes that info and exposes it using mbeans
[13:24:40] <Narigo> i don't know about the available options for management stuff, but it would definitely be nice to have at least some way to see which verticles are deployed how often, etc.. i think normanm's idea would be great: have a module that you can ask that stuff :)
[13:24:44] <purplefox> zznate: maybe if the core just pushes the events on the bus -that's no dependencies. and then have a module that does stuff with it (which can have deps)
[13:24:51] <purplefox> lucaz: +1
[13:24:52] <zznate> yes ^
[13:24:58] <normanm> purplefox, exactly that is what I mean
[13:24:58] <tigeba> however metrics on the event bus does somewhat skew your monitoring of the event bus..
[13:25:02] <rlmw> purplefox: +1 to that approach
[13:25:15] <zznate> ok, i'll scope an issue to use metrics-core to dump onto event bus
[13:25:16] <normanm> tigeba, good point
[13:25:23] <rlmw> tigeba: not if it's possible to runtime enable/disable metric gathering
[13:25:29] <purplefox> of course we can't just push every event on the event bus - that will kill perf
[13:25:37] <purplefox> there has to be some intelligence at source of event
[13:25:39] <jordanhalterman> normanm: +1
[13:25:42] <normanm> purplefox, we should use some kind of sampling
[13:25:51] <purplefox> normanm: +1 sampling
[13:25:58] <normanm> otherwise we will kill perf
[13:26:00] <lando23> what kind of metrics are you thinking about guys?
[13:26:06] <Narigo> if you push the events via the event bus, what happens if my monitoring module starts after some other verticles?
[13:26:08] <rlmw> aggregation over a time window is another thing to think about
[13:26:09] <zznate> rlmw - there is a rudimentary on/off switch built in now for system property for "JMX" stuff. i could keep that approach
[13:26:17] <normanm> lando23, for example concurrent connections, http req per sec
[13:26:24] <normanm> lando23, how busy the EventLoop is
[13:26:26] <normanm> etc
[13:26:42] <normanm> lando23, bytes sent… how many errors etc
[13:26:45] <normanm> such things
[13:26:45] <tigeba> as painful as it is to say jmx is probably the path of least resistance. I have used jolokia pretty successfully to make pulling the info less painful FYI
[13:26:49] <purplefox> i think we would need some kind of core components that know how to sample/aggregate etc
[13:26:57] <normanm> purplefox, agree here
[13:26:58] <tigeba> and that could probably be made into a module
[13:27:00] <zznate> ^all is in metrics-core
[13:27:05] <zznate> purplefox
[13:27:16] <zznate> histograms, gauges and meters
[13:27:27] <purplefox> ok so lots to think about there
[13:27:28] <tigeba> yeah the codahale metrics does all the counters and sampling that is probably desired
[13:27:33] <lando23> normanm: thanks. what about using a logger based approach? that would allow sensors to be placed by application writers and it wouldn't block or add to event bus traffic?
[13:27:46] <zznate> huh. looks like github is down...
[13:27:53] <normanm> zznate, yes it is ;)
[13:27:59] <normanm> for 30 minutes or so
[13:28:02] <zznate> dag.
[13:28:19] <rlmw> they go down pretty regularly
[13:28:21] <normanm> lando23, I think eventbus would be more flexible
[13:28:23] <normanm> rlmw, DDOS
[13:28:24] <purplefox> ok.. so is anyone volunteering to "own" the core management task? zznate ?
[13:28:27] <zznate> k. well i'll throw an issue in when it comes back for JMX. game on after that for anyone interested
[13:28:34] <zznate> purplefox - yeah
[13:28:39] <purplefox> great
[13:28:55] <normanm> zznate, feel free to ping me if you need some infos how to integrate stuff which needs to come out of netty
[13:29:02] <lucaz> I can spend some time in the jmx module
[13:29:02] <normanm> zznate, or need any other infos
[13:29:04] <zznate> ok - cool
[13:29:42] <zznate> normanm rad! thx. lucaz, ill post to the mail list w/ the issue (some other folks asked recently on there)
[13:29:49] <zznate> … once github comes back
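The shape converged on above, roughly: core (or a small metrics component) samples its counters and publishes periodic snapshots on the bus, and independent modules (JMX, Metrics reporters) subscribe and re-expose them. A minimal sketch against the existing Vert.x 2 verticle API; the address and JSON field names are invented for illustration:

```java
import org.vertx.java.core.Handler;
import org.vertx.java.core.json.JsonObject;
import org.vertx.java.platform.Verticle;

public class MetricsPublisherSketch extends Verticle {
    @Override
    public void start() {
        // Publish a sampled snapshot every 5 seconds instead of per event,
        // to avoid the "kill perf" problem discussed above.
        vertx.setPeriodic(5000, new Handler<Long>() {
            public void handle(Long timerId) {
                JsonObject sample = new JsonObject()
                    .putString("node", "node-1")       // illustrative fields only;
                    .putNumber("httpReqPerSec", 0)     // real values would come from
                    .putNumber("openConnections", 0);  // sampled/aggregated counters
                // "vertx.metrics" is an invented address; a JMX module or a
                // metrics-core reporter module would subscribe here.
                vertx.eventBus().publish("vertx.metrics", sample);
            }
        });
    }
}
```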
[13:29:58] <purplefox> next thing on the list: HA
[13:30:09] <lando23> why on earth do people ddos github? that's like mugging mother theresa
[13:30:14] <purplefox> (btw.. sorry going so fast, but the list is long...)
[13:30:15] <tigeba> not sure if this is relevant to the topic or not, but since none of the clustering info from hazelcast is exposed
[13:30:21] <lucaz> zznate: great. i'll be looking at it
[13:30:33] <rlmw> purplefox: might be worth setting some scope for HA
[13:30:33] <tigeba> there is no way to really know anything about how many um, nodes are running or anything like that
[13:30:44] <purplefox> tigeba: yes you can get this info from hazelcast
[13:30:51] <tigeba> sure that is what I did basically
[13:31:10] <jazzmtl> you can implement membership listener in hazelcast
[13:31:16] <purplefox> although previously the set wasn't consistent wrt to cluster messages, but they fixed this
[13:32:01] <purplefox> so by HA, i mean an "erlang-style" HA. where a node has a set of modules running, and when it fails another node takes over and starts those modules
[13:32:13] <purplefox> not trying to failover state
[13:32:16] <purplefox> that's too hard
[13:32:20] <normanm> purplefox, makes sense… state failover is a PITA
[13:32:21] <tigeba> or the way that akka handles it?
[13:32:29] <normanm> tigeba, how do they handle it ?
[13:32:54] <purplefox> (although your own application could manage its own state somewhere but that's not the concern of vert.x)
[13:32:55] <tigeba> I believe it is just derived from the way erlang does it, i just don't have any real experience with erlang
[13:33:07] <purplefox> tigeba: i think it's similar to this too
[13:33:16] <purplefox> tigeba: although i'm not an akka expert
[13:33:16] <tigeba> they use some sort of managers to watch their actors and restart them if they die
[13:33:29] <normanm> tigeba, ok sounds like erlang like
[13:33:30] <purplefox> right this is supervisor model, but that's a bit different
[13:33:55] <normanm> purplefox, you want to do something different than the supervisor ?
[13:33:55] <tigeba> i'm not either, but i wrote my current system in akka before converting it to vert.x
[13:34:04] <jazzmtl> about HA, can't we just run the same verticles/modules on each server and share data with, let's say, hazelcast maps?
[13:34:14] <rlmw> purplefox: so how would you configure the supervisor?
[13:34:17] <zznate> purplefox - so implies a central registry of 'node x has z instances of y module'
[13:34:17] <purplefox> jazzmtl: yes you could do that
[13:34:29] <rlmw> seems like a bit of an issue
[13:34:43] <rlmw> ie which node in the cluster can take over which module etc.
[13:34:59] <tigeba> the data sharing is what potentially kills you badly..
[13:35:09] <purplefox> rlmw: the simplest way of doing this is to use hazelcast to maintain a consistent list of nodes
[13:35:30] <purplefox> rlmw: and then just fail over to a particular node which you can calculate by consistent hashing
[13:35:41] <purplefox> rlmw: all nodes will calculate the same result
[13:35:56] <purplefox> rlmw: and if it's their node they know to restart those services
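The appeal of that scheme is that it needs no coordinator: every surviving node runs the same deterministic function over the same membership snapshot, so exactly one of them concludes the failed node's modules are now its responsibility. A sketch of the selection step, assuming the cluster manager can hand out a consistent node list:

```java
import java.util.List;

public class FailoverChooserSketch {
    /**
     * Deterministically pick the surviving node that takes over a failed
     * node's modules. Each survivor evaluates this against the same
     * membership snapshot (e.g. from hazelcast, non-empty and identically
     * ordered on every node) and gets the same answer, so only the
     * chosen node acts.
     */
    public static String chooseTakeoverNode(String failedNodeId, List<String> survivors) {
        // Simple hash placement for illustration; a production version
        // would likely use a proper consistent-hash ring so membership
        // changes move as few assignments as possible.
        int idx = (failedNodeId.hashCode() & 0x7fffffff) % survivors.size();
        return survivors.get(idx);
    }
}
```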
[13:36:18] <zznate> so now you have topology
[13:36:26] <purplefox> yep
[13:36:26] <normanm> purplefox, I agree we should keep it simple
[13:36:29] <rlmw> purplefox: what if all nodes on the cluster aren't running the same set of modules?
[13:36:30] <zznate> hmm
[13:36:33] <normanm> purplefox, maybe we could just use zookeeper ?
[13:36:44] <purplefox> welll.. i think we should make it pluggable
[13:36:45] <normanm> I heard it is quite good for this stuff
[13:36:49] <purplefox> so we abstract out the cluster manager
[13:36:51] <normanm> purplefox, even better
[13:36:53] <purplefox> and hazelcast is just one impl
[13:37:00] <purplefox> but could use zookeeper
[13:37:01] <purplefox> or whatever
[13:37:19] <purplefox> that would be a part of this task - making the clustering impl pluggable
[13:37:22] <zznate> purplefox - i half assedly started with a cassandra impl, never got anywhere with it
[13:37:23] <normanm> purplefox, maybe this would be a good job for galderz ;)
[13:37:23] <tigeba> rlmw: i somewhat 'solved' that problem by making sure that nodes advertise what services they are capable of running
[13:37:29] <tcrawley> normanm: purplefox: jgroups does coordination as well, and bobmcw has a prototype jgroups-over-eventbus transport
[13:37:35] <normanm> galderz, has done something similar for infinispan I guess
[13:37:54] <zznate> true - you RHAT folks have groups and infinispan on top of that
[13:38:02] <purplefox> +1
[13:38:02] <normanm> tcrawley, neat… never used jgroups so can't comment on that
[13:38:04] <rlmw> tigeba: ok, so nodes declare what they can run, and then modules can only failover to nodes which are capable of running the module?
[13:38:09] <purplefox> it would be good to have an infinispan impl too
[13:38:11] <zznate> *jgroups
[13:38:48] <tcrawley> normanm: TorqueBox and Immutant use JGroups for HA coordination currently, and it works well
[13:38:48] <purplefox> rlmw: in the simplest case nodes can failover to any node of the cluster. but we could refine it by nodes belonging to certain "groups"
[13:39:03] <lanceball> normanm: purplefox: I think bob's got HA with jgroups over eventbus pretty much "solved"
[13:39:05] <zznate> purplefox: how about pulling scope back a bit to making cluster management pluggable for now. that will drive a lot of impl
[13:39:12] <rlmw> purplefox: ok, groups solves the use case fine
[13:39:12] <tigeba> rlmw: yes basically. You can obviously be in a situation where there is no place to fail to, but it allows you to have heterogeneous nodes
[13:39:19] <galderz> normanm, indeed, Infinispan provides caching, which can be used to keep any sort of data around the cluster, and underneath is JGroups for group management
[13:39:26] <rlmw> and then everything within a group needs to be homogeneous
[13:39:48] <normanm> so I think we "just" need to come up with an abstraction and see what impl we use
[13:39:50] <normanm> purplefox, ^^
[13:39:52] <purplefox> +1
[13:40:05] <purplefox> I have a prototype i did already of HA in a branch somewhere
[13:40:07] <purplefox> I will dig it out
[13:40:13] <normanm> maybe bobmcw can take care of it
[13:40:19] <normanm> as he wrote the proof of concept
[13:40:28] <purplefox> ok i will ping bob
[13:40:45] <normanm> galderz, maybe you can also have a look with bob
[13:40:53] <normanm> galderz, not sure about your time constraints
[13:41:02] <normanm> galderz, I'm not your manager =P
[13:41:08] <galderz> normanm, very limited :|
[13:41:27] <normanm> galderz, damn it… so we all suffer from the same
[13:41:30] <purplefox> i am happy to drive the HA task
[13:41:37] <tigeba> i haven't used zookeeper directly but I believe it might already use hazelcast under the hood
[13:41:37] <tigeba> not entirely sure how useful that would be
[13:41:42] <purplefox> since i already made a start anyway
[13:42:08] <normanm> tigeba, it does not use hazelcast
[13:42:11] <purplefox> so.. some people don't like hazelcast for whatever reasons, so i think it's a good idea to make it pluggable
[13:42:14] <tigeba> hermf
[13:42:15] <zznate> yes
[13:42:20] <zznate> pluggable first step
[13:42:26] <normanm> zznate, purplefox +1
[13:42:30] <purplefox> yep first make it pluggable
[13:42:35] <jazzmtl> is the topic of distributed tasks related to HA?
[13:42:35] <zznate> dissertation on distributed systems 2nd :)
[13:42:46] <tigeba> hah
[13:42:47] <purplefox> lol
[13:43:10] <jazzmtl> i was trying to do a distributed task in vertx with hazelcast
[13:43:13] <purplefox> having implemented clustering stuff before I know it is a minefield so I hope to avoid the difficult problems and use what other people have done :)
[13:43:27] <normanm> purplefox, sounds like you will have some fun soon =P
[13:43:33] <normanm> purplefox, good old days
[13:43:37] millrossjez (~millrossj@host86-152-6-68.range86-152.btcentralplus.com) left IRC. (Read error: Connection reset by peer)
[13:43:38] normanm hides
[13:43:44] <purplefox> ok
[13:43:48] <purplefox> so that's HA
[13:43:54] <tigeba> normanm: Herm, I don't know what I was thinking about..
[13:44:00] <purplefox> I think that's quite an important task
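Concretely, "make it pluggable" would mean abstracting the handful of operations core actually needs (identity, membership, membership events, cluster-wide maps) behind an SPI, with Hazelcast as the default implementation and Infinispan/JGroups/ZooKeeper as candidate alternatives. The method set below is an illustrative guess at that contract, not anything agreed here:

```java
import java.util.List;
import java.util.Map;

// Illustrative sketch of a pluggable cluster-manager SPI.
public interface ClusterManager {
    String getNodeId();                        // this node's stable identity
    List<String> getNodes();                   // consistent membership snapshot
    void nodeListener(NodeListener listener);  // join/leave events, needed for HA
    <K, V> Map<K, V> getSyncMap(String name);  // cluster-wide map (topology etc.)
    void leave();                              // clean departure from the cluster

    interface NodeListener {
        void nodeAdded(String nodeId);
        void nodeLeft(String nodeId);
    }
}
```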
[13:44:20] <purplefox> right..next thing on list
[13:44:24] <zznate> we on to wire format?
[13:44:30] <purplefox> this was raised by Ramki
[13:44:52] toddnine (~apigee@65.87.18.18) joined the channel.
[13:44:53] millrossjez (~millrossj@host86-152-6-68.range86-152.btcentralplus.com) joined the channel.
[13:44:55] <purplefox> yep, publish a wire protocol for the event bus so things like c/c++/whatever can talk to it
[13:45:11] <purplefox> that's what i was thinking for interoperability so we don't have to provide clients
[13:45:11] <tigeba> +1
[13:45:21] <normanm> purplefox, expose via TCP or UDP ?
[13:45:22] <purplefox> this is similar to the redis approach and it's been very successful
[13:45:24] <normanm> or does it not matter ?
[13:45:30] <purplefox> TCP I guess
[13:45:35] <zznate> agree on usefulness
[13:45:38] <purplefox> but UDP is an interesting idea
[13:45:47] <normanm> purplefox, UDP gives some nice perf
[13:45:50] <normanm> once we have UDP
[13:45:51] <normanm> ;)
[13:45:58] <rlmw> how locked in does publishing a wire protocol make things? Eg would you be willing to change it at a major version
[13:45:58] <purplefox> but you have to deal with lost packets
[13:46:00] <normanm> purplefox, I like the memcached binary protocol
[13:46:01] <purplefox> which complicates things
[13:46:03] <rlmw> if there was a compelling reason
[13:46:04] <normanm> and it works for TCP and UDP
[13:46:26] <purplefox> yeah really simple protocol
[13:46:28] <zznate> oohhh +1 on memcached simplicity as model
[13:46:43] <normanm> zznate, yeah just had to read it for my book… I really like it
[13:46:49] <zznate> makes it easy to debug/monitor
[13:46:51] <normanm> (only the binary… the text is a mess)
[13:47:19] <purplefox> although text protocols are arguably easier to program to
[13:47:19] <tigeba> UDP is interesting for that, but you probably can't use it where most people are probably looking to deploy
[13:47:21] <tigeba> well making a huge assumption that most people want to use EC2 or similar
[13:47:45] <purplefox> i think TCP would be fine
[13:47:49] <normanm> purplefox, but binary is more powerful
[13:48:09] <normanm> purplefox, I think we should use TEXT or BINARY… not both
[13:48:23] <purplefox> so you would open a connection to a particular vert.x node and that node would bridge to the event bus, rather than participate in the cluster directly
[13:48:50] <purplefox> normanm: agreed we should settle on one
[13:49:04] <tigeba> sounds like a hazelcast super client basically
[13:49:25] <zznate> i would rather use telnet than netcat to mess around with it. +1 on text though I agree it's messier
[13:49:29] <tigeba> well personally I'm doing binary messages on my eventbus..
[13:49:53] <normanm> zznate, TEXT is sometimes messy because of encoding etc
[13:50:14] <normanm> not sure if it's an issue… I agree telnet is easier but binary is often a "better" choice
[13:50:20] <normanm> and easier to "extend" later
[13:50:20] <tigeba> I'd think anyone wanting to write the integration here could handle a binary proto but I'm not sure I have a super strong opinion on the matter
[13:50:27] <zznate> normanm - yeah, not gonna argue that. systems guy in me likes telnet debug-ability though
[13:50:50] <purplefox> ok
[13:50:53] <normanm> zznate, I know what you are talking about ;)
[13:51:18] <purplefox> moving on to pluggability in general
[13:51:19] <zznate> actually, a really simple CLI wrapper thingy would make me happy with binary protocol
[13:51:19] <petermd> isnt there a STOMP module already?
[13:51:20] <normanm> zznate, but also easier for admins to mess things up ?
[13:51:22] <normanm> =P
[13:51:26] <zznate> true
[13:51:28] <zznate> :)
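In the memcached-binary spirit, the bridge protocol could be little more than length-prefixed frames over a plain TCP connection to one vert.x node. The layout below is a straw man invented for illustration, not a proposal anyone signed off on:

```java
import java.nio.charset.Charset;
import org.vertx.java.core.buffer.Buffer;

public class WireFrameSketch {
    /**
     * Straw-man binary frame for an event-bus bridge:
     *   4 bytes  frame length (excluding this field)
     *   1 byte   message type (0 = send, 1 = publish, 2 = reply)
     *   2 bytes  address length, then UTF-8 address bytes
     *   rest     message body
     */
    public static Buffer encode(byte type, String address, byte[] body) {
        byte[] addr = address.getBytes(Charset.forName("UTF-8"));
        int length = 1 + 2 + addr.length + body.length;
        return new Buffer()
            .appendInt(length)
            .appendByte(type)
            .appendShort((short) addr.length)
            .appendBytes(addr)
            .appendBytes(body);
    }
}
```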
[13:52:09] <purplefox> there are some things in Vert.x that it makes sense to expose via a SPI
[13:52:10] <normanm> anyway… lets move on
[13:52:28] <purplefox> e.g. in a module you might want to access the netty event thread pools
[13:52:35] <purplefox> right now there is no clean way to do that
[13:52:48] <normanm> purplefox, I think exposing the EventLoop would be something really nice
[13:52:49] <purplefox> e.g. i think the postgres module has this issue
[13:53:00] <normanm> makes it easy to put netty based libraries on the EventLoop directly
[13:53:01] <purplefox> right
[13:53:05] <normanm> so no extra threads etc
[13:53:08] <purplefox> yep
[13:53:24] <purplefox> also the event bus should be pluggable
[13:53:32] <purplefox> basically there are a few things we can provide SPIs for
[13:53:33] <rlmw> in what sense pluggable?
[13:53:38] <normanm> purplefox, the only "problem" I have is that it would expose some netty classes directly in the public api
[13:53:48] <rlmw> replace the underlying implementation, or be able to intercept events?
[13:53:50] <normanm> purplefox, didn't you say you want to replace netty =P
[13:54:10] <purplefox> yep I guess replace the event bus impl if required
[13:54:18] <purplefox> intercept events
[13:54:29] ubiquitousthey (~hrobinson@rrcs-24-173-16-66.sw.biz.rr.com) joined the channel.
[13:54:35] <purplefox> i don't think we would want to replace netty though - it is too tightly integrated
[13:54:45] <purplefox> and it would make normanm cry ;)
[13:54:50] <tigeba> its madness :)
[13:54:55] <normanm> purplefox, AHHHHHHHHHHHHHHHHH!
[13:54:59] <rlmw> I think being able to intercept events is a really useful thing
[13:55:10] normanm was almost freaking out
[13:55:11] <rlmw> both from monitoring and debugging povs
[13:55:37] <purplefox> agreed
[13:55:54] <purplefox> this needs more thought
[13:56:06] <rlmw> sure
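The postgres-module case in miniature: a module embedding another Netty-based client wants to run that client's I/O on the verticle's own event loop so no extra thread pools appear. The accessor below is hypothetical; the lack of any such clean hook is exactly the gap being discussed:

```java
import io.netty.bootstrap.Bootstrap;
import io.netty.channel.EventLoop;

public class EventLoopSharingSketch {
    // Hypothetical SPI: hand a module the Netty EventLoop backing the
    // current Vert.x context. No such method exists today.
    public interface VertxInternals {
        EventLoop currentEventLoop();
    }

    public void wire(VertxInternals internals, Bootstrap thirdPartyClient) {
        // EventLoop extends EventLoopGroup, so a third-party Netty client
        // can be bootstrapped straight onto the verticle's own loop,
        // keeping all I/O on one thread with no extra pools.
        thirdPartyClient.group(internals.currentEventLoop());
    }
}
```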
[13:56:07] <purplefox> event bus timeouts....
[13:56:17] <purplefox> this has been asked for quite a lot
[13:56:18] ubiquitousthey (~hrobinson@rrcs-24-173-16-66.sw.biz.rr.com) left IRC. (Client Quit)
[13:56:31] <jazzmtl> so does all this mean i'll be able to get the ip address of the client from SockJSSocket soon?
[13:56:33] <purplefox> so i think we should add it
[13:56:43] chmk (bdd25b82@gateway/web/freenode/ip.189.210.91.130) joined the channel.
[13:56:50] <tigeba> purplefox: I was just going to ask about event bus timeouts..
[13:56:52] <tigeba> :0
[13:57:03] <normanm> jazzmtl, nope… we should add this to the api
[13:57:09] <normanm> jazzmtl, that's some "minor" thing
[13:57:09] <jazzmtl> ok
[13:57:11] <purplefox> jazzmtl: i don't think it's related, but pls add an issue
[13:57:20] <normanm> purplefox, there is one already ;)
[13:57:21] <petermd> +1 for timeouts, and strongly typed errors (not encoded in message body)
[13:57:59] <purplefox> petermd: yep, will need to look at your work in more detail here
[13:58:09] <petermd> purplefox: thx
[13:58:13] <purplefox> petermd: i don't want to change core API too much though
[13:58:14] <normanm> purplefox, I think we should also define what to do if the handle(…) method of a module throws an exception
[13:58:24] <normanm> purplefox, so what is sent back over the eventbus in this case
[13:58:31] <normanm> purplefox, just to eliminate hangs etc
[13:58:32] tenfourty is now known as tenfourty_afk
[13:58:37] <petermd> would also suggest adding a content-type to the Message, so you can transfer Buffer + content-type
[13:58:41] <rlmw> normanm: that would be really useful
[13:58:43] <purplefox> normanm: well this feeds into the supervisor model
[13:58:45] <jazzmtl> is this called reliable messaging?
[13:58:57] <normanm> purplefox, not sure it really fits in
[13:59:08] <purplefox> normanm: the erlang approach would be to fail the module, restart it or whatever
[13:59:27] <purplefox> normanm: depending on policy
[13:59:28] <tigeba> this might be related to the supervisor, but is there no way for a verticle to undeploy itself?
[13:59:29] <petermd> you really just need signals (as per erlang)
[13:59:31] <normanm> purplefox, but how would the sender of a message to this module get notified
[13:59:51] <petermd> so you can watch a request or verticle and link to it
[13:59:51] <tigeba> the verticle does not seem to have a way to know its deployment id other than i guess sending it over
[14:00:52] <jazzmtl> is DefaultFutureResult related?
[14:00:56] <purplefox> reliable messaging is an impossibility ;) failures will always happen the trick is to be able to deal with it gracefully
[14:01:21] <normanm> purplefox, sorry meant to say… how will the other verticle know the target of the message threw an exception
[14:01:33] <normanm> purplefox, like when you send something over the eventbus but wait for a response
[14:01:42] <normanm> and it never sends the response because it threw an exception
[14:01:57] <purplefox> normanm: why does it need to know?
[14:02:08] <tigeba> I think that is something you should handle in your own code probably
[14:02:14] <purplefox> normanm: another approach would be to timeout and retry sending
[14:02:14] <tigeba> err should/could
[14:02:21] <rlmw> its a pretty common use case though
[14:02:26] <normanm> purplefox, because it may have a handler attached that is never called and will only process if the handler is called ?
[14:02:29] <purplefox> normanm: and the failed component could be restarted by the supervisor
[14:02:41] <rlmw> and for a lot of stuff you'd end up having to timeout every handler to make certain things safe
[14:02:43] <petermd> restarting wont help in majority of cases
[14:02:50] <tigeba> i guess you could make a 'required' reply
[14:02:51] <normanm> purplefox, yeah it can be restarted but how does the other verticle know that it needs to send the message again ?
[14:02:58] <tigeba> and if your receiver blew up and failed to respond
[14:03:07] <lando23> purplefox: reliability of the event bus is a critical concern for a lot of applications
[14:03:10] <petermd> its about responsiveness. you can't make the server reliable, but you can ensure its responsive.
[14:03:10] <normanm> purplefox, maybe the timeout can help here ?
[14:03:22] <jazzmtl> why not use settimer?
[14:03:23] <purplefox> normanm: : you can code your system to be idempotent so retrying is always ok
[14:03:35] <normanm> purplefox, I think it is a common requirement
[14:03:43] <normanm> purplefox, but maybe a timeout would be good enough
[14:03:52] <Narigo> can't we just have eventbus.send(address, data, replyhandler, timeout, timeouthandler) ?
[14:03:53] <rlmw> jazzmtl: because you'd have to do that everywhere you had a replyhandler
[14:03:53] <petermd> timeouts won't make it responsive. has really bad attributes imho
[14:04:05] <normanm> like if you don't receive a response after 10sec, send again or whatever
[14:04:16] normanm is just thinking out loud
[14:04:22] <jazzmtl> oh ur right rlmw
[14:04:29] <petermd> normanm: when system slows down you get cascading failure / retry storm
[14:04:31] <rlmw> I agree with petermd that there's a responsiveness issue
[14:04:42] <rlmw> if you've got a user and they are waiting 10 seconds for a timeout
[14:04:49] <normanm> petermd, agree we would need some kind of backpressure handling
[14:04:54] <rlmw> they aren't having a good time
[14:05:02] <petermd> right - also if the request is broken (triggers exception)
[14:05:12] <petermd> then timeout or retry are both really bad
[14:05:17] <petermd> (will never work)
[14:05:32] <petermd> magnifies the problem, kills server etc etc
[14:05:33] <normanm> I have no good solution.. just wanted to point out the "problem" :)
[14:05:34] <lando23> this brings up a question i have which I'm not sure is appropriate for this meeting, but under what conditions does the vertx event queue get saturated, and how can my application detect when this is about to occur?
[14:05:43] <rlmw> this might be easy to handle by just having a handler which wraps another handler and sends an error message
[14:05:47] <normanm> lando23, can we move it out of the meeting ?
[14:05:54] <rlmw> if there's an exception
[14:05:59] <purplefox> normanm: +1
[14:06:01] <lando23> normanm: of course
[14:06:11] <purplefox> it's not an easy problem to solve
[14:06:23] <normanm> purplefox, maybe just something to think about for a bit
[14:06:24] <petermd> would suggest taking a look at my 1.3.1 patch. its limited in scope but it solves most of the pain points for me.
[14:06:32] <lando23> link?
[14:06:34] <purplefox> petermd: +1
[14:06:39] <normanm> lando23, its on the ml
[14:06:41] <lando23> k
[14:06:48] <normanm> petermd, yeah sure
[14:06:55] <Guest89718> can you follow the ajax approach with a success handler and a fail handler?
[14:06:59] <normanm> petermd, prepare to fill the CLA ;)
[14:07:30] <purplefox> Guest28041: well. the core api uses AsyncResultHandler heavily, which is kind of similar
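Until anything lands in core, the timeout shape Narigo sketches above (send with a reply handler plus a timeout handler) can be approximated in user code by racing a one-shot timer against the reply, along these lines. Only standard Vert.x 2 calls are used; the helper itself is invented:

```java
import java.util.concurrent.atomic.AtomicBoolean;
import org.vertx.java.core.Handler;
import org.vertx.java.core.Vertx;
import org.vertx.java.core.eventbus.Message;
import org.vertx.java.core.json.JsonObject;

public class SendWithTimeoutSketch {
    public static void send(final Vertx vertx, String address, JsonObject body,
                            final Handler<Message<JsonObject>> replyHandler,
                            long timeoutMs, final Handler<Void> timeoutHandler) {
        final AtomicBoolean done = new AtomicBoolean(false);
        // Arm the timeout first; whichever of timer/reply fires second is ignored.
        final long timerId = vertx.setTimer(timeoutMs, new Handler<Long>() {
            public void handle(Long id) {
                if (done.compareAndSet(false, true)) {
                    timeoutHandler.handle(null); // no reply arrived in time
                }
            }
        });
        vertx.eventBus().send(address, body, new Handler<Message<JsonObject>>() {
            public void handle(Message<JsonObject> reply) {
                if (done.compareAndSet(false, true)) {
                    vertx.cancelTimer(timerId);  // the reply won the race
                    replyHandler.handle(reply);
                }
            }
        });
    }
}
```

Note this only detects silence; as petermd points out, blind retry layered on top can turn a slow system into a retry storm.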
[14:08:01] <purplefox> ok moving on
[14:08:05] <jazzmtl> ok
[14:08:12] <purplefox> Clustered Shared Data
[14:08:21] <jazzmtl> oh yes!
[14:08:36] <normanm> purplefox, a MUST HAVE
[14:08:37] <tigeba> oof
[14:08:38] <purplefox> right now SharedData is only local to a vert.x instance
[14:08:46] <rlmw> its a good idea
[14:08:50] <Narigo> wait, purplefox, what's the action plan for that eventbus send / never gets a reply problem then?
[14:08:50] <normanm> purplefox, couldn't we just use hazelcast for it ?
[14:08:56] <lando23> e.g. "copy hazel cast"
[14:09:03] <normanm> Narigo, think about it ;)
[14:09:05] <jazzmtl> ^ norman
[14:09:05] <petermd> if you were using zookeeper you could use that
[14:09:24] <rlmw> doesn't zookeeper just back onto hazelcast for this situation?
[14:09:25] <purplefox> Narigo: all these tasks I will write up into proper tasks and we can flesh out the details in there. and hopefully get someone to own it! :)
[14:09:26] <jazzmtl> I use hazelcast IMap already but i dont know if this is correct
[14:09:38] bytor99999 (~bytor9999@213.3.49.201) joined the channel.
[14:09:40] <purplefox> Narigo: well hazelcast is an implementation detail
[14:09:44] <lando23> jazzmtl: it isn't, because hazelcast access blocks
[14:09:50] <normanm> purplefox, again we should use an abstraction
[14:10:00] <normanm> and just use hazelcast for the impl as we ship it anyway
[14:10:02] <purplefox> lando23: yep our api will be non blocking
[14:10:10] <petermd> perhaps ClusterManager & SharedData should be plugged together?
[14:10:13] <Narigo> normanm, i got too many things to think about already. like thinking about excuses why i can't think about that problem :P
[14:10:15] <normanm> oh it blocks ?
[14:10:18] <normanm> damnit
[14:10:25] <purplefox> and we can make it pluggable so it can be implemented by infinispan/hazelcast/whatever
[14:10:27] <jazzmtl> damn for real, im doomed! lol
[14:10:28] <normanm> Narigo, =P
[14:10:45] <purplefox> petermd: yep, clustermanager and shared data impl are related
[14:11:23] <purplefox> actually this will be a different api to shared data
[14:11:38] <purplefox> since shareddata api is currently sync gets/sets which won't work for distributed
[14:11:47] <lando23> yes, like an async clone of hazel
[14:12:12] <purplefox> lanceball: yes or infinispan - i believe infinispan already has an async get/set api
[14:12:27] <purplefox> so this is quite an important feature
[14:12:30] <normanm> purplefox, yes it has as far as i know
[14:12:34] <purplefox> shit, so much to do!
[14:12:42] <normanm> galderz, ^^ ?
[14:12:44] <purplefox> and we haven't even got half way through yet
[14:12:54] <lando23> vert.x needs async clones of: dns, hazel, every datastore known to man…am I missing anything?
[14:13:06] <lanceball> it does
[14:13:09] <Narigo> purplefox, so that means there will be a new async shared data api?
[14:13:12] <purplefox> luckily we can delegate much of the hard work to the system we plug in to
[14:13:13] <galderz> for shared data, maybe better try using javax.cache spec
[14:13:18] <purplefox> Narigo: yes
[14:13:24] <galderz> normanm, indeed, there's an async API
[14:13:32] <purplefox> galderz: ok, interesting
[14:13:40] <normanm> galderz, is it async ? (cache spec)
[14:13:47] <galderz> the spec bit would make it easy to switch providers
[14:13:59] <rlmw> didn't jsr 107 get dropped from ee 7 though?
[14:13:59] ubiquitousthey (~hrobinson@rrcs-24-173-16-66.sw.biz.rr.com) joined the channel.
[14:14:02] <galderz> normanm, purplefox, there's some bits that are, but not as much as Infinispan
[14:14:03] <rlmw> ie its still incomplete right
[14:14:13] <normanm> galderz, hm ok…
[14:14:22] <bytor99999> Evening. The biggest issue is eventual consistency with shared data. Or a notification system to alert that a particular piece of shared data is about to change. Because in most scenarios where you need shared data clustered, you want a guarantee that whichever node receives a message sees the most up-to-date data.
[14:14:26] <galderz> rlmw, it's 2nd public draft review - should be 1.0 pretty soon
[14:14:30] <galderz> already 0.9 of the spec
[14:14:41] <rlmw> ok
[14:14:41] <jazzmtl> hazelcast api: Asynchronously gets the given key. Future future = map.getAsync(key);
[14:14:47] <galderz> rlmw, it got dropped cos it was late, a lot of legality stuff...
[14:14:59] <purplefox> bytor99999: i think we start with get/set initially - setting listeners is more complex
[14:15:01] <galderz> between ehcache owners and oracle
[14:15:08] <rlmw> yeah
[14:15:12] <bytor99999> k
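So the clustered structures would get a new, fully asynchronous API rather than reusing SharedData's synchronous gets/sets. Something like the interface below, using the existing AsyncResult/Handler core types; the AsyncMap name and method set are assumptions for illustration:

```java
import org.vertx.java.core.AsyncResult;
import org.vertx.java.core.Handler;

// Sketch of an async, cluster-wide map. Every operation completes via a
// handler because the entry may live on (or travel to) another node, and
// blocking the event loop on a network hop is not an option.
public interface AsyncMap<K, V> {
    void get(K key, Handler<AsyncResult<V>> resultHandler);
    void put(K key, V value, Handler<AsyncResult<Void>> completionHandler);
    void remove(K key, Handler<AsyncResult<Void>> completionHandler);
}
```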
[14:15:32] <purplefox> next thing:
[14:15:33] <normanm> purplefox, you could also use memcached api
[14:15:43] <normanm> for the impl
[14:15:45] <normanm> ok
[14:15:47] <galderz> normanm, why use memcache API when you have a java spec?
[14:15:54] <normanm> galderz, also true
[14:15:58] <lando23> jazzmtl: does vert.x magically turn Futures into non-blocking callbacks, etc?
[14:15:58] <galderz> memcache is very limited in what you can do with it...
[14:16:06] <galderz> there are far more capable providers
[14:16:14] johnwa (~chatzilla@129.8.102.152) left IRC. (Ping timeout: 268 seconds)
[14:16:19] <purplefox> lanceball: java futures?
[14:16:31] <purplefox> lanceball: sorry that was for lando23
[14:16:33] <lanceball> purplefox: you mean lando23
[14:16:34] <lanceball> :)
[14:16:43] purplefox curses autocomplete
[14:16:59] <normanm> purplefox, if you would use OSX this would not happen
[14:16:59] <purplefox> ok.. flow control on event bus
[14:17:03] normanm runs away
[14:17:08] <galderz> purplefox, i guess it means completable futures...
[14:17:13] <lando23> purplefox: jazzmtl mentioned the hazel async api that uses Futures and I asked if they are compatible with vertx
[14:17:16] <jazzmtl> purplefox yes java concurrent future
[14:17:17] <galderz> futures that call back when it's completed...
[14:17:27] <purplefox> lanceball: old java futures were blocking not async
[14:17:32] <jazzmtl> I don't know, I'm asking.
[14:17:34] <normanm> galderz, you mean something like netty's ChannelFuture ?
[14:17:39] purplefox curses autocomplete again!
[14:17:42] <rlmw> there's completablefuture in java 8
[14:17:50] <rlmw> but I imagine you don't want to depend on 8
[14:17:50] <galderz> normanm, something like scala future/promises :)
[14:17:52] <purplefox> rlmw: yep in java8
[14:17:56] <normanm> galderz, =P same
[14:17:58] <purplefox> a proper future
[14:18:05] <rlmw> this would be really good
[14:18:10] <normanm> rlmw, jdk8 is too far away
[14:18:15] <rlmw> normanm: ack
[14:18:29] <rlmw> we have a lot of code in work's vertx codebase which would be /much/ better with promises
[14:18:57] <lando23> we too :)
[14:19:07] rezn8r (adae39cf@gateway/web/freenode/ip.173.174.57.207) joined the channel.
[14:19:19] <purplefox> rlmw: there is an argument to create a module which wraps the vert.x core apis with a promise style api rather than straight callbacks
[14:19:21] <purplefox> rlmw: this would be possible
[14:19:45] <jazzmtl> task.setExecutionCallback(new ExecutionCallback<String>() { public void done(Future<String> future) { .....
[14:19:45] nonanon (b03d521b@gateway/web/freenode/ip.176.61.82.27) joined the channel.
[14:19:57] <rlmw> but I think this stuff can be implemented as a module without touching the core
[14:20:02] <purplefox> rlmw: +1
[14:20:16] <normanm> yeah keep the core as minimal as possible
[14:20:19] <rlmw> ok
[14:20:22] <rlmw> that's fine with me
[14:20:29] <purplefox> it would be a pretty mechanical task actually to wrap it, much of it could be automated
[14:20:34] gercan (~gercan@redhat/jboss/gercan) left IRC. (Ping timeout: 264 seconds)
[14:20:39] <purplefox> right
[14:20:41] <lando23> indeed. we shouldn't impose a flow control approach on people
[14:20:51] <rlmw> and I'm happy to spend some time on this kind of a module
[14:20:57] <purplefox> rlmw: that would be cool
[14:20:59] <normanm> rlmw, cool :)
[14:21:04] <purplefox> let me add to list
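The wrapping rlmw volunteers for is mechanical in exactly this sense: each callback-taking core method gets a twin that returns a promise, and the adapter between the two styles is a few lines. A toy illustration of that adapter; the Promise type here is a stand-in, not an existing module:

```java
import org.vertx.java.core.AsyncResult;
import org.vertx.java.core.Handler;

// Minimal stand-in for the promise type such a module would provide.
// A real module would add chaining, composition, error propagation, etc.
public class Promise<T> {
    private Handler<T> onSuccess;
    private Handler<Throwable> onFailure;

    public Promise<T> then(Handler<T> h) { this.onSuccess = h; return this; }
    public Promise<T> otherwise(Handler<Throwable> h) { this.onFailure = h; return this; }

    // The mechanical step: adapt a core-style AsyncResult callback into
    // completion of this promise. Every wrapped API method would pass
    // this handler to the underlying core call.
    public Handler<AsyncResult<T>> asHandler() {
        return new Handler<AsyncResult<T>>() {
            public void handle(AsyncResult<T> ar) {
                if (ar.succeeded()) { if (onSuccess != null) onSuccess.handle(ar.result()); }
                else { if (onFailure != null) onFailure.handle(ar.cause()); }
            }
        };
    }
}
```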
[14:21:24] <purplefox> ok event bus flow control:
[14:21:37] <purplefox> right now there is no flow control on event bus
[14:21:43] <purplefox> so you can't use pump etc
[14:21:45] tigeba grins.
[14:21:58] <purplefox> and you have to roll your own flow control
[14:22:10] <purplefox> but we could provide a readstream/writestream for event bus too
[14:22:25] <normanm> purplefox, I guess this could make sense
[14:22:30] <normanm> to keep thing generic
[14:22:42] <purplefox> otherwise you can easily overwhelm event bus
[14:22:57] <lando23> i feel dumb but i don't understand how flow control relates to adding readstream/writestream for event bus too
[14:23:00] <purplefox> e.g. when you're writing benchmarks it's easy to send messages faster than the event bus can cope with so you just OOM
[14:23:20] <purplefox> flow control just allows you to push back on the sender
[14:23:31] <purplefox> so let's say you are sending to a particular address
[14:23:39] <normanm> purplefox, so basically we also need some kind of drainHandler etc ?
[14:23:41] <lando23> oh now i get it: flow control vs control flow. haha
[14:23:45] <purplefox> it's a way of getting the sender to pause if the receiver has got too many outstanding unread messages
[14:23:50] <petermd> could you just set a discard policy?
[14:23:58] <purplefox> petermd: discard is another option
[14:24:04] <lando23> yes, you're talking about back pressure etc. great.
[14:24:06] <purplefox> petermd: but is not so satisfactory
[14:24:12] <purplefox> lanceball: back pressure yes
[14:24:15] <normanm> purplefox, petermd I think we need to make it configurable
[14:24:23] <petermd> for real-time stuff - flow control isn't great - want to discard oldest instead of newest
[14:24:39] <purplefox> petermd: i think we can have configurable policy here
[14:24:47] <petermd> +1
[14:24:56] <purplefox> but again trying not to make it too complex is the hardest thing
[14:25:01] <lando23> more than anything we need it observable. i'm thinking a per verticle callback that triggers on a certain buffer saturation condition
[14:25:14] <tigeba> would the policy be per queue or global, etc?
[14:25:19] <petermd> sure - just want registerHandler to run in constant memory (or max memory)
[14:25:22] <purplefox> lando23: yep this is how readstream/writestream currently works
[14:25:39] <purplefox> tigeba: we can work out the details later
[14:26:12] ubiquitousthey (~hrobinson@rrcs-24-173-16-66.sw.biz.rr.com) left IRC. (Quit: ubiquitousthey)
[14:26:12] <tigeba> I probably have cases where I might want back pressure and discard in the same system
[14:26:12] <tigeba> not on the same queue/topic tho
[14:26:15] <tigeba> obviously :)
[14:26:19] <lando23> is there any sense in which message queues suffer from the same issues as streaming buffers?
[14:27:01] <purplefox> i think so. i mean you can use event bus to send buffers around the system anyway
[14:27:36] <lando23> what i'm worried about is that a given verticle will get too many messages and melt under the load, a queue will get full and start dropping data.
[14:27:49] <lando23> i'd like to be able to take action before that happens
[14:27:57] <purplefox> sure that's what flow control is all about
[14:28:04] <purplefox> it prevents you getting into that situation
[14:28:30] <lando23> oh okay i thought you were just talking readstream/writestream
[14:28:37] <Narigo> when the discussion reaches scala / async driver modules, could someone ping me, please? :)
[14:28:39] sgargan_ (~sgargan@176.61.82.27) joined the channel.
[14:28:49] <purplefox> readstream/writestream implements flow control
[14:29:15] <purplefox> Narigo: there is just one more core item on the list then we hit language modules :)
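The proposal, in terms of the machinery that already exists for sockets and files: give the event bus ReadStream/WriteStream endpoints so the standard pause/resume/drain dance (what Pump does internally) applies to bus traffic too. A sketch of that dance using the current stream interfaces:

```java
import org.vertx.java.core.Handler;
import org.vertx.java.core.buffer.Buffer;
import org.vertx.java.core.streams.ReadStream;
import org.vertx.java.core.streams.WriteStream;

public class BackPressureSketch {
    // The flow-control pattern an event-bus stream would plug into:
    // stop reading while the write queue is full, resume when it drains.
    public static void pump(final ReadStream<?> from, final WriteStream<?> to) {
        from.dataHandler(new Handler<Buffer>() {
            public void handle(Buffer data) {
                to.write(data);
                if (to.writeQueueFull()) {        // receiver can't keep up
                    from.pause();                 // push back on the sender
                    to.drainHandler(new Handler<Void>() {
                        public void handle(Void v) {
                            from.resume();        // queue drained, carry on
                        }
                    });
                }
            }
        });
    }
}
```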
[14:29:28] <purplefox> ok last core item:
[14:29:33] <purplefox> security of event bus traffic
[14:29:42] <jazzmtl> sessionID
[14:29:48] <lando23> don't care :)
[14:29:52] <rlmw> so I think I raised this issue
[14:29:57] <normanm> ssl?
[14:30:06] <Narigo> purplefox, yeah and if i don't get something to eat now, everything will be gone i won't have anything to eat tonight ;)
[14:30:09] <purplefox> i don't really care too much either but some people seem to
[14:30:27] <rlmw> Usecase: if you want to deploy vertx on different cloud services traffic can be going unsecured between nodes of a cluster
[14:30:54] <lando23> vpn?
[14:31:02] <purplefox> rlmw: true but vert.x clusters are only really designed for small clusters
[14:31:02] <tigeba> I think stretching the event bus between cloud centers might be somewhat dubious
[14:31:04] <rlmw> I think SSL solves our usecase, and you can enable SSL for hazelcast, but the eventbus traffic itself still isn't secure
[14:31:16] <lando23> …and what tigeba said
[14:31:19] <purplefox> to join clusters some kind of bridging approach would be more appropriate imho
[14:31:27] <purplefox> tigeba: +1
[14:31:27] <petermd> some of the auth requests for eventbus are really about eventbusbridge i think
[14:31:35] <petermd> want to prevent clients addressing the whole eventbus
[14:31:46] <purplefox> well.. you can stretch the event bus, but as more than one vert.x cluster
[14:31:53] <karianna_> Yep - this is a concern for our customers - VPN is a workaround, but it has other perf and extra infra issues
[14:31:53] <purplefox> federation
[14:32:14] <rlmw> vpn has both performance and reliability issues
[14:32:32] <lando23> ha, what doesn't? :)
[14:32:41] <rlmw> I guess all I really want is a way to be able to SSL the eventbus traffic
[14:32:50] <purplefox> Ok people. This meeting is running on longer than I expected, and I think we have another 1-2 hours left to do. So... perhaps we should adjourn until another time?
[14:32:57] <purplefox> wdyt?
[14:33:02] <lando23> +1
[14:33:16] <normanm> +0
[14:33:43] <jazzmtl> I have one question about idle connections: is there a timeout to control them?
[14:33:44] <tigeba> is the security concern for the event bus wire level, or application level or both?
[14:33:51] <rlmw> can we finish this topic first ;)
[14:34:01] <jazzmtl> someone just opens a connection and sends nothing
[14:34:02] <purplefox> yep let's finish this topic
[14:34:08] <tcrawley> purplefox: I must leave soon, so adjourning would be fine for me. unless you want to talk about mod-lang-clojure *right now*
[14:34:13] <lanceball> purplefox: I'm OK breaking after this topic, as long as we don't let the rest drop :)
[14:34:20] <jordanhalterman> agnostic
[14:34:35] nonanon (b03d521b@gateway/web/freenode/ip.176.61.82.27) left IRC. (Quit: Page closed)
[14:34:41] <lando23> an event bus bridge with SSL support sounds like a solid mod idea to me
[14:34:44] <purplefox> yep i think 90 mins is long enough for a meeting, don't want everyone to fall asleep and it's getting late in the day in Europe
[14:34:53] <rlmw> tigeba: I think it's a concern for the wire protocol
[14:35:21] <tigeba> roger
[14:35:46] <purplefox> ok so let's adjourn
[14:35:52] <purplefox> i am not available tomorrow
[14:35:56] ubiquitousthey (~hrobinson@rrcs-24-173-16-66.sw.biz.rr.com) joined the channel.
[14:36:01] <purplefox> can we continue next monday, same time?
[14:36:17] <tcrawley> works for me
[14:36:18] <lanceball> purplefox: works for me
[14:36:21] <tigeba> works for me
[14:36:27] <purplefox> once we get through the rest, i will write up all the tasks and we can try and share them out amongst everyone
[14:36:29] <normanm> purplefox, ok for me
[14:36:34] <jordanhalterman> yep
[14:36:38] <jazzmtl> anything to discuss specifically for the next time?
[14:36:48] <jazzmtl> ok
[14:36:50] <purplefox> sorry guys for not getting everything done. but there is a lot as you can see :)
[14:37:00] <purplefox> next time lets start with language modules
[14:37:00] <tigeba> seems like pretty good progress to me
[14:37:03] <tigeba> there were a lot of points
[14:37:10] <purplefox> scala, clojure, php, JS, nodyn etc
[14:37:21] <jazzmtl> ok
[14:37:39] <galderz> purplefox, i'm not around on Monday
[14:38:02] <jazzmtl> log on irc and read the conversation later?
[14:38:52] <karianna_> purplefox: can we revisit security first thing next meeting? Still unresolved for us..
[14:39:07] <jazzmtl> purplefox, norman, everyone who contributed to this fantastic project, just wanted to say thank you! :)
[14:39:13] jordanhalterman (~jordanhal@ip68-109-68-221.oc.oc.cox.net) left IRC. (Quit: jordanhalterman)
[14:39:22] <purplefox> karianna_: sure
[14:39:24] millrossjez (~millrossj@host86-152-6-68.range86-152.btcentralplus.com) left IRC. (Remote host closed the connection)
[14:39:30] <purplefox> or we can continue here for a while if you want
[14:39:40] chirino (~chirino@c-98-219-101-186.hsd1.fl.comcast.net) left IRC. (Quit: Computer has gone to sleep.)
[14:40:00] <jazzmtl> have a good day/evening everyone! :)
[14:40:13] jazzmtl (45a58d0f@gateway/web/freenode/ip.69.165.141.15) left IRC. (Quit: Page closed)
[14:40:35] <karianna_> Sure - so in order to enable SSL for the event bus...
[14:40:52] <purplefox> Thanks everyone
[14:40:53] <bytor99999> Is there going to be a transcript of tonight?
[14:41:03] <purplefox> bytor99999: I'll copy it
[14:41:06] <normanm> yeah thanks too all
[14:41:08] <bytor99999> Thanks Tim.
[14:41:15] <purplefox> bytor99999: np
[14:41:23] whaley (~whaley@96-37-23-93.dhcp.gnvl.sc.charter.com) joined the channel.
[14:41:38] <bytor99999> Wish I was able to be here the whole time. Was in the Zurich Train Station trying to get a Mifi for internet access over the next few days.
[14:41:59] <purplefox> karianna_: yep ssl for event bus
[14:42:12] <purplefox> karianna_: what is your use case here?
[14:42:24] <normanm> purplefox, does hazelcast support ssl ?
[14:42:24] <purplefox> karianna_: running on untrusted network?
[14:42:31] <purplefox> normanm: yep it does
[14:42:35] <normanm> ok
[14:42:43] <bytor99999> But based on our conversations last night at the pub, karianna_ and rlmw seem to have me covered on what issues I was hoping for.
[14:42:55] <rlmw> heh
[14:42:58] <karianna_> Yep - so we're over the open internet
[14:43:32] <Sticky> isn't the event bus traffic shared via plain sockets from netty? So wouldn't adding ssl as an option on those sockets be relatively easy?
[14:43:50] <purplefox> Sticky: yep there are two levels:
[14:44:02] <purplefox> 1) the hazelcast traffic (mainly topology) would need to be encrypted
[14:44:19] <purplefox> 2) the actual cluster connections which are straight TCP maintained by vert.x would need to be ssl
[14:44:37] whaley (~whaley@96-37-23-93.dhcp.gnvl.sc.charter.com) left IRC. (Client Quit)
[14:44:38] <Sticky> hazelcast out of the box has encryption options so that is less of a concern
[14:44:59] <Narigo> so on monday we'll talk about lang modules and async drivers? :) i can stay at the dinner table? :D
[14:44:59] <Narigo> (get back there)
[14:45:00] <rlmw> I think hazelcast can be set with a property.
[14:45:43] <purplefox> Narigo: yep
[14:46:38] <purplefox> so i guess we would need to configure nodes with certificates
[14:46:54] <purplefox> it's kind of ugly that you would have to configure hazelcast ssl separately
[14:47:13] <normanm> purplefox, yeah would be nice to unify it somehow
[14:47:29] <purplefox> normanm: maybe we can programmatically configure hazelcast
[14:47:45] <purplefox> but this is always going to be implementation specific
[14:47:45] <normanm> purplefox, yeah or just set a System property via code ?
[14:47:59] <purplefox> e.g. if someone is using infinispan it would be done differently
[14:48:01] <normanm> purplefox, yeah we'd need some api
[14:49:32] bytor99999 (~bytor9999@213.3.49.201) left IRC. (Ping timeout: 268 seconds)
[14:49:33] <purplefox> yep maybe when we abstract out the pluggable cluster manager we can include some way of providing keystores
[14:49:34] toddnine (~apigee@65.87.18.18) left IRC. (Quit: toddnine)
[14:49:40] <normanm> purplefox, +1
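Pulling that together: once the cluster manager is behind an SPI, node security could be one set of keystore options that the event bus uses for its own TCP links and that each cluster-manager impl translates into its own terms. The option names below are invented purely to show the shape:

```java
// Illustrative only: a unified security config the pluggable cluster
// manager could accept. None of these option names exist today.
public class ClusterSecurityConfigSketch {
    public String keyStorePath = "cluster-keystore.jks";      // node cert + private key
    public String keyStorePassword;                           // supplied at startup
    public String trustStorePath = "cluster-truststore.jks";  // peers we accept
    public boolean sslEventBus = true;           // SSL on the vert.x TCP cluster links
    public boolean encryptClusterManager = true; // passed through to hazelcast/infinispan/...
}
```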
[14:50:16] galderz (~galder@redhat/jboss/galderz) left IRC. (Ping timeout: 264 seconds)
[14:51:24] <purplefox> ok guys I have to go now, and cook dinner
[14:51:32] <purplefox> thanks everyone
[14:51:35] <normanm> purplefox, enjoy
[14:51:36] <purplefox> same time next monday