@smreed
Forked from bketelsen/nsqadmin-service.json
Last active August 29, 2015 14:10
This gist contains all the manifests you'll need to run a redundant, fault-tolerant NSQ cluster on Kubernetes. NSQ Admin's HTTP interface is available on port 14171 on any node. There's an issue with NSQ Admin when running more than one lookupd service, so I recommend running just one lookupd until I figure it out.
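To bring the cluster up, save each manifest below to its own file and create it with kubectl. A minimal sketch, assuming hypothetical file names (the gist itself doesn't name them); lookupd goes first so nsqd has something to register with:

# Hypothetical file names; save each JSON manifest below into its own file first.
kubectl create -f nsqlookupd-tcp-service.json
kubectl create -f nsqlookupd-http-service.json
kubectl create -f nsqlookupd-controller.json
kubectl create -f nsqd-tcp-service.json
kubectl create -f nsqd-http-service.json
kubectl create -f nsqd-controller.json
kubectl create -f nsqadmin-service.json
kubectl create -f nsqadmin-controller.json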
{
  "id": "nsqadmin-http",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "port": 14171,
  "containerPort": 4171,
  "protocol": "TCP",
  "selector": { "name": "nsqadmin" },
  "createExternalLoadBalancer": true
}
{
  "id": "nsqadminController",
  "kind": "ReplicationController",
  "apiVersion": "v1beta1",
  "desiredState": {
    "replicas": 1,
    "replicaSelector": {"name": "nsqadmin"},
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "nsqadmin",
          "containers": [{
            "name": "nsqadmin",
            "image": "smreed/nsqadmin",
            "ports": [{"containerPort": 4171}]
          }]
        }
      },
      "labels": {"name": "nsqadmin"}
    }
  },
  "labels": {"name": "nsqadmin"}
}
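Once the nsqadmin pod is scheduled, the UI can be spot-checked from outside the cluster. A quick sanity check, with <node-address> standing in for any node's IP (the service maps node port 14171 to the container's 4171):

# <node-address> is a placeholder for any Kubernetes node's address.
curl -i http://<node-address>:14171/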
{
  "id": "nsqd-http",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "port": 14151,
  "containerPort": 4151,
  "protocol": "TCP",
  "selector": { "name": "nsqd" }
}
{
  "id": "nsqd-tcp",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "port": 14150,
  "containerPort": 4150,
  "protocol": "TCP",
  "selector": { "name": "nsqd" }
}
{
  "id": "nsqdController",
  "kind": "ReplicationController",
  "apiVersion": "v1beta1",
  "desiredState": {
    "replicas": 5,
    "replicaSelector": {"name": "nsqd"},
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "nsqd",
          "volumes": [{"name": "nsqdpersistence"}],
          "containers": [{
            "name": "nsqd",
            "image": "smreed/nsqd",
            "volumeMounts": [{"name": "nsqdpersistence", "mountPath": "/data"}],
            "ports": [
              {"containerPort": 4150},
              {"containerPort": 4151}
            ]
          }]
        }
      },
      "labels": {"name": "nsqd"}
    }
  },
  "labels": {"name": "nsqd"}
}
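With the nsqd pods up, the data path can be verified by publishing through the nsqd-http service. A minimal check, with <service-ip> standing in for the service's portal IP; /pub is nsqd's standard HTTP publish endpoint (older nsqd versions call it /put):

# Publish a test message to the "test" topic via the nsqd-http service.
curl -d 'hello world' 'http://<service-ip>:14151/pub?topic=test'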
{
  "id": "nsqlookupd-http",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "port": 14161,
  "containerPort": 4161,
  "protocol": "TCP",
  "selector": { "name": "nsqlookupd" }
}
{
  "id": "nsqlookupd-tcp",
  "kind": "Service",
  "apiVersion": "v1beta1",
  "port": 14160,
  "containerPort": 4160,
  "protocol": "TCP",
  "selector": { "name": "nsqlookupd" }
}
{
  "id": "nsqlookupdController",
  "kind": "ReplicationController",
  "apiVersion": "v1beta1",
  "desiredState": {
    "replicas": 1,
    "replicaSelector": {"name": "nsqlookupd"},
    "podTemplate": {
      "desiredState": {
        "manifest": {
          "version": "v1beta1",
          "id": "nsqlookupd",
          "containers": [{
            "name": "nsqlookupd",
            "image": "nsqio/nsqlookupd",
            "ports": [
              {"containerPort": 4160},
              {"containerPort": 4161}
            ]
          }]
        }
      },
      "labels": {"name": "nsqlookupd"}
    }
  },
  "labels": {"name": "nsqlookupd"}
}
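To confirm that each nsqd has registered with lookupd, query nsqlookupd's HTTP API through the nsqlookupd-http service; /nodes is a standard nsqlookupd endpoint that lists registered producers. Again, <service-ip> is a placeholder for the service's portal IP:

# List the nsqd producers that have registered with this lookupd.
curl 'http://<service-ip>:14161/nodes'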
@mindscratch

I've got the same setup, except I'm running on private VMs (not a cloud provider), so I don't use createExternalLoadBalancer. I stood up a single nsqlookupd, nsqadmin (started with the lookupd service IP and port, via environment variables), and nsqd (started with the lookupd service IP and port, via environment variables).

I published a message to a topic using curl, which worked. I then started up an nsq_to_file and pointed it at nsqlookupd. nsq_to_file got back a message from nsqlookupd that told it about the nsqd; however, it gave back the hostname of the nsqd container (a random ID generated by Kubernetes), and nsq_to_file can't "Dial" it (i.e., resolve the hostname).

Did you run into this? If so, any solution?
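For reference, the consumer invocation was along these lines. <service-ip> stands in for the nsqlookupd-http service's portal IP; -topic, -output-dir, and -lookupd-http-address are standard nsq_to_file flags:

# Consume the "test" topic via lookupd discovery and write messages to files.
nsq_to_file -topic=test -output-dir=/tmp \
  -lookupd-http-address=<service-ip>:14161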

@aureliensibiril

I have the exact same issue and am still trying to find a workaround (I'm running Kubernetes on Google Container Engine).

@aureliensibiril

@mindscratch I just built an image of nsqd based on @smreed's. It solves the issue by broadcasting the IP instead of the hostname.

Use asibiril/nsqd instead of smreed/nsqd.
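For anyone building their own image instead, the fix boils down to starting nsqd with the pod's IP as its broadcast address. A hypothetical entrypoint sketch: the NSQLOOKUPD_TCP_* variables follow Kubernetes' service environment-variable convention for a service named nsqlookupd-tcp, hostname -i is assumed to return the pod IP, and /data matches the volume mount in the nsqd manifest above:

#!/bin/sh
# Register the pod IP with lookupd instead of the container hostname,
# so consumers outside the pod can actually dial this nsqd.
POD_IP=$(hostname -i)
exec /nsqd \
  -broadcast-address="$POD_IP" \
  -data-path=/data \
  -lookupd-tcp-address="$NSQLOOKUPD_TCP_SERVICE_HOST:$NSQLOOKUPD_TCP_SERVICE_PORT"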

@gravis

gravis commented Jun 18, 2015

I don't see how this could work in production... The volumes are not persistent, so they will be destroyed along with the pods. If you set up a persistent volume instead, all of the instances will read/write the same filesystem, leading to file corruption.
Also, since the nsqd instances are load-balanced by the service, clients will see them as a single nsqd. There's no coordination or replication between nsqd instances, so a consumer will probably see nothing until the load balancer happens to land on the instance holding a message, which seems pretty inefficient.
I think the only solution is to have a service and a persistent volume per instance, which is a lot more configuration, and it isn't scalable.
What do you think?
