@slackpad
Last active September 27, 2020 17:57

NOTE - An updated and more complete example can be found here.

Start a new Consul server

Here's acl.json:

{
  "acl_datacenter": "dc1",
  "acl_master_token": "root",
  "acl_default_policy": "deny"
}

Start the server. Note that we get the expected ACL errors since the server can't register itself with the catalog yet:

$ ./consul agent -server -data-dir=/tmp/consul-node-1 -bootstrap -config-file=acl.json
==> WARNING: Bootstrap mode enabled! Do not enable unless necessary
==> Starting Consul agent...
==> Consul agent running!
           Version: 'v0.8.1'
           Node ID: '774af59f-23d8-9255-a00c-066067a5db52'
         Node name: 'workpad.local'
        Datacenter: 'dc1'
            Server: true (bootstrap: true)
       Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600)
      Cluster Addr: 10.183.189.160 (LAN: 8301, WAN: 8302)
    Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
             Atlas: <disabled>

==> Log data will now stream in as it occurs:

    2017/05/05 08:19:28 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:10.183.189.160:8300 Address:10.183.189.160:8300}]
    2017/05/05 08:19:28 [INFO] raft: Node at 10.183.189.160:8300 [Follower] entering Follower state (Leader: "")
    2017/05/05 08:19:28 [INFO] serf: EventMemberJoin: workpad.local 10.183.189.160
    2017/05/05 08:19:28 [INFO] consul: Adding LAN server workpad.local (Addr: tcp/10.183.189.160:8300) (DC: dc1)
    2017/05/05 08:19:28 [INFO] serf: EventMemberJoin: workpad.local.dc1 10.183.189.160
    2017/05/05 08:19:28 [INFO] consul: Handled member-join event for server "workpad.local.dc1" in area "wan"
    2017/05/05 08:19:34 [WARN] raft: Heartbeat timeout from "" reached, starting election
    2017/05/05 08:19:34 [INFO] raft: Node at 10.183.189.160:8300 [Candidate] entering Candidate state in term 2
    2017/05/05 08:19:34 [INFO] raft: Election won. Tally: 1
    2017/05/05 08:19:34 [INFO] raft: Node at 10.183.189.160:8300 [Leader] entering Leader state
    2017/05/05 08:19:34 [INFO] consul: cluster leadership acquired
    2017/05/05 08:19:34 [INFO] consul: New leader elected: workpad.local
    2017/05/05 08:19:34 [INFO] consul: member 'workpad.local' joined, marking health alive
    2017/05/05 08:19:35 [WARN] agent: Service 'consul' registration blocked by ACLs
    2017/05/05 08:19:35 [WARN] agent: Node info update blocked by ACLs
    2017/05/05 08:19:58 [ERR] agent: coordinate update error: Permission denied
    ...

Create a token for the server

$ curl \
    --request PUT \
    --data \
'{
  "Name": "Server Token",
  "Type": "client",
  "Rules": "node \"workpad.local\" { policy = \"write\" } service \"consul\" { policy = \"write\" }"
}' http://127.0.0.1:8500/v1/acl/create?token=root

{"ID":"fe3b8d40-0ee0-8783-6cc2-ab1aa9bb16c1"}

Configure server with token

Update acl.json with the token from the previous step:

{
  "acl_datacenter": "dc1",
  "acl_master_token": "root",
  "acl_agent_token": "fe3b8d40-0ee0-8783-6cc2-ab1aa9bb16c1",
  "acl_default_policy": "deny"
}

Stop and restart the Consul server. Note that it can now register itself and the consul service without any ACL errors:

$ ./consul agent -server -data-dir=/tmp/consul-node-1 -bootstrap -config-file=acl.json
==> WARNING: Bootstrap mode enabled! Do not enable unless necessary
==> Starting Consul agent...
==> Consul agent running!
           Version: 'v0.8.1'
           Node ID: '774af59f-23d8-9255-a00c-066067a5db52'
         Node name: 'workpad.local'
        Datacenter: 'dc1'
            Server: true (bootstrap: true)
       Client Addr: 127.0.0.1 (HTTP: 8500, HTTPS: -1, DNS: 8600)
      Cluster Addr: 10.183.189.160 (LAN: 8301, WAN: 8302)
    Gossip encrypt: false, RPC-TLS: false, TLS-Incoming: false
             Atlas: <disabled>

==> Log data will now stream in as it occurs:

    2017/05/05 08:25:18 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:10.183.189.160:8300 Address:10.183.189.160:8300}]
    2017/05/05 08:25:18 [INFO] raft: Node at 10.183.189.160:8300 [Follower] entering Follower state (Leader: "")
    2017/05/05 08:25:18 [INFO] serf: EventMemberJoin: workpad.local 10.183.189.160
    2017/05/05 08:25:18 [WARN] serf: Failed to re-join any previously known node
    2017/05/05 08:25:18 [INFO] consul: Adding LAN server workpad.local (Addr: tcp/10.183.189.160:8300) (DC: dc1)
    2017/05/05 08:25:18 [INFO] serf: EventMemberJoin: workpad.local.dc1 10.183.189.160
    2017/05/05 08:25:18 [WARN] serf: Failed to re-join any previously known node
    2017/05/05 08:25:18 [INFO] consul: Handled member-join event for server "workpad.local.dc1" in area "wan"
    2017/05/05 08:25:25 [ERR] agent: failed to sync remote state: No cluster leader
    2017/05/05 08:25:26 [WARN] raft: Heartbeat timeout from "" reached, starting election
    2017/05/05 08:25:26 [INFO] raft: Node at 10.183.189.160:8300 [Candidate] entering Candidate state in term 3
    2017/05/05 08:25:26 [INFO] raft: Election won. Tally: 1
    2017/05/05 08:25:26 [INFO] raft: Node at 10.183.189.160:8300 [Leader] entering Leader state
    2017/05/05 08:25:26 [INFO] consul: cluster leadership acquired
    2017/05/05 08:25:26 [INFO] consul: New leader elected: workpad.local
    2017/05/05 08:25:28 [INFO] agent: Synced service 'consul'
    ...
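
As an extra check (not part of the original steps), you can confirm the registration using the master token, either through the CLI or the catalog API:

$ ./consul members -token=root
$ curl http://127.0.0.1:8500/v1/catalog/service/consul?token=root
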
@beardedeagle commented Jun 5, 2017

This only appears to kinda work? When I implement this change I cannot issue consul members, as it returns nothing and consul info returns this:

# consul info
Error querying agent: Unexpected response code: 403 (Permission denied)

For now I have had to entirely disable the v8 ACL specifics in order to make my quorum run as expected.

@slackpad commented Jun 8, 2017

@beardedeagle You'd need to run consul members -token=<token> where the token has node read rights to the nodes you want to see, or you need to give your anonymous token read access to all nodes. To run consul info you need to pass a token that has agent read access, or add that to the anonymous token as well.
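
For reference, a rough sketch of that second option (granting the anonymous token read access to nodes and agents) using the legacy ACL update endpoint; treat the exact rules as an example rather than a recommendation, since anything granted to anonymous applies to every request made without a token:

$ curl \
    --request PUT \
    --data \
'{
  "ID": "anonymous",
  "Type": "client",
  "Rules": "node \"\" { policy = \"read\" } agent \"\" { policy = \"read\" }"
}' http://127.0.0.1:8500/v1/acl/update?token=root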

@beardedeagle

Ah, ok. Maybe an opportunity for updating the docs. Thanks.

@slackpad commented Jun 9, 2017

@beardedeagle good call - we should add the same ACL tables as we have for the APIs. Opened hashicorp/consul#3134 to track that.

@discointheair

Hi @slackpad
What are the consequences if I use the master token as the agent token?

{
  "acl_datacenter": "dc1",
  "acl_master_token": "root",
  "acl_agent_token": "root",
  "acl_default_policy": "deny"
}

@slackpad commented Jul 9, 2017

@discointheair that should be fine for Consul servers. They can get by with a much more restrictive token, but since they are servers that's not a huge exposure. I wouldn't propagate the master token out to all the client agents, though.

@discointheair commented Oct 23, 2017

Hi @slackpad
I still use this “solution” for my servers (not clients).

"acl_master_token": "root",
"acl_agent_token": "root",

But I still don't understand why a server needs the acl_agent_token. The server already uses the most privileged ACL token possible. Why does it need this token to call itself, and why is there no internal mapping of acl_agent_token=acl_master_token when the node is a server?

@danlsgiga

I’m using the master ACL token as the acl_agent_token and I constantly get Permission denied on v1/agent/self in the server logs... why is that, and is this behaviour expected?

@EugenMayer

I created a complete automation of such a stack start, with all aspects including policy: deny

https://github.com/EugenMayer/consul-docker-stability-tests/tree/master/acls

I also ran into the same inconvenience:

https://github.com/EugenMayer/consul-docker-stability-tests/blob/master/acls/bin/server_acl_agent_token.sh#L3

This setup does:

  1. bootstraps 1 server and enables ACL lockdown
  2. configures no anonymous access and generates an acl_token for the client agents
  3. shares the acl_token with client1 and client2, which start but wait for the server to provide them a token so they can register

Those scripts should usually go into their own image which just adds them on top of FROM consul, and that's about it.

Hope that helps somebody; it's more or less the above, just with full automation and zero interaction on the initial stack start and any further starts.
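
For anyone wiring this up by hand instead of using the scripts above, a minimal sketch of what the client-side ACL config might look like once the server has generated a token for it (the token value is a placeholder, and the exact rights you grant are up to you - node write for the client's own node name is the usual minimum):

{
  "acl_datacenter": "dc1",
  "acl_agent_token": "<token with node write rights for this client's node name>"
}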

@Nmishin commented Feb 25, 2020

Hi @slackpad,

Could you please help me here - do I understand correctly that after I switched acl_enforce_version_8 from false to true, the single Consul ACL master token was split into three different tokens: acl_master_token, acl_agent_token and acl_agent_master_token?

And for the initial bootstrap of the Consul cluster I need to use the acl_agent_master_token, and after that switch to the acl_master_token?
