Milek7/attack.md (secret gist), last active January 11, 2021
Attack

I just realized that the security model as implemented in the PR is broken when the same keypair is reused on multiple servers.

  1. The victim secures their company on a legitimate server with their pubkey.
  2. An attacker lures the victim into connecting to a malicious server.
  3. The malicious server simultaneously connects to the legitimate server and proxies the KEYAUTH packets between the two.
  4. After KEYAUTH completes, the malicious server is left with an authorized socket and can access the company. Profit.

Of course this also applies to rcon access if we migrate that to pubkeys too.

Possible mitigations:

  1. Generate a unique keypair for each IP/port (effectively permanently-enabled privacy mode). Cons: friends lists become nonviable, and if somebody hosts a server on a dynamic IP it could periodically lock people out when the IP changes.
  2. The client includes the server's target IP/port in the signature, and the server verifies it (see the sketch after this list). This avoids the cons of mitigation 1, but is difficult to implement in practice: the server might not know all valid IPs through which it is reachable.
  3. Implement a secure key exchange, and authenticate and encrypt all communication.
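
To illustrate mitigation 2, here is a minimal sketch. It assumes libhydrogen's generic signing API (hydro_sign_create / hydro_sign_verify); the message layout, the context string and the helper names are hypothetical, not what the PR actually does.

```cpp
/*
 * Sketch of mitigation 2: the client signs the server challenge together with
 * the address/port it dialed, so a signature relayed by a proxy does not
 * verify for a different address. Message layout and names are hypothetical.
 */
#include <hydrogen.h>
#include <cstdint>
#include <string>
#include <vector>

static const char KEYAUTH_CONTEXT[] = "keyauth0"; /* hydro_sign contexts are 8 bytes */

/* Bind the challenge to the address the client actually connected to. */
static std::vector<uint8_t> BuildAuthMessage(const uint8_t challenge[32], const std::string &addr)
{
	std::vector<uint8_t> msg(challenge, challenge + 32);
	msg.insert(msg.end(), addr.begin(), addr.end());
	return msg;
}

/* Client side: sign challenge + the "host:port" it dialed. */
void ClientSign(const uint8_t challenge[32], const std::string &dialed_addr,
                const hydro_sign_keypair &kp, uint8_t sig[hydro_sign_BYTES])
{
	std::vector<uint8_t> msg = BuildAuthMessage(challenge, dialed_addr);
	hydro_sign_create(sig, msg.data(), msg.size(), KEYAUTH_CONTEXT, kp.sk);
}

/* Server side: the signature only verifies if the client dialed one of the
 * addresses the server considers its own; a proxy's address will not match. */
bool ServerVerify(const uint8_t challenge[32], const std::vector<std::string> &my_addresses,
                  const uint8_t client_pk[hydro_sign_PUBLICKEYBYTES], const uint8_t sig[hydro_sign_BYTES])
{
	for (const std::string &addr : my_addresses) {
		std::vector<uint8_t> msg = BuildAuthMessage(challenge, addr);
		if (hydro_sign_verify(sig, msg.data(), msg.size(), KEYAUTH_CONTEXT, client_pk) == 0) return true;
	}
	return false;
}
```

The my_addresses list is exactly the weak spot noted above: the server might not know every address it is reachable through.
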
@TrueBrain

I took some effort to look into how libhydrogen does its XX-variant secure key exchange, and it is basically a pretty standard 3-way DH key exchange. An ephemeral key is generated for the encryption on both the client and the server, and the second and third packets are encrypted with the "static" keypair to prove that the public key you get at the end of this exchange is really owned by the other party. (On a side note: we need to document this somewhere, just so people can read how security is done.)
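
For reference, a minimal sketch of that XX flow with libhydrogen's kx API, as I understand its documentation (no PSK; error handling and the actual packet transport between client and server are omitted):

```cpp
/* Sketch of libhydrogen's XX key exchange, based on its documented
 * hydro_kx_xx_* API; both "sides" run in one function here for brevity. */
#include <hydrogen.h>

void ExampleXXExchange()
{
	hydro_kx_keypair client_static, server_static;
	hydro_kx_keypair_gen(&client_static);             // long-term identity keys
	hydro_kx_keypair_gen(&server_static);

	hydro_kx_state client_st, server_st;
	hydro_kx_session_keypair client_session, server_session;
	uint8_t client_peer_pk[hydro_kx_PUBLICKEYBYTES];  // server's static pubkey, learned in step 3
	uint8_t server_peer_pk[hydro_kx_PUBLICKEYBYTES];  // client's static pubkey, learned in step 4

	uint8_t packet1[hydro_kx_XX_PACKET1BYTES];
	uint8_t packet2[hydro_kx_XX_PACKET2BYTES];
	uint8_t packet3[hydro_kx_XX_PACKET3BYTES];

	/* 1. Client -> Server: ephemeral public key. */
	hydro_kx_xx_1(&client_st, packet1, nullptr);

	/* 2. Server -> Client: its ephemeral key plus its static key, encrypted. */
	hydro_kx_xx_2(&server_st, packet2, packet1, nullptr, &server_static);

	/* 3. Client -> Server: the client's static key, encrypted; the client now
	 *    has the session keys and the server's static public key. */
	hydro_kx_xx_3(&client_st, &client_session, packet3, client_peer_pk,
	              packet2, nullptr, &client_static);

	/* 4. Server processes packet3; it now has the session keys and the
	 *    client's static public key (which would be used for authentication). */
	hydro_kx_xx_4(&server_st, &server_session, server_peer_pk, packet3, nullptr);

	/* client_session.tx pairs with server_session.rx, and vice versa. */
}
```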

This means that after this exchange the server knows the public key of the client, assuming of course there are no implementation errors in the encryption (looking at libhydrogen, I wouldn't expect any) and no implementation errors on our side (I have briefly looked over your code; it would need a further look to make sure that, with all the quirks OpenTTD tends to do, the flow is sufficiently guarded).

Important to note, for those who do not know what DH does: it does not authenticate the client nor the server. It merely facilitates a secure key exchange (a so-called "non-authenticated key-agreement protocol") which, in this case, is used for channel encryption and in-game authentication. For the public key of the client, not much authentication is required, as that is nearly impossible anyway. We simply use the public key as-is, and if it has a match in the access list, the linked role is granted.
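
Concretely, that server-side step can be as simple as a lookup; the key type and role names below are made up for illustration only:

```cpp
/* Hypothetical access-list lookup: the key encoding and the role names are
 * illustrative, not what the PR implements. */
#include <array>
#include <cstdint>
#include <map>

enum class Role { NONE, COMPANY_OWNER, MODERATOR, ADMIN };

using PubKey = std::array<uint8_t, 32>;      // the client's static public key

/* Filled from the server's access list (e.g. a config file). */
static std::map<PubKey, Role> _access_list;

Role GetRoleForClient(const PubKey &client_pk)
{
	auto it = _access_list.find(client_pk);
	return it != _access_list.end() ? it->second : Role::NONE;  // unknown keys get no role
}
```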

However, the lack of server authentication does allow for certain attacks, which are variations on the above. To be clear: the above scenario is mitigated by this method, but minor variations on it are not.

One such scenario is very simple: an attacker sets up a proxy server, where he terminates all clients on one side and sets up new connections (per client, of course) to the real server.
Next, he replaces the link on a website so that, if you click it, you end up on the attacker's proxy, which routes your traffic to the right server. Especially as OpenTTD is most likely going to get protocol support, where openttd:// opens the game and connects to the indicated server, it is to be expected that people will promote their server via a "Join now" button on their website. So if an attacker manages to change this, via either social engineering or hacking, he can intercept all clients. Another example is an IP given on Discord, etc.
In the beginning, nothing changes. All existing trust is not compromised. All new trust, however, is.

To put this in other words (sorry if you already got the point; miscommunication is easy, so I am just using way too many words for this. Feel free to skip to the next line if you already got the point :D):

  • Client connects to proxy, exchanges public keys.
  • The proxy generates a new secret/public key specific for this client, and connects to the real server.
  • The server sees a new player and lets him in.
  • Neither the client nor the server is aware someone is in between, as neither has authenticated the pubkey of the other (the server cannot, and the client didn't).
  • When the legit client disconnects, the proxy can at any moment reconnect to the server and play as if it were the client.
  • If this proxy is in the path long enough, it is also likely such players get assigned owner permissions or even RCON permissions, as the server admin is not aware there is an attacker in his path.
  • This attack works best if you are in front of a server from the start.

As said, this is just a slight variation on the scenario presented earlier. The attack does need a bit more patience, but both cases require effort and both cases need to target a specific server.

In case you, as a reader, got lost in this story, compare it with HTTPS: if you use a self-signed certificate on your server which I as a client cannot validate, anyone between you and me on the network path can terminate the encryption on both sides, and neither of us would be able to detect it. This is why we have CAs, signed certificates which clients should validate (!), and the like: to validate that the "pubkey" of the server is authentic. If you are curious about how realistic this is, take a look at https://mitmproxy.org/, which fully automates this MITM attack. So next time you see a Python application that sets verify=False on an SSL connection .. just shake your head and run.


So, as I did earlier, we have to balance this scenario: how likely do we consider it, what is the impact if it happens, and how much effort is the mitigation (as in, is it worth putting N effort into mitigating a risk with X chance and Z impact)?

In my opinion, especially in combination with protocol support (openttd://), I would consider this scenario slightly more likely. Still, the impact is very low. However, it would be a witch-hunt to track down what happened if it does occur .. so it might be worth putting effort into this.

The 3 mitigations mentioned earlier are still valid options for this, but I also still think creating our own CA might be a bit overkill. I am sure other solutions exist too; I would consider this a "the floor is open" situation :)

Good to know: this doesn't have to be resolved here and now. Pubkey validation can be added later to harden the security further. That would also give a bit of time to think it through properly without blocking the existing work from continuing. But I am sure you can balance this better than I can at the moment, so I leave this to you @Milek7.

Sadly, generating a new pubkey for every server you connect to (as a client) does not resolve this issue :( That would have been a simple, cheesy solution ;)

Anyway, just a random thought that popped into my mind. Let me know what you think, if you see the world differently, etc. Floor is yours :)

@Milek7 (Author) commented Jan 11, 2021

More words to follow later (on the encryption implementation approach and on ways to implement server pubkey trust), but quickly for now:

In my opinion this attack is highly improbable, contrary to the original attack described. Luring someone you know who has rcon access on a particular existing server onto your malicious server is one thing; somehow making them join the wrong server in the first place and then expecting that they will get rcon access there is quite another (assuming a trusted network). But obviously this is a subjective opinion.

Another take on this: consider the ideal scenario for the model that is currently used: the network is trusted, passwords have high entropy and are never reused. In that case, the KEYAUTH challenge scheme without DH is worse than the currently used passwords (the relay attack above still works, whereas a unique password would only leak to the malicious server itself), while the same scheme with DH and encryption is no worse than passwords. That's why I'm somewhat reluctant to go forward with the pubkey PR without encryption.

@TrueBrain

That's why I'm somewhat reluctant on going forward with pubkey PR without encryption.

Just so there is no confusion: this is completely up to you. My only request is that we process encryption as a separate PR, so we can do all kinds of testing to see that nothing breaks. What follows is not really for this gist, but I currently don't have another place to put it, so here we go :P (this just talks easier than IRC, honestly)

Lowering the MTU has consequences, as I believe some packets are nearly exactly full, and some other packets check the MTU size to split into multiple packets if needed. So we need to check those and make sure they still work properly. Nothing we cannot do, just something that requires time :) I believe one of the problem packets was the one you get when there are full companies with maximum-length custom names and presidents, or something silly like that.
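
To make the MTU point concrete: if every packet gets wrapped in an authenticated-encryption box, the usable payload shrinks by the crypto overhead, and any packet that currently packs close to the limit can overflow. The numbers below (a SEND_MTU of 1460 and a 36-byte libhydrogen secretbox header) are my assumptions, not verified against the PR:

```cpp
/* Back-of-the-envelope payload calculation; SEND_MTU and the libhydrogen
 * header size are assumptions, check them against the actual code. */
#include <cstddef>
#include <cstdio>

constexpr size_t SEND_MTU        = 1460;  // OpenTTD's packet size limit (assumed)
constexpr size_t CRYPTO_OVERHEAD = 36;    // hydro_secretbox_HEADERBYTES (assumed)

int main()
{
	constexpr size_t usable = SEND_MTU - CRYPTO_OVERHEAD;
	printf("Payload per packet without encryption: %zu bytes\n", SEND_MTU);
	printf("Payload per packet with encryption:    %zu bytes\n", usable);
	/* Any packet that currently packs close to SEND_MTU bytes (e.g. the
	 * company-info packet with maximum-length names) needs re-checking. */
	return 0;
}
```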

Another thing we have to validate, but no longer have an active OS for, is Big Endian machines. I think it should just work, as all these operations are done on bytes as far as I could see, but that doesn't mean we shouldn't validate it. I guess QEMU with ppc64 will do the trick.
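
On the endianness point, the pattern to look for is explicit shift-based (de)serialization, which produces the same bytes on the wire regardless of host byte order; a quick illustration (not OpenTTD's actual Packet code, just the pattern):

```cpp
/* Endian-independent (de)serialization via explicit shifts; memcpy of the
 * in-memory representation would be endian-dependent, this is not. */
#include <cstdint>

void WriteUint32(uint8_t *buf, uint32_t value)
{
	/* Always emits the same byte order on the wire, regardless of host endianness. */
	buf[0] = static_cast<uint8_t>(value);
	buf[1] = static_cast<uint8_t>(value >> 8);
	buf[2] = static_cast<uint8_t>(value >> 16);
	buf[3] = static_cast<uint8_t>(value >> 24);
}

uint32_t ReadUint32(const uint8_t *buf)
{
	return static_cast<uint32_t>(buf[0])
	     | static_cast<uint32_t>(buf[1]) << 8
	     | static_cast<uint32_t>(buf[2]) << 16
	     | static_cast<uint32_t>(buf[3]) << 24;
}
```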

And I am sure more of these nasty things will pop up .. it will be "fun" to evaluate the PR, but it is something we need to do sooner or later anyway, so .. yeah .. let's get it over with :) This is also the reason I would prefer to have it as a single PR, without the password changes etc., as this on its own will already take a few hours if not days to test, validate, etc. :)
