I just realized that the security model as implemented in this PR is broken when reusing the same keypair on multiple servers.
- Victim secures their company on a legitimate server with their pubkey.
- Attacker lures the victim into connecting to a malicious server.
- The malicious server simultaneously connects to the legitimate server and proxies the KEYAUTH packets.
- After KEYAUTH is completed, the malicious server is left with an authorized socket and can access the company. Profit.
Of course it also applies to rcon access if we migrate it to pubkey too.
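To make the relay concrete, here is a toy simulation of the scenario above. The real PR uses public-key signatures over KEYAUTH packets; HMAC with a shared secret stands in for sign/verify here, and all names (`sign`, `server_verify`, `access_list`) are made up for illustration, not the PR's actual API.

```python
import hmac, hashlib, os

victim_key = os.urandom(32)          # stands in for the victim's keypair
access_list = {victim_key}           # the legitimate server trusts this "pubkey"

def sign(key, challenge):
    # Stand-in for signing the server's KEYAUTH challenge.
    return hmac.new(key, challenge, hashlib.sha256).digest()

def server_verify(key, challenge, signature):
    return key in access_list and hmac.compare_digest(sign(key, challenge), signature)

# 1. The malicious server connects to the legitimate server, which issues a challenge.
challenge = os.urandom(16)
# 2. The malicious server forwards that challenge to the victim, who signs it,
#    believing it came from the malicious server itself.
signature = sign(victim_key, challenge)
# 3. The malicious server relays the signature back; the legitimate server
#    accepts it, and the *attacker's* socket is now the authorized one.
assert server_verify(victim_key, challenge, signature)
```

Nothing in the signed payload ties it to a particular server, which is exactly what makes the relay possible.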
1. Generate a unique keypair for each IP/port. (Permanently enabled privacy mode. Cons: friend lists are nonviable, and if somebody hosts a server on a dynamic IP, it could lock people out periodically when the IP changes.)
2. Client includes the target server's IP/port in the signature, and the server verifies it. (Doesn't have the cons of mitigation 1, but is difficult to implement in practice: the server might not necessarily know all valid IPs from which it is reachable.)
3. Implement a secure key exchange, and authenticate and encrypt all communication.
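The second mitigation can be sketched as follows: the client signs over the challenge plus the address it *thinks* it is talking to. Again, HMAC stands in for a real signature and the addresses and helper names are hypothetical, not the PR's wire format.

```python
import hmac, hashlib, os

key = os.urandom(32)

def sign(key, challenge, ip, port):
    # Bind the target address into the signed message.
    msg = challenge + ip.encode() + port.to_bytes(2, "big")
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify(key, challenge, ip, port, sig):
    return hmac.compare_digest(sign(key, challenge, ip, port), sig)

challenge = os.urandom(16)
# The victim connects to the malicious proxy and signs *its* address...
sig = sign(key, challenge, "203.0.113.66", 3979)
# ...so when the proxy relays the signature, the legitimate server's check
# against its own address fails, and the relay is rejected.
assert not verify(key, challenge, "198.51.100.1", 3979, sig)
# Direct connections, where the signed address matches, still work.
assert verify(key, challenge, "203.0.113.66", 3979, sig)
```

The practical catch stays as noted in the list: the server has to know which addresses to accept in that check, which it might not, e.g. behind NAT or multiple public IPs.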
I took some effort to look into how libhydrogen does its XX-variant secure key exchange, and basically it is a pretty standard 3-way DH key exchange. An ephemeral key is generated for the encryption on both the client and the server, and the second and third packets are encrypted with the "static" keypair to prove that the public keys you get at the end of this exchange are really owned by the other party. (On a side note: we need to document this somewhere, just so people can read how security is done.)
This means that after this exchange, assuming there are no implementation errors in the encryption of course (and looking at libhydrogen, I wouldn't expect any) and no implementation errors on our side (I have briefly looked over your code; it would need a further look to make sure that, with all the quirks OpenTTD tends to do, the flow is sufficiently guarded), the server knows the public key of the client.
Important to note, for those who do not know what DH does: it does not authenticate the client nor the server. It merely facilitates a secure key exchange (a so-called "non-authenticated key-agreement protocol") which, in this case, is used for channel encryption and in-game authentication. For the public key of the client, not much authentication is required, as this is nearly impossible anyway. We simply use the public key as-is, and if it has a match in the access list, the linked role is granted.
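For readers unfamiliar with DH, a toy finite-field version shows what "non-authenticated key agreement" means: both ends derive the same secret, but nothing in the math tells you *who* the other end is. (libhydrogen's XX variant uses X25519 plus the static-keypair proof described above precisely to add that missing piece. The parameters below are toy values, nowhere near production choices.)

```python
import secrets

# A Mersenne prime keeps the toy readable; real deployments use vetted groups.
p = 2**127 - 1
g = 3

a = secrets.randbelow(p - 2) + 2     # client's ephemeral secret
b = secrets.randbelow(p - 2) + 2     # server's ephemeral secret

A = pow(g, a, p)                     # client -> server
B = pow(g, b, p)                     # server -> client

client_shared = pow(B, a, p)
server_shared = pow(A, b, p)
assert client_shared == server_shared  # same key derived on both ends

# A MITM could run this exchange twice (once towards each side) and neither
# end would notice; that is why the static keypairs in the XX variant matter.
```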
However, the lack of server authentication does allow for certain attacks, which are variations on the above. To be clear, the above scenario is mitigated with this method; but minor variations on it are not.
One such scenario is very simple: an attacker sets up a proxy server, where he terminates all clients on one side and sets up new connections (per client, of course) to the real server.
Next, he replaces the link on a website, so that if you click it, you end up on the attacker's proxy, which routes your traffic to the right server. Especially as OpenTTD is most likely going to get protocol support, where openttd:// opens the game and connects to an indicated server, it is to be expected that people will promote their server via a "Join now" button on their website. So if an attacker manages to change this, via either social engineering or hacking, he can intercept all clients. Another example is an IP given on Discord, etc.
In the beginning, nothing changes. All existing trust is not compromised. All new trust, however, is.
To put this in other words (sorry if you already got the point; but miscommunication is easy, so I am just using wayyyyy too many words for this. Feel free to skip to the next line if you already got the point :D):
As said, this is just a slight variation on the scenario presented earlier. The attacker does need a bit more patience, but both cases require effort and both cases need to target a specific server.
In case you, as reader, got lost in this story, compare it with HTTPS: if you use a self-signed certificate on your server which I as client cannot validate, anyone in between you and me on the network path can terminate the encryption on both sides, and none of us would be able to detect this. This is why we have CAs, signed certificates which clients should validate (!), and the likes: to validate that the "pubkey" of the server is authentic. If you are more curious about how realistic this is, please take a look at https://mitmproxy.org/, which fully automates this MITM attack. So next time you see a Python application that sets
verify=False
on an SSL.open
.. just shake your head and run.

So, as I did earlier, we have to balance this scenario. How likely do we consider this, what is the impact if this happens, and how much effort is the mitigation (as in, is it worth putting N effort into mitigating a risk with X chance and Z impact)?
In my opinion, especially in combination with protocol support (openttd://), I would consider this scenario slightly more likely. Still, the impact is very low. However, it would be a witch-hunt to track down what happened if it does occur .. so it might be worth putting effort into this.
The 3 mitigations I mentioned earlier are still a valid option for this, but I also still think creating our own CA might be a bit overkill. I am sure other solutions exist too; I would consider this a "the floor is open" situation :)
What might be good to know: this doesn't have to be resolved here and now. Pubkey validation can be added later to harden the security further. That would also give a bit of time to think it through properly without blocking existing work from continuing. But I am sure you can balance this better atm than I can, so I leave this to you @Milek7.
Sadly, generating a new pubkey for every server you connect to (as a client) does not resolve this issue :( As that would have been a simple, cheesy solution ;)
Anyway, just a random thought that popped into my mind. Let me know what you think, if you see the world differently, etc. Floor is yours :)