I just realized that the security model as implemented in this PR is broken when the same keypair is reused on multiple servers.
- Victim secures their company on legitimate server with pubkey.
- Attacker lures victim to connect to malicious server.
- Malicious server simultaneously connects to legitimate server and proxies KEYAUTH packets.
- After KEYAUTH completes, the malicious server is left with an authorized socket and can access the company. Profit.
Of course it also applies to rcon access if we migrate it to pubkey too.
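To make the relay concrete, here is a minimal Python sketch. All names and the message layout are made up, and an HMAC over a random secret stands in for the real keypair signature: because the response covers only the challenge, the malicious server can forward it unchanged and end up holding the authorized socket.

```python
import hashlib
import hmac
import os

client_key = os.urandom(32)  # stands in for the victim's private key


def sign(key: bytes, challenge: bytes) -> bytes:
    """Naive KEYAUTH-style response: covers only the server's challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()


def verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    return hmac.compare_digest(sign(key, challenge), response)


# The legitimate server issues a challenge...
challenge = os.urandom(16)

# ...but the victim receives it via the malicious server, which merely
# forwards bytes in both directions. Nothing in the response ties it to
# the connection the victim thinks they are on.
relayed_response = sign(client_key, challenge)

# The legitimate server accepts, and the *attacker's* socket is now authorized.
print(verify(client_key, challenge, relayed_response))  # True
```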
- Generate a unique keypair for each IP/port (effectively a permanently enabled privacy mode). Cons: friends lists become nonviable, and if somebody hosts a server on a dynamic IP, people could periodically be locked out when the IP changes.
- The client includes the target server's IP/port in the signature, and the server verifies it. This avoids the cons of mitigation 1, but is difficult to implement in practice: the server might not know all valid IPs from which it is reachable.
- Implement a secure key exchange, and authenticate and encrypt all communication.
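The second mitigation can be sketched as follows (again hypothetical Python, with HMAC standing in for the actual signature scheme; addresses and names are made up): the client mixes the address it *thinks* it is connecting to into the signed message, so a response relayed to a different address fails verification.

```python
import hashlib
import hmac
import os

client_key = os.urandom(32)  # stands in for the victim's private key


def sign_bound(key: bytes, challenge: bytes, host: str, port: int) -> bytes:
    """Sign the challenge *plus* the address the client is connecting to."""
    msg = challenge + host.encode() + port.to_bytes(2, "big")
    return hmac.new(key, msg, hashlib.sha256).digest()


def server_verify(key: bytes, challenge: bytes, response: bytes,
                  my_host: str, my_port: int) -> bool:
    """The server checks the signature against its *own* address."""
    expected = sign_bound(key, challenge, my_host, my_port)
    return hmac.compare_digest(expected, response)


challenge = os.urandom(16)

# The victim believes they are connecting to the attacker's server,
# so they bind their response to *that* address.
resp = sign_bound(client_key, challenge, "203.0.113.66", 3979)

# Relayed to the legitimate server, which verifies against its own address:
print(server_verify(client_key, challenge, resp, "198.51.100.7", 3979))  # False

# A direct, honest connection still verifies:
direct = sign_bound(client_key, challenge, "198.51.100.7", 3979)
print(server_verify(client_key, challenge, direct, "198.51.100.7", 3979))  # True
```

The caveat from the list above still applies: this only works if the server actually knows the address the client used to reach it.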
First of all, I think it is awesome you think about these things :)
The scenario you describe is absolutely possible. Before I go into the scenario, let's run down your suggestions first, as that might give a bit of insight what we are dealing with.
The first works for your scenario, but initial feedback was pretty strong that doing this is kinda unwanted. So let's keep it in our back pocket and see if we can do better. Additionally, I can think of variants of your scenario where it still fails.
The second would indeed not work for any NAT'd server (read: most non-dedicated-server setups).
The third is, at its core, the suggestion always given for problems like this: just encrypt the channel. But if we look closer, you will notice that it doesn't resolve the problem. The attacker sits between the network of the client and the target server, so he can simply terminate any encryption to/from the client and to/from the target server, with neither being any the wiser. This is why HTTPS has a CA system. I will give a few more examples in a bit, but they all boil down to: you need some out-of-band method to establish trust. And if we can do that, we don't even need encryption. So this won't really work either, except that it makes things a bit more complicated for the attacker.
So, what if we look around at how other people solved this problem? I always assume it cannot be a new problem, so people must have solved it :D
As mentioned before, HTTPS solves this by introducing an authority (the CAs), which tells you that you are really talking to the server you think you are talking to. This is also the reason leaking private keys is so harmful, or being able to request a certificate for a domain you don't own .. the horror stories :D If the trust breaks, there is nothing at the protocol level you can do to notice.
SSH also has a solution for this; it doesn't use an authority, but shows you the host key when you connect to a server for the first time. If the key doesn't change on later connects, you must be talking to the same server (because of the method of validation SSH uses). So after the initial connect, you can be sure everything is fine. For us this would work too: on initial connect you accept the server key, you set up an encrypted channel, and the server can whitelist you. Now no attacker can come between you and the server. If he tries, then the next time you connect it is not possible for him to both present the same public key and see the traffic between you and the target (either/or; he cannot have both).
But, realistically, it requires the human to notice that the key has changed and act on it. Most people just ignore it (remove it from their known_hosts file) and think: someone must have changed something, reinstalled the server, whatever. For OpenTTD that would be even more true, as most of our player base wouldn't even understand what it means. So I wouldn't consider this a viable solution either.
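For illustration, a minimal trust-on-first-use check in the style of SSH's known_hosts could look like this (hypothetical Python sketch; key material and names are made up, and the pin store is just an in-memory dict where SSH persists a file):

```python
import hashlib

# (host, port) -> pinned key fingerprint; SSH persists this in ~/.ssh/known_hosts
known_hosts: dict[tuple[str, int], str] = {}


def fingerprint(server_pubkey: bytes) -> str:
    return hashlib.sha256(server_pubkey).hexdigest()


def check_host(host: str, port: int, server_pubkey: bytes) -> str:
    fp = fingerprint(server_pubkey)
    pinned = known_hosts.get((host, port))
    if pinned is None:
        known_hosts[(host, port)] = fp  # trust on first use: pin the key
        return "first-use: key accepted"
    if pinned == fp:
        return "ok"
    # A changed key is exactly the signal most players would not understand.
    return "MISMATCH: possible man-in-the-middle"


print(check_host("example.net", 3979, b"server-key-A"))  # first-use: key accepted
print(check_host("example.net", 3979, b"server-key-A"))  # ok
print(check_host("example.net", 3979, b"server-key-B"))  # MISMATCH: possible man-in-the-middle
```

The weak point is entirely in how the user reacts to that last message, which is the concern raised above.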
So, before we look further into what solutions might work, we also have to weigh how likely it is that this happens. Basically, a few things have to happen:
This is a very, very far-fetched scenario, in my opinion. So I keep this in mind when looking for a solution, as the question:
And let's be real: even if it were about rcon access, the worst the attacker can do is ... restart the server? Close it? This is a game after all, with no real access to the machine the server is hosted on. So in terms of risk evaluation, the chance of this happening is very small, and the impact if it happens is very small. That makes this by most definitions a "low risk" situation. So to call what you implemented "broken" paints a false image of reality. By those definitions, all forms of encryption are "broken" ;) It is about risks and what is considered acceptable. I know that sounds a bit weird, but that is how security works :) So your security model is not broken; it just has a risk. And we have 2 choices: mitigate it, or accept it.
So, let's talk a bit more about if/how we can mitigate it. To repeat what I wrote earlier: this is a very common problem for any authentication. You have to trust the other party somehow. So, what other common solutions can I think of?
There might be other solutions, but they don't really come to mind atm. Maybe others have some ideas. But I'd like to stress that we are solving a low-risk problem, which I even consider unlikely to happen in the real world. It requires so much social information about a target; why would you go for OpenTTD in that scenario, and not just bank access or something :P I am not sure .. I can be convinced otherwise :)
I personally think my second option, if we want to do anything about this at all, is already a huge help in mitigating the impact if this situation ever arises. The first for sure works, and we have a place to publish public keys (the master-server); it just requires additional effort. The nice thing about the first solution is that it can be added later; no need to do it now. Although adding server private/public keys might be a great help if we are doing that anyway.
Anyway, something to think about .. let me know what you think and whether this gave you other inspiration and ideas. Would love to hear if you see other ways to mitigate this and/or if you evaluate this risk differently :)