SSL certs - fairly standard and easy to implement assuming a root CA is available. Risk if a certificate is stolen, since there are no checks for revoked certificates. Clients need a restart to pick up updated certs. Certs always expire at the wrong time - it always surprises people.
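A minimal client-side SSL config might look like the sketch below (all paths, hostnames and passwords are placeholders, not values from the talk):

```properties
# client.properties - TLS for encryption, plus a keystore for client auth
security.protocol=SSL
ssl.truststore.location=/etc/kafka/client.truststore.jks
ssl.truststore.password=truststore-secret
# keystore only needed if the broker sets ssl.client.auth=required
ssl.keystore.location=/etc/kafka/client.keystore.jks
ssl.keystore.password=keystore-secret
ssl.key.password=key-secret
```

Because these values are read once at startup, rotating the keystore is what forces the client restarts mentioned above.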
SASL PLAIN - username and password. Credentials stored in a file by default. Sent as plain text unless TLS is layered underneath.
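A client-side JAAS sketch for SASL/PLAIN (the username and password here are illustrative placeholders); note it pairs with `security.protocol=SASL_SSL` so the password isn't actually sent in the clear:

```properties
# kafka_client_jaas.conf - SASL/PLAIN client login
KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="alice"
  password="alice-secret";
};
```

The matching client.properties would set `security.protocol=SASL_SSL` and `sasl.mechanism=PLAIN`.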
SASL SCRAM - avoids passwords being sent as plain text. Salted credentials are stored in ZooKeeper.
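Creating SCRAM credentials is done with Kafka's `kafka-configs.sh` tool; a sketch, assuming a local ZooKeeper and a placeholder user "alice":

```shell
# Store (or update) SCRAM credentials for user "alice".
# ZooKeeper ends up holding the salted hash, not the raw password.
kafka-configs.sh --zookeeper localhost:2181 --alter \
  --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret]' \
  --entity-type users --entity-name alice
```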
SASL GSSAPI - Kerberos. Can integrate with Active Directory. Passwords or keytabs supported. Generates lots of ticket traffic at large scale, which can put a lot of pressure on Active Directory servers.
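A keytab-based client JAAS sketch for GSSAPI (keytab path, principal and realm are placeholders):

```properties
# kafka_client_jaas.conf - SASL/GSSAPI using a keytab instead of a password
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/alice.keytab"
  principal="alice@EXAMPLE.COM";
};
```

The client.properties side would set `sasl.mechanism=GSSAPI` and `sasl.kerberos.service.name=kafka`.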
Kafka delegation tokens (pretty new) - after authenticating via any mechanism, Kafka returns a token for future auth. That token can be reused across multiple consumers, so you only need to auth against the primary auth system once a day, from a single client. Tokens can easily be revoked. Useful for long-running streaming jobs.
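The token lifecycle is driven by Kafka's `kafka-delegation-tokens.sh` tool; a sketch, where the broker address is a placeholder and client.properties is assumed to already authenticate via one of the mechanisms above (e.g. Kerberos):

```shell
# Create a delegation token; prints the token ID and HMAC to reuse later
kafka-delegation-tokens.sh --bootstrap-server broker1:9092 \
  --command-config client.properties \
  --create --max-life-time-period -1

# Revoke (expire) it immediately using the HMAC printed at creation time
kafka-delegation-tokens.sh --bootstrap-server broker1:9092 \
  --command-config client.properties \
  --expire --expiry-time-period -1 --hmac <token-hmac>
```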
ACLs stored in ZooKeeper - apply to topics, consumer groups or the cluster. Super-user access is available, which overrides ACLs. Can limit access by IP. Not intuitive; custom authorisation classes are supported, but almost nobody writes their own. Changing ACLs needs ZooKeeper access, so we now need to think about ZooKeeper security too - new tooling coming soon which doesn't need direct ZooKeeper access. ACLs are always per-user; group support needs something custom. Cloudera and Hortonworks do ship custom authorisation classes to integrate with their platforms. https://docs.confluent.io/current/kafka/authorization.html
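ACL management uses Kafka's `kafka-acls.sh` tool (which is why it needs ZooKeeper access); a sketch with placeholder user, topic, group and IP:

```shell
# Grant user "alice" read access to one topic and one consumer group
kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:alice \
  --operation Read --topic payments --group payments-consumers

# The per-IP restriction mentioned above: limit the grant to one host
kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal User:alice --allow-host 10.0.0.5 \
  --operation Read --topic payments
```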
Slides from this talk will be useful.