Kafka – Using Authorization/ACL (without Kerberos) with SSL Configuration in a Docker container

Kafka is an open-source distributed streaming platform, mainly used for producing and consuming records in real time (similar to a messaging system) while being fault tolerant when configured as a cluster.

More info can be found on the Apache Kafka website.

This guide will show you how to configure Kafka with SSL and also use its built-in Authorizer to grant or restrict access by topic, user and host IP, all inside a Docker container (as shown in the featured blog image).

Service configuration

We will be using a CentOS Docker image for this guide, and will go through the necessary Kafka installation and configuration steps along the way.

  • Run the following commands wherever you have docker installed
    docker run -itd -p 9093:9093 -h kafka.mintopsblog.local centos:latest bash
  • Once run, this should pull the latest centos image from the public repositories and echo back a container id.
  • You can then access the container by doing the following
    docker ps | grep -i centos:latest (to get the container id that was created with the centos:latest image)
    docker exec -it *your-container-id* bash


  • When inside the container we would then need to get a clean installation of Kafka so that we can then configure it with SSL and ACLs.
  • We’ll start by creating a kafka directory where the installation will be
    mkdir -p /opt/kafka
    cd /opt/kafka
  • Download Apache Kafka version 1.1.0 with Scala version 2.11
    yum -y install wget
    wget http://www-eu.apache.org/dist/kafka/1.1.0/kafka_2.11-1.1.0.tgz
    tar xvf kafka_2.11-1.1.0.tgz
    cd kafka_2.11-1.1.0/config
  • We will now configure the server.properties file for the Kafka broker. We will leave most of the file at its defaults since we are only configuring 1 test broker
    vi server.properties
  • You will need to find the listener configs and change them to use the broker hostname and the SSL port (9093)
  • The above config is not enough to make Kafka work with SSL, so we would need to add some other custom properties inside the server.properties file and later on also create an SSL certificate. Add the following at the end of the file.
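The original configuration screenshots are not reproduced here; the sketch below shows a typical Kafka 1.1 server.properties SSL/ACL section matching this setup. The hostname, keystore paths and passwords are assumptions from this guide's examples, so adjust them to your own environment:

```properties
# Listen for SSL client connections on 9093 (example broker hostname from this guide)
listeners=SSL://kafka.mintopsblog.local:9093
advertised.listeners=SSL://kafka.mintopsblog.local:9093
security.inter.broker.protocol=SSL

# Server keystore/truststore created by the certificate script later in this guide
ssl.keystore.location=/opt/ssl/kafka.server.keystore.jks
ssl.keystore.password=your-server-passphrase
ssl.key.password=your-server-passphrase
ssl.truststore.location=/opt/ssl/kafka.server.truststore.jks
ssl.truststore.password=your-server-passphrase

# Require client certificates so the Authorizer can map the certificate DN to a principal
ssl.client.auth=required

# Built-in ACL authorizer (Kafka 1.1); deny everyone without an explicit ACL
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=false

# Make the broker itself a super user so inter-broker traffic is not blocked
super.users=User:CN=kafka.mintopsblog.local,OU=kafka,O=kafka,L=kafka,ST=kafka,C=XX
```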
  • All the necessary configurations are now done for the Kafka Broker. Now, we will proceed in creating the directories and SSL certificates to make this work
    mkdir -p /opt/ssl
    cd /opt/ssl


  • Install Java 1.8 on the docker container since we will make use of the “keytool” and “openssl” commands
    yum -y install java-1.8.0-openjdk openssl
  • You can then use the following script to create the necessary server and client certificates/keystores.
  • You would need to manually intervene when asked to provide some information (e.g. Kafka broker, passphrases and also what user you would like to give access to)
    ###Set environment variables###
    KAFKA_SSL=/opt/ssl
    VALIDITY=365
    read -p "Please specify the Kafka broker hostname without the port: " KAFKA_HOST
    echo -n "Please specify a passphrase for the server ssl certificate: "
    read -s SSLPASSPHRASE
    echo -e "\n"
    CA_INFO="/CN=$KAFKA_HOST/OU=kafka/O=kafka/L=kafka/ST=kafka/C=XX"
    CERTIFICATE_INFO="CN=$KAFKA_HOST,OU=kafka,O=kafka,L=kafka,ST=kafka,C=XX"
    mkdir -p $KAFKA_SSL
    cd $KAFKA_SSL
    ###Create CA and server keystore/truststore###
    openssl req -new -x509 -keyout ca-key -out ca-cert -days $VALIDITY -subj "$CA_INFO" -passout pass:$SSLPASSPHRASE
    keytool -noprompt -keystore kafka.server.keystore.jks -alias $KAFKA_HOST -validity $VALIDITY -genkey -keyalg RSA -dname "$CERTIFICATE_INFO" -keypass $SSLPASSPHRASE -storepass $SSLPASSPHRASE
    keytool -noprompt -keystore kafka.server.truststore.jks -alias CARoot -import -file ca-cert -storepass $SSLPASSPHRASE
    keytool -noprompt -keystore kafka.server.keystore.jks -alias $KAFKA_HOST -certreq -file cert-file-$KAFKA_HOST -storepass $SSLPASSPHRASE
    openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file-$KAFKA_HOST -out cert-signed-$KAFKA_HOST -days $VALIDITY -CAcreateserial -passin pass:$SSLPASSPHRASE
    keytool -noprompt -keystore kafka.server.keystore.jks -alias CARoot -import -file ca-cert -storepass $SSLPASSPHRASE
    keytool -noprompt -keystore kafka.server.keystore.jks -alias $KAFKA_HOST -import -file cert-signed-$KAFKA_HOST -storepass $SSLPASSPHRASE
    ###Create client keystore and truststore###
    read -p "Please specify a username to give Kafka ACL permissions: " USERNAME
    echo -n "Please specify a passphrase for the client ssl certificate: "
    read -s CLIENT_SSLPASSPHRASE
    echo -e "\n"
    CLIENT_CERTIFICATE_INFO="CN=$USERNAME,OU=kafka,O=kafka,L=kafka,ST=kafka,C=XX"
    keytool -noprompt -keystore kafka.client.keystore.jks -alias $USERNAME -validity $VALIDITY -genkey -keyalg RSA -dname "$CLIENT_CERTIFICATE_INFO" -keypass $CLIENT_SSLPASSPHRASE -storepass $CLIENT_SSLPASSPHRASE
    keytool -noprompt -keystore kafka.client.truststore.jks -alias CARoot -import -file ca-cert -storepass $CLIENT_SSLPASSPHRASE
    keytool -noprompt -keystore kafka.client.keystore.jks -alias $USERNAME -certreq -file cert-file-client-$USERNAME -storepass $CLIENT_SSLPASSPHRASE
    openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file-client-$USERNAME -out cert-signed-client-$USERNAME -days $VALIDITY -CAcreateserial -passin pass:$SSLPASSPHRASE
    ###Add client certificate to server truststore###
    keytool -noprompt -keystore kafka.server.truststore.jks -alias $USERNAME -import -file cert-signed-client-$USERNAME -storepass $SSLPASSPHRASE
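Since the script above prompts interactively, it helps to sanity-check the resulting CA subject afterwards. The sketch below shows how the `-subj` string translates into the certificate's DN; the `/tmp/demo-*` paths and `demopass` passphrase are illustrative only:

```shell
# Generate a throwaway CA the same way the script does, then print its subject.
openssl req -new -x509 -keyout /tmp/demo-ca-key -out /tmp/demo-ca-cert -days 1 \
  -subj "/CN=kafka.mintopsblog.local/OU=kafka/O=kafka/L=kafka/ST=kafka/C=XX" \
  -passout pass:demopass
# Print the DN embedded in the certificate; it should contain the broker hostname.
openssl x509 -in /tmp/demo-ca-cert -noout -subject
```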


  • Once the certificates and keystores are created, we can start the Kafka Broker and the embedded/local zookeeper which comes with the installation.
    cd /opt/kafka/kafka_2.11-1.1.0/bin/
  • First you would need to start the zookeeper service
    nohup ./zookeeper-server-start.sh ../config/zookeeper.properties > zookeeper.out &
  • After zookeeper has started, start the kafka broker
    nohup ./kafka-server-start.sh ../config/server.properties > kafka.out &
  • Next, we will create a kafka topic so that we can then configure the ACLs.
    ./kafka-topics.sh --zookeeper localhost:2181 --create --topic mintopsblog --partitions 1 --replication-factor 1
  • Now we will configure the ACLs on this newly created topic for the user that was created in the certificate (in my case the user is “mintopsblog”).
  • Manually (Producer)
    ./kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:"CN=mintopsblog,OU=kafka,O=kafka,L=kafka,ST=kafka,C=XX" --producer --topic mintopsblog
  • Manually (Consumer)
    ./kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:"CN=mintopsblog,OU=kafka,O=kafka,L=kafka,ST=kafka,C=XX" --consumer --topic mintopsblog --group mintopsblog-consumer
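With SSL authentication, Kafka derives the ACL principal from the client certificate's full distinguished name, so the `User:` string in the commands above must match the DN baked into the client keystore character for character. A quick sketch of that mapping, using this guide's example username:

```shell
# Build the principal string the same way the certificate script does;
# the ACL --allow-principal value must match this DN exactly.
USERNAME=mintopsblog
CLIENT_CERTIFICATE_INFO="CN=$USERNAME,OU=kafka,O=kafka,L=kafka,ST=kafka,C=XX"
echo "User:$CLIENT_CERTIFICATE_INFO"
# prints: User:CN=mintopsblog,OU=kafka,O=kafka,L=kafka,ST=kafka,C=XX
```

If the DN and the ACL principal differ in any field (even ordering or spacing), authorization will fail.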
  • Or use the following script if you wish.
    set -x
    read -p "Please specify the username for the Kafka ACLs permissions: " USERNAME
    read -p "Please specify the kafka /bin directory: " KAFKA_DIRECTORY
    read -p "Please specify a kafka topic for Kafka ACL: " TOPIC
    read -p "Specify zookeeper hosts (host:port): " ZKHOST
    read -p "Please specify permissions (producer or consumer): " PERMISSION
    CLIENT_CERTIFICATE_INFO="CN=$USERNAME,OU=kafka,O=kafka,L=kafka,ST=kafka,C=XX"
    if [[ "$PERMISSION" == "consumer" ]]; then
     ###Consumer permissions###
     read -p "Please specify consumer group id: " CONSUMERID
     $KAFKA_DIRECTORY/kafka-acls.sh --authorizer-properties zookeeper.connect=$ZKHOST --add --allow-principal User:"$CLIENT_CERTIFICATE_INFO" --$PERMISSION --topic $TOPIC --group $CONSUMERID
    else
     ###Producer permissions###
     $KAFKA_DIRECTORY/kafka-acls.sh --authorizer-properties zookeeper.connect=$ZKHOST --add --allow-principal User:"$CLIENT_CERTIFICATE_INFO" --$PERMISSION --topic $TOPIC
    fi
  • You would need to create another file for the client so that it can use the client keystore and truststore that were previously created. Change the passwords and necessary configs depending on how you previously set them up (the values below are placeholders)
    cat > /opt/ssl/client-ssl.properties << EOF
    security.protocol=SSL
    ssl.truststore.location=/opt/ssl/kafka.client.truststore.jks
    ssl.truststore.password=your-client-passphrase
    ssl.keystore.location=/opt/ssl/kafka.client.keystore.jks
    ssl.keystore.password=your-client-passphrase
    ssl.key.password=your-client-passphrase
    EOF
  • You can then check whether you can produce or consume using the following commands
    • Consumer
      ./kafka-console-consumer.sh --bootstrap-server kafka.mintopsblog.local:9093 --topic mintopsblog --consumer.config /opt/ssl/client-ssl.properties --group mintopsblog-consumer --from-beginning
    • Producer
      ./kafka-console-producer.sh --broker-list kafka.mintopsblog.local:9093 --topic mintopsblog --producer.config /opt/ssl/client-ssl.properties

Should you not have ACLs on a specific topic, the client will fail with a TOPIC_AUTHORIZATION_FAILED warning.


N.B. For the ACLs to work properly, you would need to create a different certificate for each user you will have. This certificate would then need to be added to the Kafka server truststore and then the Kafka broker would need to be restarted so that the SSL handshake can happen.

Hope this guide helps you out; if you have any difficulties, don’t hesitate to post a comment. Also, feel free to point out any mistakes or needed improvements in the guides.


  1. Yes, this has to be done on each node for it to function properly. The Kafka brokers will communicate with each other through SSL; just make sure that the truststore is configured on each node so that verification can be done properly and each node can verify the others.


  2. Hi, thanks for posting this. In the first script, you create a CA, but if I have multiple servers I want one CA correct? How do you sign those remotely or do you sign them for all brokers and distribute the relevant keystores somehow?


  3. Hi David,

    You’re correct, one CA is all that is needed to sign the certificates (change the necessary info of the CA however you like). What you can do is, to have all the broker keystores signed by the same CA, so that all the kafka brokers will have the same CA info in their keystores/certificates. You can either do this on one server and then distribute the relevant keystores to their specific brokers, or have a mounted network drive on all brokers, put the CA certificate in this drive, make sure that all brokers will have central access to it, and run the necessary script as needed on each broker.

    N.B: If Kafka SSL is all you need (without Kafka Authorizer), you could make it work by creating a wildcard certificate (e.g. *.mintopsblog.com). Basically creating one certificate/keystore/truststore and all brokers will use the same one.

    Hope that helps…


  4. Thanks Elton. That’s what I thought. I think I’m going to create dummy stores, create the signing requests then distribute the certs to the relevant hosts individually, importing to the actual stores. That way I can sign them all once every so often with a script and update them (semi) easily without having to keep the ca-key available all the time.

    I’m just finding my feet with the authentication bit so this was really helpful.



  5. Hi Elton,
    thanks for the nice writeup. I followed the same and used your scripts. But I keep getting
    [2019-07-06 11:43:01,759] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 4 : {kafka-acl-topic=TOPIC_AUTHORIZATION_FAILED} (org.apache.kafka.clients.NetworkClient)

    I verified below

    * Topic exists and ACL shows proper definitions for the user
    * Signed certificate of user has proper ownership name
    * Broker truststore has imported client cert, reflects the owner name
    * server.properties is intact with the list you have given
    * Client properties file reflects what you have provided above
    * Hostnames are reachable

    I am clueless



  6. Hi,

    I am getting below error while executing

    echo "Hello, World" | ./kafka-console-producer.sh --broker-list smartsensor.coe001.net:9093 --topic coetest --producer.config /opt/ssl/client-ssl.properties > /dev/null

    [2019-07-12 18:10:49,720] ERROR Error when sending message to topic coetest with key: null, value: 12 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
    org.apache.kafka.common.errors.TimeoutException: Topic coetest not present in metadata after 60000 ms.


  7. Hi Kiran & Guru, sorry for the late reply. Best way to debug this issue is to enable some more logging on the authorizer itself. Add the following inside log4j.properties

    log4j.logger.kafka.authorizer.logger=DEBUG, authorizerAppender

    This should give you some more info on what might be happening.


  8. Pretty good tutorial man thanks… one suggestion… maybe avoid redirecting std. error to /dev/null in your Bash script. For example… I was being lazy and made a four-letter password (for testing only)… The script failed with no error message. Also I thought I had installed openssl in my docker container (it had failed without me realizing – it was late at night :-)… Therefore the script failed again with no warning as to why. I ended up just going through each command one by one to find out what was failing.

    Other than that thanks for the tutorial…


  9. Thanks. Mostly the /dev/null was set in the script so as not to give the user a lot of log messages on the screen in relation to the certificate creation, but you’re right, this basically hides any errors that the user might experience. Will remove them accordingly.

