Access Strimzi Kafka using Ingress Part-II

Welcome back! This is Part II of the series. If you haven't read Part I, I recommend doing so before diving into Part II.
- Part-I
- Part-II (This Blog)
In this part, we will cover how to create the Ingress resources needed to access the Kafka cluster with the Kafka CLI.
Since I am using the Kong ingress controller in my setup, forwarding raw TCP traffic to the cluster requires a resource of kind TCPIngress. Here is my configuration:
apiVersion: configuration.konghq.com/v1beta1
kind: TCPIngress
metadata:
  annotations:
    kubernetes.io/ingress.class: internal-tcp
  name: kafka-tcp-ing
spec:
  rules:
    - backend:
        serviceName: kafka-cluster-kafka-tls-bootstrap
        servicePort: 9094
      port: 9094
---
apiVersion: configuration.konghq.com/v1beta1
kind: TCPIngress
metadata:
  annotations:
    kubernetes.io/ingress.class: internal-tcp
  name: kafka-tcp-ing-0
spec:
  rules:
    - backend:
        serviceName: kafka-cluster-kafka-tls-0
        servicePort: 9094
      port: 9095
---
apiVersion: configuration.konghq.com/v1beta1
kind: TCPIngress
metadata:
  annotations:
    kubernetes.io/ingress.class: internal-tcp
  name: kafka-tcp-ing-1
spec:
  rules:
    - backend:
        serviceName: kafka-cluster-kafka-tls-1
        servicePort: 9094
      port: 9096
---
apiVersion: configuration.konghq.com/v1beta1
kind: TCPIngress
metadata:
  annotations:
    kubernetes.io/ingress.class: internal-tcp
  name: kafka-tcp-ing-2
spec:
  rules:
    - backend:
        serviceName: kafka-cluster-kafka-tls-2
        servicePort: 9094
      port: 9097
Each TCPIngress maps an external port to a backend service port. For example, the TCPIngress kafka-tcp-ing-0 listens on port 9095 and forwards the traffic to the service kafka-cluster-kafka-tls-0 on port 9094.
Next, we need DNS entries. Since I am running in an AWS environment, I create them in Route 53. The entries look like this:
kafka.beta.example.com   -----> Kong NLB DNS [beta-kong-internal-tcp-nlb-5a1a556e4ec3cde1.elb.ap-south-1.amazonaws.com]
kafka-0.beta.example.com -----> Kong NLB DNS [beta-kong-internal-tcp-nlb-5a1a556e4ec3cde1.elb.ap-south-1.amazonaws.com]
kafka-1.beta.example.com -----> Kong NLB DNS [beta-kong-internal-tcp-nlb-5a1a556e4ec3cde1.elb.ap-south-1.amazonaws.com]
kafka-2.beta.example.com -----> Kong NLB DNS [beta-kong-internal-tcp-nlb-5a1a556e4ec3cde1.elb.ap-south-1.amazonaws.com]
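As a sketch, such a record can be created with the AWS CLI's `aws route53 change-resource-record-sets` command and a change batch file like the one below. Only the bootstrap record is shown (repeat for the broker hostnames), and the TTL is an assumption:

```json
{
  "Comment": "Point the Kafka bootstrap hostname at the Kong NLB",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "kafka.beta.example.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [
          { "Value": "beta-kong-internal-tcp-nlb-5a1a556e4ec3cde1.elb.ap-south-1.amazonaws.com" }
        ]
      }
    }
  ]
}
```

Pass it as `--change-batch file://kafka-dns.json` together with your hosted zone ID.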
Now, after this, we can try to access Kafka with the Kafka CLI by creating a user and generating the certificate, truststore, and keystore for the client.
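Creating the user can be done with a KafkaUser custom resource, which Strimzi's User Operator picks up. A minimal sketch, assuming a hypothetical user named my-user and the cluster running in the kafka namespace:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: my-user                      # hypothetical user name
  namespace: kafka                   # assumed cluster namespace
  labels:
    strimzi.io/cluster: kafka-cluster
spec:
  authentication:
    type: tls                        # matches the tls listener's authentication
```

Once applied, the User Operator issues a client certificate and stores it in a Kubernetes secret with the same name as the user.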
For example:
kafka-console-producer.sh --broker-list kafka.beta.example.com:9094 --topic test-topic --producer.config client-ssl.properties
If the connectivity itself is fine, you will still get this error:
no subject alternative DNS name matching kafka.beta.example.com found
More details: https://github.com/strimzi/strimzi-kafka-operator/issues/1486
To fix this, we need to make one final change to the Kafka cluster configuration: include the DNS names in the alternativeNames section.
Final kafka-configuration.yaml:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: kafka-cluster
spec:
  entityOperator:
    topicOperator: {}
    userOperator: {}
  kafka:
    authorization:
      type: simple
      superUsers:
        - CN=admin-user
    version: 3.5.0
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9094
        type: cluster-ip
        tls: true
        authentication:
          type: tls
        configuration:
          bootstrap:
            alternativeNames:
              - kafka.beta.example.com
              - kafka-0.beta.example.com
              - kafka-1.beta.example.com
              - kafka-2.beta.example.com
              - kafka-cluster-kafka-tls-bootstrap.kafka.svc
              # I added this name so Kafka can also be accessed
              # from inside the cluster via the service name
          brokers:
            - broker: 0
              advertisedHost: kafka-0.beta.example.com
              advertisedPort: 9095
            - broker: 1
              advertisedHost: kafka-1.beta.example.com
              advertisedPort: 9096
            - broker: 2
              advertisedHost: kafka-2.beta.example.com
              advertisedPort: 9097
    config:
      offsets.topic.replication.factor: 1
      transaction.state.log.replication.factor: 1
      transaction.state.log.min.isr: 1
      log.message.format.version: "2.4"
    storage:
      type: persistent-claim
      size: 2Gi
      deleteClaim: true
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 1Gi
      deleteClaim: true
After applying this, the Kafka pods will restart. Then run the same producer command again, and this time the client will connect to the cluster.
client-ssl.properties
security.protocol=SSL
ssl.keystore.location=/user-keystore.jks
ssl.keystore.password=ak99j2fxABjViAnvUMbn95osVbRjwWbw
ssl.key.password=ak99j2fxABjViAnvUMbn95osVbRjwWbw
ssl.truststore.location=/user-truststore.jks
ssl.truststore.password=ak99j2fxABjViAnvUMbn95osVbRjwWbw
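For reference, here is a sketch of how the truststore and keystore referenced above can be built from the secrets Strimzi creates. The user name my-user, the kafka namespace, and reusing the store passwords from client-ssl.properties are assumptions on my part:

```shell
# Cluster CA certificate (from the <cluster>-cluster-ca-cert secret) -> truststore
kubectl get secret kafka-cluster-cluster-ca-cert -n kafka \
  -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
keytool -importcert -trustcacerts -noprompt -alias ca \
  -file ca.crt -keystore user-truststore.jks \
  -storepass ak99j2fxABjViAnvUMbn95osVbRjwWbw

# Client certificate and key (issued by the User Operator) -> keystore
kubectl get secret my-user -n kafka \
  -o jsonpath='{.data.user\.p12}' | base64 -d > user.p12
keytool -importkeystore -srckeystore user.p12 -srcstoretype PKCS12 \
  -srcstorepass "$(kubectl get secret my-user -n kafka \
      -o jsonpath='{.data.user\.password}' | base64 -d)" \
  -destkeystore user-keystore.jks -deststoretype JKS \
  -deststorepass ak99j2fxABjViAnvUMbn95osVbRjwWbw
```

Alternatively, the user.p12 file can be used directly as the keystore by setting ssl.keystore.type=PKCS12 in the client properties.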
Command to access the Kafka cluster; after running it you should see the producer prompt (>):
kafka-console-producer.sh --broker-list kafka-cluster-kafka-bootstrap.kafka.svc:9094 --topic test-topi123c --producer.config client-ssl.properties
Better approach: One more thing that can be done is to use the same port, 9094, for all the Kafka brokers and the bootstrap service, and route the traffic based on hostname. This is possible using SNI-based routing in Kong Ingress.
(More-Details:- https://docs.konghq.com/kubernetes-ingress-controller/latest/guides/services/tcp/)
However, this requires creating a secret containing the certificate for the domain you are using, and you have to manage certificate expiry yourself. If that is acceptable, the feature is worth checking out; I have tried it and it works perfectly.
apiVersion: configuration.konghq.com/v1beta1
kind: TCPIngress
metadata:
  annotations:
    kubernetes.io/ingress.class: internal-tcp
    konghq.com/protocols: tls_passthrough
  name: kafka-tcp-ing
  namespace: kafka
spec:
  tls:
    - secretName: kafka-ssl-secrets-123
      hosts:
        - kafka.beta.example.com
        - kafka-0.beta.example.com
        - kafka-1.beta.example.com
        - kafka-2.beta.example.com
  rules:
    - host: kafka.beta.example.com
      backend:
        serviceName: kafka-cluster-kafka-tls-bootstrap
        servicePort: 9094
      port: 9094
    - host: kafka-0.beta.example.com
      backend:
        serviceName: kafka-cluster-kafka-tls-0
        servicePort: 9094
      port: 9094
    - host: kafka-1.beta.example.com
      backend:
        serviceName: kafka-cluster-kafka-tls-1
        servicePort: 9094
      port: 9094
    - host: kafka-2.beta.example.com
      backend:
        serviceName: kafka-cluster-kafka-tls-2
        servicePort: 9094
      port: 9094
Note: make sure the annotation konghq.com/protocols: tls_passthrough is present, otherwise it won't work. This is the same approach described in this blog [https://strimzi.io/blog/2019/05/23/accessing-kafka-part-5/], but with the nginx ingress controller.
During implementation, I ran into a NotLeaderOrFollowerException, which I discussed on GitHub [https://github.com/orgs/strimzi/discussions/9493].
Thanks for reading! That's it for this series on accessing Kafka using Ingress. I hope this guide helps you achieve the same setup and teaches you something new about Strimzi Kafka and Ingress. See you soon in another blog.
In case of any issues, questions, or suggestions, you can connect with me on LinkedIn: