
Installing Kafka with the Bitnami Helm Chart

Tags:
Kubernetes

Deploying the Kafka server on the server-side K3s cluster

Kafka Installation

Run the following commands to add the Helm repositories:

> helm repo add tkemarket https://market-tke.tencentcloudcr.com/chartrepo/opensource-stable
"tkemarket" has been added to your repositories
> helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories

🔥 Tip:

The tkemarket chart is not kept up to date, so the bitnami repository is recommended.

However, bitnami is hosted overseas, so there is a risk that it may be unreachable.

Search for the kafka Helm chart:

> helm search repo kafka
NAME                            CHART VERSION   APP VERSION     DESCRIPTION
tkemarket/kafka                 11.0.0          2.5.0           Apache Kafka is a distributed streaming platform.
bitnami/kafka                   15.3.0          3.1.0           Apache Kafka is a distributed streaming platfor...
bitnami/dataplatform-bp1        9.0.8           1.0.1           OCTO Data platform Kafka-Spark-Solr Helm Chart
bitnami/dataplatform-bp2        10.0.8          1.0.1           OCTO Data platform Kafka-Spark-Elasticsearch He...
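
Optionally, review the chart's configurable options before installing. This uses a standard Helm command, with --version pinned to the chart version found above:

helm show values bitnami/kafka --version 15.3.0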

Install kafka using the bitnami Helm chart:

helm install kafka bitnami/kafka \
  --namespace kafka --create-namespace \
  --set global.storageClass=<storageClass-name> \
  --set kubeVersion=<theKubeVersion> \
  --set image.tag=3.1.0-debian-10-r22 \
  --set replicaCount=3 \
  --set service.type=ClusterIP \
  --set externalAccess.enabled=true \
  --set externalAccess.service.type=LoadBalancer \
  --set externalAccess.service.ports.external=9094 \
  --set externalAccess.autoDiscovery.enabled=true \
  --set serviceAccount.create=true \
  --set rbac.create=true \
  --set persistence.enabled=true \
  --set logPersistence.enabled=true \
  --set metrics.kafka.enabled=false \
  --set zookeeper.enabled=true \
  --set zookeeper.persistence.enabled=true \
  --wait

🔥 Tip:

The parameters are explained below:

  1. --namespace kafka --create-namespace: install into the kafka namespace, creating it if it does not exist;
  2. global.storageClass=<storageClass-name>: use the specified StorageClass;
  3. kubeVersion=<theKubeVersion>: lets the bitnami/kafka chart check whether the Kubernetes version meets its requirements; if it does not, the release cannot be created;
  4. image.tag=3.1.0-debian-10-r22: the latest image as of 2022-02-19; using the full tag helps minimize image pulls from the internet;
  5. replicaCount=3: run 3 Kafka replicas;
  6. service.type=ClusterIP: create the Kafka Service for access inside the K8s cluster, so ClusterIP is sufficient;
  7. --set externalAccess.enabled=true --set externalAccess.service.type=LoadBalancer --set externalAccess.service.ports.external=9094 --set externalAccess.autoDiscovery.enabled=true --set serviceAccount.create=true --set rbac.create=true: create the kafka-<0|1|2>-external Services used for access from outside the K8s cluster (three of them, because replicaCount is 3);
  8. persistence.enabled=true: persist Kafka data; the directory inside the container is /bitnami/kafka;
  9. logPersistence.enabled=true: persist Kafka logs; the directory inside the container is /opt/bitnami/kafka/logs;
  10. metrics.kafka.enabled=false: do not enable Kafka monitoring (Kafka metrics are collected via kafka-exporter);
  11. zookeeper.enabled=true: installing Kafka requires ZooKeeper to be installed as well;
  12. zookeeper.persistence.enabled=true: persist ZooKeeper data; the directory inside the container is /bitnami/zookeeper;
  13. --wait: the helm command waits for the created resources to become ready before returning.
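
The same configuration can also be kept in a values file instead of a long list of --set flags. The sketch below simply mirrors the flags above for chart version 15.3.0; the key names follow that chart's values layout and the placeholders still need to be filled in, so verify against helm show values before using it:

cat > kafka-values.yaml <<'EOF'
global:
  storageClass: <storageClass-name>   # fill in your StorageClass
kubeVersion: <theKubeVersion>         # fill in your Kubernetes version
image:
  tag: 3.1.0-debian-10-r22
replicaCount: 3
service:
  type: ClusterIP
externalAccess:
  enabled: true
  autoDiscovery:
    enabled: true
  service:
    type: LoadBalancer
    ports:
      external: 9094
serviceAccount:
  create: true
rbac:
  create: true
persistence:
  enabled: true
logPersistence:
  enabled: true
metrics:
  kafka:
    enabled: false
zookeeper:
  enabled: true
  persistence:
    enabled: true
EOF

helm install kafka bitnami/kafka \
  --namespace kafka --create-namespace \
  --values kafka-values.yaml \
  --wait

Either form produces the same release; the output below is from the --set invocation.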

The output is as follows:

creating 1 resource(s)
creating 12 resource(s)
beginning wait for 12 resources with timeout of 5m0s
Service does not have load balancer ingress IP address: kafka/kafka-0-external
...
StatefulSet is not ready: kafka/kafka-zookeeper. 0 out of 1 expected pods are ready
...
StatefulSet is not ready: kafka/kafka. 0 out of 1 expected pods are ready
NAME: kafka
LAST DEPLOYED: Sat Feb 19 05:04:53 2022
NAMESPACE: kafka
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 15.3.0
APP VERSION: 3.1.0
---------------------------------------------------------------------------------------------
 WARNING

    By specifying "serviceType=LoadBalancer" and not configuring the authentication
    you have most likely exposed the Kafka service externally without any
    authentication mechanism.

    For security reasons, we strongly suggest that you switch to "ClusterIP" or
    "NodePort". As alternative, you can also configure the Kafka authentication.

---------------------------------------------------------------------------------------------

** Please be patient while the chart is being deployed **

Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:

    kafka.kafka.svc.cluster.local

Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:

    kafka-0.kafka-headless.kafka.svc.cluster.local:9092

To create a pod that you can use as a Kafka client run the following commands:

    kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.1.0-debian-10-r22 --namespace kafka --command -- sleep infinity
    kubectl exec --tty -i kafka-client --namespace kafka -- bash

    PRODUCER:
        kafka-console-producer.sh \
            --broker-list kafka-0.kafka-headless.kafka.svc.cluster.local:9092 \
            --topic test

    CONSUMER:
        kafka-console-consumer.sh \
            --bootstrap-server kafka.kafka.svc.cluster.local:9092 \
            --topic test \
            --from-beginning

To connect to your Kafka server from outside the cluster, follow the instructions below:

  NOTE: It may take a few minutes for the LoadBalancer IPs to be available.
        Watch the status with: 'kubectl get svc --namespace kafka -l "app.kubernetes.io/name=kafka,app.kubernetes.io/instance=kafka,app.kubernetes.io/component=kafka,pod" -w'

    Kafka Brokers domain: You will have a different external IP for each Kafka broker. You can get the list of external IPs using the command below:

        echo "$(kubectl get svc --namespace kafka -l "app.kubernetes.io/name=kafka,app.kubernetes.io/instance=kafka,app.kubernetes.io/component=kafka,pod" -o jsonpath='{.items[*].status.loadBalancer.ingress[0].ip}' | tr ' ' '\n')"

    Kafka Brokers port: 9094
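
Before testing, it is worth confirming that the broker and ZooKeeper pods are running and that the external Services have received addresses; these are standard kubectl checks:

kubectl get pods --namespace kafka
kubectl get svc --namespace kafka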

Kafka Testing and Verification

Test messages:

First, create a kafka-client pod with the following command:

kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:3.1.0-debian-10-r22 --namespace kafka --command -- sleep infinity

Then enter the kafka-client pod and run the following commands to test:
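
To get a shell in the pod, use the exec command already shown in the chart NOTES:

kubectl exec --tty -i kafka-client --namespace kafka -- bash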

Inside the cluster (via the headless Service):

kafka-console-producer.sh --broker-list kafka-0.kafka-headless.kafka.svc.cluster.local:9092 --topic test
kafka-console-consumer.sh --bootstrap-server kafka-0.kafka-headless.kafka.svc.cluster.local:9092 --topic test --from-beginning

From outside the cluster (via the LoadBalancer address on port 9094):

kafka-console-producer.sh --broker-list 10.109.205.245:9094 --topic test
kafka-console-consumer.sh --bootstrap-server 10.109.205.245:9094 --topic test --from-beginning
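
If the broker has topic auto-creation enabled (the Kafka default), the console producer will create the topic on first use. To create it explicitly instead, the Kafka CLI shipped in the same image can be used; the topic name and partition/replica counts below are only illustrative:

kafka-topics.sh --create \
  --bootstrap-server kafka.kafka.svc.cluster.local:9092 \
  --topic test \
  --partitions 3 \
  --replication-factor 3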

The results are shown below:

(Screenshot: external producer)

(Screenshot: external consumer)

🎉 At this point, the Kafka installation is complete.

Kafka Uninstallation

Danger

(If needed) the command to delete the entire Kafka deployment:

helm delete kafka --namespace kafka
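
Note that helm delete does not remove the PersistentVolumeClaims created for Kafka and ZooKeeper, so the persisted data stays on disk. If you also want to remove it, delete the PVCs separately; the commands below are a sketch, list the PVCs first and double-check before deleting:

kubectl get pvc --namespace kafka
kubectl delete pvc --namespace kafka --all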

Summary

Kafka

  1. Kafka was installed via the bitnami Helm chart, into the kafka namespace of the K8s cluster;

    1. Installation mode: three nodes
    2. Kafka version: 3.1.0
    3. Kafka instances: 3
    4. ZooKeeper instances: 1
    5. Kafka data, ZooKeeper data, and Kafka logs are all persisted, located at /data/rancher/k3s/storage
    6. SASL and TLS are not configured
  2. Inside the K8s cluster, Kafka can be reached at:

    kafka.kafka.svc.cluster.local:9092

  3. Outside the K8s cluster, Kafka can be reached at:

    <loadbalancer-ip>:9094

A screenshot of the persisted Kafka data is shown below:

(Screenshot: persisted Kafka data under /data/rancher/k3s/storage)
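
The screenshot is not reproduced here, but the persisted volumes can be inspected directly on the K3s node. The path below follows the storage location mentioned in the summary and may differ in your setup:

ls -l /data/rancher/k3s/storage/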

When three people walk together, there is always one I can learn from; knowledge shared is knowledge for all. This article was written by the Dongfeng Weiming technical blog, EWhisper.cn.
