1. Single node, single broker
1.1 Installing ZooKeeper
1. Download ZooKeeper, extract it to /app, and configure the environment variables.
2. Go to $ZOOKEEPER_HOME/conf and create a zoo.cfg (copied from zoo_sample.cfg); zkServer.sh expects this file name by default.
3. Set the data directory: dataDir=/app/zookeeper-3.4.12/data. This directory must be created manually.
4. Start the service: zkServer.sh start
1.2 Installing Kafka
1. Download kafka_2.11-2.0.0; the 2.11 in the name is the Scala version the build targets, so make sure it matches your Scala installation.
2. Configure the environment variables.
1.3 Configuring server.properties
broker.id=0                                  unique per broker when clustering
listeners=PLAINTEXT://localhost:9092         default port is 9092
host.name=localhost                          the current machine
log.dirs=/app/kafka_2.11-2.0.0/kafka-logs    directory for Kafka's log segments
zookeeper.connect=localhost:2181             ZooKeeper address
1.4 Starting Kafka
kafka-server-start.sh $KAFKA_HOME/config/server.properties
Use jps to verify the process is running.
1.5 Creating a topic
kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic hello_topic
Where:
--zookeeper specifies the ZooKeeper address
--replication-factor specifies the number of replicas
--partitions specifies the number of partitions
--topic specifies the topic name
A topic can also be created programmatically; see the sketch below.
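For completeness, a topic can also be created from Java with the AdminClient that ships in kafka-clients. This is a minimal sketch, not part of the original walkthrough; it assumes a broker reachable at localhost:9092 and mirrors the CLI settings above (the class name CreateTopicExample is made up for illustration):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        try (AdminClient admin = AdminClient.create(props)) {
            // hello_topic with 1 partition and replication factor 1, as in the CLI command
            NewTopic topic = new NewTopic("hello_topic", 1, (short) 1);
            admin.createTopics(Collections.singleton(topic)).all().get(); // block until created
        }
    }
}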
1.6 Listing all topics
kafka-topics.sh --list --zookeeper localhost:2181
Check a topic's status:
kafka-topics.sh --describe --zookeeper localhost:2181 --topic hello_topic
1.7 Producing messages
kafka-console-producer.sh --broker-list localhost:9092 --topic hello_topic
Produced messages are sent into the topic; note that --broker-list must be specified here. The console producer then blocks, waiting for input.
1.8 Consuming messages
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic hello_topic --from-beginning
--from-beginning means consumption starts from the first message in the topic. The console consumer then blocks, waiting for new messages.
1.9 Testing
Send messages from the producer console; they should appear in the consumer console, confirming that messages are produced and consumed normally.
2. Single node, multiple brokers
2.1 Start ZooKeeper
Same as above.
2.2 Creating multiple server.properties files
cp $KAFKA_HOME/config/server.properties $KAFKA_HOME/config/server-1.properties
cp $KAFKA_HOME/config/server.properties $KAFKA_HOME/config/server-2.properties
Then modify the following settings:
# config/server-1.properties:
broker.id=1
listeners=PLAINTEXT://localhost:9093
log.dirs=/tmp/kafka-logs-1

# config/server-2.properties:
broker.id=2
listeners=PLAINTEXT://localhost:9094
log.dirs=/tmp/kafka-logs-2
2.3 Running in the background
kafka-server-start.sh $KAFKA_HOME/config/server.properties &
kafka-server-start.sh $KAFKA_HOME/config/server-1.properties &
kafka-server-start.sh $KAFKA_HOME/config/server-2.properties &
Check with jps.
2.4 Creating a topic
kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
2.5 Describing the topic
kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
In the describe output for a multi-broker topic:
Leader shows which broker is the leader for the partition (broker 2 in this example)
Replicas shows the 3 brokers that hold replicas
Isr (in-sync replicas) shows which replica brokers are alive and caught up
The same fields can be read programmatically; see the sketch below.
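The Leader/Replicas/Isr fields can also be read from Java via AdminClient.describeTopics. A minimal sketch under the same assumptions as the creation example earlier (DescribeTopicExample is a made-up name):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.TopicPartitionInfo;

public class DescribeTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // any live broker will do
        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin
                    .describeTopics(Collections.singleton("my-replicated-topic"))
                    .all().get()
                    .get("my-replicated-topic");
            // Print leader, replicas, and in-sync replicas per partition,
            // the same fields kafka-topics.sh --describe shows
            for (TopicPartitionInfo p : desc.partitions()) {
                System.out.println("partition " + p.partition()
                        + " leader=" + p.leader().id()
                        + " replicas=" + p.replicas()
                        + " isr=" + p.isr());
            }
        }
    }
}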
2.6 Producing and consuming messages
kafka-console-producer.sh --broker-list PLAINTEXT://localhost:9092,PLAINTEXT://localhost:9093,PLAINTEXT://localhost:9094 --topic my-replicated-topic
kafka-console-consumer.sh --bootstrap-server PLAINTEXT://localhost:9092,PLAINTEXT://localhost:9093,PLAINTEXT://localhost:9094 --from-beginning --topic my-replicated-topic
2.7 Stop the brokers one at a time and test again; describing the topic shows the Leader and Isr changing while messages keep flowing.
3. Using the Java API
Add the Maven dependencies:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.11</artifactId>
    <version>2.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.0.0</version>
</dependency>
Producer:
import java.util.List;
import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.PartitionInfo;

public class MyProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.31.122:9092");
        props.put("acks", "all"); // wait for all in-sync replicas to acknowledge
        props.put("retries", 0);
        props.put("batch.size", 16384);
        props.put("linger.ms", 1);
        props.put("buffer.memory", 33554432);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // Send messages to the topic
        String topic = "my-replicated-topic";
        Producer<String, String> producer = new KafkaProducer<String, String>(props);
        for (int i = 1; i <= 10; i++) {
            String value = "value_" + i;
            ProducerRecord<String, String> msg = new ProducerRecord<String, String>(topic, value);
            producer.send(msg);
        }

        // List the topic's partition information
        List<PartitionInfo> partitions = producer.partitionsFor(topic);
        for (PartitionInfo p : partitions) {
            System.out.println(p);
        }

        System.out.println("send message over.");
        producer.close(100, TimeUnit.MILLISECONDS);
    }
}
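The loop above is fire-and-forget: send() returns immediately and failures go unnoticed. If delivery needs to be confirmed, send() also accepts a Callback. A minimal sketch that drops into the loop above, reusing its producer, topic, and value variables:

producer.send(new ProducerRecord<String, String>(topic, value),
        new org.apache.kafka.clients.producer.Callback() {
            public void onCompletion(org.apache.kafka.clients.producer.RecordMetadata metadata,
                                     Exception exception) {
                if (exception != null) {
                    exception.printStackTrace(); // delivery failed
                } else {
                    System.out.printf("sent to partition %d at offset %d%n",
                            metadata.partition(), metadata.offset());
                }
            }
        });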
Consumer:
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class MyConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "192.168.31.122:9092");
        props.put("group.id", "test"); // the consumer group id
        props.put("enable.auto.commit", "true"); // commit offsets automatically
        props.put("auto.commit.interval.ms", "1000");
        props.put("session.timeout.ms", "30000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
        // Subscribe to the list of topics
        consumer.subscribe(Arrays.asList("my-replicated-topic"));
        while (true) {
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset = %d, key = %s, value = %s%n",
                        record.offset(), record.key(), record.value());
            }
        }
    }
}
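With enable.auto.commit=true, offsets are committed on a timer, so a crash between a commit and the actual processing can skip or re-deliver records. A hedged sketch of the manual-commit variant; only the changed lines are shown, everything else stays as in the consumer above:

props.put("enable.auto.commit", "false"); // take over offset management

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (ConsumerRecord<String, String> record : records) {
        System.out.printf("offset = %d, key = %s, value = %s%n",
                record.offset(), record.key(), record.value());
    }
    consumer.commitSync(); // commit only after the whole batch is processed
}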
Author: breezedancer
Link: https://www.jianshu.com/p/f92a2f907a01