
Flume 1.9.0
Test 1: file channel + Kafka sink, with the following producer parameters modified:
a1.sinks.k1.kafka.producer.retries = 1
a1.sinks.k1.kafka.producer.max.block.ms = 0
Observed: events are first written out to the file channel in full, and only then drained by the sink into Kafka (not real-time).
Result: 30,000 events took 256.008 s in total (roughly 117 events/s).
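For context, the two overrides above are passed straight through to the Kafka Java producer (the KafkaSink strips the kafka.producer. prefix). A minimal standalone sketch, assuming one reachable broker from the test cluster, of what they mean: with max.block.ms = 0 a send() never blocks waiting for metadata or buffer space but fails immediately, and retries = 1 allows exactly one automatic retry of a failed send.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerOverridesDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        // One broker from the test cluster above
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "vm0104:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        // The two overrides from the sink config:
        props.put(ProducerConfig.RETRIES_CONFIG, 1);       // retry a failed send once
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 0L); // fail fast instead of blocking in send()
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("flume-hlwx-stream", "test-event"), (meta, ex) -> {
                if (ex != null) {
                    // With max.block.ms = 0 an unavailable broker surfaces here
                    // almost immediately rather than after a long block
                    System.err.println("send failed: " + ex);
                }
            });
            producer.flush();
        }
    }
}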
----------------------------------- Config begin --------------------------------------
a1.sources = r1
a1.channels = c1
a1.sinks = k1
a1.sources.r1.type = http
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 55000
a1.sources.r1.contextPath = agent
a1.sources.r1.channels = c1
a1.sources.r1.handler = com.zzwl.flume.source.ZzwlHttpServerHandler
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /home/bigdata/flume/data/datacheck
a1.channels.c1.dataDirs = /home/bigdata/flume/data
a1.channels.c1.useDualCheckpoints = true
a1.channels.c1.backupCheckpointDir = /home/bigdata/flume/data/bakdatacheck
a1.channels.c1.checkpointInterval = 30000
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.topic = flume-hlwx-stream
a1.sinks.k1.kafka.bootstrap.servers = vm0104:9092,vm0204:9092,vm0402:9092
a1.sinks.k1.kafka.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1
a1.sinks.k1.kafka.producer.compression.type = snappy
a1.sinks.k1.kafka.producer.client.id = flume-kafka-producer
a1.sinks.k1.kafka.producer.max.block.ms = 0
a1.sinks.k1.kafka.producer.retries = 1
a1.sinks.k1.channel = c1
----------------------------------- Config end --------------------------------------
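To reproduce the 30,000-event run, something like the following can drive the HTTP source (Java 11+ HttpClient). The payload format is a guess: the config uses a custom handler, com.zzwl.flume.source.ZzwlHttpServerHandler, whose expected body is not shown here, so this sketch assumes the JSON array format of Flume's stock JSONHandler and a hypothetical localhost deployment.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HttpSourceLoadTest {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // port and contextPath from the config above; the host is hypothetical
        URI uri = URI.create("http://localhost:55000/agent");
        // Assumed JSONHandler-style body; adjust for ZzwlHttpServerHandler
        String body = "[{\"headers\":{},\"body\":\"test-event\"}]";
        HttpRequest request = HttpRequest.newBuilder(uri)
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        long start = System.nanoTime();
        for (int i = 0; i < 30_000; i++) {
            client.send(request, HttpResponse.BodyHandlers.ofString());
        }
        System.out.printf("30000 events in %.3f s%n", (System.nanoTime() - start) / 1e9);
    }
}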
Test 2: memory channel + Kafka sink, same producer parameter changes:
a1.sinks.k1.kafka.producer.retries = 1
a1.sinks.k1.kafka.producer.max.block.ms = 0
Observed: with the memory channel, events flow through the sink into Kafka in real time.
Result: 30,000 events took 256.008 s in total.
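The memory-channel variant of the config is not shown in the original notes; presumably only the channel section changed. A sketch of what it would look like, where the capacity values are illustrative assumptions, not from the test:

a1.channels.c1.type = memory
# capacity / transactionCapacity below are assumed values;
# transactionCapacity must be >= the sink's flumeBatchSize (20 above)
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 1000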