
Big Data Technology: The Flume Framework Explained
2022-08-29 23:57:25

Overview of Flume

Flume is a highly available, highly reliable, distributed system provided by Cloudera for collecting, aggregating, and transporting massive volumes of log data. Flume is based on a streaming architecture and is flexible and simple.

  • High availability (HA): the Flume framework provides a failover mechanism
  • High reliability: data collection is reliable
  • Distributed: supports distributed cluster deployment

What Flume Does

Its main job: read data from a server's local disk in real time and write it to HDFS or Kafka.

Advantages of Flume

It can integrate with almost any storage process.

  • Supports many kinds of collection sources
  • Supports many kinds of destination stores

When the rate of incoming data exceeds the rate at which it can be written to the destination store, Flume buffers the data, easing the pressure on HDFS.

Transactions in Flume are based on the channel. Flume uses a two-transaction model (sender + receiver) to ensure messages are delivered reliably.

Flume uses two independent transactions: one for delivering events from source to channel, and one from channel to sink. Only after all the data in a transaction has been successfully committed to the channel does the source consider that data fully read. Likewise, data is removed from the channel only after it has been successfully written out by the sink.
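For illustration, here is a minimal Java sketch of the source-to-channel half of this model, using Flume's public Channel/Transaction API (the channel and event variables are assumed to be set up elsewhere; error handling is simplified):

// Source side: put an event into the channel inside a transaction.
// `channel` (org.apache.flume.Channel) and `event` are assumed to exist.
Transaction tx = channel.getTransaction();
tx.begin();
try {
    channel.put(event);  // stage the event in the transaction
    tx.commit();         // only now is the event visible in the channel
} catch (ChannelException e) {
    tx.rollback();       // on failure, the channel is left untouched
    throw e;
} finally {
    tx.close();
}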

Flume's Architecture

1. Flume's component architecture

2. Agent

a. Introduction

An Agent is a JVM process that moves data from a source to a destination in the form of events. An Agent consists of three main parts: Source, Channel, and Sink.

b. Source

The Source is the component responsible for receiving data into the Flume Agent. It can handle log data of many types and formats, including avro, thrift, exec, jms, spooling directory, netcat, sequence generator, syslog, http, and legacy.

c. Channel

The Channel is a buffer between the Source and the Sink, which allows the Source and Sink to operate at different rates. The Channel is thread-safe and can handle writes from several Sources and reads from several Sinks at the same time.

Flume ships with two Channels: the Memory Channel and the File Channel. The Memory Channel is an in-memory queue, suitable when data loss is not a concern; when data loss does matter it should not be used, because a program crash, machine failure, or restart will lose the data. The File Channel writes all events to disk, so no data is lost if the process exits or the machine goes down.
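As a sketch, switching an agent to a File Channel would look roughly like the following (checkpointDir and dataDirs are the standard File Channel properties; the paths here are placeholders):

# durable channel: events survive process or machine restarts
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /home/hadoop/flume/checkpoint
a1.channels.c1.dataDirs = /home/hadoop/flume/data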

d. Sink

The Sink continuously polls the Channel for events and removes them in batches, writing each batch to a storage or indexing system or sending it on to another Flume Agent. The Sink is fully transactional: before deleting a batch from the Channel, each Sink opens a transaction with the Channel. Once the batch has been successfully written to the storage system or the next Flume Agent, the Sink commits the transaction, and only then does the Channel delete the events from its internal buffer. Sink destinations include hdfs, logger, avro, thrift, ipc, file, null, HBase, solr, and custom sinks.
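The channel-to-sink half follows the same transaction pattern; a hedged Java sketch (batchSize and the writeToDestination helper are hypothetical stand-ins for a real sink's logic):

// Sink side: take a batch from the channel, write it out, then commit.
Transaction tx = channel.getTransaction();
tx.begin();
try {
    for (int i = 0; i < batchSize; i++) {
        Event e = channel.take();   // returns null when the channel is empty
        if (e == null) break;
        writeToDestination(e);      // hypothetical: write to HDFS, avro, etc.
    }
    tx.commit();    // only now are the events removed from the channel
} catch (Exception ex) {
    tx.rollback();  // events stay in the channel and will be redelivered
    throw new EventDeliveryException("failed to deliver batch", ex);
} finally {
    tx.close();
}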

e. Event

The Event is the basic unit of Flume data transfer; data travels from source to destination in the form of events. An Event consists of an optional header plus a byte array carrying the data. The header is a HashMap of key-value string pairs.
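A small, runnable Java sketch of building an event with Flume's EventBuilder (the header key and body text are made up for illustration; it needs flume-ng-sdk on the classpath):

import org.apache.flume.Event;
import org.apache.flume.event.EventBuilder;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class EventDemo {
    public static void main(String[] args) {
        // optional header: a map of key-value string pairs
        Map<String, String> headers = new HashMap<>();
        headers.put("hostname", "web01");
        // the body is a byte array carrying the data
        Event event = EventBuilder.withBody("one log line", StandardCharsets.UTF_8, headers);
        System.out.println(new String(event.getBody(), StandardCharsets.UTF_8));
        System.out.println(event.getHeaders());
    }
}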

Flume Agent Configuration Files

Case: Single Source, Single Sink

In this mode multiple Flume agents are chained in sequence, from the initial source all the way to the destination storage system fed by the final sink. It is not advisable to chain too many agents: an excessive number not only lowers the transfer rate, but if any one agent in the chain goes down, the whole pipeline is affected.

Example: using Flume to monitor port data:

Use the netcat tool to send messages to local port 44444 while Flume listens on that port.

# Name the components on this agent
# a1 is the agent's name; r1 is a1's source
a1.sources = r1
# k1 is a1's sink (output destination)
a1.sinks = k1
# c1 is a1's channel (buffer)
a1.channels = c1

# Describe/configure the source
# a1's source type is netcat (listens on a TCP port)
a1.sources.r1.type = netcat
# the host a1 listens on
a1.sources.r1.bind = localhost
# the port a1 listens on
a1.sources.r1.port = 44444

# Describe the sink
# a1's sink is the logger type (prints events to the console)
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
# a1's channel type is memory (in-memory queue)
a1.channels.c1.type = memory
# total channel capacity: 1000 events
a1.channels.c1.capacity = 1000
# the channel handles at most 100 events per transaction
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
# connect r1 to c1
a1.sources.r1.channels = c1
# connect k1 to c1
a1.sinks.k1.channel = c1

Starting Flume

  • Method 1: bin/flume-ng agent --conf conf/ --name a1 --conf-file job/flume-netcat-logger.conf -Dflume.root.logger=INFO,console

  • Method 2: bin/flume-ng agent -c conf/ -n a1 -f job/flume-netcat-logger.conf -Dflume.root.logger=INFO,console

Parameter notes

  • --conf conf/: the configuration files are stored in the conf/ directory
  • --name a1: names the agent a1
  • --conf-file job/flume-netcat-logger.conf: the configuration file Flume reads for this run, located in the job directory
  • -Dflume.root.logger=INFO,console: -D dynamically overrides the flume.root.logger property at runtime, setting the console log level to INFO; the log levels are debug, info, warn, and error. Once the agent is running, you can test it as shown below.
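To test, open a second terminal and send a few lines with netcat; by default the netcat source acknowledges each line with "OK", and the agent's console should log each event at INFO level:

$ nc localhost 44444
hello flume
OK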

Case: Collecting a File to HDFS in Real Time

Use Flume to monitor a file in real time; whenever the file's content changes, upload the new data to HDFS.

# Name the components on this agent
a2.sources = r2
a2.sinks = k2
a2.channels = c2

# Describe/configure the source
# the source runs a command (exec type)
a2.sources.r2.type = exec
# follow the access.log file in this directory
a2.sources.r2.command = tail -F /home/hadoop/nginx/logs/access.log
a2.sources.r2.shell = /bin/bash -c

# Describe the sink
a2.sinks.k2.type = hdfs
# destination path; %Y%m%d/%H expand to the current year-month-day and hour
a2.sinks.k2.hdfs.path = hdfs://192.168.137.128:9000/flume/%Y%m%d/%H
# prefix for uploaded files
a2.sinks.k2.hdfs.filePrefix = logs-
# whether to round down the timestamp when rolling directories
a2.sinks.k2.hdfs.round = true
# how many time units per new directory
a2.sinks.k2.hdfs.roundValue = 1
# the time unit used for rounding
a2.sinks.k2.hdfs.roundUnit = hour
# whether to use the local timestamp
a2.sinks.k2.hdfs.useLocalTimeStamp = true
# how many events to accumulate before flushing to HDFS
a2.sinks.k2.hdfs.batchSize = 1000
# file type; compression is supported
a2.sinks.k2.hdfs.fileType = DataStream
# roll a new file every 60 seconds
a2.sinks.k2.hdfs.rollInterval = 60
# roll size of each file (bytes; just under 128 MB)
a2.sinks.k2.hdfs.rollSize = 134217700
# rolling is independent of the number of events
a2.sinks.k2.hdfs.rollCount = 0
# Use a channel which buffers events in memory
a2.channels.c2.type = memory
a2.channels.c2.capacity = 1000
a2.channels.c2.transactionCapacity = 100
# Bind the source and sink to the channel
a2.sources.r2.channels = c2
a2.sinks.k2.channel = c2
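A quick way to exercise this agent, assuming the log file and HDFS path from the config above:

# append a line to the watched file...
echo "test $(date)" >> /home/hadoop/nginx/logs/access.log
# ...then check that Flume created the dated directory and file on HDFS
hdfs dfs -ls /flume/$(date +%Y%m%d)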

Case: Reading New Files in a Directory to HDFS in Real Time

Use Flume to monitor an entire directory in real time; whenever a new file appears in the directory, upload it to HDFS.

a3.sources = r3
a3.sinks = k3
a3.channels = c3

# Describe/configure the source
# the source type is spooldir (watches a directory)
a3.sources.r3.type = spooldir
# the directory to monitor
a3.sources.r3.spoolDir = /home/hadoop/bigdatasoftware/flume/upload
# suffix appended to files once they have been fully ingested
a3.sources.r3.fileSuffix = .COMPLETED
# whether to add a header storing the absolute path of the file
a3.sources.r3.fileHeader = true
# ignore (do not upload) files ending in .tmp
a3.sources.r3.ignorePattern = ([^ ]*\.tmp)

# Describe the sink
a3.sinks.k3.type = hdfs
a3.sinks.k3.hdfs.path = hdfs://192.168.137.128:9000/flume/upload/%Y%m%d/%H
# prefix for uploaded files
a3.sinks.k3.hdfs.filePrefix = upload-
# whether to round down the timestamp when rolling directories
a3.sinks.k3.hdfs.round = true
# how many time units per new directory
a3.sinks.k3.hdfs.roundValue = 1
# the time unit used for rounding
a3.sinks.k3.hdfs.roundUnit = hour
# whether to use the local timestamp
a3.sinks.k3.hdfs.useLocalTimeStamp = true
# how many events to accumulate before flushing to HDFS
a3.sinks.k3.hdfs.batchSize = 100
# file type; compression is supported
a3.sinks.k3.hdfs.fileType = DataStream
# roll a new file every 60 seconds
a3.sinks.k3.hdfs.rollInterval = 60
# roll size of each file, roughly 128 MB
a3.sinks.k3.hdfs.rollSize = 134217700
# rolling is independent of the number of events
a3.sinks.k3.hdfs.rollCount = 0
# Use a channel which buffers events in memory
a3.channels.c3.type = memory
a3.channels.c3.capacity = 1000
a3.channels.c3.transactionCapacity = 100

# Bind the source and sink to the channel
a3.sources.r3.channels = c3
a3.sinks.k3.channel = c3
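To test, drop a file into the spooled directory and watch it get renamed once it has been ingested (somefile.log is a placeholder):

cp /home/hadoop/somefile.log /home/hadoop/bigdatasoftware/flume/upload/
# after ingestion the file is renamed with the configured suffix
ls /home/hadoop/bigdatasoftware/flume/upload/
# somefile.log.COMPLETED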

Case: Single Source, Multiple Sinks (Selector)

Flume can send an event stream to one or more destinations. In this mode the source's data is replicated into multiple channels, every channel holds the same data, and each sink can deliver it to a different destination.

flume1 monitors a file for changes and passes the changed content to flume2 and flume3.

flume2 writes its output to HDFS.

flume3 writes its output to the local filesystem.

All three Flume agents run on the same machine.

flume1:

# Name the components on this agent
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1 c2
# replicate the data stream to all channels
a1.sources.r1.selector.type = replicating

# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /home/hadoop/bigdatasoftware/nginx/logs/access.log
a1.sources.r1.shell = /bin/bash -c

# Describe the sink
# an avro sink is a data sender
a1.sinks.k1.type = avro
# the address of one receiving flume agent
a1.sinks.k1.hostname = 192.168.137.128
a1.sinks.k1.port = 4141
a1.sinks.k2.type = avro
# the address of the other receiving flume agent
a1.sinks.k2.hostname = 192.168.137.128
a1.sinks.k2.port = 4142

# Describe the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1 c2
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c2

flume2:

# Name the components on this agent
a2.sources = r1
a2.sinks = k1
a2.channels = c1

# Describe/configure the source
# an avro source is a data-receiving service
a2.sources.r1.type = avro
# bind to this machine's address; note the port number
a2.sources.r1.bind = 192.168.137.128
a2.sources.r1.port = 4141

# Describe the sink
a2.sinks.k1.type = hdfs
a2.sinks.k1.hdfs.path = hdfs://192.168.137.128:9000/flume2/%Y%m%d/%H
# prefix for uploaded files
a2.sinks.k1.hdfs.filePrefix = flume2-
# whether to round down the timestamp when rolling directories
a2.sinks.k1.hdfs.round = true
# how many time units per new directory
a2.sinks.k1.hdfs.roundValue = 1
# the time unit used for rounding
a2.sinks.k1.hdfs.roundUnit = hour
# whether to use the local timestamp
a2.sinks.k1.hdfs.useLocalTimeStamp = true
# how many events to accumulate before flushing to HDFS
a2.sinks.k1.hdfs.batchSize = 100
# file type; compression is supported
a2.sinks.k1.hdfs.fileType = DataStream
# roll a new file every 600 seconds
a2.sinks.k1.hdfs.rollInterval = 600
# roll size of each file, roughly 128 MB
a2.sinks.k1.hdfs.rollSize = 134217700
# rolling is independent of the number of events
a2.sinks.k1.hdfs.rollCount = 0

# Describe the channel
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1

flume3:

# Name the components on this agent
a3.sources = r1
a3.sinks = k1
a3.channels = c2

# Describe/configure the source
a3.sources.r1.type = avro
a3.sources.r1.bind = 192.168.137.128
a3.sources.r1.port = 4142

# Describe the sink
a3.sinks.k1.type = file_roll
a3.sinks.k1.sink.directory = /opt/module/data/flume3

# Describe the channel
a3.channels.c2.type = memory
a3.channels.c2.capacity = 1000
a3.channels.c2.transactionCapacity = 100

# Bind the source and sink to the channel
a3.sources.r1.channels = c2
a3.sinks.k1.channel = c2

Note: each receiver's bind address and port must match the corresponding sender's address and port. Because the avro sinks connect out to the avro sources, start the receiving agents (flume2 and flume3) before flume1, as sketched below.
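A possible startup sequence (the config file names under job/ are assumptions):

# start the downstream receivers first so the avro sinks can connect
bin/flume-ng agent -c conf/ -n a3 -f job/flume3.conf -Dflume.root.logger=INFO,console
bin/flume-ng agent -c conf/ -n a2 -f job/flume2.conf -Dflume.root.logger=INFO,console
# then start the sender
bin/flume-ng agent -c conf/ -n a1 -f job/flume1.conf -Dflume.root.logger=INFO,console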

Case: Single Source, Multiple Sinks (Sink Group)

Flume supports logically grouping multiple sinks into a sink group; Flume then distributes data across the group's sinks, mainly to provide load balancing and failover.

Configure one source that receives the log data, one channel, and two sinks, which deliver respectively to flume-flume-console1 and flume-flume-console2.

flume1:

a1.sources = r1
a1.channels = c1
a1.sinks = k1 k2

a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 22222

# define a sink group
# when one channel feeds multiple sinks, a sink group must be configured
a1.sinkgroups = g1
# list the sinks that belong to the group
a1.sinkgroups.g1.sinks = k1 k2
# set the sink processor type (load balancing)
a1.sinkgroups.g1.processor.type = load_balance
# selector options: random (random assignment) or round_robin (round-robin)
a1.sinkgroups.g1.processor.selector = random


a1.channels.c1.type = memory


a1.sinks.k1.type = avro
a1.sinks.k1.hostname = 192.168.137.128
a1.sinks.k1.port = 33333

a1.sinks.k2.type = avro
a1.sinks.k2.hostname = 192.168.137.129
a1.sinks.k2.port = 44444


a1.sources.r1.channels = c1 
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c1

flume2:

a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sources.r1.type = avro
a1.sources.r1.bind = 192.168.137.128
a1.sources.r1.port = 33333

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000

a1.sinks.k1.type = logger

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

flume3:

a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sources.r1.type = avro
a1.sources.r1.bind = 192.168.137.129
a1.sources.r1.port = 44444

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000

a1.sinks.k1.type = logger


a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
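The introduction above also mentions failover. As a sketch, the same sink group in flume1 could be switched to the failover processor (the priority and maxpenalty values here are illustrative):

# failover instead of load balancing: the sink with the larger priority
# value is used first; if it fails, the group fails over to the other sink
a1.sinkgroups.g1.processor.type = failover
a1.sinkgroups.g1.processor.priority.k1 = 5
a1.sinkgroups.g1.processor.priority.k2 = 10
# how long (ms) a failed sink is blacklisted before being retried
a1.sinkgroups.g1.processor.maxpenalty = 10000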

Aggregating Multiple Data Sources

This mode is the one we see most often, and it is very practical. A typical web application is spread across hundreds of servers, sometimes thousands or even tens of thousands, and the logs they produce are painful to process. This Flume topology solves the problem well: each server runs a Flume agent that collects its logs and sends them to one central log-collecting Flume agent, which then uploads them to HDFS, Hive, HBase, JMS, and so on for log analysis.

flume1 monitors a file for changes.

flume2 monitors a port for data.

flume1 and flume2 send their data to flume3, which prints it to the console.

flume1:

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /home/hadoop/nginx/logs/access.log
a1.sources.r1.shell = /bin/bash -c

# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = 192.168.137.129
a1.sinks.k1.port = 4141

# Describe the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

flume2:

# Name the components on this agent
a2.sources = r1
a2.sinks = k1
a2.channels = c1

# Describe/configure the source
a2.sources.r1.type = netcat
a2.sources.r1.bind = 192.168.137.128
a2.sources.r1.port = 44444

# Describe the sink
a2.sinks.k1.type = avro
a2.sinks.k1.hostname = 192.168.137.129
a2.sinks.k1.port = 4141

# Use a channel which buffers events in memory
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1

flume3:

# Name the components on this agent
a3.sources = r1
a3.sinks = k1
a3.channels = c1

# Describe/configure the source
a3.sources.r1.type = avro
a3.sources.r1.bind = 192.168.137.129
a3.sources.r1.port = 4141

# Describe the sink
a3.sinks.k1.type = logger

# Describe the channel
a3.channels.c1.type = memory
a3.channels.c1.capacity = 1000
a3.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a3.sources.r1.channels = c1
a3.sinks.k1.channel = c1
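As before, a possible way to bring the topology up and test it (config file names are assumptions); start the aggregator flume3 first, then the two collectors:

# on 192.168.137.129: start the aggregator first
bin/flume-ng agent -c conf/ -n a3 -f job/flume3.conf -Dflume.root.logger=INFO,console
# on 192.168.137.128: start the two collectors
bin/flume-ng agent -c conf/ -n a1 -f job/flume1.conf -Dflume.root.logger=INFO,console
bin/flume-ng agent -c conf/ -n a2 -f job/flume2.conf -Dflume.root.logger=INFO,console

# test: append to the watched log and send a netcat message
echo "log line" >> /home/hadoop/nginx/logs/access.log
nc 192.168.137.128 44444
# both should appear on flume3's console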

Source: https://www.cnblogs.com/
