A Deep Dive into Reading and Writing Kafka from Python
This article shows how to read from and write to Kafka with Python, covering both the producer and the consumer side.
The examples below use the kafka-python client.

Producer

A crawler mostly plays the role of message sender. Once a message is sent, it is worth recording which partition it landed in and at what offset; in many situations these records help you locate problems quickly. So attach callbacks to the send() call for both the success case and the failure case.
# -*- coding: utf-8 -*-
'''
Callbacks also preserve per-partition ordering: if message a is sent before
message b to the same partition, a's callback is invoked before b's.
'''
import json

from kafka import KafkaProducer

topic = 'demo'


def on_send_success(record_metadata):
    print(record_metadata.topic)
    print(record_metadata.partition)
    print(record_metadata.offset)


def on_send_error(excp):
    print('I am an errback: {}'.format(excp))


def main():
    producer = KafkaProducer(
        bootstrap_servers='localhost:9092'
    )
    producer.send(topic, value=b'{"test_msg":"hello world"}') \
        .add_callback(on_send_success) \
        .add_errback(on_send_error)
    # close() blocks until all outstanding send requests have completed,
    # then shuts down the KafkaProducer
    producer.close()


def main2():
    '''
    Send a JSON-formatted message.
    '''
    producer = KafkaProducer(
        bootstrap_servers='localhost:9092',
        value_serializer=lambda m: json.dumps(m).encode('utf-8')
    )
    producer.send(topic, value={"test_msg": "hello world"}) \
        .add_callback(on_send_success) \
        .add_errback(on_send_error)
    # close() blocks until all outstanding send requests have completed,
    # then shuts down the KafkaProducer
    producer.close()


if __name__ == '__main__':
    # main()
    main2()
Consumer

Kafka's consumption model is fairly involved, so I will walk through the following cases one by one.

1. Without a consumer group (group_id=None)

Without a consumer group you can start any number of consumers and are no longer limited by the partition count: even when the number of consumers exceeds the number of partitions, every consumer still receives the messages.
# -*- coding: utf-8 -*-
'''
Consumer: group_id=None
'''
from kafka import KafkaConsumer

topic = 'demo'


def main():
    consumer = KafkaConsumer(
        topic,
        bootstrap_servers='localhost:9092',
        auto_offset_reset='latest',
        # auto_offset_reset='earliest',
    )
    for msg in consumer:
        print(msg)
        print(msg.value)
    consumer.close()


if __name__ == '__main__':
    main()
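A note on the consumer side of serialization: msg.value above is raw bytes. If the producer JSON-encodes values (as in the producer section), the consumer can mirror that with a value_deserializer. A minimal sketch follows; decode_json is a helper name of my own choosing, not something kafka-python provides:

```python
import json


def decode_json(raw_bytes):
    # Inverse of the producer's value_serializer:
    # json.dumps(m).encode('utf-8')
    return json.loads(raw_bytes.decode('utf-8'))


# With a running broker, it would be wired in like this:
# consumer = KafkaConsumer(
#     'demo',
#     bootstrap_servers='localhost:9092',
#     value_deserializer=decode_json,
# )
# for msg in consumer:
#     print(msg.value)  # already a dict, not bytes

print(decode_json(b'{"test_msg": "hello world"}'))  # -> {'test_msg': 'hello world'}
```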
2. With a specified consumer group

The example below uses the poll() method to fetch messages.

Each poll() call only fetches messages from a single partition, so with 2 partitions and 1 consumer it takes 2 calls to cover both.

poll() fetches immediately when messages are available; if no new message arrives within timeout_ms it returns an empty dict. One call may therefore return just 1 message while another returns max_records messages.
# -*- coding: utf-8 -*-
'''
Consumer: with a group_id
'''
from kafka import KafkaConsumer

topic = 'demo'
group_id = 'test_id'


def main():
    consumer = KafkaConsumer(
        topic,
        bootstrap_servers='localhost:9092',
        auto_offset_reset='latest',
        group_id=group_id,
    )
    while True:
        try:
            # poll() returns a dict: {TopicPartition: [ConsumerRecord, ...]}
            batch_msgs = consumer.poll(timeout_ms=1000, max_records=2)
            if not batch_msgs:
                continue
            '''
            {TopicPartition(topic='demo', partition=0): [ConsumerRecord(topic='demo', partition=0, offset=42, timestamp=1576425111411, timestamp_type=0, key=None, value=b'74', headers=[], checksum=None, serialized_key_size=-1, serialized_value_size=2, serialized_header_size=-1)]}
            '''
            for tp, msgs in batch_msgs.items():
                print('topic: {}, partition: {}, receive length: {}'.format(tp.topic, tp.partition, len(msgs)))
                for msg in msgs:
                    print(msg.value)
        except KeyboardInterrupt:
            break
    consumer.close()


if __name__ == '__main__':
    main()
About consumer groups

The behaviour splits into the following cases depending on configuration:

- group_id=None
  - auto_offset_reset='latest': every start consumes from the newest messages, so anything produced while the consumer was down is lost
  - auto_offset_reset='earliest': every start consumes the full history from the beginning, regardless of what has already been consumed
- With a specified group_id
  - A brand-new group_id
    - auto_offset_reset='latest': consumes only messages received after startup; after a restart, resumes from the last committed offset
    - auto_offset_reset='earliest': consumes the full history from the beginning
  - An existing group_id (the Kafka cluster still holds commit records for this group_id)
    - auto_offset_reset='latest': resumes from the last committed offset
    - auto_offset_reset='earliest': resumes from the last committed offset
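For the group_id cases, the rule condenses to one sentence: an existing commit record always wins, and auto_offset_reset only applies when the group has no commits yet. A tiny illustrative helper (my own sketch, not a kafka-python API) encodes that:

```python
def starting_offset(committed_offset, auto_offset_reset):
    """Where a consumer with a group_id starts reading a partition.

    committed_offset: the group's last committed offset, or None if the
    group_id is brand new (no commit record in the cluster).
    """
    if committed_offset is not None:
        # Existing commit record: resume there, whatever the reset policy is.
        return committed_offset
    # Brand-new group: the reset policy decides.
    return 'earliest' if auto_offset_reset == 'earliest' else 'latest'


print(starting_offset(None, 'latest'))    # -> latest (only messages after startup)
print(starting_offset(None, 'earliest'))  # -> earliest (full history)
print(starting_offset(42, 'earliest'))    # -> 42 (resume from last commit)
```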
Performance tests

The following tests were run locally. If you plan to use Kafka in production, run your own performance tests first.
producer
# -*- coding: utf-8 -*-
'''
producer performance
environment:
    macOS
    Python 3.7
    1 broker
    2 partitions
'''
import json
import time

from kafka import KafkaProducer

topic = 'demo'
nums = 1000000


def main():
    producer = KafkaProducer(
        bootstrap_servers='localhost:9092',
        value_serializer=lambda m: json.dumps(m).encode('utf-8')
    )
    st = time.time()
    cnt = 0
    for i in range(nums):
        producer.send(topic, value=i)
        cnt += 1
        if cnt % 10000 == 0:
            print(cnt)
    producer.flush()
    et = time.time()
    cost_time = et - st
    print('send nums: {}, cost time: {}, rate: {}/s'.format(nums, cost_time, nums // cost_time))


if __name__ == '__main__':
    main()
'''
send nums: 1000000, cost time: 61.89236712455749, rate: 16157.0/s
send nums: 1000000, cost time: 61.29534196853638, rate: 16314.0/s
'''
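Producer throughput is sensitive to batching. Before accepting numbers like the above, it may be worth experimenting with kafka-python's batching parameters. linger_ms, batch_size, and compression_type are real KafkaProducer options, but the values below are illustrative guesses, not recommendations:

```python
# Candidate settings to experiment with (values are illustrative):
producer_config = {
    'bootstrap_servers': 'localhost:9092',
    'linger_ms': 5,              # wait up to 5 ms so more records share a batch (default 0)
    'batch_size': 64 * 1024,     # per-partition batch buffer in bytes (default 16384)
    'compression_type': 'gzip',  # trades CPU for network; 'lz4' needs the lz4 package
}

# With a broker running:
# producer = KafkaProducer(**producer_config,
#                          value_serializer=lambda m: json.dumps(m).encode('utf-8'))
print(producer_config['batch_size'])  # -> 65536
```

Larger batches and a small linger generally raise throughput at the cost of per-message latency; measure both before committing to values.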
consumer
# -*- coding: utf-8 -*-
'''
consumer performance
'''
import time

from kafka import KafkaConsumer

topic = 'demo'
group_id = 'test_id'


def main1():
    nums = 0
    st = time.time()
    consumer = KafkaConsumer(
        topic,
        bootstrap_servers='localhost:9092',
        auto_offset_reset='latest',
        group_id=group_id
    )
    for msg in consumer:
        nums += 1
        if nums >= 500000:
            break
    consumer.close()
    et = time.time()
    cost_time = et - st
    print('one_by_one: consume nums: {}, cost time: {}, rate: {}/s'.format(nums, cost_time, nums // cost_time))


def main2():
    nums = 0
    st = time.time()
    consumer = KafkaConsumer(
        topic,
        bootstrap_servers='localhost:9092',
        auto_offset_reset='latest',
        group_id=group_id
    )
    running = True
    batch_pool_nums = 1
    while running:
        batch_msgs = consumer.poll(timeout_ms=1000, max_records=batch_pool_nums)
        if not batch_msgs:
            continue
        for tp, msgs in batch_msgs.items():
            nums += len(msgs)
            if nums >= 500000:
                running = False
                break
    consumer.close()
    et = time.time()
    cost_time = et - st
    print('batch_pool: max_records: {} consume nums: {}, cost time: {}, rate: {}/s'.format(batch_pool_nums, nums,
                                                                                           cost_time,
                                                                                           nums // cost_time))


if __name__ == '__main__':
    # main1()
    main2()
'''
one_by_one: consume nums: 500000, cost time: 8.018627166748047, rate: 62354.0/s
one_by_one: consume nums: 500000, cost time: 7.698841094970703, rate: 64944.0/s
batch_pool: max_records: 1 consume nums: 500000, cost time: 17.975456953048706, rate: 27815.0/s
batch_pool: max_records: 1 consume nums: 500000, cost time: 16.711708784103394, rate: 29919.0/s
batch_pool: max_records: 500 consume nums: 500369, cost time: 6.654940843582153, rate: 75187.0/s
batch_pool: max_records: 500 consume nums: 500183, cost time: 6.854053258895874, rate: 72976.0/s
batch_pool: max_records: 1000 consume nums: 500485, cost time: 6.504687070846558, rate: 76942.0/s
batch_pool: max_records: 1000 consume nums: 500775, cost time: 7.047331809997559, rate: 71058.0/s
'''
That is all for this article; hopefully it is helpful for your study or work.