A Brief Look at Real-Time CPU and GPU Power Monitoring in Python
Preface
There are quite a few monitoring tools for this, such as powertop, powerstat, and s-tui. But reading these values in real time from code is trickier. After a lot of searching and experimenting, I found workable ways to measure both CPU and GPU power draw. If anything here is wrong, or you know a better approach, feel free to leave a comment!
Complete power-analysis example code is attached at the end of this article!
Measuring GPU power
For command-line use there is NVIDIA's official NVML. Since we need to drive it from Python, we use its Python bindings: pynvml.
Install it first:
pip install pynvml
There are plenty of tutorials for this library online, so here is just a minimal example:
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
powerusage = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000  # raw unit is mW
The value obtained this way matches what the "nvidia-smi" command reports.
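Note that nvmlDeviceGetPowerUsage returns milliwatts, which is why the reading is divided by 1000. Since instantaneous draw fluctuates, averaging a few samples gives a steadier number. A minimal sketch of that, where the helper names and the sampling logic are my own, not part of pynvml:

```python
import statistics

def mw_to_watts(milliwatts):
    """Convert a raw NVML power reading (milliwatts) to watts."""
    return milliwatts / 1000

def average_power(samples_mw):
    """Average a list of raw NVML power samples and return watts."""
    return mw_to_watts(statistics.mean(samples_mw))

# With a GPU present, samples would come from repeated
# pynvml.nvmlDeviceGetPowerUsage(handle) calls; fixed values stand in here.
samples = [152000, 148000, 150000]
print(average_power(samples))  # 150.0
```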
As a bonus, here is a function from the web that pulls more detailed information:
from psutil import _common as common
import pynvml

def get_sensor_values():
    """Collect GPU sensor values."""
    values = list()
    # GPU driver version
    version = pynvml.nvmlSystemGetDriverVersion()
    values.append("GPU_device_driver_version:" + version.decode())
    gpucount = pynvml.nvmlDeviceGetCount()  # number of GPUs present
    for gpu_id in range(gpucount):
        handle = pynvml.nvmlDeviceGetHandleByIndex(gpu_id)
        name = pynvml.nvmlDeviceGetName(handle).decode()
        meminfo = pynvml.nvmlDeviceGetMemoryInfo(handle)
        gpu_id = str(gpu_id)
        # meminfo.total: total VRAM on the card
        values.append("GPU " + gpu_id + " " + name + " total VRAM: " + str(common.bytes2human(meminfo.total)))
        # meminfo.used: VRAM in use
        values.append("GPU " + gpu_id + " " + name + " VRAM used: " + str(common.bytes2human(meminfo.used)))
        # meminfo.free: free VRAM remaining
        values.append("GPU " + gpu_id + " " + name + " VRAM free: " + str(common.bytes2human(meminfo.free)))
        values.append("GPU " + gpu_id + " " + name + " free VRAM percentage: " + str(int((meminfo.free / meminfo.total) * 100)))
        utilization = pynvml.nvmlDeviceGetUtilizationRates(handle)
        # utilization.gpu: GPU utilization
        values.append("GPU " + gpu_id + " " + name + " GPU utilization: " + str(utilization.gpu))
        powerusage = pynvml.nvmlDeviceGetPowerUsage(handle)
        # current power draw; the raw unit is mW
        values.append("GPU " + gpu_id + " " + name + " current power (W): " + str(powerusage / 1000))
        # current GPU power cap:
        # pynvml.nvmlDeviceGetEnforcedPowerLimit(handle)
        # GPU temperature could be read with the call below; we currently
        # obtain it via ipmi sdr instead, so it is left out here
        # temp = pynvml.nvmlDeviceGetTemperature(handle, 0)
    print('\n'.join(values))
    return values
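The bytes2human helper above comes from psutil's private _common module. If you'd rather not depend on a private API, a rough equivalent is easy to write; this fallback is my own sketch, not psutil's implementation:

```python
def bytes2human(n):
    """Format a byte count with binary-prefix units, e.g. 10240 -> '10.0K'."""
    symbols = ('K', 'M', 'G', 'T', 'P')
    # 'K' -> 2**10, 'M' -> 2**20, ...
    prefix = {s: 1 << (i + 1) * 10 for i, s in enumerate(symbols)}
    for s in reversed(symbols):
        if n >= prefix[s]:
            return '%.1f%s' % (n / prefix[s], s)
    return '%sB' % n

print(bytes2human(10240))        # 10.0K
print(bytes2human(8 * 1024**3))  # 8.0G
```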
Measuring CPU power
I couldn't find an open-source library that does this out of the box. But after some searching I realized that the widely used s-tui tool is open source! Reading its source code, I found the part that obtains CPU power, and wrote the following based on it.
Install it first:
sudo apt install s-tui
# or install it via pip:
pip install s-tui
Run the tool directly first to see it in action (without sudo the Power readings will not appear):
sudo s-tui
This confirms the tool really can read CPU power. Each "package" entry corresponds to one CPU (two CPUs here); "dram" is the memory power draw (usually inaccurate, safe to ignore).
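Under the hood, s-tui reads Intel RAPL energy counters exposed by Linux (the powercap sysfs files such as /sys/class/powercap/intel-rapl/.../energy_uj). Power is the difference between two energy readings divided by the elapsed time, and the counter wraps around at max_energy_range_uj. A sketch of that computation; the function name, the wraparound handling, and the default range value are my own assumptions (in practice, read max_energy_range_uj from sysfs):

```python
def rapl_power_watts(energy_uj_start, energy_uj_end, elapsed_s,
                     max_energy_range_uj=262143328850):
    """Compute average power (W) from two RAPL energy_uj readings.

    energy_uj counts microjoules and wraps at max_energy_range_uj,
    so a smaller second reading means the counter rolled over.
    """
    delta = energy_uj_end - energy_uj_start
    if delta < 0:  # counter wrapped around
        delta += max_energy_range_uj
    return delta / 1e6 / elapsed_s  # uJ -> J, then J per second = W

# 50 J consumed over 2 s -> 25 W
print(rapl_power_watts(1_000_000, 51_000_000, 2.0))  # 25.0
```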
Here is a minimal example:
from s_tui.sources.rapl_power_source import RaplPowerSource

source = RaplPowerSource()
source.update()
summary = dict(source.get_sensors_summary())
cpu_power_total = str(sum(list(map(float, [summary[key] for key in summary.keys() if key.startswith('package')]))))
Note, though: because sudo privileges are required, the .py file itself must also be run with sudo, for example:
sudo python demo.py
The sudo problem and a workaround
As mentioned above, sudo is mandatory, but "sudo python" changes the environment the script runs in, which is awkward. I eventually found a roundabout solution: run one script under sudo that listens on a socket, and have our actual script connect to that socket whenever it needs a CPU power reading.
Why a raw socket here instead of HTTP? Because a socket has a bit less overhead!
We write a "power_listener.py" to do the listening:
from s_tui.sources.rapl_power_source import RaplPowerSource
import socket
import json

def output_to_terminal(source):
    results = {}
    if source.get_is_available():
        source.update()
        source_name = source.get_source_name()
        results[source_name] = source.get_sensors_summary()
    for key, value in results.items():
        print(str(key) + ": ")
        for skey, svalue in value.items():
            print(str(skey) + ": " + str(svalue) + ", ")

source = RaplPowerSource()
# output_to_terminal(source)

s = socket.socket()
host = socket.gethostname()
port = 8888
s.bind((host, port))
s.listen(5)
print("Waiting for client connections...")
while True:
    c, addr = s.accept()
    source.update()
    summary = dict(source.get_sensors_summary())
    # msg = json.dumps(summary)
    # "package" entries are CPUs, "dram" is memory (usually inaccurate)
    power_total = str(sum(list(map(float, [summary[key] for key in summary.keys() if key.startswith('package')]))))
    print(f'Sent to {addr}: {power_total}')
    c.send(power_total.encode('utf-8'))
    c.close()  # close the connection
Then, whenever a CPU power reading is needed, the client only has to do:
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
host = socket.gethostname()
port = 8888
s.connect((host, port))
msg = s.recv(1024)
s.close()
power_usage_cpu = float(msg.decode('utf-8'))
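The listener/client round trip above can be exercised without RAPL at all by serving a canned value over localhost. A self-contained sketch, where the fake reading, the one-shot server, and all names are my own stand-ins for power_listener.py:

```python
import socket
import threading

# Stand-in for power_listener.py: serve one canned reading over localhost.
srv = socket.socket()
srv.bind(('127.0.0.1', 0))  # port 0 lets the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

def serve_once(value):
    # Accept a single connection, send the reading, then close.
    c, addr = srv.accept()
    c.send(str(value).encode('utf-8'))
    c.close()

t = threading.Thread(target=serve_once, args=(42.5,), daemon=True)
t.start()

# Client side, identical in shape to the snippet above.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('127.0.0.1', port))
msg = s.recv(1024)
s.close()
t.join()
srv.close()
print(float(msg.decode('utf-8')))  # 42.5
```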
Complete power-analysis example code
Below is the power-analysis code I wrote and use myself, for reference. (Note that power_listener.py above must be kept running.)
import cv2
import socket
import sys
import threading
import time
import json
import statistics
from functools import wraps
from psutil import _common as common
import pynvml

pynvml.nvmlInit()

class Timer:
    def __init__(self, name='', is_verbose=False):
        self._name = name
        self._is_verbose = is_verbose
        self._is_paused = False
        self._start_time = None
        self._accumulated = 0
        self._elapsed = 0
        self.start()

    def start(self):
        self._accumulated = 0
        self._start_time = cv2.getTickCount()

    def pause(self):
        now_time = cv2.getTickCount()
        self._accumulated += (now_time - self._start_time) / cv2.getTickFrequency()
        self._is_paused = True

    def resume(self):
        if self._is_paused:  # considered only if paused
            self._start_time = cv2.getTickCount()
            self._is_paused = False

    def elapsed(self):
        if self._is_paused:
            self._elapsed = self._accumulated
        else:
            now = cv2.getTickCount()
            self._elapsed = self._accumulated + (now - self._start_time) / cv2.getTickFrequency()
        if self._is_verbose is True:
            name = self._name
            if self._is_paused:
                name += ' [paused]'
            message = 'Timer::' + name + ' - elapsed: ' + str(self._elapsed)
            print(message)
        return self._elapsed

class PowerUsage:
    '''
    demo:
        power_usage = PowerUsage()
        power_usage.analyze_start()
        time.sleep(2)
        time_used, power_usage_gpu, power_usage_cpu = power_usage.analyze_end()
        print(time_used)
        print(power_usage_gpu)
        print(power_usage_cpu)
    '''
    def __init__(self):
        self.start_analyze = False
        self.power_usage_gpu_values = list()
        self.power_usage_cpu_values = list()
        self.thread = None
        self.timer = Timer(name='GpuPowerUsage', is_verbose=False)

    def analyze_start(self, gpu_id=0, delay=0.1):
        handle = pynvml.nvmlDeviceGetHandleByIndex(gpu_id)

        def start():
            self.power_usage_gpu_values.clear()
            self.power_usage_cpu_values.clear()
            self.start_analyze = True
            self.timer.start()
            while self.start_analyze:
                powerusage = pynvml.nvmlDeviceGetPowerUsage(handle)
                self.power_usage_gpu_values.append(powerusage / 1000)
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                host = socket.gethostname()
                port = 8888
                s.connect((host, port))
                msg = s.recv(1024)
                s.close()
                self.power_usage_cpu_values.append(float(msg.decode('utf-8')))
                time.sleep(delay)

        self.thread = threading.Thread(target=start, daemon=True)
        self.thread.start()

    def analyze_end(self, mean=True):
        self.start_analyze = False
        while self.thread and self.thread.is_alive():
            time.sleep(0.01)
        time_used = self.timer.elapsed()
        self.thread = None
        power_usage_gpu = statistics.mean(self.power_usage_gpu_values) if mean else self.power_usage_gpu_values
        power_usage_cpu = statistics.mean(self.power_usage_cpu_values) if mean else self.power_usage_cpu_values
        return time_used, power_usage_gpu, power_usage_cpu

power_usage = PowerUsage()

def power_usage_api(func, note=''):
    @wraps(func)
    def wrapper(*args, **kwargs):
        power_usage.analyze_start()
        result = func(*args, **kwargs)
        print(f'{note}{power_usage.analyze_end()}')
        return result
    return wrapper

def power_usage_api2(note=''):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            power_usage.analyze_start()
            result = func(*args, **kwargs)
            print(f'{note}{power_usage.analyze_end()}')
            return result
        return wrapper
    return decorator
Usage example:
import time

power_usage = PowerUsage()
power_usage.analyze_start()
# ----------------------
# xxx: the code section to analyze;
# here a sleep stands in for the workload
time.sleep(2)
# ----------------------
time_used, power_usage_gpu, power_usage_cpu = power_usage.analyze_end()
print(f'time_used: {time_used}')
print(f'power_usage_gpu: {power_usage_gpu}')
print(f'power_usage_cpu: {power_usage_cpu}')
That concludes this brief look at real-time CPU and GPU power monitoring in Python. For more on this topic, see the earlier articles on 腳本之家 (jb51).