
Voice Endpoint Detection in Python with the webrtcvad Library

Updated: 2017-05-31 08:47:24   Author: 舒林笑笑生

This article explains how to implement voice endpoint detection (VAD) in Python using the webrtcvad library, with a complete working program for reference.

Introduction

Voice endpoint detection was first used in telephone transmission and switching systems, where it allocated time slots on communication channels and improved line utilization. Endpoint detection is a front-end stage of a speech-processing pipeline and is of great importance in speech detection.

Even today, endpoint detection, and especially detecting where human speech starts and ends, remains a technical challenge: every vendor can make the call, but none dares guarantee the accuracy of the decision.


Chatbots backed by cloud semantic services are now appearing everywhere, the best known being Amazon's Alexa/Echo smart speaker.


國內(nèi)如雨后春筍般出現(xiàn)了各種搭載語音聊天的智能音箱(如前幾天在知乎上廣告的若琪機(jī)器人)和各類智能機(jī)器人產(chǎn)品.國內(nèi)語音服務(wù)提供商主要面對中文語音服務(wù),由于語音不像圖像有分辨率等等較為客觀的指標(biāo),很多時(shí)候憑主觀判斷,所以較難判斷各家語音識別和合成技術(shù)的好壞.但是我個(gè)人認(rèn)為,國內(nèi)的中文語音服務(wù)和國外的英文語音服務(wù),在某些方面已經(jīng)有超越的趨勢.


Building a voice chatbot system usually involves three components:

  1.  Speech-to-text (ASR/STT)
  2.  Natural-language understanding (NLU/NLP)
  3.  Text-to-speech (TTS)

Speech-to-Text (ASR/STT)

Before the audio is sent to a cloud API, it passes through local front-end capture, which mainly covers:

  1.  Microphone noise reduction
  2.  Sound-source localization
  3.  Echo cancellation
  4.  Wake-word detection
  5.  Voice endpoint detection
  6.  Audio format compression

Endpoint Detection in Python

由于實(shí)際應(yīng)用中,單純依靠能量檢測特征檢測等方法很難判斷人聲說話的起始點(diǎn),所以市面上大多數(shù)的語音產(chǎn)品都是使用喚醒詞判斷語音起始.另外加上聲音回路,還可以做語音打斷.這樣的交互方式可能有些傻,每次必須喊一下 喚醒詞 才能繼續(xù)聊天.這種方式聊多了,個(gè)人感覺會(huì)嘴巴疼:-O .現(xiàn)在github上有snowboy喚醒詞的開源庫,大家可以登錄snowboy官網(wǎng)訓(xùn)練自己的喚醒詞模型. 

  1.  Kitt-AI: Snowboy
  2.  Sensory: Sensory

Since shouting a wake word all day is tiring, I did a quick survey: Python has rich libraries that you can simply import and use. The approach below is easily disturbed by strong noise, so it is best suited for playing around at home by yourself.

  1.  pyaudio: pip install pyaudio — reads the raw PCM-encoded audio stream from the device node;
  2.  webrtcvad: pip install webrtcvad — decides whether a chunk of audio data is silence or speech;
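webrtcvad accepts only 16-bit mono PCM at 8, 16, 32, or 48 kHz, in frames of exactly 10, 20, or 30 ms. A small pure-Python helper (a sketch of the arithmetic; the function name is mine, not part of either library) to compute how many samples and bytes one such frame contains:

```python
# Frame-size arithmetic for webrtcvad: it accepts 16-bit mono PCM
# at 8/16/32/48 kHz in frames of exactly 10, 20 or 30 ms.

def frame_sizes(rate, frame_ms, sample_width=2):
    """Return (samples, bytes) for one VAD frame."""
    if rate not in (8000, 16000, 32000, 48000):
        raise ValueError("unsupported sample rate: %d" % rate)
    if frame_ms not in (10, 20, 30):
        raise ValueError("frame must be 10, 20 or 30 ms")
    samples = rate * frame_ms // 1000
    return samples, samples * sample_width

# 30 ms at 16 kHz -> 480 samples, 960 bytes
print(frame_sizes(16000, 30))  # (480, 960)
```

These are exactly the CHUNK_SIZE and CHUNK_BYTES values used by the full program below.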

When the VAD reports voice activity continuously for a duration T1, mark it as the start of speech;

When the VAD reports no voice activity continuously for a duration T2, mark it as the end of speech.
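The T1/T2 rules above can be sketched as a majority vote over a sliding window of per-frame VAD flags. The class below is my own pure-Python illustration (names and default thresholds are assumptions), mirroring the ring-buffer logic the full program uses:

```python
from collections import deque

class EndpointDetector:
    """Vote over recent VAD flags: speech starts when most frames in a
    short window are voiced, ends when most frames in a longer window
    are unvoiced."""

    def __init__(self, window=13, start_ratio=0.8, end_ratio=0.9):
        self.start_win = deque(maxlen=window)      # ~T1 worth of frames
        self.end_win = deque(maxlen=window * 2)    # ~T2 worth of frames
        self.start_ratio = start_ratio
        self.end_ratio = end_ratio
        self.in_speech = False

    def feed(self, voiced):
        """Feed one boolean VAD flag; return 'start', 'end' or None."""
        self.start_win.append(1 if voiced else 0)
        self.end_win.append(1 if voiced else 0)
        if not self.in_speech:
            if (len(self.start_win) == self.start_win.maxlen and
                    sum(self.start_win) > self.start_ratio * self.start_win.maxlen):
                self.in_speech = True
                return 'start'
        else:
            unvoiced = self.end_win.maxlen - sum(self.end_win)
            if (len(self.end_win) == self.end_win.maxlen and
                    unvoiced > self.end_ratio * self.end_win.maxlen):
                self.in_speech = False
                return 'end'
        return None

# Synthetic flag stream: silence, then speech, then silence.
det = EndpointDetector()
flags = [0] * 30 + [1] * 40 + [0] * 40
events = [ev for ev in (det.feed(f) for f in flags) if ev]
print(events)  # ['start', 'end']
```

In the real program each flag comes from vad.is_speech() on one 30 ms chunk, so a 13-frame window corresponds to roughly 400 ms of audio.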

The complete program can be downloaded from my GitHub.

The program is short; a quick read-through should make it clear.

'''
Requirements:
+ pyaudio - `pip install pyaudio`
+ py-webrtcvad - `pip install webrtcvad`
'''
import webrtcvad
import collections
import sys
import signal
import pyaudio

from array import array
from struct import pack
import wave
import time

FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 16000
CHUNK_DURATION_MS = 30    # supports 10, 20 and 30 (ms)
PADDING_DURATION_MS = 1500  # 1.5 s of padding
CHUNK_SIZE = int(RATE * CHUNK_DURATION_MS / 1000) # samples per chunk
CHUNK_BYTES = CHUNK_SIZE * 2 # 16 bit = 2 bytes, PCM
NUM_PADDING_CHUNKS = int(PADDING_DURATION_MS / CHUNK_DURATION_MS)
# NUM_WINDOW_CHUNKS = int(240 / CHUNK_DURATION_MS)
NUM_WINDOW_CHUNKS = int(400 / CHUNK_DURATION_MS) # chunks per 400 ms window
NUM_WINDOW_CHUNKS_END = NUM_WINDOW_CHUNKS * 2

START_OFFSET = int(NUM_WINDOW_CHUNKS * CHUNK_DURATION_MS * 0.5 * RATE)

vad = webrtcvad.Vad(1)

pa = pyaudio.PyAudio()
stream = pa.open(format=FORMAT,
         channels=CHANNELS,
         rate=RATE,
         input=True,
         start=False,
         # input_device_index=2,
         frames_per_buffer=CHUNK_SIZE)


got_a_sentence = False
leave = False


def handle_int(sig, frame):
  global leave, got_a_sentence
  leave = True
  got_a_sentence = True


def record_to_file(path, data, sample_width):
  "Records from the microphone and outputs the resulting data to 'path'"
  # sample_width, data = record()
  data = pack('<' + ('h' * len(data)), *data)
  wf = wave.open(path, 'wb')
  wf.setnchannels(1)
  wf.setsampwidth(sample_width)
  wf.setframerate(RATE)
  wf.writeframes(data)
  wf.close()


def normalize(snd_data):
  "Average the volume out"
  MAXIMUM = 32767 # 16384
  times = float(MAXIMUM) / max(abs(i) for i in snd_data)
  r = array('h')
  for i in snd_data:
    r.append(int(i * times))
  return r

signal.signal(signal.SIGINT, handle_int)

while not leave:
  ring_buffer = collections.deque(maxlen=NUM_PADDING_CHUNKS)
  triggered = False
  voiced_frames = []
  ring_buffer_flags = [0] * NUM_WINDOW_CHUNKS
  ring_buffer_index = 0

  ring_buffer_flags_end = [0] * NUM_WINDOW_CHUNKS_END
  ring_buffer_index_end = 0
  buffer_in = ''
  # WangS
  raw_data = array('h')
  index = 0
  start_point = 0
  StartTime = time.time()
  print(" recording: ")
  stream.start_stream()

  while not got_a_sentence and not leave:
    chunk = stream.read(CHUNK_SIZE)
    # add WangS
    raw_data.extend(array('h', chunk))
    index += CHUNK_SIZE
    TimeUse = time.time() - StartTime

    active = vad.is_speech(chunk, RATE)

    sys.stdout.write('1' if active else '_')
    ring_buffer_flags[ring_buffer_index] = 1 if active else 0
    ring_buffer_index += 1
    ring_buffer_index %= NUM_WINDOW_CHUNKS

    ring_buffer_flags_end[ring_buffer_index_end] = 1 if active else 0
    ring_buffer_index_end += 1
    ring_buffer_index_end %= NUM_WINDOW_CHUNKS_END

    # start point detection
    if not triggered:
      ring_buffer.append(chunk)
      num_voiced = sum(ring_buffer_flags)
      if num_voiced > 0.8 * NUM_WINDOW_CHUNKS:
        sys.stdout.write(' Open ')
        triggered = True
        start_point = index - CHUNK_SIZE * 20 # start point
        # voiced_frames.extend(ring_buffer)
        ring_buffer.clear()
    # end point detection
    else:
      # voiced_frames.append(chunk)
      ring_buffer.append(chunk)
      num_unvoiced = NUM_WINDOW_CHUNKS_END - sum(ring_buffer_flags_end)
      if num_unvoiced > 0.90 * NUM_WINDOW_CHUNKS_END or TimeUse > 10:
        sys.stdout.write(' Close ')
        triggered = False
        got_a_sentence = True

    sys.stdout.flush()

  sys.stdout.write('\n')
  # data = b''.join(voiced_frames)

  stream.stop_stream()
  print(" done recording")
  got_a_sentence = False

  # write to file
  raw_data.reverse()
  for index in range(start_point):
    raw_data.pop()
  raw_data.reverse()
  raw_data = normalize(raw_data)
  record_to_file("recording.wav", raw_data, 2)
  leave = True

stream.close()

Run the program with: sudo python vad.py

That is all for this article. I hope it helps with your studies, and please continue to support 腳本之家.
