
Python Tornado queue example: code for a concurrent web crawler

Updated: January 9, 2018, 16:23   Author: self-motivation
This article presents a Tornado queue example, a concurrent web crawler, with full code. It may serve as a useful reference for readers who need one.

Queue

Tornado's tornado.queues module implements an asynchronous producer/consumer queue for coroutine-based applications, analogous to the queue module that the Python standard library provides for multi-threaded environments.

A coroutine that executes yield queue.get() suspends until there is an item in the queue. If the queue has a maximum size, a coroutine that executes yield queue.put() suspends until a slot becomes free.

Internally, a queue maintains a count of unfinished tasks: every put call increments the count, and every task_done call decrements it.
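Before the crawler, here is a minimal sketch of that producer/consumer pattern in the same Tornado coroutine style (the producer and consumer names and the maxsize of 2 are illustrative, not taken from the crawler below):

from tornado import gen, ioloop, queues

q = queues.Queue(maxsize=2)

@gen.coroutine
def consumer():
    while True:
        item = yield q.get()  # suspends until an item is available
        try:
            print('consumed %s' % item)
        finally:
            q.task_done()  # decrement the unfinished-task count

@gen.coroutine
def producer():
    for item in range(5):
        yield q.put(item)  # suspends while the queue is full
        print('produced %s' % item)

@gen.coroutine
def main():
    ioloop.IOLoop.current().spawn_callback(consumer)
    yield producer()
    yield q.join()  # resumes once the unfinished-task count reaches zero

if __name__ == '__main__':
    ioloop.IOLoop.current().run_sync(main)

The q.join() at the end only resolves after the consumer has called task_done() for every item that was put, which is exactly the mechanism the crawler below relies on to know when to stop.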

Below is a simple web crawler example:

At the start, the queue contains only the base URL. When a worker pulls a URL off the queue, it parses the links contained in the corresponding page, puts them onto the queue, and then calls task_done to decrement the count once.

Eventually a worker pulls a URL whose page links have all been processed already, and no URLs remain in the queue. That worker's task_done call drops the count to zero.

At that point, the join operation in the main coroutine unblocks, and the main coroutine finishes.

The crawler uses HTMLParser to parse the HTML pages.
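To see that parsing technique in isolation first, here is a quick standalone sketch (the LinkCollector name is illustrative; the Python 3 import is shown). HTMLParser invokes handle_starttag once per opening tag, and the collected href values still carry their fragments, which is why the crawler strips them afterwards:

from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        HTMLParser.__init__(self)
        self.urls = []

    # Record the href attribute of every <a> tag encountered.
    def handle_starttag(self, tag, attrs):
        href = dict(attrs).get('href')
        if href and tag == 'a':
            self.urls.append(href)

collector = LinkCollector()
collector.feed('<a href="gen.html#tornado.gen">gen</a> <p>no link</p>')
print(collector.urls)  # prints: ['gen.html#tornado.gen']

The full crawler follows: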

import time
from datetime import timedelta

try:
    from HTMLParser import HTMLParser
    from urlparse import urljoin, urldefrag
except ImportError:
    from html.parser import HTMLParser
    from urllib.parse import urljoin, urldefrag

from tornado import httpclient, gen, ioloop, queues

base_url = 'http://www.tornadoweb.org/en/stable/'
concurrency = 10


@gen.coroutine
def get_links_from_url(url):
    """Download the page at `url` and parse it for links.

    Returned links have had the fragment after `#` removed, and have been made
    absolute so, e.g. the URL 'gen.html#tornado.gen.coroutine' becomes
    'http://www.tornadoweb.org/en/stable/gen.html'.
    """
    try:
        response = yield httpclient.AsyncHTTPClient().fetch(url)
        print('fetched %s' % url)

        html = response.body if isinstance(response.body, str) \
            else response.body.decode()
        urls = [urljoin(url, remove_fragment(new_url))
                for new_url in get_links(html)]
    except Exception as e:
        print('Exception: %s %s' % (e, url))
        raise gen.Return([])

    raise gen.Return(urls)

# Extract the real URL from one that contains a fragment.
def remove_fragment(url):
    pure_url, frag = urldefrag(url)
    return pure_url


def get_links(html):
    class URLSeeker(HTMLParser):
        def __init__(self):
            HTMLParser.__init__(self)
            self.urls = []

        # Extract the href attribute from every <a> tag.
        def handle_starttag(self, tag, attrs):
            href = dict(attrs).get('href')
            if href and tag == 'a':
                self.urls.append(href)

    url_seeker = URLSeeker()
    url_seeker.feed(html)
    return url_seeker.urls


@gen.coroutine
def main():
    q = queues.Queue()
    start = time.time()
    fetching, fetched = set(), set()

    @gen.coroutine
    def fetch_url():
        current_url = yield q.get()
        try:
            if current_url in fetching:
                return

            print('fetching %s' % current_url)
            fetching.add(current_url)
            urls = yield get_links_from_url(current_url)
            fetched.add(current_url)

            for new_url in urls:
                # Only follow links beneath the base URL
                if new_url.startswith(base_url):
                    yield q.put(new_url)

        finally:
            q.task_done()

    @gen.coroutine
    def worker():
        while True:
            yield fetch_url()

    q.put(base_url)

    # Start workers, then wait for the work queue to be empty.
    for _ in range(concurrency):
        worker()
    yield q.join(timeout=timedelta(seconds=300))
    assert fetching == fetched
    print('Done in %d seconds, fetched %s URLs.' % (
        time.time() - start, len(fetched)))


if __name__ == '__main__':
    import logging
    logging.basicConfig()
    io_loop = ioloop.IOLoop.current()
    io_loop.run_sync(main)

Summary

The material above comes from the introduction and example in the user's guide on the official Tornado website; the original author translated it briefly and brought the code over. Owing to time constraints, the editor has not installed Tornado or tested this code, so no demonstration of the output is included. We ask for your understanding.

That concludes this look at the Python Tornado queue example and its concurrent web crawler code; we hope it is helpful. Interested readers can browse the other related topics on this site, and if anything here falls short, please leave a comment pointing it out. Thank you for supporting the site!
