
Crawling Douban Read with Scrapy and Selenium: the full walkthrough

 Updated: 2020-09-20 15:29:46   Author: 工藤新二z

This article shows how to use Selenium inside a Scrapy project to crawl Douban Read. The example code is annotated in detail, so it should be a useful reference for study or work. Follow along below.

First, create the Scrapy project

Command: scrapy startproject douban_read

Create the spider

Command: scrapy genspider douban_spider url

Target site: https://read.douban.com/charts

The key points are annotated in the code; corrections and suggestions are welcome.

The Scrapy project directory structure is shown below.
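The original post showed the layout as a screenshot, which is not reproduced here. A typical layout for this project (assuming the standard `scrapy startproject` output plus the custom middleware module added below) would be:

```
douban_read/
├── scrapy.cfg
└── douban_read/
    ├── __init__.py
    ├── items.py
    ├── middlewares.py
    ├── my_download_middle.py
    ├── pipelines.py
    ├── settings.py
    └── spiders/
        ├── __init__.py
        └── douban_spider.py
```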

douban_spider.py code

The spider file

import re
import json

import scrapy

from ..items import DoubanReadItem


class DoubanSpiderSpider(scrapy.Spider):
    name = 'douban_spider'
    # allowed_domains = ['www']
    start_urls = ['https://read.douban.com/charts']

    def parse(self, response):
        # Collect the category URLs from the chart navigation bar
        type_urls = response.xpath('//div[@class="rankings-nav"]/a[position()>1]/@href').extract()
        for type_url in type_urls:
            # e.g. /charts?type=unfinished_column&index=featured&dcs=charts&dcm=charts-nav
            part_param = re.search(r'charts\?(.*?)&dcs', type_url).group(1)
            # e.g. https://read.douban.com/j/index//charts?type=intermediate_finalized&index=science_fiction&verbose=1
            ajax_url = 'https://read.douban.com/j/index//charts?{}&verbose=1'.format(part_param)
            # Tag the request as AJAX via meta so the downloader middleware can
            # tell it apart; meta travels with the request through all of Scrapy.
            yield scrapy.Request(ajax_url, callback=self.parse_ajax, meta={'request_type': 'ajax'})

    def parse_ajax(self, response):
        # The AJAX endpoint returns the books of one category as JSON
        json_data = json.loads(response.text)
        for data in json_data['list']:
            item = DoubanReadItem()
            item['book_id'] = data['works']['id']
            item['book_url'] = data['works']['url']
            item['book_title'] = data['works']['title']
            item['book_author'] = data['works']['author']
            item['book_cover_image'] = data['works']['cover']
            item['book_abstract'] = data['works']['abstract']
            item['book_wordCount'] = data['works']['wordCount']
            item['book_kinds'] = data['works']['kinds']
            # Hand the item over to the item pipeline
            yield item
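The URL rewriting in `parse` can be checked in isolation. The href below is the sample from the spider's own comment; the regex pulls out the query fragment between `charts?` and `&dcs` and splices it into the JSON endpoint:

```python
import re

# Sample category href as it appears in the page navigation
type_url = '/charts?type=unfinished_column&index=featured&dcs=charts&dcm=charts-nav'

# Lazily capture everything between "charts?" and the first "&dcs"
part_param = re.search(r'charts\?(.*?)&dcs', type_url).group(1)
print(part_param)  # type=unfinished_column&index=featured

# Build the AJAX endpoint that returns this chart as JSON
ajax_url = 'https://read.douban.com/j/index//charts?{}&verbose=1'.format(part_param)
print(ajax_url)
```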

items.py code

The item definition file

# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class DoubanReadItem(scrapy.Item):
    # define the fields for your item here like:
    book_id = scrapy.Field()
    book_url = scrapy.Field()
    book_title = scrapy.Field()
    book_author = scrapy.Field()
    book_cover_image = scrapy.Field()
    book_abstract = scrapy.Field()
    book_wordCount = scrapy.Field()
    book_kinds = scrapy.Field()


my_download_middle.py code

Every request passes through the downloader middleware, so by customizing it you can set proxies, rotate request headers, or implement custom downloading.

import random
import time

from selenium import webdriver
from scrapy.http.response.html import HtmlResponse


class MymiddleWares(object):
    def __init__(self):
        # Pool of User-Agent strings to rotate through
        self.USER_AGENT_LIST = [
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
            "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
            "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
            "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
            "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SE 2.X MetaSr 1.0; SE 2.X MetaSr 1.0; .NET CLR 2.0.50727; SE 2.X MetaSr 1.0)",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
            "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; 360SE)",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
            "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
            "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
        ]

    def process_request(self, request, spider):
        '''
        Called for every request passing through the downloader middleware.
        :param request: the request about to be downloaded
        :param spider: the spider that issued it
        :return: None to continue normal downloading, or a Response to short-circuit it
        '''
        # The spider sets meta['request_type'] = 'ajax' on its JSON requests;
        # meta travels with the request through the whole Scrapy pipeline.
        request_type = request.meta.get('request_type')
        if not request_type:
            # Not an AJAX request: render the page with Selenium instead of
            # letting Scrapy's downloader fetch it.
            # 1. Create the driver
            driver = webdriver.Chrome()
            # 2. Load the URL
            driver.get(request.url)
            # 3. Wait for the page to render
            # driver.implicitly_wait(20)
            time.sleep(3)
            # 4. Grab the rendered HTML and release the browser
            html_str = driver.page_source
            driver.quit()
            # Returning an HtmlResponse directly hands it to the spider and
            # skips Scrapy's downloader, which is how the custom download works.
            return HtmlResponse(url=request.url, body=html_str, request=request, encoding='utf-8')
        else:
            # AJAX requests return JSON directly, so the Selenium rendering
            # above is unnecessary; let Scrapy's downloader handle them with
            # a randomly rotated User-Agent.
            ua = random.choice(self.USER_AGENT_LIST)
            request.headers.setdefault('User-Agent', ua)
            request.headers.setdefault('X-Requested-With', 'XMLHttpRequest')
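The header rotation in the AJAX branch can be sketched on its own. `random.choice` picks one entry from the pool, and `setdefault` only fills a header that is not already present (the plain dict here stands in for Scrapy's headers object, which supports the same call):

```python
import random

# A trimmed User-Agent pool (any list of UA strings works the same way)
USER_AGENT_LIST = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
]

headers = {}
# Fill in a rotated User-Agent and mark the request as XHR
headers.setdefault('User-Agent', random.choice(USER_AGENT_LIST))
headers.setdefault('X-Requested-With', 'XMLHttpRequest')
print(headers['User-Agent'])
```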

pipelines.py code

The project pipeline file

import pymongo
from itemadapter import ItemAdapter


class MongoPipeline:
    # Name of the target collection
    collection_name = 'book'

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DATABASE', 'items')
        )

    def open_spider(self, spider):
        '''
        Called when the spider starts: open the MongoDB connection.
        '''
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        # Upsert into the book collection of the douban_read database:
        # update the document if this book_id exists, insert it otherwise.
        # (Collection.update is deprecated; update_one with upsert=True is
        # the modern equivalent.)
        self.db[self.collection_name].update_one(
            {'book_id': item['book_id']},
            {'$set': ItemAdapter(item).asdict()},
            upsert=True
        )
        return item
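The pipeline's upsert write implements "update when `book_id` already exists, insert otherwise", which is what keeps repeated crawls from producing duplicate documents. A minimal in-memory sketch of that semantics, with a plain dict keyed by `book_id` standing in for the Mongo collection:

```python
# In-memory stand-in for the Mongo collection, keyed by book_id
collection = {}

def upsert(item):
    """Merge item into the stored document, creating it if absent."""
    doc = collection.setdefault(item['book_id'], {})
    doc.update(item)

upsert({'book_id': 1, 'book_title': 'Old Title'})
upsert({'book_id': 1, 'book_title': 'New Title'})  # same id: updated, not duplicated
upsert({'book_id': 2, 'book_title': 'Another'})    # new id: inserted

print(len(collection))                 # 2
print(collection[1]['book_title'])     # New Title
```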

settings.py code

Project configuration

# Scrapy settings for douban_read project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
# https://docs.scrapy.org/en/latest/topics/settings.html
# https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'douban_read'

SPIDER_MODULES = ['douban_read.spiders']
NEWSPIDER_MODULE = 'douban_read.spiders'


# Crawl responsibly by identifying yourself (and your website) on the user-agent
#USER_AGENT = 'douban_read (+http://www.yourdomain.com)'

# Obey robots.txt rules
# Disabled here so the charts pages can be crawled
ROBOTSTXT_OBEY = False

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
# Default request headers
DEFAULT_REQUEST_HEADERS = {
 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
 'Accept-Language': 'en',
 'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/85.0.4183.102 Safari/537.36',

}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
# 'douban_read.middlewares.DoubanReadSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
# Register the custom downloader middleware
DOWNLOADER_MIDDLEWARES = {
 'douban_read.my_download_middle.MymiddleWares': 543,
}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
# 'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html
# Register the MongoDB item pipeline
ITEM_PIPELINES = {
 'douban_read.pipelines.MongoPipeline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

# MongoDB connection settings
MONGO_URI = 'localhost'
# Database name: douban_read (created on first write)
MONGO_DATABASE = 'douban_read'

最后啟動該項目即可

scrapy crawl douban_spider

數(shù)據(jù)就保存到mongo數(shù)據(jù)庫了

總結(jié)

到此這篇關(guān)于scrapy利用selenium爬取豆瓣閱讀的文章就介紹到這了,更多相關(guān)scrapy用selenium爬取豆瓣閱讀內(nèi)容請搜索腳本之家以前的文章或繼續(xù)瀏覽下面的相關(guān)文章希望大家以后多多支持腳本之家!
