Distributed crawling with scrapy-redis: a hands-on record of pitfalls
I. Installing redis
I installed it on a CentOS server, which brought its share of difficulties.
1. Install the required dependencies first
Start by checking whether a C compiler is available. Why does that matter? Because it is a prerequisite for building and installing redis from source; the two go together like fish and water.
```bash
rpm -q gcc
```
If that prints a version number, gcc is already installed; otherwise run the command below to install gcc.
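On CentOS that is simply (assuming yum is available on the server):

```bash
yum install -y gcc gcc-c++
```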
2. Then download and compile redis
Download the redis version you want (the 3.0.6 below is just the version number; substitute whichever release you prefer), extract it, and build:
```bash
yum install gcc-c++
cd /usr/local/redis
wget http://download.redis.io/releases/redis-3.0.6.tar.gz
tar zxvf redis-3.0.6.tar.gz
cd redis-3.0.6
make && make install
```
What if the /usr/local/redis folder doesn't exist? Create it with mkdir first!
Be sure to cd into that directory before downloading and compiling; `make install` then places redis-server and redis-cli on the system path.
```bash
redis-server
redis-cli
```
When you start the service, does it take over your terminal? If so, that's not right yet! You have only just installed it: the service starts, but you can't exit and can't get back to a command prompt. Don't panic, it simply isn't configured yet; take it one step at a time.
Remember the redis folder you created a moment ago? Go into it, find redis.conf inside the extracted source, and edit that configuration file.
Find these three settings and correct them (a sketch of the edits follows the list):
- First, comment out the `bind` line. If you leave it, only the local machine can connect, and you presumably didn't rent a server just to talk to yourself. Commenting it out means any IP can reach the database. Worried about security? Since you are on a rented server, the provider's security group rules still control who can actually get in (look up "security group" if the term is new to you).
- Turn protected mode off (`protected-mode no`, available in redis 3.2 and later) so that remote clients can read and write the database.
- Turn daemon mode on (`daemonize yes`) so the server runs in the background instead of locking up your terminal.
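A minimal sketch of those edits in redis.conf (line positions and defaults vary by redis version; `protected-mode` only exists from redis 3.2 onwards):

```conf
# bind 127.0.0.1       # commented out: accept connections from any address
protected-mode no      # allow remote clients to read and write
daemonize yes          # run redis-server in the background
```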
Save and exit, then restart redis, and the configuration is done. You can also set a password, but I was too lazy to bother.
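To restart against the edited config and check that it answers (the path assumes the layout used above):

```bash
redis-server /usr/local/redis/redis-3.0.6/redis.conf   # picks up the edited config; daemonize sends it to the background
redis-cli ping                                          # should answer PONG
```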
At this point the database is configured successfully.
II. Problems with the scrapy framework
1. AttributeError: 'TaocheSpider' object has no attribute 'make_requests_from_url'
Cause:
Newer versions of the scrapy framework removed this method, but scrapy-redis still calls it in places, which is where the conflict comes from.
Solution:
Reimplement the method yourself in the spider file named in the traceback, that is, add:
```python
def make_requests_from_url(self, url):
    return scrapy.Request(url, dont_filter=True)
```
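For context, here is a minimal sketch of where that override lives, using the same class and key names as the full taoche example later (everything else trimmed):

```python
import scrapy
from scrapy_redis.spiders import RedisCrawlSpider


class TaocheSpider(RedisCrawlSpider):
    name = 'taoche'
    redis_key = 'taoche'

    # Newer Scrapy releases dropped this method, but scrapy-redis still expects it,
    # so the spider provides it itself.
    def make_requests_from_url(self, url):
        return scrapy.Request(url, dont_filter=True)
```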
2. ValueError: unsupported format character ':' (0x3a) at index 9
Problem:
I enabled the redis pipeline to store the scraped data in redis, but every attempt to store an item failed with this error.
Cause:
I had overridden the storage key in settings.py, but the format string I wrote was wrong, so for a long time I assumed the bug was in the library source.
```python
# item storage key setting (the broken version)
REDIS_ITEMS_KEY = '%(spider):items'
```
The scrapy-redis source does:
```python
return self.key % {"spider": spider.name}
```
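Python's `%` formatting then trips over the `:` because `%(spider)` is missing its `s` conversion character. The fix is simply adding it:

```python
# settings.py: a correctly formatted item key template for the scrapy-redis pipeline
REDIS_ITEMS_KEY = '%(spider)s:items'
```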
What a trap; I nearly rewrote a whole scrapy framework over this one error…
Note: if you are certain your main code is fine, the problem is almost certainly in the configuration file, e.g. wrong capitalization or a mistyped setting name.
III. The working scrapy source code
1. The items.py file
```python
import scrapy


class MyspiderItem(scrapy.Item):
    # define the fields for your item here like:
    lazyimg = scrapy.Field()
    title = scrapy.Field()
    resisted_data = scrapy.Field()
    mileage = scrapy.Field()
    city = scrapy.Field()
    price = scrapy.Field()
    sail_price = scrapy.Field()
```
2. The settings.py file
```python
# Scrapy settings for myspider project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     https://docs.scrapy.org/en/latest/topics/settings.html
#     https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#     https://docs.scrapy.org/en/latest/topics/spider-middleware.html

BOT_NAME = 'myspider'

SPIDER_MODULES = ['myspider.spiders']
NEWSPIDER_MODULE = 'myspider.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent

# Obey robots.txt rules
# LOG_LEVEL = "WARNING"

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See https://docs.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
#COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See https://docs.scrapy.org/en/latest/topics/spider-middleware.html
#SPIDER_MIDDLEWARES = {
#    'myspider.middlewares.MyspiderSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html
#DOWNLOADER_MIDDLEWARES = {
#    'myspider.middlewares.MyspiderDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See https://docs.scrapy.org/en/latest/topics/extensions.html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See https://docs.scrapy.org/en/latest/topics/item-pipeline.html

# Crawl responsibly by identifying yourself (and your website) on the user-agent
USER_AGENT = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36'

LOG_LEVEL = 'WARNING'
LOG_FILE = './log.log'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Use the item pipeline that scrapy-redis already provides
ITEM_PIPELINES = {
    "scrapy_redis.pipelines.RedisPipeline": 400
}

# Point at redis
REDIS_HOST = ''  # redis server address; here, the loopback address of the virtual machine
REDIS_PORT =     # the port VirtualBox forwards for redis

# Dedup container class: request fingerprints are stored in a redis set, so deduplication persists
DUPEFILTER_CLASS = 'scrapy_redis.dupefilter.RFPDupeFilter'

# Use the scrapy-redis scheduler
SCHEDULER = 'scrapy_redis.scheduler.Scheduler'

# Whether the scheduler persists: keep the request queue and fingerprint set in redis
# when the spider finishes; set to True to persist them
SCHEDULER_PERSIST = True

# Maximum idle time, keeps workers from shutting down during a distributed crawl.
# Only takes effect when the queue is SpiderQueue or SpiderStack;
# it also lets the spider block for a while right after startup (the queue is empty at first)
SCHEDULER_IDLE_BEFORE_CLOSE = 10

# Enable and configure the AutoThrottle extension (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/autothrottle.html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See https://docs.scrapy.org/en/latest/topics/downloader-middleware.html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'
```
3. The taoche.py file
```python
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from scrapy_redis.spiders import RedisCrawlSpider
from ..items import MyspiderItem
import logging

log = logging.getLogger(__name__)


class TaocheSpider(RedisCrawlSpider):
    name = 'taoche'
    # allowed_domains = ['taoche.com']   # no domain restriction
    # start_urls = ['http://taoche.com/']  # start urls come from redis (the shared scheduler) instead
    redis_key = 'taoche'  # pull start urls from the redis key 'taoche', i.e. taoche:[...]

    rules = (
        # LinkExtractor: extracts url addresses matching the regex rule
        # callback: extracted urls are requested and the responses are passed to this function
        # follow: whether pages fetched from extracted urls are run through the rules again
        Rule(LinkExtractor(allow=r'/\?page=\d+?'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        print("Start parsing data")
        car_list = response.xpath('//div[@id="container_base"]/ul/li')
        for car in car_list:
            lazyimg = car.xpath('./div[1]/div/a/img/@src').extract_first()
            title = car.xpath('./div[2]/a/span/text()').extract_first()
            resisted_data = car.xpath('./div[2]/p/i[1]/text()').extract_first()
            mileage = car.xpath('./div[2]/p/i[2]/text()').extract_first()
            city = car.xpath('./div[2]/p/i[3]/text()').extract_first()
            city = city.replace('\n', '')
            city = city.strip()
            price = car.xpath('./div[2]/div[1]/i[1]/text()').extract_first()
            sail_price = car.xpath('./div[2]/div[1]/i[2]/text()').extract_first()

            item = MyspiderItem()
            item['lazyimg'] = lazyimg
            item['title'] = title
            item['resisted_data'] = resisted_data
            item['mileage'] = mileage
            item['city'] = city
            item['price'] = price
            item['sail_price'] = sail_price
            log.warning(item)
            yield item
```
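To kick off the crawl, every worker blocks on the redis_key above until a start URL appears, and you seed that key from redis-cli. The URL here is only illustrative; push whichever listing page you actually want to start from:

```bash
redis-cli lpush taoche https://www.taoche.com/all/
```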
4. The remaining files
- The middlewares are not used, so nothing was written there.
- The pipeline is the one shipped with scrapy_redis, so there is nothing to write there either.
Summary
That concludes this record of the pitfalls I hit with the distributed crawler scrapy-redis; I hope it helps.