Building an IP Proxy Pool with Scrapy: A Detailed Guide
I. Why Build a Crawler Proxy Pool
Among the many anti-crawling measures websites employ, one throttles by IP access frequency: when an IP's request count within some time window reaches a threshold, that IP is blacklisted and blocked for a period of time.
There are two ways to deal with this:
1. Lower the crawl frequency so the IP never trips the limit. The drawback is obvious: crawling becomes much slower.
2. Build an IP proxy pool and rotate through different IPs while crawling.
II. Design Approach
1. Scrape proxy IPs from proxy sites (such as 西刺代理, 快代理, 云代理, 無(wú)憂代理);
2. Validate each scraped proxy by requesting a designated URL through it and checking the response (a sketch follows this list);
3. Save the working proxies to a database.
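Step 2 is not implemented in the project code below, so here is a minimal validation sketch under stated assumptions: it uses the requests library, takes https://httpbin.org/ip as an assumed test URL, and treats a timely 200 response as proof that the proxy works.

import requests

def is_proxy_alive(proxy_url, test_url='https://httpbin.org/ip', timeout=5):
    # Route one request through the proxy; any timely 200 counts as alive
    try:
        resp = requests.get(test_url,
                            proxies={'http': proxy_url, 'https': proxy_url},
                            timeout=timeout)
        return resp.status_code == 200
    except requests.RequestException:
        return False

# Example: is_proxy_alive('http://171.13.92.212:9797')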
In the earlier article《Python爬蟲(chóng)代理池搭建》we already implemented a simple IP proxy pool with Python's requests module, but it crawled slowly. Sites such as 西刺代理, 快代理 and 云代理 spread their proxy listings over thousands of pages, so that approach is not really suitable.
This article takes the 快代理 site as an example and shows how to scrape proxy IPs with Scrapy-Redis.
III. Building the Proxy Pool
The Scrapy project consists of the following files:
items.py
# -*- coding: utf-8 -*-
import re

import scrapy

from proxy_pool.settings import PROXY_URL_FORMATTER

schema_pattern = re.compile(r'^(http|https)$', re.I)
ip_pattern = re.compile(r'^([0-9]{1,3}\.){3}[0-9]{1,3}$', re.I)
port_pattern = re.compile(r'^[0-9]{2,5}$', re.I)


class ProxyPoolItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    '''
    {
        "schema": "http",             # proxy scheme
        "ip": "127.0.0.1",            # proxy IP address
        "port": "8050",               # proxy port
        "original": "西刺代理",        # site the proxy was scraped from
        "used_total": 11,             # total times the proxy has been used
        "success_times": 5,           # number of successful requests through the proxy
        "continuous_failed": 3,       # consecutive failed requests through the proxy
        "created_time": "2018-05-02"  # date the proxy was scraped
    }
    '''
    schema = scrapy.Field()
    ip = scrapy.Field()
    port = scrapy.Field()
    original = scrapy.Field()
    used_total = scrapy.Field()
    success_times = scrapy.Field()
    continuous_failed = scrapy.Field()
    created_time = scrapy.Field()

    # Check that the proxy's scheme, IP and port are all well-formed
    def _check_format(self):
        if self['schema'] is not None and self['ip'] is not None and self['port'] is not None:
            if schema_pattern.match(self['schema']) and ip_pattern.match(self['ip']) \
                    and port_pattern.match(self['port']):
                return True
        return False

    # Build the proxy URL, e.g. http://127.0.0.1:8050
    def _get_url(self):
        return PROXY_URL_FORMATTER % {'schema': self['schema'], 'ip': self['ip'], 'port': self['port']}
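A quick sanity check of the two helpers, using hypothetical values rather than scraped ones:

item = ProxyPoolItem(schema='http', ip='127.0.0.1', port='8080', original='快代理')
print(item._check_format())  # True
print(item._get_url())       # http://127.0.0.1:8080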
kuai_proxy.py
# -*- coding: utf-8 -*-
import time

import scrapy

from proxy_pool.utils import strip, logger
from proxy_pool.items import ProxyPoolItem


class KuaiProxySpider(scrapy.Spider):
    name = 'kuai_proxy'
    allowed_domains = ['kuaidaili.com']
    start_urls = ['https://www.kuaidaili.com/free/inha/1/', 'https://www.kuaidaili.com/free/intr/1/']

    def parse(self, response):
        logger.info('Crawling: < ' + response.request.url + ' >')
        tr_list = response.css("div#list>table>tbody tr")
        for tr in tr_list:
            ip = tr.css("td[data-title='IP']::text").get()
            port = tr.css("td[data-title='PORT']::text").get()
            schema = tr.css("td[data-title='類型']::text").get()
            if schema is not None and schema.lower() in ('http', 'https'):
                item = ProxyPoolItem()
                item['schema'] = strip(schema).lower()
                item['ip'] = strip(ip)
                item['port'] = strip(port)
                item['original'] = '快代理'
                item['created_time'] = time.strftime('%Y-%m-%d', time.localtime(time.time()))
                if item._check_format():
                    yield item
        # The li following the currently active page link points at the next page
        next_page = response.xpath("//a[@class='active']/../following-sibling::li/a/@href").get()
        if next_page is not None:
            next_url = 'https://www.kuaidaili.com' + next_page
            yield scrapy.Request(next_url)
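Run the spider from the project root with scrapy crawl kuai_proxy. Because the scheduler is Redis-backed (see settings.py below), several instances of this spider, even on different machines, can run at once and will share a single request queue and duplicate filter.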
middlewares.py
# -*- coding: utf-8 -*-
import random

from proxy_pool.utils import logger


# Downloader middleware that assigns a random proxy to each request
class RandomProxyMiddleware(object):

    # Pick a random entry from the PROXIES list in settings
    def process_request(self, request, spider):
        proxy = random.choice(spider.settings['PROXIES'])
        request.meta['proxy'] = proxy
        return None


# Downloader middleware that assigns a random User-Agent to each request
class RandomUserAgentMiddleware(object):

    def process_request(self, request, spider):
        # Pick a random entry from the USER_AGENT_LIST in settings
        user_agent = random.choice(spider.settings['USER_AGENT_LIST'])
        request.headers['User-Agent'] = user_agent
        return None

    def process_response(self, request, response, spider):
        # Log the User-Agent to verify that it was actually set
        logger.info("headers ::> User-Agent = " + str(request.headers['User-Agent'], encoding="utf8"))
        return response
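request.meta['proxy'] is the standard hook read by Scrapy's built-in HttpProxyMiddleware, so setting it per request is all that is needed to route that request through a proxy. Both middlewares are enabled via DOWNLOADER_MIDDLEWARES in settings.py below; RandomProxyMiddleware ships commented out there, since the pool has to be filled before rotating proxies is useful.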
pipelines.py
# -*- coding: utf-8 -*-
import json

import redis

from proxy_pool.settings import REDIS_HOST, REDIS_PORT, REDIS_PARAMS, PROXIES_UNCHECKED_LIST, PROXIES_UNCHECKED_SET

server = redis.StrictRedis(host=REDIS_HOST, port=REDIS_PORT, password=REDIS_PARAMS['password'])


class ProxyPoolPipeline(object):

    # Push newly scraped proxies onto the unchecked queue
    def process_item(self, item, spider):
        if not self._is_existed(item):
            server.rpush(PROXIES_UNCHECKED_LIST, json.dumps(dict(item), ensure_ascii=False))
        return item

    # Check whether the proxy is already in the pool: SADD returns the number of
    # elements actually added, so 0 means the URL was already in the set
    def _is_existed(self, item):
        added = server.sadd(PROXIES_UNCHECKED_SET, item._get_url())
        return added == 0
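To confirm the pipeline is writing, the queue can be inspected directly with redis-py (assuming the connection details from settings.py below):

import redis

server = redis.StrictRedis(host='172.16.250.238', port=6379,
                           password='123456', decode_responses=True)
print(server.llen('proxies:unchecked:list'))           # proxies queued, not yet validated
print(server.lrange('proxies:unchecked:list', 0, 4))   # peek at the first five entries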
settings.py
# -*- coding: utf-8 -*-
BOT_NAME = 'proxy_pool'

SPIDER_MODULES = ['proxy_pool.spiders']
NEWSPIDER_MODULE = 'proxy_pool.spiders'

# Redis key for the queue of unchecked proxies
PROXIES_UNCHECKED_LIST = 'proxies:unchecked:list'

# Redis set of HTTP/HTTPS proxy URLs already seen (for de-duplication)
PROXIES_UNCHECKED_SET = 'proxies:unchecked:set'

# Format string for proxy URLs
PROXY_URL_FORMATTER = '%(schema)s://%(ip)s:%(port)s'

# Common request header fields
DEFAULT_REQUEST_HEADERS = {
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8',
    'Accept-Encoding': 'gzip, deflate, br',
    'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8,zh-TW;q=0.7',
    'Connection': 'keep-alive'
}

# Requesting too often triggers 503 responses, so wait 5 seconds between requests
DOWNLOAD_DELAY = 5

USER_AGENT_LIST = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
    "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
]

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
    'proxy_pool.middlewares.RandomUserAgentMiddleware': 543,
    # 'proxy_pool.middlewares.RandomProxyMiddleware': 544,
}

ITEM_PIPELINES = {
    'proxy_pool.pipelines.ProxyPoolPipeline': 300,
}

PROXIES = [
    "https://171.13.92.212:9797",
    "https://164.163.234.210:8080",
    "https://143.202.73.219:8080",
    "https://103.75.166.15:8080"
]

######################################################
############ Scrapy-Redis configuration ##############
######################################################

# Redis host, port and password
REDIS_HOST = '172.16.250.238'
REDIS_PORT = 6379
REDIS_PARAMS = {'password': '123456'}

# Use Redis to store the scheduler's request queue
SCHEDULER = "scrapy_redis.scheduler.Scheduler"

# Make all spider instances share one Redis-backed duplicate filter
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"

# Persist the request queue in Redis so the crawl can be paused and resumed
SCHEDULER_PERSIST = True

# Request scheduling strategy, a priority queue by default
SCHEDULER_QUEUE_CLASS = 'scrapy_redis.queue.PriorityQueue'
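One piece is still missing: something has to consume proxies:unchecked:list, test each entry, and keep only the live ones. Below is a minimal checker sketch under stated assumptions; it reuses the is_proxy_alive idea from section II and writes survivors to a hypothetical proxies:checked:list key that is not part of the settings above.

import json

import redis
import requests

server = redis.StrictRedis(host='172.16.250.238', port=6379,
                           password='123456', decode_responses=True)

def is_proxy_alive(proxy_url, test_url='https://httpbin.org/ip', timeout=5):
    # A timely 200 response through the proxy counts as alive
    try:
        resp = requests.get(test_url, proxies={'http': proxy_url, 'https': proxy_url},
                            timeout=timeout)
        return resp.status_code == 200
    except requests.RequestException:
        return False

def check_forever():
    while True:
        # Block until an unchecked proxy is queued, then pop and parse it
        _, raw = server.blpop('proxies:unchecked:list')
        proxy = json.loads(raw)
        url = '%(schema)s://%(ip)s:%(port)s' % proxy
        if is_proxy_alive(url):
            server.rpush('proxies:checked:list', raw)  # hypothetical key for verified proxies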
utils.py
# -*- coding: utf-8 -*-
import logging

# Configure the log output format
logging.basicConfig(
    level=logging.INFO,
    format='[%(asctime)-15s] [%(levelname)8s] [%(name)10s ] - %(message)s (%(filename)s:%(lineno)s)',
    datefmt='%Y-%m-%d %T'
)
logger = logging.getLogger(__name__)


# Strip leading and trailing whitespace, tolerating None
def strip(data):
    if data is not None:
        return data.strip()
    return data
That concludes this detailed guide to building a Scrapy-based IP proxy pool. For more on Scrapy and IP proxy pools, search 腳本之家's earlier articles, and we hope you will keep supporting 腳本之家!