Implementing POST requests in request-payload form in a Python crawler
1. Background
Recently, while crawling a certain site, I noticed that it POSTs its data in request payload form, which differs from the POST format I had usually seen before (form data). Submitting the data as form data simply did not work.
1.1. The difference between Form Data and Request Payload in HTTP requests
An AJAX POST request usually passes its parameters in one of two forms: form data or request payload.
1.1.1. Form data
In a GET request the parameters appear directly in the URL, in the form key1=value1&key2=value2, for example:
http://news.baidu.com/ns?word=NBA&tn=news&from=news&cl=2&rn=20&ct=1
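As a quick aside, the requests library builds exactly this kind of query string from a dict. A minimal sketch, reusing the example URL above:

import requests

# requests urlencodes the dict into key1=value1&key2=value2 and appends it to the URL
params = {'word': 'NBA', 'tn': 'news', 'from': 'news', 'cl': 2, 'rn': 20, 'ct': 1}
res = requests.get('http://news.baidu.com/ns', params=params)
print(res.url)  # the full URL including the query string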
In a POST request, however, the form parameters go into the request body, again as key1=value1&key2=value2. Chrome's developer tools show this as follows:
Request URL: http://127.0.0.1:8080/test/test.do
Request Method: POST
Status Code: 200 OK
Request Headers
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8
Accept-Encoding: gzip,deflate,sdch
Accept-Language: zh-CN,zh;q=0.8,en;q=0.6
AlexaToolbar-ALX_NS_PH: AlexaToolbar/alxg-3.2
Cache-Control: max-age=0
Connection: keep-alive
Content-Length: 25
Content-Type: application/x-www-form-urlencoded
Cookie: JSESSIONID=74AC93F9F572980B6FC10474CD8EDD8D
Host: 127.0.0.1:8080
Origin: http://127.0.0.1:8080
Referer: http://127.0.0.1:8080/test/index.jsp
User-Agent: Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.149 Safari/537.36
Form Data
name: mikan
address: street
Response Headers
Content-Length: 2
Date: Sun, 11 May 2014 11:05:33 GMT
Server: Apache-Coyote/1.1
Note that the POST request's Content-Type is application/x-www-form-urlencoded (the default) and the parameters sit in the request body, i.e. the Form Data section above.
Front-end code: submitting the data
var xhr = new XMLHttpRequest();
xhr.open("POST", "/test/test.do", true);
xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
xhr.send("name=foo&value=bar");
Back-end code: receiving the submitted data. In a servlet, form parameters can be read with request.getParameter(name).
/**
 * Read a parameter from the HttpServletRequest.
 *
 * @param request
 * @param name
 * @return
 */
protected String getParameterValue(HttpServletRequest request, String name) {
    return StringUtils.trimToEmpty(request.getParameter(name));
}
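On the client side, the same kind of submission can be made from Python. A minimal sketch with the requests library, using the URL and field values from the developer-tools example above: a plain dict passed to data= is urlencoded and sent with Content-Type application/x-www-form-urlencoded.

import requests

# Equivalent of submitting the form fields shown in the Form Data section above
formData = {'name': 'mikan', 'address': 'street'}
res = requests.post('http://127.0.0.1:8080/test/test.do', data=formData)
print(res.status_code)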
1.1.2. Request payload
If the POST is made with a raw AJAX request instead, it shows up in Chrome's developer tools as below; the key point is that the parameters now appear under Request Payload:
Remote Address: 192.168.234.240:80
Request URL: http://tuanbeta3.XXX.com/qimage/upload.htm
Request Method: POST
Status Code: 200 OK
Request Headers
Accept: application/json, text/javascript, */*; q=0.01
Accept-Encoding: gzip,deflate,sdch
Accept-Language: zh-CN,zh;q=0.8,en;q=0.6
Connection: keep-alive
Content-Length: 151
Content-Type: application/json;charset=UTF-8
Cookie: JSESSIONID=E08388788943A651924CA0A10C7ACAD0
Host: tuanbeta3.XXX.com
Origin: http://tuanbeta3.XXX.com
Referer: http://tuanbeta3.XXX.com/qimage/customerlist.htm?menu=19
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.114 Safari/537.36
X-Requested-With: XMLHttpRequest
Request Payload
[{widthEncode:NNNcaXN, heightEncode:NNNN5NN, displayUrl:201409/03/66I5P266rtT86oKq6,…}]
Response Headers
Connection: keep-alive
Content-Encoding: gzip
Content-Type: application/json;charset=UTF-8
Date: Thu, 04 Sep 2014 06:49:44 GMT
Server: nginx/1.4.7
Transfer-Encoding: chunked
Vary: Accept-Encoding
Note that the request's Content-Type is application/json;charset=UTF-8 and the form parameters sit in the Request Payload.
Back-end code: reading the data (using org.apache.commons.io.IOUtils here):
/**
 * Read the payload data from the request.
 *
 * @param request
 * @return
 * @throws IOException
 */
private String getRequestPayload(HttpServletRequest request) throws IOException {
    return IOUtils.toString(request.getReader());
}
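If the back end were written in Python, reading the payload amounts to the same thing: read the raw request body and parse it as JSON. A hedged sketch using Flask (Flask and the route are assumptions for illustration only, not part of the original example):

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/qimage/upload.htm', methods=['POST'])
def upload():
    # With Content-Type application/json, get_json() parses the raw request body
    payload = request.get_json(force=True)
    return jsonify(received=payload)

if __name__ == '__main__':
    app.run(port=8080)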
1.1.3. The difference between the two
If a request's Content-Type is set to application/x-www-form-urlencoded, the POST is treated as a standard HTTP form submission and the request body is a query string of key=value pairs joined by &. This is the default for HTML forms, so it used to be the more common style.
Any other kind of POST body goes into the Request Payload (nowadays typically JSON, for readability), with the Content-Type set to application/json;charset=UTF-8 or left unspecified.
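The contrast is easy to reproduce with the requests library. A minimal sketch (httpbin.org is just an assumed echo service used for illustration): the same dict is sent either as a form body or as a JSON payload depending on how it is passed.

import requests

data = {'name': 'mikan', 'address': 'street'}

# Form data: body is "name=mikan&address=street",
# Content-Type defaults to application/x-www-form-urlencoded
r1 = requests.post('https://httpbin.org/post', data=data)

# Request payload: body is the JSON string {"name": "mikan", "address": "street"},
# Content-Type is set to application/json automatically
r2 = requests.post('https://httpbin.org/post', json=data)

print(r1.request.headers['Content-Type'], r1.request.body)
print(r2.request.headers['Content-Type'], r2.request.body)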
2. Environment
python 3.6.1
OS: Windows 7
IDE: PyCharm
requests 2.14.2
scrapy 1.4.0
3. Posting a payload request with the requests module
import json
import datetime

import requests

postUrl = 'https://sellercentral.amazon.com/fba/profitabilitycalculator/getafnfee?profitcalcToken=en2kXFaY81m513NydhTZ9sdb6hoj3D'

# Payload data
payloadData = {
    'afnPriceStr': 10,
    'currency': 'USD',
    'productInfoMapping': {
        'asin': 'B072JW3Z6L',
        'dimensionUnit': 'inches',
    }
}

# Request headers
payloadHeader = {
    'Host': 'sellercentral.amazon.com',
    'Content-Type': 'application/json',
}

# Download timeout
timeOut = 25

# Proxy
proxy = "183.12.50.118:8080"
proxies = {
    "http": proxy,
    "https": proxy,
}

# A minimal call first, without proxy or timeout
r = requests.post(postUrl, data=json.dumps(payloadData), headers=payloadHeader)

dumpJsonData = json.dumps(payloadData)
print(f"dumpJsonData = {dumpJsonData}")

res = requests.post(postUrl, data=dumpJsonData, headers=payloadHeader,
                    timeout=timeOut, proxies=proxies, allow_redirects=True)
# Passing the dict straight to the json parameter works just as well:
# res = requests.post(postUrl, json=payloadData, headers=payloadHeader)
print(f"responseTime = {datetime.datetime.now()}, statusCode = {res.status_code}, res text = {res.text}")
4. Posting a payload request in scrapy
Here comes the bad news: scrapy does not currently support payload-style requests out of the box, and it is also quite strict about form-data requests; see this article for details: http://www.dbjr.com.cn/article/185824.htm
4.1. A look at the scrapy source
See the comments in the excerpt below.
# File: E:\Miniconda\Lib\site-packages\scrapy\http\request\form.py
class FormRequest(Request):

    def __init__(self, *args, **kwargs):
        formdata = kwargs.pop('formdata', None)
        if formdata and kwargs.get('method') is None:
            kwargs['method'] = 'POST'
        super(FormRequest, self).__init__(*args, **kwargs)

        if formdata:
            items = formdata.items() if isinstance(formdata, dict) else formdata
            querystr = _urlencode(items, self.encoding)
            # Hard-coded: when data is POSTed, the Content-Type is fixed to the
            # form-data type. Even if this line were changed, nothing further
            # down knows how to handle a JSON body.
            if self.method == 'POST':
                self.headers.setdefault(b'Content-Type', b'application/x-www-form-urlencoded')
                self._set_body(querystr)
            else:
                self._set_url(self.url + ('&' if '?' in self.url else '?') + querystr)
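For completeness, FormRequest is not the only way to attach a body: a plain scrapy Request accepts method, body and headers arguments, so in principle a JSON payload can be set directly, as in the hedged sketch below (untested against this particular site; the middleware approach in the next section is what was actually used here).

import json
from scrapy import Request

payloadData = {'afnPriceStr': 10, 'currency': 'USD'}
# A plain Request carrying an explicit JSON body; whether the target site
# accepts it has not been verified, so treat this as a sketch only
req = Request(
    url='https://sellercentral.amazon.com/fba/profitabilitycalculator/getafnfee',
    method='POST',
    body=json.dumps(payloadData),
    headers={'Content-Type': 'application/json'},
)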
4.2. The idea: embed the requests module inside scrapy
Analyzing the request (screenshot omitted)
The query result that comes back (screenshot omitted)
Step 1: build the request in the spider, carrying all the parameters and any other information the later steps will need.
# File: mySpider.py
from scrapy import Request

payloadData = {}
payloadData['afnPriceStr'] = 0
payloadData['currency'] = asinInfo['currencyCodeHidden']
payloadData['futureFeeDate'] = asinInfo['futureFeeDateHidden']
payloadData['hasFutureFee'] = False
payloadData['hasTaxPage'] = True
payloadData['marketPlaceId'] = asinInfo['marketplaceIdHidden']
payloadData['mfnPriceStr'] = 0
payloadData['mfnShippingPriceStr'] = 0
payloadData['productInfoMapping'] = {}
payloadData['productInfoMapping']['asin'] = dataFieldJson['asin']
payloadData['productInfoMapping']['binding'] = dataFieldJson['binding']
payloadData['productInfoMapping']['dimensionUnit'] = dataFieldJson['dimensionUnit']
payloadData['productInfoMapping']['dimensionUnitString'] = dataFieldJson['dimensionUnitString']
payloadData['productInfoMapping']['encryptedMarketplaceId'] = dataFieldJson['encryptedMarketplaceId']
payloadData['productInfoMapping']['gl'] = dataFieldJson['gl']
payloadData['productInfoMapping']['height'] = dataFieldJson['height']
payloadData['productInfoMapping']['imageUrl'] = dataFieldJson['imageUrl']
payloadData['productInfoMapping']['isAsinLimits'] = dataFieldJson['isAsinLimits']
payloadData['productInfoMapping']['isWhiteGloveRequired'] = dataFieldJson['isWhiteGloveRequired']
payloadData['productInfoMapping']['length'] = dataFieldJson['length']
payloadData['productInfoMapping']['link'] = dataFieldJson['link']
payloadData['productInfoMapping']['originalUrl'] = dataFieldJson['originalUrl']
payloadData['productInfoMapping']['productGroup'] = dataFieldJson['productGroup']
payloadData['productInfoMapping']['subCategory'] = dataFieldJson['subCategory']
payloadData['productInfoMapping']['thumbStringUrl'] = dataFieldJson['thumbStringUrl']
payloadData['productInfoMapping']['title'] = dataFieldJson['title']
payloadData['productInfoMapping']['weight'] = dataFieldJson['weight']
payloadData['productInfoMapping']['weightUnit'] = dataFieldJson['weightUnit']
payloadData['productInfoMapping']['weightUnitString'] = dataFieldJson['weightUnitString']
payloadData['productInfoMapping']['width'] = dataFieldJson['width']

# e.g. https://sellercentral.amazon.com/fba/profitabilitycalculator/getafnfee?profitcalcToken=en2kXFaY81m513NydhTZ9sdb6hoj3D
postUrl = f"https://sellercentral.amazon.com/fba/profitabilitycalculator/getafnfee?profitcalcToken={asinInfo['tokenValue']}"
payloadHeader = {
    'Host': 'sellercentral.amazon.com',
    'Content-Type': 'application/json',
}
# scrapy source: self.headers.setdefault(b'Content-Type', b'application/x-www-form-urlencoded')
print(f"payloadData = {payloadData}")
# This Request is never really downloaded by scrapy's own downloader: built this way
# it cannot be submitted successfully and the site would return a 404. It only carries
# the query parameters into the downloader middleware, where the requests module does
# the download; the 'payloadFlag' meta key marks this kind of request.
yield Request(url=postUrl,
              headers=payloadHeader,
              meta={'payloadFlag': True,
                    'payloadData': payloadData,
                    'headers': payloadHeader,
                    'asinInfo': asinInfo},
              callback=self.parseAsinSearchFinallyRes,
              errback=self.error,
              dont_filter=True)
Step 2: in a downloader middleware, handle this request with the requests module.
# File: middlewares.py
import json
import datetime

import requests
from scrapy.http import HtmlResponse


class PayLoadRequestMiddleware:

    def process_request(self, request, spider):
        # Requests flagged as payload requests are handled here
        if request.meta.get('payloadFlag', False):
            print("PayLoadRequestMiddleware enter")
            postUrl = request.url
            headers = request.meta.get('headers', {})
            payloadData = request.meta.get('payloadData', {})
            proxy = request.meta['proxy']
            proxies = {
                "http": proxy,
                "https": proxy,
            }
            timeOut = request.meta.get('download_timeout', 25)
            # 'dont_redirect' means redirects are NOT allowed, so invert it
            allow_redirects = not request.meta.get('dont_redirect', False)
            dumpJsonData = json.dumps(payloadData)
            print(f"dumpJsonData = {dumpJsonData}")
            # Note: this call is synchronous and blocking, which hurts throughput badly
            res = requests.post(postUrl, data=dumpJsonData, headers=headers, timeout=timeOut,
                                proxies=proxies, allow_redirects=allow_redirects)
            # res = requests.post(postUrl, json=payloadData, headers=headers)
            print(f"responseTime = {datetime.datetime.now()}, res text = {res.text}, statusCode = {res.status_code}")
            if 199 < res.status_code < 300:
                # Returning a Response hands it straight to the callback;
                # scrapy will not try to download the request itself
                return HtmlResponse(url=request.url,
                                    body=res.content,
                                    request=request,
                                    # ideally pick the encoding the page actually uses
                                    encoding='utf-8',
                                    status=200)
            else:
                print(f"request mode getting page error, statusCode = {res.status_code}")
                return HtmlResponse(url=request.url, status=500, request=request)
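For the middleware to run it has to be enabled in the project settings. A minimal sketch (the module path myProject.middlewares and the priority 555 are assumptions; adjust them to the actual project layout):

# File: settings.py
DOWNLOADER_MIDDLEWARES = {
    'myProject.middlewares.PayLoadRequestMiddleware': 555,
}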
4.3. Remaining problems
What makes scrapy powerful is its high concurrency. As everyone knows, Python cannot use multithreading to improve performance because of the GIL, but scrapy's asynchronous architecture at least overlaps downloading with parsing: while the downloader is waiting, parsing and scheduling keep going.
However, downloading pages with the requests module inside a middleware is a synchronous, blocking step, so the whole crawler stalls there and overall throughput drops.
So the right approach depends on the specifics of the project. This also touches on another topic: the two crawl orders scrapy offers, depth-first and breadth-first. How do you make the most of scrapy's concurrency, and how do you fetch data as reliably as possible when the environment is unstable?
Depth-first versus breadth-first is configured in the settings.
# File: settings.py
# With DEPTH_PRIORITY (default 0) set to a positive value, scrapy's scheduler queue
# switches from LIFO to FIFO, so the crawl order changes from DFO (depth-first)
# to BFO (breadth-first).
# Breadth-first may pile up a huge number of pending requests and eat a lot of memory,
# and the final data only arrives with the last batch of crawls.
DEPTH_PRIORITY = 1
Depth-first: DEPTH_PRIORITY = 0
Breadth-first: DEPTH_PRIORITY = 1
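For a fully breadth-first crawl, the scrapy documentation additionally recommends switching the scheduler queues to FIFO implementations. A hedged settings sketch (check it against the docs of the scrapy version in use):

# File: settings.py
DEPTH_PRIORITY = 1
SCHEDULER_DISK_QUEUE = 'scrapy.squeues.PickleFifoDiskQueue'
SCHEDULER_MEMORY_QUEUE = 'scrapy.squeues.FifoMemoryQueue'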
I would like to make this download step asynchronous, but I have not found a good way to do it yet; ideas are very welcome.
That is all the content of this article on implementing POST requests in request-payload form in a Python crawler; I hope it serves as a useful reference.