An example of resisting anti-crawler measures in Python by disguising request headers
0x00 Environment
Operating system: Windows 10
Editor: JetBrains PyCharm Community Edition 2017.1.2 x64
Python version: python-3.6.2
Packet capture tool: Fiddler 4
0x01 How to disguise the request headers
Data is submitted to the server over HTTP. Below are the request headers captured with Fiddler when Python sends the request without any disguise:
GET /u012870721 HTTP/1.1
Accept-Encoding: identity
Host: blog.csdn.net
User-Agent: Python-urllib/3.6
Connection: close
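If Fiddler is not at hand, the same outgoing headers can also be printed from Python itself. A minimal sketch using urllib's standard debug output (debuglevel is a regular urllib/http.client option; the URL is simply the blog address used throughout this article):

from urllib import request

# Print the raw request/response exchange to stdout, so the headers that
# urllib actually sends can be inspected without an external proxy.
opener = request.build_opener(request.HTTPHandler(debuglevel=1))
resp = opener.open("http://blog.csdn.net/u012870721")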
The User-Agent is Python-urllib/3.6, so we are clearly exposed. What can be done about it? Impersonate a browser. Below are the headers a real browser sends:
Connection: keep-alive
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36
Referer: http://write.blog.csdn.net/postlist
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.8
0x02 Code implementation
from urllib import request

html_url = "http://blog.csdn.net/u012870721"

# Forged request headers that mimic a Chrome browser
header = {
    "Connection": "keep-alive",
    "Upgrade-Insecure-Requests": "1",
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
    "Accept-Encoding": "gzip,deflate",
    "Accept-Language": "zh-CN,zh;q=0.8"
}

# Build the request with the forged headers and send it
req = request.Request(url=html_url, headers=header)
resp = request.urlopen(req)
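One caveat worth noting: urllib does not decompress the response body automatically, so if the forged headers ask for gzip the body may need to be inflated by hand. A minimal sketch continuing from the snippet above (the variable names follow that snippet; treating the page as UTF-8 is an assumption):

import gzip

body = resp.read()
# The forged headers advertise Accept-Encoding: gzip,deflate,
# so check what encoding the server actually used.
if resp.headers.get("Content-Encoding") == "gzip":
    body = gzip.decompress(body)
html = body.decode("utf-8", errors="replace")  # assuming the page is UTF-8
print(html[:200])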
偽裝后進行發(fā)送的信息頭
GET /u012870721 HTTP/1.1
Host: blog.csdn.net
Connection: close
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8
Accept-Encoding: gzip,deflate
Accept-Language: zh-CN,zh;q=0.8
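As the capture shows, every header from the dictionary goes through, except that urllib still forces Connection: close regardless of the keep-alive value we set. To make the disguise less uniform across many requests, a common further step (not part of the original article, shown here only as an illustrative sketch) is to pick the User-Agent from a small pool on each request:

import random
from urllib import request

# A small, illustrative pool of desktop browser User-Agent strings.
user_agents = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.113 Safari/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:55.0) Gecko/20100101 Firefox/55.0",
]

header = {"User-Agent": random.choice(user_agents)}
req = request.Request(url="http://blog.csdn.net/u012870721", headers=header)
resp = request.urlopen(req)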
That is all of this example of resisting anti-crawler measures in Python by disguising request headers. Hopefully it serves as a useful reference, and thank you for supporting 腳本之家.