Python 3: Examples of How to Use urllib
urllib is Python's module for working with URLs (Uniform Resource Locators). It can be used to fetch remote data and save it. This article collects some notes on using urllib, covering headers, proxies, timeouts, authentication, and exception handling.
1. The basic method
urllib.request.urlopen(url, data=None, [timeout, ]*, cafile=None, capath=None, cadefault=False, context=None)
- url: the URL to open
- data: the data to submit with a POST request
- timeout: timeout for accessing the site, in seconds
Fetch the page directly with urlopen() from the urllib.request module. The returned page data is of type bytes and must be decoded with decode() into str.
from urllib import request
response = request.urlopen(r'http://python.org/')  # <http.client.HTTPResponse object at 0x00000000048BC908>, an HTTPResponse object
page = response.read()
page = page.decode('utf-8')
The object returned by urlopen provides these methods (demonstrated in the sketch after this list):
- read(), readline(), readlines(), fileno(), close(): operate on the HTTPResponse data
- info(): returns an HTTPMessage object with the headers sent by the remote server
- getcode(): returns the HTTP status code, e.g. 200 for a successful request, 404 if the URL was not found
- geturl(): returns the URL that was requested
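As a quick illustration of the methods above, here is a minimal sketch (python.org may redirect, so geturl() can differ from the URL passed in):
from urllib import request

response = request.urlopen('http://python.org/')
print(response.getcode())   # HTTP status code, e.g. 200
print(response.geturl())    # the URL actually retrieved, after redirects
print(response.info())      # HTTPMessage with the response headers
page = response.read()      # bytes; call decode('utf-8') to get a str
response.close()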
1. Reading a page the simple way
import urllib.request
response = urllib.request.urlopen('http://python.org/')
html = response.read()
2. Using Request
urllib.request.Request(url, data=None, headers={}, method=None)
Wrap the request in a Request object, then fetch the page with urlopen().
import urllib.request
req = urllib.request.Request('http://python.org/')
response = urllib.request.urlopen(req)
the_page = response.read()
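Request() also takes data, headers and method, which is useful for setting a User-Agent or sending a POST body. A minimal sketch; the URL and form fields below are placeholders, not part of the original example:
import urllib.parse
import urllib.request

url = 'http://python.org/'                             # placeholder URL
values = {'name': 'somebody', 'language': 'Python'}    # hypothetical form fields
headers = {'User-Agent': 'Mozilla/5.0'}

data = urllib.parse.urlencode(values).encode('utf-8')  # POST data must be bytes
req = urllib.request.Request(url, data=data, headers=headers, method='POST')
response = urllib.request.urlopen(req)
the_page = response.read().decode('utf-8')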
3. Sending data, using a Zhihu login as an example
'''
Created on 2016-05-31
@author: gionee
'''
import gzip
import re
import urllib.request
import urllib.parse
import http.cookiejar
def ungzip(data):
    try:
        print("Trying to decompress...")
        data = gzip.decompress(data)
        print("Decompression finished")
    except:
        print("Data is not compressed, no need to decompress")
    return data

def getXSRF(data):
    cer = re.compile('name="_xsrf" value="(.*)"', flags=0)
    strlist = cer.findall(data)
    return strlist[0]

def getOpener(head):
    # cookie handling
    cj = http.cookiejar.CookieJar()
    pro = urllib.request.HTTPCookieProcessor(cj)
    opener = urllib.request.build_opener(pro)
    header = []
    for key, value in head.items():
        elem = (key, value)
        header.append(elem)
    opener.addheaders = header
    return opener
# The header information can be captured with Firebug
header = {
    'Connection': 'Keep-Alive',
    'Accept': 'text/html, application/xhtml+xml, */*',
    'Accept-Language': 'en-US,en;q=0.8,zh-Hans-CN;q=0.5,zh-Hans;q=0.3',
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:46.0) Gecko/20100101 Firefox/46.0',
    'Accept-Encoding': 'gzip, deflate',
    'Host': 'www.zhihu.com',
    'DNT': '1'
}
url = 'http://www.zhihu.com/'
opener = getOpener(header)
op = opener.open(url)
data = op.read()
data = ungzip(data)
_xsrf = getXSRF(data.decode())
url += "login/email"
email = "your login email"
password = "your login password"
postDict = {
    '_xsrf': _xsrf,
    'email': email,
    'password': password,
    'rememberme': 'y'
}
postData = urllib.parse.urlencode(postDict).encode()
op = opener.open(url,postData)
data = op.read()
data = ungzip(data)
print(data.decode())
4. HTTP errors
import urllib.request
import urllib.error

req = urllib.request.Request('http://www.lz881228.blog.163.com')
try:
    urllib.request.urlopen(req)
except urllib.error.HTTPError as e:
    print(e.code)
    print(e.read().decode("utf8"))
5. Exception handling
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError
req = Request("http://www.abc.com/")
try:
    response = urlopen(req)
except HTTPError as e:
    print("The server couldn't fulfill the request.")
    print('Error code: ', e.code)
except URLError as e:
    print('We failed to reach a server.')
    print('Reason: ', e.reason)
else:
    print("good!")
    print(response.read().decode("utf8"))
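An alternative pattern is to catch only URLError (HTTPError is its subclass) and inspect the exception's attributes; checking code first distinguishes a server error from an unreachable host. A sketch of that approach:
from urllib.request import Request, urlopen
from urllib.error import URLError

req = Request("http://www.abc.com/")
try:
    response = urlopen(req)
except URLError as e:
    if hasattr(e, 'code'):
        # the server answered, but with an error status
        print("The server couldn't fulfill the request.")
        print('Error code: ', e.code)
    elif hasattr(e, 'reason'):
        # the server could not be reached at all
        print('We failed to reach a server.')
        print('Reason: ', e.reason)
else:
    print(response.read().decode("utf8"))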
6. HTTP authentication
import urllib.request
# create a password manager
password_mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
# Add the username and password.
# If we knew the realm, we could use it instead of None.
top_level_url = "http://www.dbjr.com.cn/"
password_mgr.add_password(None, top_level_url, 'rekfan', 'xxxxxx')
handler = urllib.request.HTTPBasicAuthHandler(password_mgr)
# create "opener" (OpenerDirector instance)
opener = urllib.request.build_opener(handler)
# use the opener to fetch a URL
a_url = "http://www.dbjr.com.cn/"
x = opener.open(a_url)
print(x.read())
# Install the opener.
# Now all calls to urllib.request.urlopen use our opener.
urllib.request.install_opener(opener)
a = urllib.request.urlopen(a_url).read().decode('utf8')
print(a)
7. Using a proxy
import urllib.request

# ProxyHandler maps URL schemes to proxy addresses. urllib has no built-in
# SOCKS support, so an HTTP proxy on localhost:1080 is used here instead of
# the original 'sock5' key, which urllib would simply ignore.
proxy_support = urllib.request.ProxyHandler({'http': 'http://localhost:1080'})
opener = urllib.request.build_opener(proxy_support)
urllib.request.install_opener(opener)
a = urllib.request.urlopen("http://www.baidu.com").read().decode("utf8")
print(a)
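If the proxy itself requires Basic authentication, ProxyHandler can be combined with ProxyBasicAuthHandler. A sketch; the proxy address, realm and credentials below are placeholders:
import urllib.request

proxy_handler = urllib.request.ProxyHandler({'http': 'http://localhost:1080'})
proxy_auth_handler = urllib.request.ProxyBasicAuthHandler()
# realm, proxy host and credentials are made up for illustration only
proxy_auth_handler.add_password('realm', 'localhost:1080', 'user', 'password')
opener = urllib.request.build_opener(proxy_handler, proxy_auth_handler)
a = opener.open("http://www.baidu.com").read().decode("utf8")
print(a)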
8. Timeouts
import socket
import urllib.request
# timeout in seconds
timeout = 2
socket.setdefaulttimeout(timeout)
# this call to urllib.request.urlopen now uses the default timeout
# we have set in the socket module
req = urllib.request.Request('http://www.dbjr.com.cn/')
a = urllib.request.urlopen(req).read()
print(a)
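Instead of changing the global socket default, a timeout can also be passed per call through urlopen()'s timeout parameter; a minimal sketch:
import socket
import urllib.error
import urllib.request

try:
    # the timeout applies only to this request
    a = urllib.request.urlopen('http://www.dbjr.com.cn/', timeout=2).read()
    print(a)
except urllib.error.URLError as e:
    # a timed-out connection is usually reported as a URLError wrapping a timeout
    print('Request failed:', e.reason)
except socket.timeout:
    # a timeout during read() can also surface directly as socket.timeout
    print('Request timed out')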
That is all for this article. I hope it helps with your learning, and I hope you will keep supporting 腳本之家.