A detailed guide to using Python's lxml module
lxml is a Python parsing library that supports both HTML and XML, offers XPath-based extraction, and parses very efficiently.
1. Installing lxml
pip install lxml
2. Import the etree library from lxml
from lxml import etree
3. Use etree.HTML to turn a string into an Element object. The Element object has an xpath method that returns a list of results, and etree.HTML accepts either bytes or str input.
from lxml import etree

html = etree.HTML(response.text)
ret_list = html.xpath("some XPath expression")

It can also be used like this:

from lxml import etree

htmlDiv = etree.HTML(response.content.decode())
hrefs = htmlDiv.xpath("//h4//a/@href")

4. To turn an Element object back into a string (returned as bytes), use etree.tostring(element).
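The two points above can be checked without any HTTP request. A minimal, self-contained sketch (the markup here is made up for illustration):

```python
from lxml import etree

# etree.HTML accepts either str or bytes input
doc_str = "<div><p>hello</p></div>"
doc_bytes = doc_str.encode("utf-8")

el_from_str = etree.HTML(doc_str)
el_from_bytes = etree.HTML(doc_bytes)

# both produce Element objects with an xpath method
print(el_from_str.xpath("//p/text()"))    # ['hello']
print(el_from_bytes.xpath("//p/text()"))  # ['hello']

# etree.tostring returns bytes; call decode() to get a str
print(etree.tostring(el_from_str).decode())
```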
Suppose we have the following HTML string and want to work with it:

<div>
    <ul>
        <li class="item-1"><a href="link1.html">first item</a></li>
        <li class="item-1"><a href="link2.html">second item</a></li>
        <li class="item-inactive"><a href="link3.html">third item</a></li>
        <li class="item-1"><a href="link4.html">fourth item</a></li>
        <li class="item-0"><a href="link5.html">fifth item</a>  <!-- note: the closing </li> tag is missing here -->
    </ul>
</div>
Code example:
from lxml import etree

text = ''' <div> <ul>
        <li class="item-1"><a href="link1.html">first item</a></li>
        <li class="item-1"><a href="link2.html">second item</a></li>
        <li class="item-inactive"><a href="link3.html">third item</a></li>
        <li class="item-1"><a href="link4.html">fourth item</a></li>
        <li class="item-0"><a href="link5.html">fifth item</a>
        </ul> </div> '''

html = etree.HTML(text)
print(type(html))

handled_html_str = etree.tostring(html).decode()
print(handled_html_str)

Output:
<class 'lxml.etree._Element'>
<html><body><div> <ul>
<li class="item-1"><a href="link1.html">first item</a></li>
<li class="item-1"><a href="link2.html">second item</a></li>
<li class="item-inactive"><a href="link3.html">third item</a></li>
<li class="item-1"><a href="link4.html">fourth item</a></li>
<li class="item-0"><a href="link5.html">fifth item</a>
</li></ul> </div> </body></html>
As you can see, lxml does complete the missing closing tag. But keep in mind that this repair is a best-effort guess: because real-world pages are often non-standard (or, occasionally, because of a bug in lxml), the repaired tree can differ from what the browser renders.
If an XPath expression written against the raw response still extracts nothing, call etree.tostring to inspect exactly what lxml turned the HTML into, and write your XPath against that repaired string instead.
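For this kind of debugging, etree.tostring also accepts a pretty_print argument (a standard lxml option) that makes the repaired tree much easier to read. A small sketch with deliberately malformed input:

```python
from lxml import etree

# deliberately malformed: the <li> tags are never closed
text = "<div><ul><li>first<li>second</ul></div>"
html = etree.HTML(text)

# pretty_print indents the repaired tree, one element per line,
# which makes it easy to see what lxml actually built
print(etree.tostring(html, pretty_print=True).decode())
```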
5. A deeper exercise with lxml
from lxml import etree

text = ''' <div> <ul>
        <li class="item-1"><a href="link1.html">first item</a></li>
        <li class="item-1"><a href="link2.html">second item</a></li>
        <li class="item-inactive"><a href="link3.html">third item</a></li>
        <li class="item-1"><a href="link4.html">fourth item</a></li>
        <li class="item-0"><a href="link5.html">fifth item</a>
        </ul> </div> '''

html = etree.HTML(text)

# get the list of hrefs and the list of titles
href_list = html.xpath("//li[@class='item-1']/a/@href")
title_list = html.xpath("//li[@class='item-1']/a/text()")

# assemble them into dicts
for href in href_list:
    item = {}
    item["href"] = href
    item["title"] = title_list[href_list.index(href)]
    print(item)

Output:
{'href': 'link1.html', 'title': 'first item'}
{'href': 'link2.html', 'title': 'second item'}
{'href': 'link4.html', 'title': 'fourth item'}
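One caveat: pairing via href_list.index(href) always returns the *first* match, so duplicate href values would pair with the wrong title. Pairing by position with zip is a safer sketch of the same idea:

```python
from lxml import etree

text = '''<div><ul>
        <li class="item-1"><a href="link1.html">first item</a></li>
        <li class="item-1"><a href="link2.html">second item</a></li>
        <li class="item-1"><a href="link4.html">fourth item</a></li>
</ul></div>'''
html = etree.HTML(text)

href_list = html.xpath("//li[@class='item-1']/a/@href")
title_list = html.xpath("//li[@class='item-1']/a/text()")

# zip pairs the two lists by position, so duplicate hrefs cannot cross-match
items = [{"href": h, "title": t} for h, t in zip(href_list, title_list)]
for item in items:
    print(item)
```

This still assumes both lists have the same length; the grouping approach in the next section removes even that assumption.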
6. Advanced use of the lxml module
An xpath call returns Element objects, and each of them can call xpath again. We can exploit this during extraction: first group the nodes by some tag, then extract within each group. For example:
from lxml import etree

text = ''' <div> <ul>
        <li class="item-1"><a>first item</a></li>
        <li class="item-1"><a href="link2.html">second item</a></li>
        <li class="item-inactive"><a href="link3.html">third item</a></li>
        <li class="item-1"><a href="link4.html">fourth item</a></li>
        <li class="item-0"><a href="link5.html">fifth item</a>
        </ul> </div> '''

html = etree.HTML(text)
li_list = html.xpath("//li[@class='item-1']")
print(li_list)

The result is:

[<Element li at 0x11106cb48>, <Element li at 0x11106cb88>, <Element li at 0x11106cbc8>]

Extracting the data
The results are Element objects, and each of them can call xpath again.
So we first group by the li tag, then extract the data inside each group:
from lxml import etree

text = ''' <div> <ul>
        <li class="item-1"><a>first item</a></li>
        <li class="item-1"><a href="link2.html">second item</a></li>
        <li class="item-inactive"><a href="link3.html">third item</a></li>
        <li class="item-1"><a href="link4.html">fourth item</a></li>
        <li class="item-0"><a href="link5.html">fifth item</a>
        </ul> </div> '''

# group by the li tag
html = etree.HTML(text)
li_list = html.xpath("//li[@class='item-1']")

# then extract the data within each group
for li in li_list:
    item = {}
    item["href"] = li.xpath("./a/@href")[0] if len(li.xpath("./a/@href")) > 0 else None
    item["title"] = li.xpath("./a/text()")[0] if len(li.xpath("./a/text()")) > 0 else None
    print(item)

The result is:
{'href': None, 'title': 'first item'}
{'href': 'link2.html', 'title': 'second item'}
{'href': 'link4.html', 'title': 'fourth item'}
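One detail worth stressing: inside the loop, the XPath must start with ./ so that it is evaluated relative to the current li. An expression starting with // is evaluated against the whole document, even when called on a sub-element. A small sketch (markup made up for illustration):

```python
from lxml import etree

text = '''<div><ul>
        <li class="item-1"><a href="link2.html">second item</a></li>
        <li class="item-1"><a href="link4.html">fourth item</a></li>
</ul></div>'''
html = etree.HTML(text)
li_list = html.xpath("//li[@class='item-1']")

first_li = li_list[0]
# relative path: searches only this li's descendants
print(first_li.xpath("./a/@href"))   # ['link2.html']
# absolute path: searches the whole document, ignoring the current element
print(first_li.xpath("//a/@href"))   # ['link2.html', 'link4.html']
```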
7. Case study: a spider for the Baidu Tieba lite (mobile) site

The full code:
import requests
from lxml import etree


class TieBaSpider:
    def __init__(self, tieba_name):
        # 1. start_url
        self.start_url = "http://tieba.baidu.com/mo/q---C9E0BC1BC80AA0A7CE472600CDE9E9E3%3AFG%3D1-sz%40320_240%2C-1-3-0--2--wapp_1525330549279_782/m?kw={}&lp=6024".format(tieba_name)
        self.headers = {"User-Agent": "Mozilla/5.0 (Linux; Android 8.0; Pixel 2 Build/OPD3.170816.012) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.139 Mobile Safari/537.36"}
        self.part_url = "http://tieba.baidu.com/mo/q---C9E0BC1BC80AA0A7CE472600CDE9E9E3%3AFG%3D1-sz%40320_240%2C-1-3-0--2--wapp_1525330549279_782"

    def parse_url(self, url):  # send the request and get the response
        response = requests.get(url, headers=self.headers)
        return response.content

    def get_content_list(self, html_str):  # 3. extract the data
        html = etree.HTML(html_str)
        div_list = html.xpath("//body/div/div[contains(@class,'i')]")
        content_list = []
        for div in div_list:
            item = {}
            item["href"] = self.part_url + div.xpath("./a/@href")[0]
            item["title"] = div.xpath("./a/text()")[0]
            item["img_list"] = self.get_img_list(item["href"], [])
            content_list.append(item)
        # extract the url of the next page ('下一頁' is the literal link text on the page)
        next_url = html.xpath("//a[text()='下一頁']/@href")
        next_url = self.part_url + next_url[0] if len(next_url) > 0 else None
        return content_list, next_url

    def get_img_list(self, detail_url, img_list):
        # 1. send the request and get the response
        detail_html_str = self.parse_url(detail_url)
        # 2. extract the data
        detail_html = etree.HTML(detail_html_str)
        img_list += detail_html.xpath("//img[@class='BDE_Image']/@src")
        # the url of the detail page's next page
        next_url = detail_html.xpath("//a[text()='下一頁']/@href")
        next_url = self.part_url + next_url[0] if len(next_url) > 0 else None
        if next_url is not None:  # if the detail page has a next page, recurse into it
            return self.get_img_list(next_url, img_list)
        # no else branch is needed: we only reach here on the last page
        img_list = [requests.utils.unquote(i).split("src=")[-1] for i in img_list]
        return img_list

    def save_content_list(self, content_list):  # save the data
        for content in content_list:
            print(content)

    def run(self):  # the main logic
        next_url = self.start_url
        while next_url is not None:
            # 1. start_url
            # 2. send the request and get the response
            html_str = self.parse_url(next_url)
            # 3. extract the data
            content_list, next_url = self.get_content_list(html_str)
            # 4. save
            self.save_content_list(content_list)
            # 5. get next_url and repeat steps 2-5


if __name__ == '__main__':
    tieba = TieBaSpider("每日中國")
    tieba.run()