A Summary of Parsing XML with BeautifulSoup4
Beautiful Soup is a Python library for extracting data from HTML and XML documents. Working on top of your favorite parser, it provides idiomatic ways of navigating, searching, and modifying the parse tree.
Documentation (English): https://www.crummy.com/software/BeautifulSoup/bs4/doc/
Documentation (Chinese): https://www.crummy.com/software/BeautifulSoup/bs4/doc.zh/
Getting started
The following is an HTML fragment based on "Alice in Wonderland". We will use it as a quick introduction to parsing an HTML page with BeautifulSoup:
from bs4 import BeautifulSoup

# A story fragment from "Alice in Wonderland"
html_doc = """
<html>
<head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
</body>
</html>
"""

# Build the parse tree
soup = BeautifulSoup(html_doc, "html.parser")

# Pretty-print the document
# print(soup.prettify())

# Get the first title tag
soup.title
# <title>The Dormouse's story</title>

# Get the name of the first title tag
soup.title.name
# title

# Get the text content of the first title tag
soup.title.string
# The Dormouse's story

# Get the name of the parent of the first title tag
soup.title.parent.name
# head

# Get the first p tag
soup.p
# <p class="title"><b>The Dormouse's story</b></p>

# Get the class attribute of the first p tag
soup.p['class']
# ['title']

# Get the first a tag
soup.a
# <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>

# Find all the a tags
soup.find_all('a')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

# Get the href attribute of every a tag
for link in soup.find_all('a'):
    print(link.get('href'))
# http://example.com/elsie
# http://example.com/lacie
# http://example.com/tillie

# Find the a tag with id="link3"
soup.find(id="link3")
# <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>

# Get the text content of the whole tree
print(soup.get_text())
# The Dormouse's story
#
# The Dormouse's story
# Once upon a time there were three little sisters; and their names were
# Elsie,
# Lacie and
# Tillie;
# and they lived at the bottom of a well.
# ...
Parsers
Besides the HTML parser in the Python standard library, Beautiful Soup also supports several third-party parsers; one of them is lxml.
The table below lists the main parsers along with their advantages and disadvantages:

Parser | Usage | Advantages | Disadvantages
Python standard library | BeautifulSoup(markup, "html.parser") | Built into Python; decent speed; lenient with bad markup | Much less lenient in versions before Python 2.7.3 / 3.2.2
lxml HTML parser | BeautifulSoup(markup, "lxml") | Very fast; lenient with bad markup | External C dependency
lxml XML parser | BeautifulSoup(markup, ["lxml", "xml"]) or BeautifulSoup(markup, "xml") | Very fast; the only supported XML parser | External C dependency
html5lib | BeautifulSoup(markup, "html5lib") | Extremely lenient; parses pages the same way a web browser does; creates valid HTML5 | Very slow; external Python dependency
lxml is the recommended parser because it is more efficient. Note that in versions before Python 2.7.3 and before Python 3.2.2, installing lxml or html5lib is essential, because the HTML parser built into the standard library in those versions is not stable enough.
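Neither lxml nor html5lib ships with Python, so they have to be installed separately. A typical setup, assuming pip is available on the system:

```shell
# Beautiful Soup itself
pip install beautifulsoup4

# Optional third-party parsers
pip install lxml
pip install html5lib
```

If the lxml install fails, its C dependencies (libxml2/libxslt) may need to be installed first through the system package manager.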
Note: if an HTML or XML document is not well-formed, different parsers may return different results for it.
Differences between parsers
Beautiful Soup presents the same interface for every parser, but the parsers themselves differ: the same document parsed by different parsers can produce differently structured trees. The biggest differences are between the HTML parsers and the XML parser. Here is a short fragment parsed as HTML:
html_soup = BeautifulSoup("<a><b/></a>", "lxml")
print(html_soup)
# <html><body><a><b></b></a></body></html>
Since the empty tag <b/> is not valid HTML, the parser turns it into a <b></b> pair.
The same document parsed as XML looks like this (parsing XML requires the lxml library). Note that the empty tag <b/> is preserved, and that the document is given an XML declaration instead of being wrapped in an <html> tag:
xml_soup = BeautifulSoup("<a><b/></a>", "xml")
print(xml_soup)
# <?xml version="1.0" encoding="utf-8"?>
# <a><b/></a>
HTML parsers also differ among themselves. If the document being parsed is perfectly formed, these differences do not matter: one parser is simply faster than another, and all of them return a correct document tree.

But if the document is not perfectly formed, different parsers can return different results. In the example below, lxml parses an invalid fragment and the dangling </p> tag is simply ignored:
soup = BeautifulSoup("<a></p>", "lxml")
print(soup)
# <html><body><a></a></body></html>
Parsing the same document with html5lib gives a different result:
soup = BeautifulSoup("<a></p>", "html5lib")
print(soup)
# <html><head></head><body><a><p></p></a></body></html>
Instead of ignoring the dangling </p> tag, html5lib pairs it with an opening <p> tag, and it also adds a <head> tag to the document tree.
Python's built-in parser gives yet another result:
soup = BeautifulSoup("<a></p>", "html.parser")
print(soup)
# <a></a>
Like lxml, the built-in parser ignores the dangling </p> tag. Unlike html5lib, it makes no attempt to create a well-formed document or to wrap the fragment in a <body> tag; and unlike lxml, it does not even bother adding an <html> tag.
Since the fragment "<a></p>" is invalid, each of these handlings can be considered "correct". html5lib applies part of the HTML5 standard, so it comes closest to being "correct", but all of these parser behaviors count as "normal".

Differences between parsers can affect what your code produces. If you distribute code that uses BeautifulSoup, it is best to state which parser it was written against, to avoid unnecessary trouble.
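A minimal sketch of why pinning the parser matters: if the parser argument is omitted, recent versions of bs4 (4.8+ is assumed here, where GuessedAtParserWarning is exported) pick whichever parser happens to be installed and emit a warning, because the guess can differ between machines:

```python
import warnings

from bs4 import BeautifulSoup, GuessedAtParserWarning

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    BeautifulSoup("<p>data</p>")  # no parser specified: bs4 has to guess

# The guess triggers GuessedAtParserWarning; an explicit parser would not.
guessed = any(issubclass(w.category, GuessedAtParserWarning) for w in caught)
print(guessed)
```

Passing an explicit second argument such as "html.parser" silences the warning and makes the resulting tree reproducible across environments.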
Creating a document object
To get a document object, pass a document into the BeautifulSoup constructor. You can pass in a string or an open file handle:
from bs4 import BeautifulSoup

soup = BeautifulSoup(open("index.html"))
soup = BeautifulSoup("<html>data</html>")
First, the document is converted to Unicode, and HTML entities are converted to Unicode characters:
soup = BeautifulSoup("Sacr&eacute; bleu!")
print(soup)
# <html><body><p>Sacré bleu!</p></body></html>
Beautiful Soup then parses the document using the most suitable parser available; if a parser is specified by hand, Beautiful Soup uses that one instead.
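When the input is a byte string whose encoding the auto-detection might guess wrong, the from_encoding argument can force a specific codec. A small sketch (the UTF-8 input below is an assumption for illustration):

```python
from bs4 import BeautifulSoup

# Byte input: without help, bs4 would have to auto-detect the encoding
markup = "<p>Sacré bleu!</p>".encode("utf-8")

# from_encoding overrides the auto-detection
soup = BeautifulSoup(markup, "html.parser", from_encoding="utf-8")
print(soup.p.string)  # Sacré bleu!
```

This is mainly useful for short byte strings, where there is too little data for the encoding detector to guess reliably.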
Kinds of objects
Beautiful Soup transforms a complex HTML document into a complex tree of Python objects. Every node is a Python object, and all of them fall into four kinds: Tag, NavigableString, BeautifulSoup, and Comment.
Tag
A Tag object corresponds to a tag in the original XML or HTML document:
from bs4 import BeautifulSoup

soup = BeautifulSoup('<b class="boldest">Extremely bold</b>', "html.parser")

# Get the first b tag
tag = soup.b

# Get the object's type
type(tag)
# <class 'bs4.element.Tag'>

# Get the tag's name
tag.name
# b

# Rename the tag
tag.name = "blockquote"
tag
# <blockquote class="boldest">Extremely bold</blockquote>

# Read the tag's class attribute
tag['class']
# ['boldest']

# Change the tag's class attribute
tag['class'] = 'verybold'

# Read the class attribute back
tag.get('class')
# verybold

# Add an id attribute to the tag
tag['id'] = 'title'
tag
# <blockquote class="verybold" id="title">Extremely bold</blockquote>

# View all of the tag's attributes
tag.attrs
# {'class': ['verybold'], 'id': 'title'}

# Delete the tag's id attribute
del tag['id']
tag
# <blockquote class="verybold">Extremely bold</blockquote>
NavigableString
Strings are usually contained inside tags. Beautiful Soup uses the NavigableString class to wrap a tag's strings:
from bs4 import BeautifulSoup

soup = BeautifulSoup('<b class="boldest">Extremely bold</b>', "html.parser")

# Get the first b tag
tag = soup.b

# Get the tag's text content
tag.string
# Extremely bold

# Get the type of the tag's text content
type(tag.string)
# <class 'bs4.element.NavigableString'>
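A NavigableString behaves like a Unicode string, but it also keeps a reference back into the parse tree. A small sketch of two common operations on it: converting it to a plain str, and swapping it for another string in place:

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup('<b class="boldest">Extremely bold</b>', "html.parser")
tag = soup.b

# str() gives a plain string with no reference to the parse tree
text = str(tag.string)
print(type(text).__name__)  # str

# replace_with() swaps the string inside the tree for a new one
tag.string.replace_with("No longer bold")
print(tag)  # <b class="boldest">No longer bold</b>
```

Converting to str is worth doing when only the text is needed and the tree itself can be garbage-collected.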
BeautifulSoup
The BeautifulSoup object represents the parsed document as a whole. For most purposes you can treat it as a Tag object: it supports most of the methods described in the sections on navigating and searching the tree.

Since the BeautifulSoup object does not correspond to an actual HTML or XML tag, it has no name and no attributes. But it is sometimes useful to look at its .name, so it has been given the special value "[document]":
soup = BeautifulSoup('<b class="boldest">Extremely bold</b>', "html.parser")
soup.name
# [document]
Comments and special strings
Tag, NavigableString, and BeautifulSoup cover almost everything you will see in an HTML or XML file, but there are a few special objects left over. The main one you are likely to worry about is the comment:
markup = "<b><!--Hey, buddy. Want to buy a used parser?--></b>"
soup = BeautifulSoup(markup, "html.parser")
comment = soup.b.string
type(comment)
# <class 'bs4.element.Comment'>
The Comment object is just a special type of NavigableString:
comment
# Hey, buddy. Want to buy a used parser?
But when it appears as part of an HTML document, a Comment is rendered with special formatting:
print(soup.b.prettify())
# <b>
#  <!--Hey, buddy. Want to buy a used parser?-->
# </b>
Beautiful Soup defines a few other types that may show up in XML documents: CData, ProcessingInstruction, Declaration, and Doctype. Like Comment, these classes are all subclasses of NavigableString: strings with something extra added on. Here is an example that replaces the comment with a CDATA block:
from bs4 import CData

cdata = CData("A CDATA block")
comment.replace_with(cdata)
print(soup.b.prettify())
# <b>
#  <![CDATA[A CDATA block]]>
# </b>
Child nodes
A tag may contain strings and other tags; these are the tag's children. Beautiful Soup provides many attributes for operating on and iterating over a tag's children.

Note: string nodes in Beautiful Soup do not support these attributes, because a string cannot have children.

We continue with the "Alice in Wonderland" document used above:
from bs4 import BeautifulSoup

# A story fragment from "Alice in Wonderland"
html_doc = """
<html>
<head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
</body>
</html>
"""

# Build the parse tree
soup = BeautifulSoup(html_doc, "html.parser")

# Navigate by attribute access: the first tag of each name
soup.body.p.b
# <b>The Dormouse's story</b>

# Find all the a tags
soup.find_all('a')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

# .contents returns a tag's children as a list
soup.head.contents
# [<title>The Dormouse's story</title>]

# .children is a generator over a tag's direct children
for child in soup.head.children:
    print(child)
# <title>The Dormouse's story</title>

# .descendants is a generator over all of a tag's descendants, recursively
for descendant in soup.head.descendants:
    print(descendant)
# <title>The Dormouse's story</title>
# The Dormouse's story

# .string returns a tag's single NavigableString child
soup.head.title.string
# The Dormouse's story

# .string also works when a tag's only child itself has a single string child
soup.head.string
# The Dormouse's story

# .strings iterates over all strings in a tag
for string in soup.strings:
    print(repr(string))

# .stripped_strings does the same, skipping extra whitespace
for string in soup.stripped_strings:
    print(repr(string))
Note: if a tag contains more than one child, the tag cannot know which child's content .string should refer to, so .string returns None.
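That note can be checked with a tiny fragment (the markup below is hypothetical, for illustration):

```python
from bs4 import BeautifulSoup

soup = BeautifulSoup("<body><p>One</p><p>Two</p></body>", "html.parser")

# <body> has two children, so .string cannot choose between them
print(soup.body.string)  # None

# get_text() still concatenates every string in the subtree
print(soup.body.get_text())  # OneTwo
```

When a tag has multiple children, .strings or get_text() is the way to collect its text.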
Parent nodes
Every tag and every string has a parent. Again using the "Alice in Wonderland" document:
from bs4 import BeautifulSoup

# A story fragment from "Alice in Wonderland"
html_doc = """
<html>
<head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
</body>
</html>
"""

# Build the parse tree
soup = BeautifulSoup(html_doc, "html.parser")

# .parent gives the title tag's parent node
soup.title.parent
# <head><title>The Dormouse's story</title></head>

# The parent of the string inside the title tag is the title tag itself
soup.title.string.parent
# <title>The Dormouse's story</title>

# The parent of the top-level <html> tag is the BeautifulSoup object
type(soup.html.parent)
# <class 'bs4.BeautifulSoup'>

# The BeautifulSoup object's .parent is None
soup.parent

# .parents iterates over all of an element's ancestors
for parent in soup.a.parents:
    print(parent.name)
# p
# body
# html
# [document]
Sibling nodes
To demonstrate how BeautifulSoup finds sibling nodes, the document from the earlier examples is replaced with a modified one, with some newlines, strings, and tags removed so that the sibling tags sit directly next to one another:
from bs4 import BeautifulSoup

# A "Schindler's List" fragment with adjacent sibling tags
html_doc = """
<html>
<body>
<p class="title"><b>Schindler's List</b></p>
<p class="names"><a id="name1">Oskar Schindler</a><a id="name2">Itzhak Stern</a><a id="name3">Helen Hirsch</a></p>
</body>
</html>
"""

# Build the parse tree
soup = BeautifulSoup(html_doc, "html.parser")

# Get the a tag with id="name2"
name2 = soup.find("a", id="name2")
# <a id="name2">Itzhak Stern</a>

# .previous_sibling gives the sibling just before it
name1 = name2.previous_sibling
# <a id="name1">Oskar Schindler</a>

# .next_sibling gives the sibling just after it
name3 = name2.next_sibling
# <a id="name3">Helen Hirsch</a>

# The first and last tags on a level have no further siblings
name1.previous_sibling
# None
name3.next_sibling
# None

# .next_siblings iterates over the siblings that follow a node
for sibling in soup.find("a", id="name1").next_siblings:
    print(repr(sibling))
# <a id="name2">Itzhak Stern</a>
# <a id="name3">Helen Hirsch</a>

# .previous_siblings iterates over the siblings that precede a node
for sibling in soup.find("a", id="name3").previous_siblings:
    print(repr(sibling))
# <a id="name2">Itzhak Stern</a>
# <a id="name1">Oskar Schindler</a>
Note: strings, single characters, and even newlines between tags also count as nodes.
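A quick sketch of that pitfall: with a newline between two tags, .next_sibling returns the newline text node first, not the next tag (hypothetical markup for illustration):

```python
from bs4 import BeautifulSoup

# The newline between the two tags becomes a NavigableString node
soup = BeautifulSoup('<a id="a1">one</a>\n<a id="a2">two</a>', "html.parser")
first = soup.find(id="a1")

print(repr(first.next_sibling))         # '\n'
print(first.next_sibling.next_sibling)  # <a id="a2">two</a>
```

This is why the sibling examples above use a document with no whitespace between the a tags.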
Going back and forth
We continue with the HTML document from the previous section on sibling nodes:
from bs4 import BeautifulSoup

# A "Schindler's List" fragment with adjacent sibling tags
html_doc = """
<html>
<body>
<p class="title"><b>Schindler's List</b></p>
<p class="names"><a id="name1">Oskar Schindler</a><a id="name2">Itzhak Stern</a><a id="name3">Helen Hirsch</a></p>
</body>
</html>
"""

# Build the parse tree
soup = BeautifulSoup(html_doc, "html.parser")

# Get the a tag with id="name2"
name2 = soup.find("a", id="name2")
# <a id="name2">Itzhak Stern</a>

# .previous_element is whatever was parsed immediately before this element
name2.previous_element
# Oskar Schindler

# ...and the element before that
name2.previous_element.previous_element
# <a id="name1">Oskar Schindler</a>

# .next_element is whatever was parsed immediately after this element
name2.next_element
# Itzhak Stern

# ...and the element after that
name2.next_element.next_element
# <a id="name3">Helen Hirsch</a>

# .next_elements iterates forward through the document from a node
for element in soup.find("a", id="name1").next_elements:
    print(repr(element))
# 'Oskar Schindler'
# <a id="name2">Itzhak Stern</a>
# 'Itzhak Stern'
# <a id="name3">Helen Hirsch</a>
# 'Helen Hirsch'
# '\n'
# '\n'
# '\n'

# .previous_elements iterates backward through the document from a node
for element in soup.find("a", id="name1").previous_elements:
    print(repr(element))
# <p class="names"><a id="name1">Oskar Schindler</a><a id="name2">Itzhak Stern</a><a id="name3">Helen Hirsch</a></p>
# '\n'
# "Schindler's List"
# <b>Schindler's List</b>
# <p class="title"><b>Schindler's List</b></p>
# '\n'
# <body>
# <p class="title"><b>Schindler's List</b></p>
# <p class="names"><a id="name1">Oskar Schindler</a><a id="name2">Itzhak Stern</a><a id="name3">Helen Hirsch</a></p>
# </body>
# '\n'
# <html>
# <body>
# <p class="title"><b>Schindler's List</b></p>
# <p class="names"><a id="name1">Oskar Schindler</a><a id="name2">Itzhak Stern</a><a id="name3">Helen Hirsch</a></p>
# </body>
# </html>
# '\n'
Searching the tree
find_all(name, attrs, recursive, text, **kwargs)
find(name, attrs, recursive, text, **kwargs)
find_parents(name, attrs, recursive, text, **kwargs)
find_parent(name, attrs, recursive, text, **kwargs)
find_next_siblings(name, attrs, recursive, text, **kwargs)
find_next_sibling(name, attrs, recursive, text, **kwargs)
find_previous_siblings(name, attrs, recursive, text, **kwargs)
find_previous_sibling(name, attrs, recursive, text, **kwargs)
find_all_next(name, attrs, recursive, text, **kwargs)
find_next(name, attrs, recursive, text, **kwargs)
find_all_previous(name, attrs, recursive, text, **kwargs)
find_previous(name, attrs, recursive, text, **kwargs)
Beautiful Soup defines many search methods with near-identical signatures. The examples here focus on the usage of find_all():
from bs4 import BeautifulSoup
import re

# A story fragment from "Alice in Wonderland"
html_doc = """
<html>
<head><title>The Dormouse's story</title></head>
<body>
<p class="title"><b>The Dormouse's story</b></p>
<p class="story">Once upon a time there were three little sisters; and their names were
<a href="http://example.com/elsie" class="sister" id="link1">Elsie</a>,
<a href="http://example.com/lacie" class="sister" id="link2">Lacie</a> and
<a href="http://example.com/tillie" class="sister" id="link3">Tillie</a>;
and they lived at the bottom of a well.</p>
<p class="story">...</p>
</body>
</html>
"""

# Build the parse tree
soup = BeautifulSoup(html_doc, "html.parser")

# Pass a string to search by tag name (b)
soup.find_all('b')
# [<b>The Dormouse's story</b>]

# Pass two strings: p tags whose class is "title"
soup.find_all("p", "title")
# [<p class="title"><b>The Dormouse's story</b></p>]

# Tags with id="link2"
soup.find_all(id='link2')
# [<a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>]

# Tags whose href matches "elsie" and whose id is "link1"
soup.find_all(href=re.compile("elsie"), id='link1')
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]

# Tags with id="link1", via the attrs dictionary
print(soup.find_all(attrs={"id": "link1"}))
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>]

# a tags with class="sister" (class_ avoids the Python reserved word)
soup.find_all("a", class_="sister")
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

def has_six_characters(css_class):
    return css_class is not None and len(css_class) == 6

# Tags whose class attribute is six characters long ("sister")
soup.find_all(class_=has_six_characters)
# [<a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

# Find strings instead of tags
soup.find_all(text=["Tillie", "Elsie", "Lacie"])
# ['Elsie', 'Lacie', 'Tillie']

# Return only the first two a tags
soup.find_all("a", limit=2)

# With recursive=False only direct children are searched;
# <title> is not a direct child of <html>, so nothing is found
soup.html.find_all("title", recursive=False)
# []

# Filter with a CSS selector
soup.select("head > title")
# [<title>The Dormouse's story</title>]

# Pass a regular expression: tag names starting with the letter b
for tag in soup.find_all(re.compile("^b")):
    print(tag.name)
# body
# b

# Tag names containing the letter t
for tag in soup.find_all(re.compile("t")):
    print(tag.name)
# html
# title

# Pass a list to search by tag name (a and b)
soup.find_all(["a", "b"])
# [<b>The Dormouse's story</b>,
#  <a class="sister" href="http://example.com/elsie" id="link1">Elsie</a>,
#  <a class="sister" href="http://example.com/lacie" id="link2">Lacie</a>,
#  <a class="sister" href="http://example.com/tillie" id="link3">Tillie</a>]

# Pass True: every tag, but none of the string nodes
for tag in soup.find_all(True):
    print(tag.name)

def has_class_but_no_id(tag):
    return tag.has_attr('class') and not tag.has_attr('id')

# Pass a function: tags with a class attribute but no id attribute
soup.find_all(has_class_but_no_id)
That concludes this summary of parsing XML with BeautifulSoup4.