Python Crawler Package BeautifulSoup Examples (3)
Building a crawler step by step: scraping jokes from Qiushibaike (qiushibaike.com).
For now, we parse the page without the BeautifulSoup package.
Step 1: request the URL and fetch the page source
# -*- coding: utf-8 -*-
# @Author: HaonanWu
# @Date:   2016-12-22 16:16:08
# @Last Modified by:   HaonanWu
# @Last Modified time: 2016-12-22 20:17:13
import urllib
import urllib2
import re
import os

if __name__ == '__main__':
    # Request the URL and fetch the page source
    url = 'http://www.qiushibaike.com/textnew/page/1/?s=4941357'
    user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36'
    headers = {'User-Agent': user_agent}
    try:
        request = urllib2.Request(url=url, headers=headers)
        response = urllib2.urlopen(request)
        content = response.read()
    except urllib2.HTTPError as e:
        print e
        exit()
    except urllib2.URLError as e:
        print e
        exit()
    print content.decode('utf-8')
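The script above targets Python 2 (urllib2 and print statements). On Python 3, urllib2 was split into urllib.request and urllib.error; the sketch below shows the same fetch under that API, with the same URL pattern and User-Agent. It is only a sketch: the site may well have changed since 2016, so the request is built but whether it still returns jokes is not guaranteed.

```python
# Python 3 sketch of the same fetch; urllib2 became urllib.request / urllib.error
import urllib.request
import urllib.error

def build_request(page=1):
    # Same URL pattern and User-Agent as the Python 2 script above
    url = 'http://www.qiushibaike.com/textnew/page/%d/?s=4941357' % page
    user_agent = ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                  'AppleWebKit/537.36 (KHTML, like Gecko) '
                  'Chrome/54.0.2840.99 Safari/537.36')
    return urllib.request.Request(url, headers={'User-Agent': user_agent})

def fetch(request):
    try:
        with urllib.request.urlopen(request) as response:
            return response.read().decode('utf-8')
    except urllib.error.HTTPError as e:   # subclass of URLError, so catch it first
        print(e)
    except urllib.error.URLError as e:
        print(e)
    return None
```

As in the original, HTTPError is caught before URLError because it is a subclass of it; swapping the two clauses would make the HTTPError branch unreachable.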
Step 2: extract the information with regular expressions
First, inspect the page source to see where the content you need lives and how to identify it.
Then use a regular expression to match and extract it.
Note that . in a regular expression does not match \n by default, so the matching mode must be set (the re.S flag).
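The effect of re.S (DOTALL) is easy to see on a tiny made-up snippet: without the flag, .*? cannot cross the newline and the pattern matches nothing.

```python
import re

# Hypothetical fragment mimicking the page structure, with newlines inside the div
html = '<div class="content">\n<span>line one\nline two</span>\n</div>'

# Without re.S, . does not match \n, so nothing is found
plain = re.findall('<div class="content">.*?<span>(.*?)</span>', html)
print(plain)   # → []

# With re.S, . matches \n as well and the span body is captured
dotall = re.findall('<div class="content">.*?<span>(.*?)</span>', html, re.S)
print(dotall)  # → ['line one\nline two']
```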
# -*- coding: utf-8 -*-
# @Author: HaonanWu
# @Date:   2016-12-22 16:16:08
# @Last Modified by:   HaonanWu
# @Last Modified time: 2016-12-22 20:17:13
import urllib
import urllib2
import re
import os

if __name__ == '__main__':
    # Request the URL and fetch the page source
    url = 'http://www.qiushibaike.com/textnew/page/1/?s=4941357'
    user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36'
    headers = {'User-Agent': user_agent}
    try:
        request = urllib2.Request(url=url, headers=headers)
        response = urllib2.urlopen(request)
        content = response.read()
    except urllib2.HTTPError as e:
        print e
        exit()
    except urllib2.URLError as e:
        print e
        exit()

    # Extract the data
    # Mind the newlines: re.S lets . match \n as well
    regex = re.compile('<div class="content">.*?<span>(.*?)</span>.*?</div>', re.S)
    items = re.findall(regex, content)
    for item in items:
        print item
Step 3: clean the data and save it to files
# -*- coding: utf-8 -*-
# @Author: HaonanWu
# @Date:   2016-12-22 16:16:08
# @Last Modified by:   HaonanWu
# @Last Modified time: 2016-12-22 21:41:32
import urllib
import urllib2
import re
import os

if __name__ == '__main__':
    # Request the URL and fetch the page source
    url = 'http://www.qiushibaike.com/textnew/page/1/?s=4941357'
    user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36'
    headers = {'User-Agent': user_agent}
    try:
        request = urllib2.Request(url=url, headers=headers)
        response = urllib2.urlopen(request)
        content = response.read()
    except urllib2.HTTPError as e:
        print e
        exit()
    except urllib2.URLError as e:
        print e
        exit()

    # Extract the data
    # Mind the newlines: re.S lets . match \n as well
    regex = re.compile('<div class="content">.*?<span>(.*?)</span>.*?</div>', re.S)
    items = re.findall(regex, content)

    path = './qiubai'
    if not os.path.exists(path):
        os.makedirs(path)
    count = 1
    for item in items:
        # Clean the data: strip \n and turn <br/> into \n
        item = item.replace('\n', '').replace('<br/>', '\n')
        filepath = path + '/' + str(count) + '.txt'
        f = open(filepath, 'w')
        f.write(item)
        f.close()
        count += 1
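The cleanup step can be pulled out into a small helper. The idea is that in the extracted HTML, <br/> tags mark the visible line breaks while literal \n characters are only source formatting, so the script drops the latter and turns the former into real newlines:

```python
def clean_item(item):
    # Drop source-formatting newlines, then turn <br/> tags into real newlines
    return item.replace('\n', '').replace('<br/>', '\n')

print(clean_item('first line\n<br/>second line\n'))
```

Note the order matters: replacing \n first means the newlines produced from <br/> survive.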
Step 4: scrape the content of multiple pages
# -*- coding: utf-8 -*-
# @Author: HaonanWu
# @Date:   2016-12-22 16:16:08
# @Last Modified by:   HaonanWu
# @Last Modified time: 2016-12-22 20:17:13
import urllib
import urllib2
import re
import os

if __name__ == '__main__':
    # Request the URLs and fetch the page sources
    path = './qiubai'
    if not os.path.exists(path):
        os.makedirs(path)
    user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36'
    headers = {'User-Agent': user_agent}
    regex = re.compile('<div class="content">.*?<span>(.*?)</span>.*?</div>', re.S)
    count = 1
    for cnt in range(1, 35):
        print 'Round ' + str(cnt)
        url = 'http://www.qiushibaike.com/textnew/page/' + str(cnt) + '/?s=4941357'
        try:
            request = urllib2.Request(url=url, headers=headers)
            response = urllib2.urlopen(request)
            content = response.read()
        except urllib2.HTTPError as e:
            print e
            exit()
        except urllib2.URLError as e:
            print e
            exit()
        # print content

        # Extract the data
        # Mind the newlines: re.S lets . match \n as well
        items = re.findall(regex, content)

        # Save the results
        for item in items:
            # print item
            # Clean the data: strip \n and turn <br/> into \n
            item = item.replace('\n', '').replace('<br/>', '\n')
            filepath = path + '/' + str(count) + '.txt'
            f = open(filepath, 'w')
            f.write(item)
            f.close()
            count += 1
    print 'Done'
Parsing the page source with BeautifulSoup
# -*- coding: utf-8 -*-
# @Author: HaonanWu
# @Date:   2016-12-22 16:16:08
# @Last Modified by:   HaonanWu
# @Last Modified time: 2016-12-22 21:34:02
import urllib
import urllib2
import re
import os
from bs4 import BeautifulSoup

if __name__ == '__main__':
    url = 'http://www.qiushibaike.com/textnew/page/1/?s=4941357'
    user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.99 Safari/537.36'
    headers = {'User-Agent': user_agent}
    request = urllib2.Request(url=url, headers=headers)
    response = urllib2.urlopen(request)
    # print response.read()
    soup_packetpage = BeautifulSoup(response, 'lxml')
    items = soup_packetpage.find_all("div", class_="content")
    for item in items:
        try:
            content = item.span.string
        except AttributeError as e:
            print e
            exit()
        if content:
            print content + "\n"
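The find_all and .span.string calls above can be tried offline on a small hand-written snippet that mimics the page structure. The markup below is made up for illustration, and the standard-library 'html.parser' stands in for lxml so nothing beyond bs4 itself is needed:

```python
from bs4 import BeautifulSoup

# Hypothetical markup mimicking qiushibaike's <div class="content"><span>…</span></div> layout
html = '''
<div class="content"><span>first joke</span></div>
<div class="content"><span>second joke</span></div>
'''

soup = BeautifulSoup(html, 'html.parser')
# find_all selects the div tags; .span.string reads the text inside each span
texts = [item.span.string for item in soup.find_all('div', class_='content')]
for t in texts:
    print(t)
```

.string returns the tag's single text child (or None if the tag has several children), which is why the original script guards the result with an if before printing.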
The following code uses BeautifulSoup to scrape book titles and their prices.
Comparing the two scripts shows how bs4 reads tags and how it reads a tag's content.
(I have not studied this part properly yet myself, so for now I can only follow the pattern.)
# -*- coding: utf-8 -*-
# @Author: HaonanWu
# @Date:   2016-12-22 20:37:38
# @Last Modified by:   HaonanWu
# @Last Modified time: 2016-12-22 21:27:30
import urllib2
import urllib
import re
from bs4 import BeautifulSoup

url = "https://www.packtpub.com/all"
try:
    html = urllib2.urlopen(url)
except urllib2.HTTPError as e:
    print e
    exit()

soup_packtpage = BeautifulSoup(html, 'lxml')
all_book_title = soup_packtpage.find_all("div", class_="book-block-title")
price_regexp = re.compile(u"\s+\$\s\d+\.\d+")
for book_title in all_book_title:
    try:
        print "Book's name is " + book_title.string.strip()
    except AttributeError as e:
        print e
        exit()
    book_price = book_title.find_next(text=price_regexp)
    try:
        print "Book's price is " + book_price.strip()
    except AttributeError as e:
        print e
        exit()
    print ""
That is all for this article. I hope it helps with your studies, and thank you for supporting 腳本之家 (jb51).