Python targeted crawler for campus forum posts
Updated: 2018-07-23 14:13:43  Author: lannooooooooooo
This article walks through a targeted Python crawler that collects post information from a campus forum. It may be a useful reference for anyone building a similar scraper.
Introduction
I wrote this small crawler mainly to scrape internship postings from the campus forum. It is built primarily on the Requests library.
Source code
URLs.py
Its main job is to take an initial URL (containing a page parameter) and build the list of URLs from the current page number up to pageNum.
import re

def getURLs(url, attr, pageNum=1):
    """Build the list of page URLs from the current page number up to pageNum."""
    all_links = []
    try:
        # Pull the current page number out of the URL, e.g. "page=3" -> 3
        now_page_number = int(re.search(attr + r'=(\d+)', url, re.S).group(1))
        for i in range(now_page_number, pageNum + 1):
            # Swap the page parameter to generate each page's URL
            new_url = re.sub(attr + r'=\d+', attr + '=%s' % i, url, flags=re.S)
            all_links.append(new_url)
        return all_links
    except TypeError:
        print "arguments TypeError: attr should be a string."
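For example, using the board URL that appears later in the main crawler, a quick sanity check looks like this (expected output shown as comments):

urls = getURLs('http://www.cc98.org/list.asp?boardid=459&page=1&action=', 'page', 3)
for u in urls:
    print u
# http://www.cc98.org/list.asp?boardid=459&page=1&action=
# http://www.cc98.org/list.asp?boardid=459&page=2&action=
# http://www.cc98.org/list.asp?boardid=459&page=3&action=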
uni_2_native.py
The Chinese text on the pages fetched from the forum comes back as numeric character references of the form &#XXXX;, so the downloaded content still has to be converted back to native characters.
import sys
import re

# Python 2: switch the default encoding to utf-8 so str/unicode mixing works
reload(sys)
sys.setdefaultencoding('utf-8')

def get_native(raw):
    """Replace every numeric character reference (&#XXXX;) with its character."""
    tostring = raw
    while True:
        obj = re.search('&#(.*?);', tostring, flags=re.S)
        if obj is None:
            break
        else:
            raw, code = obj.group(0), obj.group(1)
            # Substitute this entity with the corresponding unicode character
            tostring = re.sub(raw, unichr(int(code)), tostring)
    return tostring
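A quick check (the sample string is made up; the two entities decode to 实习, "internship"):

print get_native('hello &#23454;&#20064; world')
# hello 实习 world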
Saving to the database: saveInfo.py (despite the saveSqlite class name, the code actually writes to MySQL through MySQLdb)
# -*- coding: utf-8 -*-
import MySQLdb

class saveSqlite():
    def __init__(self):
        self.infoList = []

    def saveSingle(self, author=None, title=None, date=None, url=None, reply=0, view=0):
        # Refuse to store a record that is missing any required field
        if author is None or title is None or date is None or url is None:
            print "No info saved!"
        else:
            singleDict = {}
            singleDict['author'] = author
            singleDict['title'] = title
            singleDict['date'] = date
            singleDict['url'] = url
            singleDict['reply'] = reply
            singleDict['view'] = view
            self.infoList.append(singleDict)

    def toMySQL(self):
        conn = MySQLdb.connect(host='localhost', user='root', passwd='', port=3306,
                               db='db_name', charset='utf8')
        cursor = conn.cursor()
        # sql = "select * from info"
        # n = cursor.execute(sql)
        # for row in cursor.fetchall():
        #     for r in row:
        #         print r
        #     print '\n'
        # Clear the old records before inserting the fresh crawl
        sql = "delete from info"
        cursor.execute(sql)
        conn.commit()
        sql = "insert into info(title,author,url,date,reply,view) values (%s,%s,%s,%s,%s,%s)"
        params = []
        for each in self.infoList:
            params.append((each['title'], each['author'], each['url'],
                           each['date'], each['reply'], each['view']))
        cursor.executemany(sql, params)
        conn.commit()
        cursor.close()
        conn.close()

    def show(self):
        for each in self.infoList:
            print "author: " + each['author']
            print "title: " + each['title']
            print "date: " + each['date']
            print "url: " + each['url']
            print "reply: " + str(each['reply'])
            print "view: " + str(each['view'])
            print '\n'

if __name__ == '__main__':
    save = saveSqlite()
    save.saveSingle('网', 'aaa', '2008-10-10 10:10:10', 'www.baidu.com', 1, 1)
    # save.show()
    save.toMySQL()
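toMySQL() assumes an info table already exists in db_name. The article never shows its schema, so here is one assumed layout (column names taken from the insert statement, types guessed) that the code could run against:

import MySQLdb

conn = MySQLdb.connect(host='localhost', user='root', passwd='', port=3306,
                       db='db_name', charset='utf8')
cursor = conn.cursor()
# Assumed schema: names match the insert in toMySQL(), types are guesses
cursor.execute("""
    create table if not exists info (
        title  varchar(255),
        author varchar(64),
        url    varchar(255),
        date   varchar(32),
        reply  int,
        view   int
    )
""")
conn.commit()
cursor.close()
conn.close()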
Main crawler code
import requests
from lxml import etree
from cc98 import uni_2_native, URLs, saveInfo

# Forge a header to match the site being crawled;
# fill in the values from your own browser session
headers = {
    'Accept': '',
    'Accept-Encoding': '',
    'Accept-Language': '',
    'Connection': '',
    'Cookie': '',
    'Host': '',
    'Referer': '',
    'Upgrade-Insecure-Requests': '',
    'User-Agent': ''
}

url = 'http://www.cc98.org/list.asp?boardid=459&page=1&action='
cc98 = 'http://www.cc98.org/'

print "get information from cc98..."
urls = URLs.getURLs(url, "page", 50)
savetools = saveInfo.saveSqlite()

for url in urls:
    r = requests.get(url, headers=headers)
    # Decode the &#XXXX; references before handing the page to lxml
    html = uni_2_native.get_native(r.text)
    selector = etree.HTML(html)
    content_tr_list = selector.xpath('//form/table[@class="tableborder1 list-topic-table"]/tbody/tr')
    for each in content_tr_list:
        href = each.xpath('./td[2]/a/@href')
        if len(href) == 0:
            continue
        else:
            # print len(href)
            # not very well using for, though just one element in list
            # but I don't know why I cannot get the data by index
            for each_href in href:
                link = cc98 + each_href
                title_author_time = each.xpath('./td[2]/a/@title')
                # print len(title_author_time)
                for info in title_author_time:
                    # The title attribute holds title, author and date on separate lines
                    info_split = info.split('\n')
                    title = info_split[0][1:len(info_split[0]) - 1]
                    author = info_split[1][3:]
                    date = info_split[2][3:]
                    hot = each.xpath('./td[4]/text()')
                    # print len(hot)
                    for hot_num in hot:
                        # The fourth column holds "replies/views"
                        reply_view = hot_num.strip().split('/')
                        reply, view = reply_view[0], reply_view[1]
                        savetools.saveSingle(author=author, title=title, date=date,
                                             url=link, reply=reply, view=view)

print "All got! Now saving to Database..."
# savetools.show()
savetools.toMySQL()
print "ALL CLEAR! Have Fun!"
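On the author's aside about indexing: each of those XPath queries returns a plain Python list, so the single-element results can in fact be taken with [0] directly. A minimal sketch of the inner loop rewritten that way (same XPath expressions, assuming the same page structure):

for each in content_tr_list:
    href = each.xpath('./td[2]/a/@href')
    if not href:
        continue
    link = cc98 + href[0]
    # title attribute: title, author and date on separate lines
    info_split = each.xpath('./td[2]/a/@title')[0].split('\n')
    title = info_split[0][1:-1]
    author = info_split[1][3:]
    date = info_split[2][3:]
    # fourth column: "replies/views"
    reply, view = each.xpath('./td[4]/text()')[0].strip().split('/')
    savetools.saveSingle(author=author, title=title, date=date,
                         url=link, reply=reply, view=view)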
That's all for this article. I hope it helps with your learning, and I hope you'll keep supporting 腳本之家.