Python Code Example: Scraping Book Information from DangDang, JD, and Amazon
Note: 1. This program stores its data in a MySQL database. Before running it, manually edit the database connection settings at the top of the script.
2. Requires the bs4, requests, and pymysql libraries.
3. Supports multithreading.
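Before the first run, the empty book database has to exist and the bs4, requests, and pymysql packages have to be installed (e.g. via pip). A minimal setup sketch, not part of the original script, assuming the same local root/root MySQL credentials used in the connection at the top of the code (adjust to your environment):

import pymysql

# Create the empty 'book' database that the crawler expects (assumed local MySQL, root/root).
conn = pymysql.connect(host='127.0.0.1', port=3306, user='root', passwd='root', charset='utf8')
with conn.cursor() as cursor:
    cursor.execute('CREATE DATABASE IF NOT EXISTS book DEFAULT CHARACTER SET utf8')
conn.close()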
from bs4 import BeautifulSoup
import re, requests, pymysql, threading, os, traceback

# Database connection -- edit these settings to match your own MySQL server.
try:
    conn = pymysql.connect(host='127.0.0.1', port=3306, user='root', passwd='root', db='book', charset='utf8')
    cursor = conn.cursor()
except:
    print('\nError: database connection failed')

# Return the HTML of the given page.
def getHTMLText(url):
    try:
        headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/56.0.2924.87 Safari/537.36'}
        r = requests.get(url, headers=headers)
        r.raise_for_status()
        r.encoding = r.apparent_encoding
        return r.text
    except:
        return ''

# Return a BeautifulSoup object for the given url.
def getSoupObject(url):
    try:
        html = getHTMLText(url)
        soup = BeautifulSoup(html, 'html.parser')
        return soup
    except:
        return ''

# Get the total number of result pages for the keyword on the given book site.
def getPageLength(webSiteName, url):
    try:
        soup = getSoupObject(url)
        if webSiteName == 'DangDang':
            a = soup('a', {'name': 'bottom-page-turn'})
            return a[-1].string
        elif webSiteName == 'Amazon':
            a = soup('span', {'class': 'pagnDisabled'})
            return a[-1].string
    except:
        print('\nError: failed to get the total page count for {}...'.format(webSiteName))
        return -1

class DangDangThread(threading.Thread):
    def __init__(self, keyword):
        threading.Thread.__init__(self)
        self.keyword = keyword

    def run(self):
        print('\nInfo: starting to crawl DangDang...')
        count = 1
        length = getPageLength('DangDang', 'http://search.dangdang.com/?key={}'.format(self.keyword))  # total number of result pages
        tableName = 'db_{}_dangdang'.format(self.keyword)
        try:
            print('\nInfo: creating the DangDang table...')
            cursor.execute('create table {} (id int, title text, prNow text, prPre text, link text)'.format(tableName))
            print('\nInfo: crawling DangDang pages...')
            for i in range(1, int(length)):
                url = 'http://search.dangdang.com/?key={}&page_index={}'.format(self.keyword, i)
                soup = getSoupObject(url)
                lis = soup('li', {'class': re.compile(r'line'), 'id': re.compile(r'p')})
                for li in lis:
                    a = li.find_all('a', {'name': 'itemlist-title', 'dd_name': '單品標(biāo)題'})
                    pn = li.find_all('span', {'class': 'search_now_price'})
                    pp = li.find_all('span', {'class': 'search_pre_price'})
                    if len(a) != 0:
                        link = a[0].attrs['href']
                        title = a[0].attrs['title'].strip()
                    else:
                        link = 'NULL'
                        title = 'NULL'
                    if len(pn) != 0:
                        prNow = pn[0].string
                    else:
                        prNow = 'NULL'
                    if len(pp) != 0:
                        prPre = pp[0].string
                    else:
                        prPre = 'NULL'
                    # Values are interpolated directly into the SQL string; titles containing quotes will break the insert.
                    sql = "insert into {} (id,title,prNow,prPre,link) values ({},'{}','{}','{}','{}')".format(tableName, count, title, prNow, prPre, link)
                    cursor.execute(sql)
                    print('\rInfo: saving DangDang data, current id: {}'.format(count), end='')
                    count += 1
                    conn.commit()
        except:
            traceback.print_exc()

class AmazonThread(threading.Thread):
    def __init__(self, keyword):
        threading.Thread.__init__(self)
        self.keyword = keyword

    def run(self):
        print('\nInfo: starting to crawl Amazon...')
        count = 1
        length = getPageLength('Amazon', 'https://www.amazon.cn/s/keywords={}'.format(self.keyword))  # total number of result pages
        tableName = 'db_{}_amazon'.format(self.keyword)
        try:
            print('\nInfo: creating the Amazon table...')
            cursor.execute('create table {} (id int, title text, prNow text, link text)'.format(tableName))
            print('\nInfo: crawling Amazon pages...')
            for i in range(1, int(length)):
                url = 'https://www.amazon.cn/s/keywords={}&page={}'.format(self.keyword, i)
                soup = getSoupObject(url)
                lis = soup('li', {'id': re.compile(r'result_')})
                for li in lis:
                    a = li.find_all('a', {'class': 'a-link-normal s-access-detail-page a-text-normal'})
                    pn = li.find_all('span', {'class': 'a-size-base a-color-price s-price a-text-bold'})
                    if len(a) != 0:
                        link = a[0].attrs['href']
                        title = a[0].attrs['title'].strip()
                    else:
                        link = 'NULL'
                        title = 'NULL'
                    if len(pn) != 0:
                        prNow = pn[0].string
                    else:
                        prNow = 'NULL'
                    sql = "insert into {} (id,title,prNow,link) values ({},'{}','{}','{}')".format(tableName, count, title, prNow, link)
                    cursor.execute(sql)
                    print('\rInfo: saving Amazon data, current id: {}'.format(count), end='')
                    count += 1
                    conn.commit()
        except:
            traceback.print_exc()

class JDThread(threading.Thread):
    def __init__(self, keyword):
        threading.Thread.__init__(self)
        self.keyword = keyword

    def run(self):
        print('\nInfo: starting to crawl JD...')
        count = 1
        tableName = 'db_{}_jd'.format(self.keyword)
        try:
            print('\nInfo: creating the JD table...')
            cursor.execute('create table {} (id int, title text, prNow text, link text)'.format(tableName))
            print('\nInfo: crawling JD pages...')
            for i in range(1, 100):
                url = 'https://search.jd.com/Search?keyword={}&page={}'.format(self.keyword, i)
                soup = getSoupObject(url)
                lis = soup('li', {'class': 'gl-item'})
                for li in lis:
                    a = li.find_all('div', {'class': 'p-name'})
                    pn = li.find_all('div', {'class': 'p-price'})[0].find_all('i')
                    if len(a) != 0:
                        link = 'http:' + a[0].find_all('a')[0].attrs['href']
                        title = a[0].find_all('em')[0].get_text()
                    else:
                        link = 'NULL'
                        title = 'NULL'
                    if len(link) > 128:
                        link = 'TooLong'
                    if len(pn) != 0:
                        prNow = '¥' + pn[0].string
                    else:
                        prNow = 'NULL'
                    sql = "insert into {} (id,title,prNow,link) values ({},'{}','{}','{}')".format(tableName, count, title, prNow, link)
                    cursor.execute(sql)
                    print('\rInfo: saving JD data, current id: {}'.format(count), end='')
                    count += 1
                    conn.commit()
        except:
            traceback.print_exc()

def closeDB():
    global conn, cursor
    cursor.close()  # close the cursor before the connection
    conn.close()

def main():
    print('Info: before using this program, manually create an empty database named book and edit the database connection settings at the top of the script')
    keyword = input('\nInfo: enter the keyword to crawl: ')
    dangdangThread = DangDangThread(keyword)
    amazonThread = AmazonThread(keyword)
    jdThread = JDThread(keyword)
    dangdangThread.start()
    amazonThread.start()
    jdThread.start()
    dangdangThread.join()
    amazonThread.join()
    jdThread.join()
    closeDB()
    print('\nInfo: crawling finished, closing...')
    os.system('pause')

main()
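Once a run finishes, you can spot-check what was stored with a short query. A hypothetical example: the table name db_Android_jd below follows the db_<keyword>_<site> pattern used above and assumes the keyword 'Android'; substitute whatever keyword you actually entered.

import pymysql

# Hypothetical spot-check of the data written by the JD thread.
conn = pymysql.connect(host='127.0.0.1', port=3306, user='root', passwd='root', db='book', charset='utf8')
with conn.cursor() as cursor:
    cursor.execute('SELECT id, title, prNow FROM db_Android_jd LIMIT 5')
    for row in cursor.fetchall():
        print(row)
conn.close()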
Sample screenshots:
Keyword 'Android': partial run results (exported to Excel)
Summary
That is the full content of this article's code example for scraping book information from DangDang, JD, and Amazon with Python; we hope you find it helpful. Interested readers can browse the other articles on this site.
If anything here falls short, feel free to point it out in the comments. Thank you for supporting this site!