Scraping Taobao product information with a Python crawler (Selenium + PhantomJS)
This article shares the full code of a Python crawler that scrapes Taobao product listings, for your reference. The details are as follows.
1. Requirements:
Open the Taobao search page, search for the keyword "耐克" (Nike), and scrape each result's title, link, price, city, seller (Wangwang) ID and number of payments; then follow each link into the detail page and scrape the sales volume, style number, and so on.
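One preliminary worth spelling out: the search keyword must be percent-encoded as UTF-8 before it can go into the query string — the literal `%E8%80%90%E5%85%8B` hard-coded in the script below is simply the encoded form of "耐克". The script also advances the `s` parameter in steps of 44 per page. A minimal sketch (written in Python 3 for illustration, while the script itself is Python 2):

```python
# -*- coding: utf-8 -*-
from urllib.parse import quote

keyword = "耐克"  # "Nike"
encoded = quote(keyword)  # percent-encode the UTF-8 bytes of the keyword
print(encoded)            # %E8%80%90%E5%85%8B

# The script below pages through results with s = page_index * 44
page = 2
url = "https://s.taobao.com/search?q=%s&s=%d" % (encoded, page * 44)
print(url)
```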



2. Results

3. Source code
# encoding: utf-8
# Python 2 script: Selenium + PhantomJS + lxml + pandas
import sys
reload(sys)
sys.setdefaultencoding('utf-8')

import re
import time

import pandas as pd
from lxml import etree
from selenium import webdriver

time1 = time.time()

######### drive PhantomJS through Selenium
driver = webdriver.PhantomJS(executable_path='D:/Python27/Scripts/phantomjs.exe')

######### lists that collect the scraped fields
title = []
price = []
city = []
shop_name = []
num = []
link = []
sale = []
number = []

##### search keyword "耐克" (Nike), percent-encoded as UTF-8
keyword = "%E8%80%90%E5%85%8B"
for i in range(0, 1):
    try:
        print "...............scraping page " + str(i) + "..........................."
        url = "https://s.taobao.com/search?q=" + keyword + "&imgfile=&js=1&stats_click=search_radio_all%3A1&initiative_id=staobaoz_20170710&ie=utf8&bcoffset=4&ntoffset=4&p4ppushleft=1%2C48&s=" + str(i * 44)
        driver.get(url)
        time.sleep(5)
        html = driver.page_source
        selector = etree.HTML(html)

        # first level (list page): title, price, city, deal count, shop name
        title1 = selector.xpath('//div[@class="row row-2 title"]/a')
        for each in title1:
            print each.xpath('string(.)').strip()
            title.append(each.xpath('string(.)').strip())
        price1 = selector.xpath('//div[@class="price g_price g_price-highlight"]/strong/text()')
        for each in price1:
            print each
            price.append(each)
        city1 = selector.xpath('//div[@class="location"]/text()')
        for each in city1:
            print each
            city.append(each)
        num1 = selector.xpath('//div[@class="deal-cnt"]/text()')
        for each in num1:
            print each
            num.append(each)
        shop_name1 = selector.xpath('//div[@class="shop"]/a/span[2]/text()')
        for each in shop_name1:
            print each
            shop_name.append(each)

        # second level: follow each product link into the detail page
        link1 = selector.xpath('//div[@class="row row-2 title"]/a/@href')
        for each in link1:
            kk = "https://" + each
            link.append(kk)
            if "https" in each:
                print each
                driver.get(each)
            else:
                print kk
                driver.get(kk)
            time.sleep(3)
            html2 = driver.page_source
            selector2 = etree.HTML(html2)
            sale1 = selector2.xpath('//*[@id="J_DetailMeta"]/div[1]/div[1]/div/ul/li[1]/div/span[2]/text()')
            for each in sale1:
                print each
                sale.append(each)
            sale2 = selector2.xpath('//strong[@id="J_SellCounter"]/text()')
            for each in sale2:
                print each
                sale.append(each)
            # style number: Tmall and Taobao detail pages use different markup
            if "tmall" in kk:
                number1 = re.findall('<ul id="J_AttrUL">(.*?)</ul>', html2, re.S)
                for each in number1:
                    m = re.findall('>*號: (.*?)</li>', str(each).strip(), re.S)
                    if len(m) > 0:
                        for each1 in m:
                            print each1
                            number.append(each1)
                    else:
                        number.append("NULL")
            if "taobao" in kk:
                number2 = re.findall('<ul class="attributes-list">(.*?)</ul>', html2, re.S)
                for each in number2:
                    h = re.findall('>*號: (.*?)</li>', str(each).strip(), re.S)
                    if len(h) > 0:  # bug fix: the original tested len(m), the Tmall list
                        for each2 in h:
                            print each2
                            number.append(each2)
                    else:
                        number.append("NULL")
            if "click" in kk:
                number.append("NULL")
    except:
        pass
print len(title), len(city), len(price), len(num), len(shop_name), len(link), len(sale), len(number)

###### build the data frame (all lists must have equal length)
data1 = pd.DataFrame({"title": title, "price": price, "shop": shop_name, "city": city, "payments": num, "link": link, "sales": sale, "style_no": number})
print data1

# write out to Excel
writer = pd.ExcelWriter(r'C:\taobao_spider2.xlsx', engine='xlsxwriter', options={'strings_to_urls': False})
data1.to_excel(writer, index=False)
writer.close()

time2 = time.time()
print 'ok, scraping finished!'
print 'total time: ' + str(time2 - time1) + 's'

#### close the browser
driver.close()
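The list-page extraction above relies on lxml's XPath support; `string(.)` concatenates all text nodes inside an element, which is why it recovers a full title even when the page splits it across highlighted `<span>`s. A self-contained sketch on invented markup shaped like a result row (the HTML here is illustrative, not a live Taobao page):

```python
# -*- coding: utf-8 -*-
from lxml import etree

# Invented markup shaped like a Taobao search-result row
html = '''
<div class="item">
  <div class="row row-2 title"><a href="//item.example.com/1">Nike <span>Air</span> Max</a></div>
  <div class="price g_price g_price-highlight"><strong>499.00</strong></div>
  <div class="location">Shanghai</div>
</div>
'''
selector = etree.HTML(html)

# string(.) flattens nested spans into one title string
titles = [a.xpath('string(.)').strip()
          for a in selector.xpath('//div[@class="row row-2 title"]/a')]
prices = selector.xpath('//div[@class="price g_price g_price-highlight"]/strong/text()')
links = selector.xpath('//div[@class="row row-2 title"]/a/@href')

print(titles)  # ['Nike Air Max']
print(prices)  # ['499.00']
print(links)   # ['//item.example.com/1']
```

The same `//div[@class=...]` queries only match exact class attribute values, which is why the script copies the full class strings (e.g. `"price g_price g_price-highlight"`) from the page source.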
That is all for this article. I hope it helps you in your learning.
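On the detail page the script falls back to regular expressions to pull the style number out of the attribute list: one `findall` isolates the attribute `<ul>`, a second one pulls the value after the label. A stdlib-only sketch of that two-step approach (the snippet and the English "Style no:" label are invented stand-ins — the real pages use a Chinese label, and the markup changes over time):

```python
# -*- coding: utf-8 -*-
import re

# Invented detail-page fragment shaped like Tmall's attribute list
html2 = '''
<ul id="J_AttrUL">
  <li>Brand: Nike</li>
  <li>Style no: 852542-011</li>
</ul>
'''

numbers = []
# Step 1: isolate the attribute <ul>; step 2: grab the value after the label
for block in re.findall(r'<ul id="J_AttrUL">(.*?)</ul>', html2, re.S):
    m = re.findall(r'Style no: (.*?)</li>', block, re.S)
    if m:
        numbers.extend(v.strip() for v in m)
    else:
        numbers.append("NULL")

print(numbers)  # ['852542-011']
```

Appending `"NULL"` when nothing matches is what keeps the `number` list the same length as the other columns, so the final `pd.DataFrame(...)` call does not fail.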