
Scraping Weibo comments with Python: a worked example

 更新時(shí)間:2021年01月15日 15:01:19   作者:宋宋大人  
This post walks through a worked example of scraping Weibo comments with Python; interested readers can follow along and try it themselves.

python爬蟲(chóng)是程序員們一定會(huì)掌握的知識(shí),練習(xí)python爬蟲(chóng)時(shí),很多人會(huì)選擇爬取微博練手。python爬蟲(chóng)微博根據(jù)微博存在于不同媒介上,所爬取的難度有差異,無(wú)論是python新入手的小白,還是已經(jīng)熟練掌握的程序員,可以拿來(lái)練手。本文介紹python爬取微博評(píng)論的代碼實(shí)例。

1. Scraping Weibo

Much like a Qzone crawler, a Weibo crawler can fetch a Sina Weibo user's profile details, posts, followers, followees, and comments.

爬蟲(chóng)抓取微博的速度可以達(dá)到 1300萬(wàn)/天 以上,具體要視網(wǎng)絡(luò)情況。

In descending order of scraping difficulty: desktop web > mobile web > mobile API. The mobile endpoint (m.weibo.cn) is the easiest to scrape, which is why the example below uses it.

2. Scraping Weibo comments with Python

Step 1: Identify the id of the target post and set up the request

# -*- coding:utf-8 -*-
import requests
import re
import time
import pandas as pd

# page-numbered URL template for the mobile comments API
urls = 'https://m.weibo.cn/api/comments/show?id=4073157046629802&page={}'
# note: the request header is named 'Cookie' (the original had 'Cookies');
# paste the cookie from your own logged-in session
headers = {'Cookie': 'Your cookies',
           'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) '
                         'AppleWebKit/537.36 (KHTML, like Gecko) Chrome/66.0.3359.181 Safari/537.36'}

comments, ids = [], []  # accumulators filled by get_comment() below
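
Before writing the parser, it helps to probe the endpoint once. A minimal sanity check, assuming a valid cookie has been pasted into headers:

# hedged sanity check: fetch page 1 and peek at the JSON structure;
# with a valid cookie the comment list should appear under data -> data
resp = requests.get(urls.format(1), headers=headers)
print(resp.status_code)                          # expect 200
print(list(resp.json().get('data', {}).keys()))  # expect 'data' among the keys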

Step 2: Build a regular expression that matches HTML tags

tags = re.compile(r'</?\w+[^>]*>')  # matches opening and closing HTML tags; raw string avoids escape warnings
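
A quick check of the pattern on a made-up sample:

sample = 'Reply <a href="/u/123">@someone</a>: <span class="emoji">nice</span>'
print(tags.sub('', sample))  # -> Reply @someone: nice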

Step 3: Define the comment-extraction function

def get_comment(url):
    j = requests.get(url, headers=headers).json()
    comment_data = j['data']['data']  # the comment list sits under data -> data
    for data in comment_data:
        try:

Step 4: Use the regex to strip HTML tags from the text (this continues the loop body from step 3)

            comment = tags.sub('', data['text'])      # strip HTML tags from the comment
            reply = tags.sub('', data['reply_text'])  # strip HTML tags from the quoted reply
            weibo_id = data['id']
            reply_id = data['reply_id']
            comments.append(comment)
            comments.append(reply)
            ids.append(weibo_id)
            ids.append(reply_id)
        except KeyError:
            pass  # top-level comments carry no reply fields; skip them

Step 5: Save the crawled comments

df = pd.DataFrame({'ID': ids, 'comment': comments})
df = df.drop_duplicates()
df.to_csv('觀察者網.csv', index=False, encoding='gb18030')  # gb18030 keeps the Chinese text readable in Excel
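
The original article never shows the loop that actually walks the comment pages. Here is a minimal sketch under the same assumptions (the urls template, headers, and get_comment() from the earlier steps), meant to run before the DataFrame is built:

# hypothetical driver: walk up to 50 comment pages, stopping at the first
# empty or malformed page; pause between requests to avoid rate limiting
for page in range(1, 51):
    try:
        get_comment(urls.format(page))
    except (KeyError, ValueError):
        break  # no more pages, or the API pushed back
    time.sleep(1)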

實(shí)例擴(kuò)展:

# -*- coding: utf-8 -*-
# Created : 2018/8/26 18:33
# author :GuoLi

import requests
import json
import time
from lxml import etree
import re
 
 
class Weibospider:
 def __init__(self):
  # entry point: the user's profile home page
  self.start_url = 'https://weibo.com/u/5644764907?page=1&is_all=1'
 
  self.headers = {
   "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
   "accept-encoding": "gzip, deflate, br",
   "accept-language": "zh-CN,zh;q=0.9,en;q=0.8",
   "cache-control": "max-age=0",
   "cookie": 使用自己本機(jī)的cookie,
   "referer": "https://www.weibo.com/u/5644764907?topnav=1&wvr=6&topsug=1",
   "upgrade-insecure-requests": "1",
   "user-agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/72.0.3626.96 Safari/537.36",
  }
  # note: the original used a dict that repeated the key 'HTTP', which Python
  # silently collapses to the last entry; a list preserves every candidate
  # proxy (self.proxy is never actually passed to requests.get below)
  self.proxy = [
   {'http': 'http://180.125.70.78:9999'},
   {'http': 'http://117.90.4.230:9999'},
   {'http': 'http://111.77.196.229:9999'},
   {'http': 'http://111.177.183.57:9999'},
   {'http': 'http://123.55.98.146:9999'},
  ]
 
 def parse_home_url(self, url): # parse the profile page itself (not the two chunks loaded via ajax)
  res = requests.get(url, headers=self.headers)
  response = res.content.decode().replace("\\", "")
  every_id = re.compile(r'name=(\d+)', re.S).findall(response) # post ids needed for the comment API
  home_url = []
  for id in every_id:
   base_url = 'https://weibo.com/aj/v6/comment/big?ajwvr=6&id={}&from=singleWeiBo'
   url = base_url.format(id)
   home_url.append(url)
  return home_url
 
 def parse_comment_info(self, url): # fetch each direct commenter's name, comment text, time, and profile url
  res = requests.get(url, headers=self.headers)
  response = res.json()
  count = response['data']['count']
  doc = etree.HTML(response['data']['html']) # parse the HTML fragment embedded in the JSON response
  name = doc.xpath("//div[@class='list_li S_line1 clearfix']/div[@class='WB_face W_fl']/a/img/@alt") # commenter's screen name
  info = doc.xpath("//div[@node-type='replywrap']/div[@class='WB_text']/text()") # comment text
  info = "".join(info).replace(" ", "").split("\n")
  info.pop(0)
  comment_time = doc.xpath("//div[@class='WB_from S_txt2']/text()") # comment timestamp
  name_url = doc.xpath("//div[@class='WB_face W_fl']/a/@href") # commenter's profile url
  name_url = ["https:" + i for i in name_url]
  comment_info_list = []
  for i in range(len(name)):
   item = {}
   item["name"] = name[i] # commenter's screen name
   item["comment_info"] = info[i] # comment text
   item["comment_time"] = comment_time[i] # comment timestamp
   item["comment_url"] = name_url[i] # commenter's profile page
   comment_info_list.append(item)
  return count, comment_info_list
 
 def write_file(self, path_name, content_list):
  # append each record as one JSON object per line (JSON Lines);
  # open the file once instead of once per record
  with open(path_name, "a", encoding="UTF-8") as f:
   for content in content_list:
    f.write(json.dumps(content, ensure_ascii=False))
    f.write("\n")
 
 def run(self):
  start_url = 'https://weibo.com/u/5644764907?page={}&is_all=1'
  start_ajax_url1 = 'https://weibo.com/p/aj/v6/mblog/mbloglist?ajwvr=6&domain=100406&is_all=1&page={0}&pagebar=0&pl_name=Pl_Official_MyProfileFeed__20&id=1004065644764907&script_uri=/u/5644764907&pre_page={0}'
  start_ajax_url2 = 'https://weibo.com/p/aj/v6/mblog/mbloglist?ajwvr=6&domain=100406&is_all=1&page={0}&pagebar=1&pl_name=Pl_Official_MyProfileFeed__20&id=1004065644764907&script_uri=/u/5644764907&pre_page={0}'
  for i in range(12): # this profile has 12 pages of posts
   home_url = self.parse_home_url(start_url.format(i + 1)) # posts rendered on the page itself
   ajax_url1 = self.parse_home_url(start_ajax_url1.format(i + 1)) # posts from the first ajax-loaded chunk
   ajax_url2 = self.parse_home_url(start_ajax_url2.format(i + 1)) # posts from the second ajax-loaded chunk
   all_url = home_url + ajax_url1 + ajax_url2
   for j in range(len(all_url)):
    print(all_url[j])
    path_name = "weibo_{}_comments.txt".format(i * 45 + j + 1) # 45 posts per page across the three chunks
    all_count, comment_info_list = self.parse_comment_info(all_url[j])
    self.write_file(path_name, comment_info_list)
    for num in range(1, 10000):
     if num * 15 < int(all_count) + 15: # 15 comments per page, so stop once every page is fetched
      comment_url = all_url[j] + "&page={}".format(num + 1)
      print(comment_url)
      try:
       count, comment_info_list = self.parse_comment_info(comment_url)
       self.write_file(path_name, comment_info_list)
      except Exception as e:
       print("Error:", e)
       time.sleep(60) # back off for a minute, then retry once
       count, comment_info_list = self.parse_comment_info(comment_url)
       self.write_file(path_name, comment_info_list)
      del count
      time.sleep(0.2) # brief pause between comment pages

    print("Finished fetching comments for weibo #{}.".format(i * 45 + j + 1))
 
 
if __name__ == '__main__':
 weibo = Weibospider()
 weibo.run()
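
Each output file holds one JSON object per line (JSON Lines). A minimal sketch for reading a file back, assuming the weibo_1_comments.txt name produced by run():

import json

records = []
with open("weibo_1_comments.txt", encoding="UTF-8") as f:
    for line in f:
        records.append(json.loads(line))
print(len(records), "comment records loaded")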
 

That wraps up this worked example of scraping Weibo comments with Python. For more on the topic, search 腳本之家's earlier articles or browse the related posts below, and please keep supporting 腳本之家!

相關(guān)文章

最新評(píng)論