Scraping Sogou WeChat official account articles with Python
Updated: 2019-04-01 14:17:55  Author: 萌力突破
This article walks through a Python script that scrapes WeChat official account articles through Sogou WeChat search and is offered as a reference for anyone interested in the technique.
As a Python beginner, I wrote a script that scrapes WeChat official account articles from Sogou WeChat search and stores them in MySQL. The flow is: search weixin.sogou.com for each account, follow the first result to the account's profile page, parse the article list out of the embedded msgList JSON, fetch each article page, and insert it into the database.
MySQL tables: hd_gzh holds the official accounts to crawl (the script reads the account name from its second column), and gzh_article holds the scraped articles (title, picture, author, content).
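The original article showed the tables as screenshots; the following is a minimal sketch of table definitions consistent with what the script reads and writes. Only the table and column names come from the code below, the column types and sizes are assumptions.

import pymysql

# Hypothetical table definitions matching what the scraper expects.
# Types/sizes are assumptions; only the names are taken from the script.
conn = pymysql.connect(host='your DB host', port=3306, user='user',
                       passwd='password', db='your DB name', charset='utf8')
cursor = conn.cursor()

# hd_gzh: official accounts to crawl; the scraper reads the name from row[1]
cursor.execute("""
    CREATE TABLE IF NOT EXISTS hd_gzh (
        id INT AUTO_INCREMENT PRIMARY KEY,
        name VARCHAR(255) NOT NULL
    ) DEFAULT CHARSET=utf8
""")

# gzh_article: scraped articles
cursor.execute("""
    CREATE TABLE IF NOT EXISTS gzh_article (
        id INT AUTO_INCREMENT PRIMARY KEY,
        title VARCHAR(255),
        picture VARCHAR(512),
        author VARCHAR(255),
        content LONGTEXT
    ) DEFAULT CHARSET=utf8
""")

conn.commit()
cursor.close()
conn.close()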


Code:
import socket
import time
import requests
import json
import re
import pymysql
from bs4 import BeautifulSoup

# Create the database connection (fill in your own connection details)
conn = pymysql.connect(host='your DB host', port=3306, user='user', passwd='password', db='your DB name', charset='utf8')
# Create a cursor and load the list of official accounts to crawl
cursor = conn.cursor()
cursor.execute("select * from hd_gzh")
effect_row = cursor.fetchall()

socket.setdefaulttimeout(60)
count = 1
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:65.0) Gecko/20100101 Firefox/65.0'}
# Abuyun IP proxy, not used for now
# proxyHost = "http-cla.abuyun.com"
# proxyPort = "9030"
# # Proxy tunnel credentials
# proxyUser = "H56761606429T7UC"
# proxyPass = "9168EB00C4167176"
# proxyMeta = "http://%(user)s:%(pass)s@%(host)s:%(port)s" % {
#     "host": proxyHost,
#     "port": proxyPort,
#     "user": proxyUser,
#     "pass": proxyPass,
# }
# proxies = {
#     "http": proxyMeta,
#     "https": proxyMeta,
# }
# Check whether an article with this title is already in the database
def checkData(name):
    sql = "select * from gzh_article where title = '%s'"
    data = (name,)
    count = cursor.execute(sql % data)
    conn.commit()
    if count != 0:
        return False
    else:
        return True
# Insert one article
def insertData(title, picture, author, content):
    sql = "insert into gzh_article (title,picture,author,content) values ('%s', '%s','%s', '%s')"
    data = (title, picture, author, content)
    cursor.execute(sql % data)
    conn.commit()
    print("Inserted one row")
    return
for row in effect_row:
    # Search Sogou WeChat for the account name (second column of hd_gzh)
    newsurl = 'https://weixin.sogou.com/weixin?type=1&s_from=input&query=' + row[1] + '&ie=utf8&_sug_=n&_sug_type_='
    res = requests.get(newsurl, headers=headers)
    res.encoding = 'utf-8'
    soup = BeautifulSoup(res.text, 'html.parser')
    # Follow the first search result to the account's profile page
    url = 'https://weixin.sogou.com' + soup.select('.tit a')[0]['href']
    res2 = requests.get(url, headers=headers)
    res2.encoding = 'utf-8'
    soup2 = BeautifulSoup(res2.text, 'html.parser')
    # The real profile URL is assembled piece by piece in an inline script ("url += '...';")
    pattern = re.compile(r"url \+= '(.*?)';", re.MULTILINE | re.DOTALL)
    script = soup2.find("script")
    url2 = pattern.search(script.text).group(1)
    res3 = requests.get(url2, headers=headers)
    res3.encoding = 'utf-8'
    soup3 = BeautifulSoup(res3.text, 'html.parser')
    # The article list is embedded as JSON in "var msgList = {...};"
    pattern2 = re.compile(r"var msgList = (.*?);$", re.MULTILINE | re.DOTALL)
    script2 = soup3.find("script", text=pattern2)
    s2 = json.loads(pattern2.search(script2.text).group(1))
    # Wait 10s
    time.sleep(10)
    for news in s2["list"]:
        articleurl = "https://mp.weixin.qq.com" + news["app_msg_ext_info"]["content_url"]
        articleurl = articleurl.replace('&amp;', '&')
        res4 = requests.get(articleurl, headers=headers)
        res4.encoding = 'utf-8'
        soup4 = BeautifulSoup(res4.text, 'html.parser')
        if checkData(news["app_msg_ext_info"]["title"]):
            insertData(news["app_msg_ext_info"]["title"], news["app_msg_ext_info"]["cover"], news["app_msg_ext_info"]["author"], pymysql.escape_string(str(soup4)))
        count += 1
        # Wait 10s
        time.sleep(10)
        # Each push may carry several secondary articles
        for news2 in news["app_msg_ext_info"]["multi_app_msg_item_list"]:
            articleurl2 = "https://mp.weixin.qq.com" + news2["content_url"]
            articleurl2 = articleurl2.replace('&amp;', '&')
            res5 = requests.get(articleurl2, headers=headers)
            res5.encoding = 'utf-8'
            soup5 = BeautifulSoup(res5.text, 'html.parser')
            if checkData(news2["title"]):
                insertData(news2["title"], news2["cover"], news2["author"], pymysql.escape_string(str(soup5)))
            count += 1
            # Wait 10s
            time.sleep(10)
cursor.close()
conn.close()
print("Done")
That is all for this article. I hope it helps with your learning, and I hope you will keep supporting 腳本之家.