Passive Information Gathering in Python
Overview:
Passive information gathering extracts information about a target's assets through search engines, social media, and similar channels; it typically covers IP lookups, Whois lookups, subdomain collection, and so on. Because it involves no interaction with the target, information can be mined without ever touching the target system.
Main techniques: DNS resolution, subdomain mining, email harvesting, and so on.
DNS Resolution:
1. Overview:
DNS (Domain Name System) is a distributed network directory service that maps between domain names and IP addresses, letting users reach sites conveniently without memorizing the long numeric IP addresses that machines read directly.
2. IP Lookup:
An IP lookup resolves the hostname in a URL to its IP address. The gethostbyname() function in the socket library returns the IP for a given domain name.
Code:
import socket

ip = socket.gethostbyname('www.baidu.com')
print(ip)
Output:
39.156.66.14
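gethostbyname() returns only a single address. As a small extension (a sketch, not part of the original tool), socket.gethostbyname_ex() returns every IPv4 address a name resolves to, and a try/except keeps the lookup from crashing on names that do not resolve:

```python
import socket

def resolve(domain):
    """Return all IPv4 addresses for a hostname, or an empty list if it does not resolve."""
    try:
        # gethostbyname_ex returns (canonical_name, alias_list, ip_list)
        _, _, ips = socket.gethostbyname_ex(domain)
        return ips
    except socket.gaierror:
        return []

print(resolve('localhost'))
```

For a name with multiple A records the list contains several entries, which is common for large sites behind load balancers.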
3. Whois Lookup:
Whois is a protocol for querying a domain's IP and ownership information. It effectively acts as a database for checking whether a domain has been registered and for retrieving registration details such as the registrant and the registrar.
The python-whois module provides Whois lookups in Python.
Code:
from whois import whois

data = whois('www.baidu.com')
print(data)
Output:
{
  "domain_name": ["BAIDU.COM", "baidu.com"],
  "registrar": "MarkMonitor, Inc.",
  "whois_server": "whois.markmonitor.com",
  "referral_url": null,
  "updated_date": ["2020-12-09 04:04:41", "2021-04-07 12:52:21"],
  "creation_date": ["1999-10-11 11:05:17", "1999-10-11 04:05:17"],
  "expiration_date": ["2026-10-11 11:05:17", "2026-10-11 00:00:00"],
  "name_servers": ["NS1.BAIDU.COM", "NS2.BAIDU.COM", "NS3.BAIDU.COM", "NS4.BAIDU.COM", "NS7.BAIDU.COM", "ns3.baidu.com", "ns2.baidu.com", "ns7.baidu.com", "ns1.baidu.com", "ns4.baidu.com"],
  "status": [
    "clientDeleteProhibited https://icann.org/epp#clientDeleteProhibited",
    "clientTransferProhibited https://icann.org/epp#clientTransferProhibited",
    "clientUpdateProhibited https://icann.org/epp#clientUpdateProhibited",
    "serverDeleteProhibited https://icann.org/epp#serverDeleteProhibited",
    "serverTransferProhibited https://icann.org/epp#serverTransferProhibited",
    "serverUpdateProhibited https://icann.org/epp#serverUpdateProhibited",
    "clientUpdateProhibited (https://www.icann.org/epp#clientUpdateProhibited)",
    "clientTransferProhibited (https://www.icann.org/epp#clientTransferProhibited)",
    "clientDeleteProhibited (https://www.icann.org/epp#clientDeleteProhibited)",
    "serverUpdateProhibited (https://www.icann.org/epp#serverUpdateProhibited)",
    "serverTransferProhibited (https://www.icann.org/epp#serverTransferProhibited)",
    "serverDeleteProhibited (https://www.icann.org/epp#serverDeleteProhibited)"
  ],
  "emails": ["abusecomplaints@markmonitor.com", "whoisrequest@markmonitor.com"],
  "dnssec": "unsigned",
  "name": null,
  "org": "Beijing Baidu Netcom Science Technology Co., Ltd.",
  "address": null,
  "city": null,
  "state": "Beijing",
  "zipcode": null,
  "country": "CN"
}
Subdomain Mining:
1. Overview:
Domain names form a hierarchy: top-level domains, then second-level domains, and so on.
A subdomain is the level directly below its parent domain (for example, news.baidu.com is a subdomain of baidu.com).
During testing, if the target's main site yields no exploitable weaknesses, the usual next step is to dig into the target system's subdomains.
There are several ways to mine subdomains, for example search engines, subdomain brute-forcing, and dictionary lookups.
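To make the hierarchy concrete, here is a minimal sketch (not part of the original tooling) that splits a hostname into its subdomain part and its registered domain. The "last two labels" rule used here is a deliberate simplification: it fails for multi-part suffixes such as .com.cn, where a real tool would consult the Public Suffix List.

```python
def split_domain(hostname):
    """Naively split a hostname into (subdomain, registered domain).

    Assumes the registered domain is the last two labels, which is wrong for
    multi-part suffixes such as .com.cn -- a real tool would use the Public
    Suffix List instead.
    """
    labels = hostname.split('.')
    registered = '.'.join(labels[-2:])
    subdomain = '.'.join(labels[:-2])
    return subdomain, registered

print(split_domain('news.baidu.com'))   # ('news', 'baidu.com')
```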
2. Writing a simple subdomain-mining tool in Python:
(using https://cn.bing.com/ as the search engine)
Code:
# coding=gbk
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse
import sys

def Bing_Search(site, pages):
    Subdomain = []  # collect subdomains in a list
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'Referer': 'https://cn.bing.com/',
        'Cookie': 'MUID=37FA745F1005602C21A27BB3117A61A3; SRCHD=AF=NOFORM; SRCHUID=V=2&GUID=DA7BDD699AFB4AEB8C68A0B4741EFA74&dmnchg=1; MUIDB=37FA745F1005602C21A27BB3117A61A3; ULC=P=9FD9|1:1&H=9FD9|1:1&T=9FD9|1:1; PPLState=1; ANON=A=CEC39B849DEE39838493AF96FFFFFFFF&E=1943&W=1; NAP=V=1.9&E=18e9&C=B8-HXGvKTE_2lQJ0I3OvbJcIE8caEa9H4f3XNrd3z07nnV3pAxmVJQ&W=1; _tarLang=default=en; _TTSS_IN=hist=WyJ6aC1IYW5zIiwiYXV0by1kZXRlY3QiXQ==; _TTSS_OUT=hist=WyJlbiJd; ABDEF=V=13&ABDV=13&MRB=1618913572156&MRNB=0; KievRPSSecAuth=FABSARRaTOJILtFsMkpLVWSG6AN6C/svRwNmAAAEgAAACPyKw8I/CYhDEAFiUHPfZQSWnp%2BMm43NyhmcUtEqcGeHpvygEOz6CPQIUrTCcE3VESTgWkhXpYVdYAKRL5u5EH0y3%2BmSTi5KxbOq5zlLxOf61W19jGuTQGjb3TZhsv5Wb58a2I8NBTwIh/cFFvuyqDM11s7xnw/ZZoqc9tNuD8ZG9Hi29RgIeOdoSL/Kzz5Lwb/cfSW6GbawOVtMcToRJr20K0C0zGzLhxA7gYH9CxajTo7w5kRx2/b/QjalnzUh7lvZCNrF5naagj10xHhZyHItlNtjNe3yqqLyLZmgNrzT8o7QWfpJWHqAak4AFt3nY9R0NGLHM6UxPC8ph9hEaYbWtIsY7JNvVYFwbDk6o4oqu33kHeyqW/JTVhQACnpn2v74dZzvk4xRp%2BpcQIoRIzI%3D; _U=1ll1JNraa8gnrWOg3NTDw_PUniDnXYIikDzB-R_hVgutXRRVFcrnaPKxVBXA1w-dBZJsJJNfk6vGHSqJtUsLXvZswsd5A1xFvQ_V_nUInstIfDUs7q7FyY2DmvDRlfMIqbgdt-KEqazoz-r_TLWScg4_WDNFXRwg6Ga8k2cRyOTfGNkon7kVCJ7IoPDTAdqdP; WLID=kQRArdi2czxUqvURk62VUr88Lu/DLn6bFfcwTmB8EoKbi3UZYvhKiOCdmPbBTs0PQ3jO42l3O5qWZgTY4FNT8j837l8J9jp0NwVh2ytFKZ4=; _EDGE_S=SID=01830E382F4863360B291E1B2E6662C7; SRCHS=PC=ATMM; WLS=C=3d04cfe82d8de394&N=%e5%81%a5; SRCHUSR=DOB=20210319&T=1619277515000&TPC=1619267174000&POEX=W; SNRHOP=I=&TS=; _SS=PC=ATMM&SID=01830E382F4863360B291E1B2E6662C7&bIm=656; ipv6=hit=1619281118251&t=4; SRCHHPGUSR=SRCHLANGV2=zh-Hans&BRW=W&BRH=S&CW=1462&CH=320&DPR=1.25&UTC=480&DM=0&WTS=63754766339&HV=1619277524&BZA=0&TH=ThAb5&NEWWND=1&NRSLT=-1&LSL=0&SRCHLANG=&AS=1&NNT=1&HAP=0&VSRO=0'
    }
    for i in range(1, int(pages)+1):
        url = "https://cn.bing.com/search?q=site%3a" + site + "&go=Search&qs=ds&first=" + str((int(i)-1)*10) + "&FORM=PERE"
        html = requests.get(url, headers=headers)
        soup = BeautifulSoup(html.content, 'html.parser')
        job_bt = soup.findAll('h2')
        for result in job_bt:  # renamed from i to avoid shadowing the page counter
            link = result.a.get('href')
            domain = str(urlparse(link).scheme + "://" + urlparse(link).netloc)
            if domain not in Subdomain:
                Subdomain.append(domain)
                print(domain)
    return Subdomain

if __name__ == '__main__':
    if len(sys.argv) == 3:
        site = sys.argv[1]
        pages = sys.argv[2]
    else:
        print("usage: %s baidu.com 10" % sys.argv[0])  # print usage help
        sys.exit(-1)
    Subdomain = Bing_Search(site, pages)  # use the command-line arguments instead of hard-coded values
Output: the discovered subdomains, printed one per line.
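The tool above leans on urllib.parse.urlparse to reduce each result link to its scheme plus host before deduplication; the step can be seen in isolation:

```python
from urllib.parse import urlparse

link = 'https://news.baidu.com/world?a=1'
parsed = urlparse(link)
domain = parsed.scheme + "://" + parsed.netloc   # drop the path and query string
print(domain)   # https://news.baidu.com
```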
郵件爬?。?br />
1、概述:
針對(duì)目標(biāo)系統(tǒng)進(jìn)行滲透的過(guò)程中,如果目標(biāo)服務(wù)器安全性很高,通過(guò)服務(wù)器很難獲取目標(biāo)權(quán)限時(shí),通常會(huì)采用社工的方式對(duì)目標(biāo)服務(wù)進(jìn)行進(jìn)一步攻擊。
針對(duì)搜索界面的相關(guān)郵件信息進(jìn)行爬取、處理等操作之后。利用獲得的郵箱賬號(hào)批量發(fā)送釣魚郵件,誘騙、欺詐目標(biāo)用戶或管理員進(jìn)行賬號(hào)登錄或點(diǎn)擊執(zhí)行,進(jìn)而獲取目標(biāo)系統(tǒng)的其權(quán)限。
該郵件采集工具所用到的相關(guān)庫(kù)函數(shù)如下:
import sys
import getopt
import requests
from bs4 import BeautifulSoup
import re
2. Walkthrough:
Step 1: At program start, the start() function is run inside a try/except block, so a message is printed if execution is interrupted rather than dying with a traceback.
External arguments arrive through sys.argv: sys.argv[0] is the path of the script itself, and sys.argv[1:] is the list of every command-line argument that follows it.
The code is as follows:
if __name__ == '__main__':
    # catch interruptions and other exceptions
    try:
        start(sys.argv[1:])
    except:
        print("interrupted by user, killing all threads ... ")
Step 2: Handle the command-line arguments. This uses getopt.getopt(), which supports two option formats:
a short option is "-" followed by a single letter;
a long option is "--" followed by a word.
getopt returns opts, a list of two-element tuples of the form (option string, attached argument); when an option carries no attached argument, the argument is the empty string. A for loop then walks opts and assigns each value to its variable.
The code is as follows:
def start(argv):
    url = ""
    pages = ""
    if len(sys.argv) < 2:
        print("-h for help\n")
        sys.exit()
    # parse options, exiting on bad input
    try:
        banner()
        opts, args = getopt.getopt(argv, "-u:-p:-h")
    except:
        print('Error: invalid argument')
        sys.exit()
    for opt, arg in opts:
        if opt == "-u":
            url = arg
        elif opt == "-p":
            pages = arg
        elif opt == "-h":
            usage()
    launcher(url, pages)
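getopt's behavior is easy to check offline by handing it a hand-built argument list in place of sys.argv[1:]. The plain optstring "u:p:h" used in this sketch declares that -u and -p take arguments while -h does not:

```python
import getopt

# simulate: script.py -u www.baidu.com -p 10
opts, args = getopt.getopt(['-u', 'www.baidu.com', '-p', '10'], "u:p:h")
for opt, arg in opts:
    print(opt, arg)
# -u www.baidu.com
# -p 10
```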
Step 3: Print help information to make the tool more readable and easier to use. To keep the output clean and attractive, ANSI escape sequences can set the text color.
The opening sequence takes three parameters: display mode, foreground color, and background color. All three are optional, and any one may be given alone. The closing sequence can be omitted, but for tidiness it is good practice to end with "\033[0m".
The code is as follows:
print('\033[0;30;41m 3cH0 - Nu1L \033[0m')
print('\033[0;30;42m 3cH0 - Nu1L \033[0m')
print('\033[0;30;43m 3cH0 - Nu1L \033[0m')
print('\033[0;30;44m 3cH0 - Nu1L \033[0m')

# banner
def banner():
    print('\033[1;34m ################################ \033[0m\n')
    print('\033[1;34m 3cH0 - Nu1L \033[0m\n')
    print('\033[1;34m ################################ \033[0m\n')

# usage
def usage():
    print('-h: --help   help;')
    print('-u: --url    domain;')
    print('-p: --pages  pages;')
    print('eg: python -u "www.baidu.com" -p 100' + '\n')
    sys.exit()
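The escape-sequence layout described above (display mode, foreground, background, terminated by "\033[0m") can be wrapped in a small helper; the default values here are arbitrary choices for illustration:

```python
def colorize(text, style=0, fg=37, bg=40):
    """Build an ANSI-colored string: ESC[<style>;<fg>;<bg>m ... ESC[0m."""
    return '\033[%d;%d;%dm%s\033[0m' % (style, fg, bg, text)

# bold (1), blue foreground (34), red background (41)
print(colorize('3cH0 - Nu1L', style=1, fg=34, bg=41))
```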
Step 4: Pick the search keywords for email addresses and call bing_search() and baidu_search() to query Bing and Baidu. The two result lists are concatenated, deduplicated, and each new address is printed and saved.
The code is as follows:
# launcher: merge and deduplicate results from both engines
def launcher(url, pages):
    email_num = []
    key_words = ['email', 'mail', 'mailbox', '郵件', '郵箱', 'postbox']
    for page in range(1, int(pages)+1):
        for key_word in key_words:
            bing_emails = bing_search(url, page, key_word)
            baidu_emails = baidu_search(url, page, key_word)
            sum_emails = bing_emails + baidu_emails
            for email in sum_emails:
                if email not in email_num:
                    print(email)
                    with open('data.txt', 'a+') as f:
                        f.write(email + '\n')
                    email_num.append(email)
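The duplicate check in launcher() scans the email_num list on every hit, which is O(n) per lookup. The same order-preserving deduplication can be sketched with a set for O(1) membership tests:

```python
def dedup(items):
    """Remove duplicates while preserving first-seen order."""
    seen = set()
    result = []
    for item in items:
        if item not in seen:   # set lookup instead of a list scan
            seen.add(item)
            result.append(item)
    return result

print(dedup(['a@x.com', 'b@x.com', 'a@x.com']))   # ['a@x.com', 'b@x.com']
```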
Step 5: Harvest emails through Bing. Bing has anti-crawling protection that inspects the Referer header and cookies to decide whether a request comes from a crawler.
Supplying an explicit Referer and letting requests.session() pick up cookies automatically is enough to get past Bing's anti-crawling checks.
The code is as follows:
# Bing_search
def bing_search(url, page, key_word):
    referer = "http://cn.bing.com/search?q=email+site%3abaidu.com&sp=-1&pq=emailsite%3abaidu.com&first=1&FORM=PERE1"
    conn = requests.session()
    bing_url = "http://cn.bing.com/search?q=" + key_word + "+site%3a" + url + "&qa=n&sp=-1&pq=" + key_word + "site%3a" + url + "&first=" + str((page-1)*10) + "&FORM=PERE1"
    conn.get('http://cn.bing.com', headers=headers(referer))  # first request collects cookies
    r = conn.get(bing_url, stream=True, headers=headers(referer), timeout=8)
    emails = search_email(r.text)
    return emails
Step 6: Harvest emails through Baidu. Baidu's anti-crawling protection goes further: besides validating the Referer and cookies, its result pages build links dynamically with JavaScript, so the information cannot be read straight off the result page.
The workaround is to extract each result link first and then request that link directly, sidestepping the dynamic loading.
The code is as follows:
# Baidu_search
def baidu_search(url, page, key_word):
    email_list = []
    referer = "https://www.baidu.com/s?wd=email+site%3Abaidu.com&pn=1"
    baidu_url = "https://www.baidu.com/s?wd=" + key_word + "+site%3A" + url + "&pn=" + str((page-1)*10)
    conn = requests.session()
    conn.get(baidu_url, headers=headers(referer))
    r = conn.get(baidu_url, headers=headers(referer))
    soup = BeautifulSoup(r.text, 'lxml')
    tagh3 = soup.find_all('h3')
    for h3 in tagh3:
        href = h3.find('a').get('href')
        try:
            # follow each result link, since the result page itself is built by JavaScript
            r = requests.get(href, headers=headers(referer))
            emails = search_email(r.text)
        except Exception:
            emails = []
        for email in emails:
            email_list.append(email)
    return email_list
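The link-extraction step can be exercised offline on a fragment of result-page markup. The HTML below is a simplified stand-in for Baidu's real result page (which is far more complex), and html.parser is used so no extra parser such as lxml is required:

```python
from bs4 import BeautifulSoup

html = '''
<h3 class="t"><a href="http://www.baidu.com/link?url=abc">Result one</a></h3>
<h3 class="t"><a href="http://www.baidu.com/link?url=def">Result two</a></h3>
'''
soup = BeautifulSoup(html, 'html.parser')
links = [h3.find('a').get('href') for h3 in soup.find_all('h3')]
print(links)
```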
Step 7: Extract the email addresses with a regular expression. The pattern here can be swapped for one matching the target company's mailbox format.
The code is as follows:
# search_email
def search_email(html):
    emails = re.findall(r"[a-z0-9\.\-+_]+@[a-z0-9\.\-+_]+\.[a-z]+", html, re.I)
    return emails

# headers(referer)
def headers(referer):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36',
        'Accept': 'application/json, text/javascript, */*; q=0.01',
        'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8',
        'Accept-Encoding': 'gzip, deflate, br',
        'Referer': referer
    }
    return headers
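A working form of the extraction step can be tried offline. Note that the pattern must be passed to re.findall as an argument separate from the html string (concatenating the two, as sometimes appears in reprints of this code, breaks the call); the pattern matches common address shapes but is deliberately loose and not RFC 5322-complete:

```python
import re

def search_email(html):
    # address-like tokens: local part, @, domain labels, a trailing alphabetic TLD
    return re.findall(r"[a-z0-9\.\-+_]+@[a-z0-9\.\-+_]+\.[a-z]+", html, re.I)

sample = '<p>Contact: abuse@example.com or <b>Admin@Test.org</b></p>'
print(search_email(sample))   # ['abuse@example.com', 'Admin@Test.org']
```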
3. Complete code:
# coding=gbk
import sys
import getopt
import requests
from bs4 import BeautifulSoup
import re

# entry point: parse the user's arguments
def start(argv):
    url = ""
    pages = ""
    if len(sys.argv) < 2:
        print("-h for help\n")
        sys.exit()
    # parse options, exiting on bad input
    try:
        banner()
        opts, args = getopt.getopt(argv, "-u:-p:-h")
    except:
        print('Error: invalid argument')
        sys.exit()
    for opt, arg in opts:
        if opt == "-u":
            url = arg
        elif opt == "-p":
            pages = arg
        elif opt == "-h":
            usage()
    launcher(url, pages)

# banner
def banner():
    print('\033[1;34m ################################ \033[0m\n')
    print('\033[1;34m 3cH0 - Nu1L \033[0m\n')
    print('\033[1;34m ################################ \033[0m\n')

# usage
def usage():
    print('-h: --help   help;')
    print('-u: --url    domain;')
    print('-p: --pages  pages;')
    print('eg: python -u "www.baidu.com" -p 100' + '\n')
    sys.exit()

# launcher: merge and deduplicate results from both engines
def launcher(url, pages):
    email_num = []
    key_words = ['email', 'mail', 'mailbox', '郵件', '郵箱', 'postbox']
    for page in range(1, int(pages)+1):
        for key_word in key_words:
            bing_emails = bing_search(url, page, key_word)
            baidu_emails = baidu_search(url, page, key_word)
            sum_emails = bing_emails + baidu_emails
            for email in sum_emails:
                if email not in email_num:
                    print(email)
                    with open('data.txt', 'a+') as f:
                        f.write(email + '\n')
                    email_num.append(email)

# Bing_search
def bing_search(url, page, key_word):
    referer = "http://cn.bing.com/search?q=email+site%3abaidu.com&sp=-1&pq=emailsite%3abaidu.com&first=1&FORM=PERE1"
    conn = requests.session()
    bing_url = "http://cn.bing.com/search?q=" + key_word + "+site%3a" + url + "&qa=n&sp=-1&pq=" + key_word + "site%3a" + url + "&first=" + str((page-1)*10) + "&FORM=PERE1"
    conn.get('http://cn.bing.com', headers=headers(referer))  # first request collects cookies
    r = conn.get(bing_url, stream=True, headers=headers(referer), timeout=8)
    emails = search_email(r.text)
    return emails

# Baidu_search
def baidu_search(url, page, key_word):
    email_list = []
    referer = "https://www.baidu.com/s?wd=email+site%3Abaidu.com&pn=1"
    baidu_url = "https://www.baidu.com/s?wd=" + key_word + "+site%3A" + url + "&pn=" + str((page-1)*10)
    conn = requests.session()
    conn.get(baidu_url, headers=headers(referer))
    r = conn.get(baidu_url, headers=headers(referer))
    soup = BeautifulSoup(r.text, 'lxml')
    tagh3 = soup.find_all('h3')
    for h3 in tagh3:
        href = h3.find('a').get('href')
        try:
            # follow each result link, since Baidu's result page is built by JavaScript
            r = requests.get(href, headers=headers(referer))
            emails = search_email(r.text)
        except Exception:
            emails = []
        for email in emails:
            email_list.append(email)
    return email_list

# search_email
def search_email(html):
    emails = re.findall(r"[a-z0-9\.\-+_]+@[a-z0-9\.\-+_]+\.[a-z]+", html, re.I)
    return emails

# headers(referer)
def headers(referer):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36',
        'Accept': 'application/json, text/javascript, */*; q=0.01',
        'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8',
        'Accept-Encoding': 'gzip, deflate, br',
        'Referer': referer
    }
    return headers

if __name__ == '__main__':
    # catch interruptions and other exceptions
    try:
        start(sys.argv[1:])
    except:
        print("interrupted by user, killing all threads ... ")
This concludes the detailed walkthrough of passive information gathering in Python; for more material on the topic, see the other related articles on 腳本之家.