Downloading Files from URLs in Python and Saving Them to the Corresponding Directories
Introduction
In practice you often run into datasets, image datasets in particular, whose samples are stored as URLs in txt files. To make later analysis easier, the files need to be downloaded and saved into folders by category. This article uses the image-classification dataset provided by Alexander Kim on GitHub as an example, downloading the image samples it lists and saving them by category.
Environment: Python 3.6.5, Anaconda, VSCode
1. Download the dataset files
Create a project folder, download the raw_data folder from the GitHub project mentioned above, and save it into the project directory.
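If you prefer to script this step, the repository can also be cloned programmatically; a minimal sketch, assuming git is installed and using the mirror address given in the update note at the end of this article:

import subprocess

# Clone the dataset repository (mirror address from the update note at the end of this article);
# only its raw_data folder is needed for the rest of this article.
subprocess.run(
    ["git", "clone", "https://github.com/ZQ-Qi/nsfw_data_scrapper.git"],
    check=True,
)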

2. Collect the sample file locations
Write get_doc_path.py, which, starting from a given root directory, collects every dataset file in that directory and its subdirectories.
import os


def get_file(root_path, all_files=None):
    '''
    Recursively walk the given directory and its subdirectories
    and collect the path of every file it contains.
    '''
    if all_files is None:  # avoid sharing a mutable default dict between calls
        all_files = {}
    files = os.listdir(root_path)
    for file in files:
        if not os.path.isdir(root_path + '/' + file):  # not a dir
            all_files[file] = root_path + '/' + file
        else:  # is a dir
            get_file(root_path + '/' + file, all_files)
    return all_files


if __name__ == '__main__':
    path = './raw_data'
    print(get_file(path))
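For reference, the standard library's os.walk can build the same {filename: path} mapping without writing the recursion by hand; a minimal equivalent sketch (not part of the original scripts):

import os

def get_file_walk(root_path):
    '''Collect {filename: path} for every file under root_path using os.walk.'''
    all_files = {}
    for dirpath, _, filenames in os.walk(root_path):
        for name in filenames:
            all_files[name] = os.path.join(dirpath, name)
    return all_files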
3. Download the files
3.1 Read the URL list from each txt file
for filename, path in paths.items():
    print('reading file: {}'.format(filename))
    with open(path, 'r') as f:
        lines = f.readlines()
        url_list = []
        for line in lines:
            url_list.append(line.strip('\n'))
        print(url_list)
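The read-and-strip step can also be written as a single list comprehension; an equivalent sketch (the extra condition additionally skips blank lines):

with open(path, 'r') as f:
    url_list = [line.strip() for line in f if line.strip()]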
3.2 Create the destination folder
foldername = "./picture_get_by_url/pic_download/{}".format(filename.split('.')[0])
if not os.path.exists(foldername):
    print("Selected folder does not exist, trying to create it.")
    os.makedirs(foldername)
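On Python 3.2 and later, the existence check and the creation can be collapsed into one call; an equivalent sketch:

os.makedirs(foldername, exist_ok=True)  # creates the folder only if it does not exist yet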
3.3 Download the images
import urllib.request

def get_pic_by_url(folder_path, lists):
    if not os.path.exists(folder_path):
        print("Selected folder does not exist, trying to create it.")
        os.makedirs(folder_path)
    for url in lists:
        print("Try downloading file: {}".format(url))
        filename = url.split('/')[-1]
        filepath = folder_path + '/' + filename
        if os.path.exists(filepath):
            print("File already exists, skipping.")
        else:
            try:
                urllib.request.urlretrieve(url, filename=filepath)
            except Exception as e:
                print("Error occurred when downloading file, error message:")
                print(e)
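Note that urlretrieve has no timeout parameter, so a single unresponsive server can stall the whole run. If the third-party requests package is available, the download step could be replaced with something like the following sketch (my own variant, not part of the original script):

import requests

def download_with_timeout(url, filepath, timeout=10):
    '''Download url to filepath, giving up after `timeout` seconds per request.'''
    try:
        resp = requests.get(url, timeout=timeout)
        resp.raise_for_status()  # raise on HTTP error codes such as 502
        with open(filepath, 'wb') as out:
            out.write(resp.content)
    except requests.RequestException as e:
        print("Error occurred when downloading file, error message:")
        print(e)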
4. Complete source code
4.1 get_doc_path.py
import os


def get_file(root_path, all_files=None):
    '''
    Recursively walk the given directory and its subdirectories
    and collect the path of every file it contains.
    '''
    if all_files is None:  # avoid sharing a mutable default dict between calls
        all_files = {}
    files = os.listdir(root_path)
    for file in files:
        if not os.path.isdir(root_path + '/' + file):  # not a dir
            all_files[file] = root_path + '/' + file
        else:  # is a dir
            get_file(root_path + '/' + file, all_files)
    return all_files


if __name__ == '__main__':
    path = './raw_data'
    print(get_file(path))
4.2 get_pic.py
import get_doc_path
import os
import urllib.request


def get_pic_by_url(folder_path, lists):
    if not os.path.exists(folder_path):
        print("Selected folder does not exist, trying to create it.")
        os.makedirs(folder_path)
    for url in lists:
        print("Try downloading file: {}".format(url))
        filename = url.split('/')[-1]
        filepath = folder_path + '/' + filename
        if os.path.exists(filepath):
            print("File already exists, skipping.")
        else:
            try:
                urllib.request.urlretrieve(url, filename=filepath)
            except Exception as e:
                print("Error occurred when downloading file, error message:")
                print(e)


if __name__ == "__main__":
    root_path = './picture_get_by_url/raw_data'
    paths = get_doc_path.get_file(root_path)
    print(paths)
    for filename, path in paths.items():
        print('reading file: {}'.format(filename))
        with open(path, 'r') as f:
            lines = f.readlines()
            url_list = []
            for line in lines:
                url_list.append(line.strip('\n'))
        foldername = "./picture_get_by_url/pic_download/{}".format(filename.split('.')[0])
        get_pic_by_url(foldername, url_list)
4.3 Output
Run get_pic.py:
If the program stops unexpectedly and is run again, it automatically skips the files already present in the folder and continues downloading the ones that are missing.
{'urls_drawings.txt': './picture_get_by_url/raw_data/drawings/urls_drawings.txt', 'urls_hentai.txt': './picture_get_by_url/raw_data/hentai/urls_hentai.txt', 'urls_neutral.txt': './picture_get_by_url/raw_data/neutral/urls_neutral.txt', 'urls_porn.txt': './picture_get_by_url/raw_data/porn/urls_porn.txt', 'urls_sexy.txt': './picture_get_by_url/raw_data/sexy/urls_sexy.txt'}
reading file: urls_drawings.txt
Try downloading file: http://41.media.tumblr.com/xxxxxx.jpg
Try downloading file: http://41.media.tumblr.com/xxxxxx.jpg
Try downloading file: http://ak1.polyvoreimg.com/cgi/img-thing/size/l/tid/xxxxxx.jpg
Error occurred when downloading file, error message:
HTTP Error 502: No data received from server or forwarder
Try downloading file: http://akicocotte.weblike.jp/gaugau/xxxxxx.jpg
Try downloading file: http://animewriter.files.wordpress.com/2009/01/nagisa-xxxxxx-xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Try downloading file: http://cdn.awwni.me/xxxxxx.jpg
Note: because of the nature of the dataset, the specific addresses above are masked with xxxxx, and the example project is no longer available, but the method can still be used as a reference.
Update 2020-09-23: the dataset is available at https://github.com/ZQ-Qi/nsfw_data_scrapper; if you just want to study and practice the code in this article, you can download that dataset and try it out.
This concludes this article on downloading files from URLs in Python and saving them to the corresponding directories.