
Scraping download links from a movie site with Python

Updated: 2021-05-28 10:58:51   Author: GriffinLewis2001
A simple scraper that collects download links from a movie site. It is well suited to beginners; interested readers can use it as a reference.

Project repository:

https://github.com/GriffinLewis2001/Python_movie_links_scraper

Importing the modules

import requests, re
from requests.cookies import RequestsCookieJar
from fake_useragent import UserAgent  # imported but never used below
import os, pickle, threading, time    # imported but never used below
import concurrent.futures             # imported but never used below
from goto import with_goto            # enables the label/goto syntax used in main()
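
Of these, requests, fake_useragent, and goto are third-party packages; the goto import is provided by the goto-statement distribution on PyPI. Assuming a standard pip setup, they can be installed with:

pip install requests fake-useragent goto-statement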

Main scraper code

def get_content_url_name(url):
    send_headers = {
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36",
        "Connection": "keep-alive",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
        "Accept-Language": "zh-CN,zh;q=0.8"
    }
    cookie_jar = RequestsCookieJar()
    cookie_jar.set("mttp", "9740fe449238", domain="www.yikedy.co")
    # Pass the headers by keyword: the second positional argument of
    # requests.get() is params, not headers.
    response = requests.get(url, headers=send_headers, cookies=cookie_jar)
    response.encoding = 'utf-8'
    content = response.text
    # Extract (link, title) pairs from the thumbnail anchors.
    reg = re.compile(r'<a href="(.*?)" class="thumbnail-img" title="(.*?)"')
    url_name_list = reg.findall(content)
    return url_name_list

def get_content(url):
    send_headers = {
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36",
        "Connection": "keep-alive",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
        "Accept-Language": "zh-CN,zh;q=0.8"
    }
    cookie_jar = RequestsCookieJar()
    cookie_jar.set("mttp", "9740fe449238", domain="www.yikedy.co")
    response = requests.get(url, headers=send_headers, cookies=cookie_jar)
    response.encoding = 'utf-8'
    return response.text



def search_durl(url):
    content = get_content(url)
    # The hex escapes below match the obfuscated key 'decriptParam' as it
    # appears (still escaped) in the page's JavaScript.
    reg = re.compile(r"{'\\x64\\x65\\x63\\x72\\x69\\x70\\x74\\x50\\x61\\x72\\x61\\x6d':'(.*?)'}")
    index = reg.findall(content)[0]
    # Strip the trailing '.html' (5 characters) and build the download-list URL.
    download_url = url[:-5] + r'/downloadList?decriptParam=' + index
    content = get_content(download_url)
    reg1 = re.compile(r'title=".*?" href="(.*?)"')
    download_list = reg1.findall(content)
    return download_list
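
The hex-escaped pattern looks opaque, but it simply spells out the parameter name used two lines later. Decoding it in an interpreter (purely illustrative, not part of the scraper) makes this visible:

# Python decodes the \xNN escapes to plain ASCII:
print('\x64\x65\x63\x72\x69\x70\x74\x50\x61\x72\x61\x6d')  # prints: decriptParam

The regex doubles the backslashes because the page's JavaScript contains the escapes as literal text, not as decoded characters.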
def get_page(url):
    send_headers = {
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36",
        "Connection": "keep-alive",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
        "Accept-Language": "zh-CN,zh;q=0.8"
    }
    cookie_jar = RequestsCookieJar()
    cookie_jar.set("mttp", "9740fe449238", domain="www.yikedy.co")
    response = requests.get(url, headers=send_headers, cookies=cookie_jar)
    response.encoding = 'utf-8'
    content = response.text
    # Extract (link, title, link text) triples from the search-result anchors.
    reg = re.compile(r'<a target="_blank" class="title" href="(.*?)" title="(.*?)">(.*?)</a>')
    url_name_list = reg.findall(content)
    return url_name_list
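
get_content_url_name, get_content, and get_page all repeat the same header and cookie setup. As a minimal consolidation sketch (the fetch_html helper is a suggested name, not part of the original project), the shared logic could live in one place:

import requests
from requests.cookies import RequestsCookieJar

def fetch_html(url):
    # Browser-like headers plus the site cookie, shared by every request.
    send_headers = {
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36",
        "Connection": "keep-alive",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
        "Accept-Language": "zh-CN,zh;q=0.8"
    }
    cookie_jar = RequestsCookieJar()
    cookie_jar.set("mttp", "9740fe449238", domain="www.yikedy.co")
    response = requests.get(url, headers=send_headers, cookies=cookie_jar)
    response.encoding = 'utf-8'
    return response.text

Each of the three functions would then reduce to one regex applied to fetch_html(url).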
@with_goto
def main():
    print("=========================================================")
    name = input("Enter a title to search for (type quit to exit): ")
    if name == "quit":
        exit()
    url = "http://www.yikedy.co/search?query=" + name
    dlist = get_page(url)
    print("\n")
    if dlist:
        num = 0
        count = 0
        for i in dlist:
            if name in i[1]:
                print(f"{num} {i[1]}")
                num += 1
            elif num == 0 and count == len(dlist) - 1:
                # No result title contained the query: jump to the "not found" message.
                goto .end
            count += 1
        dest = int(input("\n\nEnter the number of the show (enter 100 to skip this search): "))
        if dest == 100:
            goto .end
        x = 0
        print("\nDownload links:\n")
        for i in dlist:
            if name in i[1]:
                if x == dest:
                    for durl in search_durl(i[0]):
                        print(f"{durl}\n")
                    print("\n")
                    break
                x += 1
    else:
        # Both goto jumps above land on this label, so the message also
        # covers "no match" and "skipped", not just an empty result list.
        label .end
        print("Nothing found (or nothing you want to watch)\n")

Complete code

import requests, re
from requests.cookies import RequestsCookieJar
from fake_useragent import UserAgent  # imported but never used below
import os, pickle, threading, time    # imported but never used below
import concurrent.futures             # imported but never used below
from goto import with_goto            # enables the label/goto syntax used in main()

def get_content_url_name(url):
    send_headers = {
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36",
        "Connection": "keep-alive",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
        "Accept-Language": "zh-CN,zh;q=0.8"
    }
    cookie_jar = RequestsCookieJar()
    cookie_jar.set("mttp", "9740fe449238", domain="www.yikedy.co")
    # Pass the headers by keyword: the second positional argument of
    # requests.get() is params, not headers.
    response = requests.get(url, headers=send_headers, cookies=cookie_jar)
    response.encoding = 'utf-8'
    content = response.text
    # Extract (link, title) pairs from the thumbnail anchors.
    reg = re.compile(r'<a href="(.*?)" class="thumbnail-img" title="(.*?)"')
    url_name_list = reg.findall(content)
    return url_name_list

def get_content(url):
    send_headers = {
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36",
        "Connection": "keep-alive",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
        "Accept-Language": "zh-CN,zh;q=0.8"
    }
    cookie_jar = RequestsCookieJar()
    cookie_jar.set("mttp", "9740fe449238", domain="www.yikedy.co")
    response = requests.get(url, headers=send_headers, cookies=cookie_jar)
    response.encoding = 'utf-8'
    return response.text



def search_durl(url):
    content = get_content(url)
    # The hex escapes below match the obfuscated key 'decriptParam' as it
    # appears (still escaped) in the page's JavaScript.
    reg = re.compile(r"{'\\x64\\x65\\x63\\x72\\x69\\x70\\x74\\x50\\x61\\x72\\x61\\x6d':'(.*?)'}")
    index = reg.findall(content)[0]
    # Strip the trailing '.html' (5 characters) and build the download-list URL.
    download_url = url[:-5] + r'/downloadList?decriptParam=' + index
    content = get_content(download_url)
    reg1 = re.compile(r'title=".*?" href="(.*?)"')
    download_list = reg1.findall(content)
    return download_list

def get_page(url):
    send_headers = {
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36",
        "Connection": "keep-alive",
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8",
        "Accept-Language": "zh-CN,zh;q=0.8"
    }
    cookie_jar = RequestsCookieJar()
    cookie_jar.set("mttp", "9740fe449238", domain="www.yikedy.co")
    response = requests.get(url, headers=send_headers, cookies=cookie_jar)
    response.encoding = 'utf-8'
    content = response.text
    # Extract (link, title, link text) triples from the search-result anchors.
    reg = re.compile(r'<a target="_blank" class="title" href="(.*?)" title="(.*?)">(.*?)</a>')
    url_name_list = reg.findall(content)
    return url_name_list
@with_goto
def main():
    print("=========================================================")
    name = input("Enter a title to search for (type quit to exit): ")
    if name == "quit":
        exit()
    url = "http://www.yikedy.co/search?query=" + name
    dlist = get_page(url)
    print("\n")
    if dlist:
        num = 0
        count = 0
        for i in dlist:
            if name in i[1]:
                print(f"{num} {i[1]}")
                num += 1
            elif num == 0 and count == len(dlist) - 1:
                # No result title contained the query: jump to the "not found" message.
                goto .end
            count += 1
        dest = int(input("\n\nEnter the number of the show (enter 100 to skip this search): "))
        if dest == 100:
            goto .end
        x = 0
        print("\nDownload links:\n")
        for i in dlist:
            if name in i[1]:
                if x == dest:
                    for durl in search_durl(i[0]):
                        print(f"{durl}\n")
                    print("\n")
                    break
                x += 1
    else:
        # Both goto jumps above land on this label, so the message also
        # covers "no match" and "skipped", not just an empty result list.
        label .end
        print("Nothing found (or nothing you want to watch)\n")

print("本軟件由CLY.所有\(zhòng)n\n")
while(True):
    main()

That concludes this walkthrough of scraping download links from a movie site with Python. For more on scraping download links with Python, see the other related articles on 腳本之家 (jb51).
