
Building a Subdomain Collection Tool in Python

 Updated: 2024-02-23 10:38:41   Author: 卡寧納12
Discovering and managing the attack surface is an essential task in network security, and finding and analyzing domain names is a key step in attack-surface discovery. This article walks through building a subdomain collection tool in Python; read on if that interests you.

Discovering and managing the attack surface is an essential task in network security, and finding and analyzing domain names is a key step in that process. Below we share four methods of domain discovery, each with Python sample code to help you understand and apply them.

1. Extracting Domain Information from the Main Domain's Certificate (Chain of Trust from Root Domain)

import ssl
import OpenSSL

def get_cert_chain(domain):
    # Fetch the PEM certificate the server presents on port 443
    cert = ssl.get_server_certificate((domain, 443))
    x509 = OpenSSL.crypto.load_certificate(OpenSSL.crypto.FILETYPE_PEM, cert)
    # Subject components (e.g. the CN field) often carry domain names
    return [value for value in x509.get_subject().get_components()]

print(get_cert_chain('example.com'))

2. Certificate Transparency Logs

import requests

def query_crt_sh(domain):
    # crt.sh exposes Certificate Transparency log entries as JSON
    url = f"https://crt.sh/?q={domain}&output=json"
    response = requests.get(url, timeout=30)
    try:
        return [result['name_value'] for result in response.json()]
    except ValueError:
        # crt.sh sometimes returns a non-JSON error page
        return []

print(query_crt_sh('example.com'))
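The `name_value` field returned by crt.sh can pack several newline-separated names into a single entry, including wildcard entries and duplicates, so the raw list usually needs post-processing before further probing. A minimal cleanup sketch (the helper name `clean_crt_names` is illustrative, not part of crt.sh's API):

```python
def clean_crt_names(entries):
    """Split, normalize, and deduplicate crt.sh name_value entries."""
    names = set()
    for entry in entries:
        # One entry may hold several newline-separated names
        for name in entry.split('\n'):
            # Lowercase and drop a leading wildcard marker ("*.")
            name = name.strip().lower().lstrip('*.')
            if name:
                names.add(name)
    return sorted(names)

print(clean_crt_names(['*.example.com\nwww.example.com', 'WWW.Example.com']))
```

The output of `query_crt_sh` can be passed straight through this helper to get a deduplicated hostname list.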

3. Webmaster Tools

import requests
from bs4 import BeautifulSoup

def query_webmaster_tools(domain):
    # Scrape the Chinaz whois page for the target domain
    base_url = f"https://whois.chinaz.com/{domain}"
    page = requests.get(base_url, timeout=30)
    bs_obj = BeautifulSoup(page.text, "html.parser")
    return [pre.text for pre in bs_obj.find_all('pre')]

print(query_webmaster_tools('example.com'))

4. Subdomain Enumeration (Brute Force)

Enumerate subdomain prefixes that are common in real-world environments.

import socket

def enum_subdomains(domain):
    common_subdomains = ['www', 'ftp', 'mail', 'webmail', 'admin']
    for subdomain in common_subdomains:
        full_domain = f"{subdomain}.{domain}"
        try:
            # if the subdomain resolves, it exists
            socket.gethostbyname(full_domain)
            print(f"Discovered subdomain: {full_domain}")
        except socket.gaierror:
            pass

enum_subdomains('example.com')
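Resolving candidates one at a time gets slow with larger wordlists. Since DNS lookups are I/O-bound, the loop above parallelizes well with a thread pool; this is a sketch under that assumption, and the `check` parameter is an illustrative hook for swapping in a custom resolver (useful for testing without network access):

```python
import socket
from concurrent.futures import ThreadPoolExecutor

def resolves(host):
    # True if the name resolves to an address
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

def enum_subdomains_concurrent(domain, wordlist, check=resolves, workers=20):
    candidates = [f"{word}.{domain}" for word in wordlist]
    # Run the resolution checks in parallel worker threads
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(check, candidates)
    return [c for c, found in zip(candidates, results) if found]
```

For example, `enum_subdomains_concurrent('example.com', ['www', 'ftp', 'mail'])` returns only the candidates that resolved.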

Choosing tools suited to the target and environment, and digging deeper with them, always helps us uncover the attack surface more effectively. I hope the information above is useful to you.

Final Notes

云圖極速版 supports more than 20 domain-discovery methods, including the ones above, and orchestrates them intelligently and dynamically to maximize discovery coverage. Beyond that, it also supports fully automated discovery and monitoring of IPs, ports, services, websites, components, vulnerabilities, security risks, and other enterprise asset information, automating both attack-surface discovery and attack-surface management.

Additional Methods

Beyond the methods above, here is another Python approach to subdomain collection that may be helpful.

Implementation code

# Import modules
import sys
from threading import Thread
from urllib.parse import urlparse
import requests
from bs4 import BeautifulSoup


# Search Bing for subdomains
def bing_search(site, page):
    headers = {
        'User-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/'
                      '85.0.4183.102 Safari/537.36',
        'Accept-Encoding': 'gzip,deflate',
        'Accept-Language': 'en-US,en;q=0,5',
        'Referer': 'https://cn.bing.com/search?q=site%3Abaidu.com&qs=n&form=QBLH&sp=-1&pq=site%3Abaidu.com'
                   '&sc=0-14&sk=&cvid=852BA524E035477EBE906058D68F4D70',
        'cookie': 'SRCHD=AF=WNSGPH; SRCHUID=V=2&GUID=D1F8852A6B034B4CB229A2323F653242&dmnchg=1; _EDGE_V=1; '
                  'MUID=304D7AA1FB94692B1EB575D7FABA68BD; MUIDB=304D7AA1FB94692B1EB575D7FABA68BD; '
                  '_SS=SID=1C2F6FA53C956FED2CBD60D33DBB6EEE&bIm=75:; ipv6=hit=1604307539716&t=4; '
                  '_EDGE_S=F=1&SID=1C2F6FA53C956FED2CBD60D33DBB6EEE&mkt=zh-cn; SRCHUSR=DOB=20200826&T=1604303946000;'
                  ' SRCHHPGUSR=HV=1604303950&WTS=63739900737&CW=1250&CH=155&DPR=1.5&UTC=480&DM=0&BZA=0&BRW=N&BRH=S'
    }
    for i in range(1, int(page) + 1):
        url = "https://cn.bing.com/search?q=site:" + site + "&go=Search&qs=ds&first=" + str((int(i) - 1) * 10 + 1)
        html = requests.get(url, headers=headers)
        soup = BeautifulSoup(html.content, 'html.parser')

        job_bt = soup.findAll('h2')
        for j in job_bt:
            # Some result headings carry no link; skip those
            if j.a is None:
                continue
            link = j.a.get('href')
            domain = str(urlparse(link).scheme + "://" + urlparse(link).netloc)
            if domain not in Subdomain:
                Subdomain.append(domain)

# Search Baidu for subdomains
def baidu_search(site, page):
    headers = {
        'User-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/'
                      '85.0.4183.102 Safari/537.36',
        'Referer': 'https://www.baidu.com/s?wd=nsfocus'
    }

    for i in range(1, int(page) + 1):
        # Build the search URL for this page of results
        baidu_url = "https://www.baidu.com/s?wd=site:" + site + "&pn=" + str(
            (int(i) - 1) * 10) + "&oq=site:" + site + "&ie=utf-8"
        conn = requests.session()
        resp = conn.get(baidu_url, headers=headers)
        soup = BeautifulSoup(resp.text, 'lxml')
        tagh3 = soup.findAll('h3')
        for h3 in tagh3:
            a_tag = h3.find('a')
            # Some headings have no link; skip them
            if a_tag is None:
                continue
            href = a_tag.get('href')
            # Follow Baidu's redirect to recover the real URL
            resp_site = requests.get(href, headers=headers)
            domain = str(urlparse(resp_site.url).scheme + "://" + urlparse(resp_site.url).netloc)
            # Record new subdomains in the list
            if domain not in Subdomain:
                Subdomain.append(domain)



# Read the saved results back from the file
def read_file():
    with open(r'c:\users\xxxx\desktop\xxx.txt', mode='r') as f:
        for line in f.readlines():
            print(line.strip())


# Write the results to a file
def write_file():
    with open(r'c:\users\xxx\desktop\xxx.txt', mode='w') as f:
        for domain in Subdomain:
            f.write(domain)
            f.write('\n')


if __name__ == '__main__':
    # The user supplies the target site domain and the number of result pages to query
    if len(sys.argv) == 3:
        domain = sys.argv[1]
        num = sys.argv[2]
    else:
        print("Usage: %s baidu.com 10" % sys.argv[0])
        sys.exit(-1)
    Subdomain = []
    # Run both search-engine lookups in separate threads
    bingt = Thread(target=bing_search, args=(domain, num,))
    bait = Thread(target=baidu_search, args=(domain, num,))
    bingt.start()
    bait.start()
    bingt.join()
    bait.join()
    # Write the results to a file
    write_file()
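Note that both threads append to the shared `Subdomain` list. CPython's GIL makes a single `append` atomic, but the check-then-append pattern above is not, so duplicates can slip through when both threads see the same domain at the same moment. A sketch of a lock-protected collector that could replace the bare list (the class name is illustrative):

```python
import threading

class DomainCollector:
    """Thread-safe, deduplicating store for discovered domains."""
    def __init__(self):
        self._seen = set()
        self._lock = threading.Lock()

    def add(self, domain):
        # Check and insert under one lock so no duplicate survives a race
        with self._lock:
            if domain in self._seen:
                return False
            self._seen.add(domain)
            return True

    def items(self):
        with self._lock:
            return sorted(self._seen)
```

Each search function would then call `collector.add(domain)` instead of testing membership and appending in two separate steps.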

This concludes the article on building a subdomain collection tool in Python. For more on subdomain collection with Python, search 脚本之家's earlier articles or browse the related articles below, and we hope you will continue to support 脚本之家!
