
PyHacker: A Guide to Writing a Website Back-End Directory Scanner

Updated: 2022-05-14 16:07:45   Author: 巡安似海
This article introduces how to write a website back-end (admin) directory scanner with PyHacker. Readers who need one can use it as a reference to build on.

It covers, among other things, how to handle fake 200 pages and do smart 404 detection.

If you like writing scripts in Python, follow along and build it with us.

Environment: Python 2.x

0x01: Modules

The required module is:

import requests

0x02: Basic Request Code

Start by writing the basic request code:

import requests
def dir(url):
    headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3314.0 Safari/537.36 SE 2.X MetaSr 1.0'}
    req = requests.get(url=url,headers=headers)
    print req.status_code
dir('http://www.hackxc.cc')

0x03: Settings

Set a request timeout and ignore untrusted certificates:

import urllib3
urllib3.disable_warnings()
req = requests.get(url=url,headers=headers,timeout=3,verify=False)

Then wrap the request in a try/except for exception handling, and debug to confirm it behaves.

Next, improve it so the URL is printed only when the status code is 200:

if req.status_code==200:
    print "[*]",req.url
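Putting the timeout, the disabled certificate verification, and the exception handling together gives a small fetch wrapper. This is a sketch in Python 3 syntax for easy testing (use the `print` statement under the article's Python 2 environment); the name `safe_get` is ours, not from the article:

```python
import requests
import urllib3

# Suppress the InsecureRequestWarning triggered by verify=False
urllib3.disable_warnings()

HEADERS = {'User-Agent': 'Mozilla/5.0'}

def safe_get(url):
    """Fetch a URL; return the response, or None on any request error."""
    try:
        return requests.get(url, headers=HEADERS, timeout=3, verify=False)
    except requests.exceptions.RequestException:
        return None
```

With this wrapper, a timeout, a refused connection, or a malformed URL simply yields None instead of crashing the scan loop.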

0x04: Handling Fake 200 Pages

You will inevitably run into fake 200 pages (sites that return 200 for any path), so let's handle those next.

The approach:

First request /hackxchackxchackxc.php and /xxxxxxxxxxxx (paths that almost certainly do not exist) and record the content length of each returned page. During the subsequent scan, any response whose length equals one of these recorded lengths is judged to be a 404.

def dirsearch(u,dir):
    try:
        headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3314.0 Safari/537.36 SE 2.X MetaSr 1.0'}
        # Baselines for fake 200 pages: request two paths that should not exist
        hackxchackxchackxc = '/hackxchackxchackxc.php'
        hackxchackxchackxc_404 = requests.get(url=u+hackxchackxchackxc, headers=headers, timeout=3, verify=False)
        # print len(hackxchackxchackxc_404.content)
        xxxxxxxxxxxx = '/xxxxxxxxxxxx'
        xxxxxxxxxxxx_404 = requests.get(url=u+xxxxxxxxxxxx, headers=headers, timeout=3, verify=False)
        # print len(xxxxxxxxxxxx_404.content)
        # Normal scan
        req = requests.get(url=u+dir, headers=headers, timeout=3, verify=False)
        # print len(req.content)
        if req.status_code == 200:
            if len(req.content) != len(hackxchackxchackxc_404.content) and len(req.content) != len(xxxxxxxxxxxx_404.content):
                print "[+]", req.url
            else:
                print u+dir, 404
    except:
        pass

Works nicely.
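The core of the fake-200 check is just a length comparison, which can be factored into a small pure function. A sketch; `is_real_hit` and its parameter names are ours, not from the article:

```python
def is_real_hit(status_code, body_len, fake_lens):
    """A response counts as a real hit only if it returned 200 and its
    body length differs from every recorded fake-404 page length."""
    if status_code != 200:
        return False
    return all(body_len != fake for fake in fake_lens)
```

For example, if both probe pages came back 5120 bytes long, a 5120-byte "200" response is rejected as a disguised 404, while a 13042-byte one is kept.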

0x05: Saving Results

Next, make the script save hits automatically.

0x06: Complete Code

#!/usr/bin/python
#-*- coding:utf-8 -*-
import requests
import urllib3
urllib3.disable_warnings()
urls = []
def dirsearch(u,dir):
    try:
        headers = {
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/65.0.3314.0 Safari/537.36 SE 2.X MetaSr 1.0'}
        # Baselines for fake 200 pages: request two paths that should not exist
        hackxchackxchackxc = '/hackxchackxchackxc.php'
        hackxchackxchackxc_404 = requests.get(url=u+hackxchackxchackxc, headers=headers, timeout=3, verify=False)
        # print len(hackxchackxchackxc_404.content)
        xxxxxxxxxxxx = '/xxxxxxxxxxxx'
        xxxxxxxxxxxx_404 = requests.get(url=u+xxxxxxxxxxxx, headers=headers, timeout=3, verify=False)
        # print len(xxxxxxxxxxxx_404.content)
        # Normal scan
        req = requests.get(url=u+dir, headers=headers, timeout=3, verify=False)
        # print len(req.content)
        if req.status_code == 200:
            if len(req.content) != len(hackxchackxchackxc_404.content) and len(req.content) != len(xxxxxxxxxxxx_404.content):
                print "[+]", req.url
                # Append each hit to success_dir.txt
                with open('success_dir.txt', 'a+') as f:
                    f.write(req.url + "\n")
            else:
                print u+dir, 404
        else:
            print u+dir, 404
    except:
        pass
if __name__ == '__main__':
    url = raw_input('\nurl:')
    print ""
    if 'http' not in url:
        url = 'http://' + url
    dirpath = open('rar.txt', 'r')
    for dir in dirpath.readlines():
        dir = dir.strip()
        dirsearch(url, dir)
    dirpath.close()
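The script reads its dictionary from rar.txt, one path per line. A minimal illustrative wordlist (these entries are examples of ours, not from the article) might look like:

```
admin.php
login.php
manage/
backup.rar
www.zip
```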

That concludes this guide to writing a website back-end directory scanner with PyHacker. For more material on PyHacker and back-end scanners, see the other related articles on this site.
