Picking up the book 《Python安全攻防》 (Python Security: Offense and Defense) again to continue studying.
Passive Information Gathering:
Before writing any code, we can get familiar with the following functions and methods for collecting some basic information.
DNS Resolution:
1. IP Lookup
The socket module provides the gethostbyname() function, which resolves a domain name to its IP address:
>>> import socket
>>> ip = socket.gethostbyname('www.rainhacker.com')
>>> ip
'47.112.224.231'
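Relatedly, socket also provides gethostbyname_ex(), which additionally returns the canonical hostname, any aliases, and the full list of addresses when a domain resolves to more than one IP (a quick sketch; the resolved addresses will vary):
>>> import socket
>>> name, aliases, ips = socket.gethostbyname_ex('www.rainhacker.com')
>>> ips   # all resolved IPv4 addresses, as a list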
2. Whois Lookup
The whois() function returns detailed registration information for a domain.
First, install the required module:
pip3 install python-whois
A simple example:
>>> from whois import whois
>>> a = whois('rainhacker.com')
>>> print(a)
{
    "domain_name": [
        "RAINHACKER.COM",
        "rainhacker.com"
    ],
    "registrar": "Alibaba Cloud Computing (Beijing) Co., Ltd.",
    "whois_server": "grs-whois.hichina.com",
    "referral_url": null,
    "updated_date": "2020-09-22 07:25:52",
    "creation_date": "2020-09-22 07:20:13",
    "expiration_date": "2021-09-22 07:20:13",
    "name_servers": [
        "DNS10.HICHINA.COM",
        "DNS9.HICHINA.COM"
    ],
    "status": "ok https://icann.org/epp#ok",
    "emails": "DomainAbuse@service.aliyun.com",
    "dnssec": "unsigned",
    "name": null,
    "org": null,
    "address": null,
    "city": null,
    "state": "hu bei",
    "zipcode": null,
    "country": "CN"
}
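The returned object also supports direct field access, so individual values can be read without parsing the printed JSON (a small sketch; the field names follow the output above):
>>> a.creation_date
>>> a['name_servers']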
Subdomain Enumeration:
Here we use Python to write a simple subdomain enumeration tool based on the Bing search engine:
# @Project : PythonMS08067
# @Time : 2020/11/23 16:26
# @File : subdomain.py
# @Software : PyCharm
import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse
import sys
def bingsearch(site, pages):
    Subdomain = []
    # Build the HTTP request headers
    headers = {
        'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Firefox/60.0',
        'Accept': '*/*',
        'Accept-Language': 'en-US,en;q=0.5',
        'Accept-Encoding': 'gzip,deflate',
        'referer': 'http://cn.bing.com/search?q=email+site%3abaidu.com&qs=n&sp=-1&pq=emailsite%3abaidu.com&first=2&FROM=PERE1'}
    # Walk through the result pages one by one
    for i in range(1, int(pages)+1):
        url = "https://cn.bing.com/search?q=site%3a"+site+"&go=Search&qs=ds&first="+str((i-1)*10)+"&FROM=PERE"
        conn = requests.session()
        conn.get("http://cn.bing.com", headers=headers)
        html = conn.get(url, stream=True, headers=headers, timeout=8)
        soup = BeautifulSoup(html.content, 'html.parser')
        job_bt = soup.findAll('h2')
        # Pull the subdomains out of the result headings
        for bt in job_bt:
            # Skip headings that do not contain a link
            if bt.a is None:
                continue
            link = bt.a.get('href')
            domain = str(urlparse(link).scheme + "://" + urlparse(link).netloc)
            # Record each domain only once
            if domain not in Subdomain:
                Subdomain.append(domain)
                print(domain)
    return Subdomain

if __name__ == '__main__':
    # site=baidu.com
    if len(sys.argv) == 3:
        site = sys.argv[1]
        page = sys.argv[2]
    else:
        print("usage: %s baidu.com 10" % sys.argv[0])
        sys.exit(0)
    Subdomain = bingsearch(site, page)
The code above is fairly straightforward: it uses Bing's site: search syntax to look up subdomains of the input domain, simulates paging so that every result page is fetched, and extracts the domains from the hyperlinks on each page.
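The link-to-domain step relies on urllib.parse.urlparse; a quick illustration of that step (the URL is just an example):
>>> from urllib.parse import urlparse
>>> link = 'https://tieba.baidu.com/index.html'
>>> urlparse(link).scheme + "://" + urlparse(link).netloc
'https://tieba.baidu.com'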
To try it out, fetch the first ten pages of domain information:
python3 subdomain.py baidu.com 10
Email Harvesting:
The code for this part was never implemented, so this subsection is skipped.
Active Information Gathering:
Simply put, this means deliberately and actively probing a server to obtain information about it.
ICMP-Based Host Discovery:
The following code identifies live hosts by sending ICMP packets across the local network:
# @Project : PythonMS08067
# @Time : 2020/11/25 10:21
# @File : ICMP_host.py
# @Software : PyCharm
# Note: scapy only runs properly on Linux, so this program is Linux-only
from scapy.all import *
# from random import randint
from optparse import OptionParser
import time

def main():
    parser = OptionParser("Usage:%prog -i <target host>")
    # Take the IP address argument
    parser.add_option('-i', type='string', dest='IP', help='specify target host')
    options, args = parser.parse_args()
    print("Scan report for {0}\n".format(options.IP))
    # Decide whether this is a single host or a range of hosts:
    # a '-' in the IP means a range should be scanned
    if '-' in options.IP:
        # Build each IP within the range
        for i in range(int(options.IP.split('-')[0].split('.')[3]), int(options.IP.split('-')[1])+1):
            # Scan the host
            Scan('.'.join([options.IP.split('.')[0], options.IP.split('.')[1], options.IP.split('.')[2], str(i)]))
            # Pause briefly between probes
            time.sleep(0.2)
    else:
        # Scan a single IP
        Scan(options.IP)
    print("\nScan finished!\n")

def Scan(ip):
    # ip_id = randint(1, 65535)
    # icmp_id = randint(1, 65535)
    # icmp_seq = randint(1, 65535)
    # Build the ICMP packet
    packet = IP(dst=ip)/ICMP()/b'rootkit'
    # Send it and wait for a single reply
    result = sr1(packet, timeout=1, verbose=False)
    # A reply means the host is alive
    if result:
        # for rcv in result:
        scan_ip = result[IP].src
        print(scan_ip + '-->' + 'Host is up')
    else:
        print(ip + '-->' + 'Host is down')

if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        print("interrupted by user, killing all threads...")
This is another very simple program, so I won't over-explain it: it sends the crafted ICMP packets to hosts on a local network segment, and live hosts send back ICMP echo replies.
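A sample invocation (scapy needs raw-socket privileges, so run it as root; the trailing range format matches the '-' parsing in main()):
sudo python3 ICMP_host.py -i 192.168.1.1-20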
The book also covers doing this scan with nmap:
# @Project : PythonMS08067
# @Time : 2020/11/26 9:32
# @File : nmap_ICMP_host.py
# @Software : PyCharm
import nmap
import optparse
def NmapScan(targetIP):
    # Instantiate a PortScanner object
    nm = nmap.PortScanner()
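    # --- The original post is cut off at this point. Below is a minimal sketch
    # --- of how the scan might continue, using python-nmap's PortScanner API
    # --- with assumed '-sn -PE' arguments (ping scan via ICMP echo requests);
    # --- it is not the book's verbatim code.
    result = nm.scan(hosts=targetIP, arguments='-sn -PE')
    # A host that never answers may be missing from result['scan'], so guard the lookup
    try:
        state = result['scan'][targetIP]['status']['state']
    except KeyError:
        state = 'down'
    print("[{}] is [{}]".format(targetIP, state))

if __name__ == '__main__':
    parser = optparse.OptionParser('usage: %prog -i <target host>')
    # Take the target IP from the -i option
    parser.add_option('-i', dest='IP', help='specify target host')
    options, args = parser.parse_args()
    NmapScan(options.IP)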