Overview:
Passive information gathering extracts information about a target's assets through search engines, social media, and similar channels. It typically covers IP lookups, WHOIS queries, subdomain collection, and so on. Because passive gathering involves no interaction with the target, information can be dug up without ever touching the target system.
Main techniques: DNS resolution, subdomain enumeration, email harvesting, and the like.
DNS resolution:
1. Overview:
DNS (Domain Name System) is a distributed network directory service that maps domain names to IP addresses and back, letting users reach sites conveniently without memorizing the long numeric IP addresses that machines read directly.
2. IP lookup:
An IP lookup resolves the hostname in a URL to its corresponding IP address. The gethostbyname() function in Python's socket library returns the IP for a given domain name.
Code:
import socket

ip = socket.gethostbyname('www.baidu.com')
print(ip)
Output:
39.156.66.14
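Note that gethostbyname() returns only a single IPv4 address, while large sites often resolve to several. As a minimal sketch (same target domain as above), socket.getaddrinfo() can be used to collect every IPv4 address the resolver returns:

import socket

# getaddrinfo() returns one entry per (family, type, proto) combination;
# collect the unique IPv4 addresses from the sockaddr field of each entry
results = socket.getaddrinfo('www.baidu.com', None, socket.AF_INET)
ips = sorted({res[4][0] for res in results})
print(ips)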
3. WHOIS lookup:
WHOIS is a protocol for querying a domain's IP and ownership information. A WHOIS service acts like a database: it tells you whether a domain has already been registered and returns the registration details (registrant, registrar, and so on).
The python-whois module can be used to perform WHOIS queries from Python.
Code:
from whois import whois

data = whois('www.baidu.com')
print(data)
Output:
e:\python\python.exe "h:/code/python security/day01/whois查询.py"
{
  "domain_name": [
    "baidu.com",
    "baidu.com"
  ],
  "registrar": "markmonitor, inc.",
  "whois_server": "whois.markmonitor.com",
  "referral_url": null,
  "updated_date": [
    "2020-12-09 04:04:41",
    "2021-04-07 12:52:21"
  ],
  "creation_date": [
    "1999-10-11 11:05:17",
    "1999-10-11 04:05:17"
  ],
  "expiration_date": [
    "2026-10-11 11:05:17",
    "2026-10-11 00:00:00"
  ],
  "name_servers": [
    "ns1.baidu.com",
    "ns2.baidu.com",
    "ns3.baidu.com",
    "ns4.baidu.com",
    "ns7.baidu.com",
    "ns3.baidu.com",
    "ns2.baidu.com",
    "ns7.baidu.com",
    "ns1.baidu.com",
    "ns4.baidu.com"
  ],
  "status": [
    "clientdeleteprohibited https://icann.org/epp#clientdeleteprohibited",
    "clienttransferprohibited https://icann.org/epp#clienttransferprohibited",
    "clientupdateprohibited https://icann.org/epp#clientupdateprohibited",
    "serverdeleteprohibited https://icann.org/epp#serverdeleteprohibited",
    "servertransferprohibited https://icann.org/epp#servertransferprohibited",
    "serverupdateprohibited https://icann.org/epp#serverupdateprohibited",
    "clientupdateprohibited (https://www.icann.org/epp#clientupdateprohibited)",
    "clienttransferprohibited (https://www.icann.org/epp#clienttransferprohibited)",
    "clientdeleteprohibited (https://www.icann.org/epp#clientdeleteprohibited)",
    "serverupdateprohibited (https://www.icann.org/epp#serverupdateprohibited)",
    "servertransferprohibited (https://www.icann.org/epp#servertransferprohibited)",
    "serverdeleteprohibited (https://www.icann.org/epp#serverdeleteprohibited)"
  ],
  "emails": [
    "abusecomplaints@markmonitor.com",
    "whoisrequest@markmonitor.com"
  ],
  "dnssec": "unsigned",
  "name": null,
  "org": "beijing baidu netcom science technology co., ltd.",
  "address": null,
  "city": null,
  "state": "beijing",
  "zipcode": null,
  "country": "cn"
}

process finished with exit code 0
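The object returned by whois() is more than printable text; assuming python-whois's usual attribute-style field access, individual fields can be read directly instead of dumping the whole record. A small sketch:

from whois import whois

data = whois('www.baidu.com')
print(data.registrar)        # registrar name
print(data.expiration_date)  # handy for spotting soon-to-expire domains
print(data.name_servers)     # authoritative name servers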
Subdomain enumeration:
1. Overview:
Domain names are organized hierarchically: top-level domains, first-level domains, second-level domains, and so on.
A subdomain is any domain one level below its parent (first-level or parent) domain.
During testing, if no vulnerabilities turn up on the target's main site, the usual next step is to enumerate the target system's subdomains.
There are several ways to enumerate subdomains, for example search engines, subdomain brute forcing, and dictionary lookups; a minimal dictionary-based sketch follows.
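In the dictionary approach, each candidate name from a wordlist is resolved with gethostbyname(); names that resolve are live subdomains. The wordlist below is a tiny illustrative sample, not a real dictionary:

import socket

def brute_subdomains(domain, wordlist):
    found = []
    for word in wordlist:
        host = word + '.' + domain
        try:
            ip = socket.gethostbyname(host)  # a successful lookup means the subdomain exists
            found.append((host, ip))
        except socket.gaierror:
            pass  # no DNS record for this candidate
    return found

for host, ip in brute_subdomains('baidu.com', ['www', 'mail', 'map', 'tieba']):
    print(host, ip)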
2. Writing a simple subdomain enumeration tool in Python (using Bing as the example search engine):
Code:
# coding=gbk
import sys

import requests
from bs4 import BeautifulSoup
from urllib.parse import urlparse


def bing_search(site, pages):
    subdomain = []  # store discovered subdomains in a list
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36',
        'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
        'Referer': 'https://cn.bing.com/',
        'Cookie': 'muid=37fa745f1005602c21a27bb3117a61a3; srchd=af=noform; srchuid=v=2&guid=da7bdd699afb4aeb8c68a0b4741efa74&dmnchg=1; muidb=37fa745f1005602c21a27bb3117a61a3; ulc=p=9fd9|1:1&h=9fd9|1:1&t=9fd9|1:1; pplstate=1; anon=a=cec39b849dee39838493af96ffffffff&e=1943&w=1; nap=v=1.9&e=18e9&c=b8-hxgvkte_2lqj0i3ovbjcie8caea9h4f3xnrd3z07nnv3paxmvjq&w=1; _tarlang=default=en; _ttss_in=hist=wyj6ac1iyw5ziiwiyxv0by1kzxrly3qixq==; _ttss_out=hist=wyjlbijd; abdef=v=13&abdv=13&mrb=1618913572156&mrnb=0; kievrpssecauth=fabsarratojiltfsmkplvwsg6an6c/svrwnmaaaegaaacpykw8i/cyhdeafiuhpfzqswnp%2bmm43nyhmcuteqcgehpvygeoz6cpqiurtcce3vestgwkhxpyvdyakrl5u5eh0y3%2bmsti5kxboq5zllxof61w19jgutqgjb3tzhsv5wb58a2i8nbtwih/cffvuyqdm11s7xnw/zzoqc9tnud8zg9hi29rgieodosl/kzz5lwb/cfsw6gbawovtmctorjr20k0c0zgzlhxa7gyh9cxajto7w5krx2/b/qjalnzuh7lvzcnrf5naagj10xhhzyhitlntjne3yqqlylzmgnrzt8o7qwfpjwhqaak4aft3ny9r0nglhm6uxpc8ph9heaybwtisy7jnvvyfwbdk6o4oqu33kheyqw/jtvhqacnpn2v74dzzvk4xrp%2bpcqiorizi%3d; _u=1ll1jnraa8gnrwog3ntdw_punidnxyiikdzb-r_hvgutxrrvfcrnapkxvbxa1w-dbzjsjjnfk6vghsqjtuslxvzswsd5a1xfvq_v_nuinstifdus7q7fyy2dmvdrlfmiqbgdt-keqazoz-r_tlwscg4_wdnfxrwg6ga8k2cryotfgnkon7kvcj7iopdtadqdp; wlid=kqrardi2czxuqvurk62vur88lu/dln6bffcwtmb8eokbi3uzyvhkiocdmpbbts0pq3jo42l3o5qwzgty4fnt8j837l8j9jp0nwvh2ytfkz4=; _edge_s=sid=01830e382f4863360b291e1b2e6662c7; srchs=pc=atmm; wls=c=3d04cfe82d8de394&n=%e5%81%a5; srchusr=dob=20210319&t=1619277515000&tpc=1619267174000&poex=w; snrhop=i=&ts=; _ss=pc=atmm&sid=01830e382f4863360b291e1b2e6662c7&bim=656; ipv6=hit=1619281118251&t=4; srchhpgusr=srchlangv2=zh-hans&brw=w&brh=s&cw=1462&ch=320&dpr=1.25&utc=480&dm=0&wts=63754766339&hv=1619277524&bza=0&th=thab5&newwnd=1&nrslt=-1&lsl=0&srchlang=&as=1&nnt=1&hap=0&vsro=0'
    }
    for i in range(1, int(pages) + 1):
        url = "https://cn.bing.com/search?q=site%3a" + site + "&go=search&qs=ds&first=" + str((i - 1) * 10) + "&form=pere"
        html = requests.get(url, headers=headers)
        soup = BeautifulSoup(html.content, 'html.parser')
        job_bt = soup.find_all('h2')
        for result in job_bt:
            link = result.a.get('href')
            domain = str(urlparse(link).scheme + "://" + urlparse(link).netloc)
            if domain in subdomain:
                pass
            else:
                subdomain.append(domain)
                print(domain)


if __name__ == '__main__':
    if len(sys.argv) == 3:
        site = sys.argv[1]
        page = sys.argv[2]
    else:
        print("usage: %s baidu.com 10" % sys.argv[0])  # print usage help
        sys.exit(-1)
    subdomain = bing_search(site, page)
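Assuming the script is saved as subdomain.py (the filename is arbitrary), it would be run as, for example: python subdomain.py baidu.com 10, i.e. the target domain followed by the number of result pages to walk. Each matching subdomain is printed on its own line.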
Email harvesting:
1. Overview:
When penetrating a target whose servers are well hardened and direct server-side attacks are unlikely to yield access, social engineering is a common fallback for attacking the target's services further.
After crawling and processing the email addresses that appear in search results, the harvested mailboxes can be used to send phishing emails in bulk, luring target users or administrators into logging in or executing a payload, and thereby gaining a foothold in the target system.
The libraries used by this email-harvesting tool are as follows:
import sys
import getopt
import re

import requests
from bs4 import BeautifulSoup
2. Walkthrough:
① At the program's entry point, the defined start() function is called inside a try/except block so that an interrupt or error during execution is caught cleanly.
External arguments are received through sys.argv: sys.argv[0] is the path of the script itself, and sys.argv[1:] is the list of command-line arguments from the first one onward.
Code:
if __name__ == '__main__':
    # catch exceptions such as a user interrupt
    try:
        start(sys.argv[1:])
    except:
        print("interrupted by user, killing all threads ... ")
② Handle the command-line arguments. The getopt.getopt() function parses them and supports two option formats:
a short option is "-" followed by a single letter;
a long option is "--" followed by a word.
opts is a list of two-element tuples, each of the form (option, value); when an option takes no value, the value is an empty string. A for loop then walks opts and assigns each value to the corresponding variable (see the sample after the code).
Code:
def start(argv):
    url = ""
    pages = ""
    if len(sys.argv) < 2:
        print("-h for help\n")
        sys.exit()
    # handle argument-parsing errors
    try:
        banner()
        opts, args = getopt.getopt(argv, "u:p:h")
    except:
        print('error: invalid argument')
        sys.exit()
    for opt, arg in opts:
        if opt == "-u":
            url = arg
        elif opt == "-p":
            pages = arg
        elif opt == "-h":
            usage()
    launcher(url, pages)
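For reference, a quick check of what getopt.getopt() produces for a typical argument list (illustrative values only):

import getopt

opts, args = getopt.getopt(['-u', 'baidu.com', '-p', '10'], 'u:p:h')
print(opts)  # [('-u', 'baidu.com'), ('-p', '10')]
print(args)  # []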
③ Print help and banner information to make the tool more readable and easier to use. For cleaner, nicer output, ANSI escape sequences can set the font color.
The opening escape takes up to three parameters: display mode, foreground color, and background color. All three are optional and any subset may be given. The closing escape can be omitted, but for tidy style it is good practice to end with "\033[0m".
Code:
# color demo: mode 0, black foreground (30), varying backgrounds (41-44)
print('\033[0;30;41m 3ch0 - nu1l \033[0m')
print('\033[0;30;42m 3ch0 - nu1l \033[0m')
print('\033[0;30;43m 3ch0 - nu1l \033[0m')
print('\033[0;30;44m 3ch0 - nu1l \033[0m')


# banner
def banner():
    print('\033[1;34m ################################ \033[0m\n')
    print('\033[1;34m 3ch0 - nu1l \033[0m\n')
    print('\033[1;34m ################################ \033[0m\n')


# usage
def usage():
    print('-h: --help  show this help;')
    print('-u: --url   target domain;')
    print('-p: --pages number of pages;')
    print('eg: python -u "www.baidu.com" -p 100' + '\n')
    sys.exit()
④ Define the keywords to search for, then call the bing_search() and baidu_search() functions to query both Bing and Baidu. The two result lists are merged, de-duplicated, and printed in a loop.
Code:
# launcher: drive the searches and collect the results
def launcher(url, pages):
    email_num = []
    key_words = ['email', 'mail', 'mailbox', '邮件', '邮箱', 'postbox']
    for page in range(1, int(pages) + 1):
        for key_word in key_words:
            bing_emails = bing_search(url, page, key_word)
            baidu_emails = baidu_search(url, page, key_word)
            sum_emails = bing_emails + baidu_emails
            for email in sum_emails:
                if email in email_num:
                    pass
                else:
                    print(email)
                    with open('data.txt', 'a+') as f:
                        f.write(email + '\n')
                    email_num.append(email)
⑤ Crawl emails through the Bing search engine. Bing has anti-crawling protection that checks the Referer and Cookie headers to decide whether a request is a scraper.
Setting an explicit Referer and letting requests.session() pick up cookies automatically is enough to get past Bing's anti-crawling checks.
Code:
# bing_search
def bing_search(url, page, key_word):
    referer = "http://cn.bing.com/search?q=email+site%3abaidu.com&sp=-1&pq=emailsite%3abaidu.com&first=1&form=pere1"
    conn = requests.session()
    bing_url = "http://cn.bing.com/search?q=" + key_word + "+site%3a" + url + "&qa=n&sp=-1&pq=" + key_word + "site%3a" + url + "&first=" + str((page - 1) * 10) + "&form=pere1"
    conn.get('http://cn.bing.com', headers=headers(referer))  # warm-up request to collect cookies
    r = conn.get(bing_url, stream=True, headers=headers(referer), timeout=8)
    emails = search_email(r.text)
    return emails
⑥ Crawl emails through the Baidu search engine. Baidu also has anti-crawling protection; unlike Bing, it not only validates the Referer and Cookie but also serves its result links as dynamic JavaScript redirects, so the real pages cannot be read from the results page itself.
The workaround is to extract each result link and then issue a separate request to it, which sidesteps the anti-crawling setup.
Code:
# baidu_search
def baidu_search(url, page, key_word):
    email_list = []
    emails = []
    referer = "https://www.baidu.com/s?wd=email+site%3abaidu.com&pn=1"
    baidu_url = "https://www.baidu.com/s?wd=" + key_word + "+site%3a" + url + "&pn=" + str((page - 1) * 10)
    conn = requests.session()
    conn.get(baidu_url, headers=headers(referer))  # first request establishes the session cookies
    r = conn.get(baidu_url, headers=headers(referer))
    soup = BeautifulSoup(r.text, 'lxml')
    tagh3 = soup.find_all('h3')
    for h3 in tagh3:
        href = h3.find('a').get('href')
        try:
            r = requests.get(href, headers=headers(referer))  # follow the redirect link to the real page
            emails = search_email(r.text)
        except Exception:
            pass
        for email in emails:
            email_list.append(email)
    return email_list
⑦ Extract mailbox addresses with a regular expression. The pattern can also be swapped for one that matches only the target company's mail domain (a sketch of that follows after the code).
Code:
# search_email
def search_email(html):
    emails = re.findall(r"[a-z0-9.\-+_]+@[a-z0-9.\-+_]+\.[a-z]+", html, re.I)
    return emails


# headers(referer)
def headers(referer):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36',
        'Accept': 'application/json, text/javascript, */*; q=0.01',
        'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8',
        'Accept-Encoding': 'gzip, deflate, br',
        'Referer': referer
    }
    return headers
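If only mailboxes on the target's own mail domain are of interest, the pattern can be narrowed accordingly. A sketch, where example.com is a hypothetical placeholder for the target domain:

import re

def search_company_email(html, domain='example.com'):
    # match only addresses ending in the (hypothetical) target domain
    pattern = r"[a-z0-9.\-+_]+@" + re.escape(domain)
    return re.findall(pattern, html, re.I)

print(search_company_email('contact admin@example.com or sales@gmail.com'))
# ['admin@example.com']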
3. Full code:
# coding=gbk
import sys
import getopt
import re

import requests
from bs4 import BeautifulSoup


# main entry: parse the user-supplied arguments
def start(argv):
    url = ""
    pages = ""
    if len(sys.argv) < 2:
        print("-h for help\n")
        sys.exit()
    # handle argument-parsing errors
    try:
        banner()
        opts, args = getopt.getopt(argv, "u:p:h")
    except:
        print('error: invalid argument')
        sys.exit()
    for opt, arg in opts:
        if opt == "-u":
            url = arg
        elif opt == "-p":
            pages = arg
        elif opt == "-h":
            usage()
    launcher(url, pages)


# banner
def banner():
    print('\033[1;34m ################################ \033[0m\n')
    print('\033[1;34m 3ch0 - nu1l \033[0m\n')
    print('\033[1;34m ################################ \033[0m\n')


# usage
def usage():
    print('-h: --help  show this help;')
    print('-u: --url   target domain;')
    print('-p: --pages number of pages;')
    print('eg: python -u "www.baidu.com" -p 100' + '\n')
    sys.exit()


# launcher: drive the searches and collect the results
def launcher(url, pages):
    email_num = []
    key_words = ['email', 'mail', 'mailbox', '邮件', '邮箱', 'postbox']
    for page in range(1, int(pages) + 1):
        for key_word in key_words:
            bing_emails = bing_search(url, page, key_word)
            baidu_emails = baidu_search(url, page, key_word)
            sum_emails = bing_emails + baidu_emails
            for email in sum_emails:
                if email in email_num:
                    pass
                else:
                    print(email)
                    with open('data.txt', 'a+') as f:
                        f.write(email + '\n')
                    email_num.append(email)


# bing_search
def bing_search(url, page, key_word):
    referer = "http://cn.bing.com/search?q=email+site%3abaidu.com&sp=-1&pq=emailsite%3abaidu.com&first=1&form=pere1"
    conn = requests.session()
    bing_url = "http://cn.bing.com/search?q=" + key_word + "+site%3a" + url + "&qa=n&sp=-1&pq=" + key_word + "site%3a" + url + "&first=" + str((page - 1) * 10) + "&form=pere1"
    conn.get('http://cn.bing.com', headers=headers(referer))  # warm-up request to collect cookies
    r = conn.get(bing_url, stream=True, headers=headers(referer), timeout=8)
    emails = search_email(r.text)
    return emails


# baidu_search
def baidu_search(url, page, key_word):
    email_list = []
    emails = []
    referer = "https://www.baidu.com/s?wd=email+site%3abaidu.com&pn=1"
    baidu_url = "https://www.baidu.com/s?wd=" + key_word + "+site%3a" + url + "&pn=" + str((page - 1) * 10)
    conn = requests.session()
    conn.get(baidu_url, headers=headers(referer))  # first request establishes the session cookies
    r = conn.get(baidu_url, headers=headers(referer))
    soup = BeautifulSoup(r.text, 'lxml')
    tagh3 = soup.find_all('h3')
    for h3 in tagh3:
        href = h3.find('a').get('href')
        try:
            r = requests.get(href, headers=headers(referer))  # follow the redirect link to the real page
            emails = search_email(r.text)
        except Exception:
            pass
        for email in emails:
            email_list.append(email)
    return email_list


# search_email
def search_email(html):
    emails = re.findall(r"[a-z0-9.\-+_]+@[a-z0-9.\-+_]+\.[a-z]+", html, re.I)
    return emails


# headers(referer)
def headers(referer):
    headers = {
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.90 Safari/537.36',
        'Accept': 'application/json, text/javascript, */*; q=0.01',
        'Accept-Language': 'zh-CN,zh;q=0.9,en;q=0.8',
        'Accept-Encoding': 'gzip, deflate, br',
        'Referer': referer
    }
    return headers


if __name__ == '__main__':
    # catch exceptions such as a user interrupt
    try:
        start(sys.argv[1:])
    except:
        print("interrupted by user, killing all threads ... ")
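Assuming the full script is saved as email_crawler.py (the filename is arbitrary), a run against the example target would look like: python email_crawler.py -u baidu.com -p 10. Each newly discovered address is printed and appended to data.txt.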
That concludes this detailed walkthrough of passive information gathering in Python.
Original article: https://www.cnblogs.com/3cH0-Nu1L/p/14698655.html