- Shodan : community access + commercial access
- Censys : community + commercial access (access possible for independent researchers)
- ZoomEye : community + commercial access
- Onyphe : community + commercial access
- BinaryEdge : commercial access only
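All of these services also expose REST APIs on their commercial (and usually community) tiers. As a minimal sketch, here is how a search request URL for Shodan's documented `/shodan/host/search` endpoint can be built; the key placeholder and the query string are illustrative, not real credentials:

```python
import urllib.parse

# Shodan's documented search endpoint (an API key is required on any tier)
SHODAN_SEARCH = 'https://api.shodan.io/shodan/host/search'

def search_url(api_key: str, query: str) -> str:
    """Build the GET request URL for a Shodan search query."""
    params = urllib.parse.urlencode({'key': api_key, 'query': query})
    return SHODAN_SEARCH + '?' + params

print(search_url('YOUR_API_KEY', 'apache'))
```

The other engines (Censys, ZoomEye, Onyphe, BinaryEdge) follow the same pattern with their own endpoints and authentication schemes.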
{
  "threatLists": [
    {
      "threatType": "MALWARE",
      "platformType": "ANY_PLATFORM",
      "threatEntryType": "URL"
    },
    {
      "threatType": "MALWARE",
      "platformType": "WINDOWS",
aka Cobalt Kitty, APT-C-00, SeaLotus, Sea Lotus, APT-32, APT 32, Ocean Buffalo, POND LOACH, TIN WOODLAWN, BISMUTH
Many tools do not fully remove metadata from PDFs; they only remove its entry in the metadata table, so the data itself is still present in the PDF file.
While a lot of people rely on exiftool to remove metadata, it actually does the same with PDFs: if you remove metadata with `exiftool -all= some.pdf`, you can always restore it with `exiftool -pdf-update:all= some.pdf`.
There are several options to remove PDF metadata safely:
- Remove metadata with exiftool: `exiftool -all= some.pdf`
- Then remove unused objects with qpdf: `qpdf --linearize some.pdf - > some.cleaned.pdf`
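The two steps above can be sketched as a small Python wrapper around both tools. This is a sketch, not standard tooling: the `.cleaned.pdf` naming convention mirrors the qpdf example above, and both exiftool and qpdf are assumed to be installed and on PATH.

```python
import shutil
import subprocess

def cleaned_name(path: str) -> str:
    # Output naming convention follows the qpdf example (an assumption)
    return path.rsplit('.pdf', 1)[0] + '.cleaned.pdf'

def strip_pdf_metadata(path: str) -> str:
    """Wipe metadata with exiftool, then rewrite the file with qpdf
    so the now-unreferenced metadata objects are actually dropped."""
    for tool in ('exiftool', 'qpdf'):
        if shutil.which(tool) is None:
            raise RuntimeError(tool + ' not found on PATH')
    out = cleaned_name(path)
    # Step 1: remove the metadata entries (recoverable on its own)
    subprocess.run(['exiftool', '-all=', path], check=True)
    # Step 2: rewrite the PDF, discarding unused objects for good
    with open(out, 'wb') as fh:
        subprocess.run(['qpdf', '--linearize', path, '-'], stdout=fh, check=True)
    return out
```

Running qpdf after exiftool is the important part: without the rewrite, the "deleted" metadata survives as an orphaned object inside the file.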
 _   _            _      ____             _    _
| | | | __ _  ___| | __ | __ )  __ _  ___| | _| |
| |_| |/ _` |/ __| |/ / |  _ \ / _` |/ __| |/ /| |
|  _  | (_| | (__|   <  | |_) | (_| | (__|   <|_|
|_| |_|\__,_|\___|_|\_\ |____/ \__,_|\___|_|\_(_)

                  A DIY Guide
#
# Updated Maltego Python library
# 2013/03/30
# See TRX documentation
#
# RT
import sys
from xml.dom import minidom
Originally posted at: http://pastebin.com/MP8zpQ26

HACKING TEAM CLIENT RENEWAL DATES
From: Client List_Renewal date.xlsx

| Code Name | Country | Name | Maintenance | Status |
|-----------|---------|------|-------------|--------|
| AFP | Australia | Australian Federal Police | - | Expired |
| AZNS | Azerbaijan | Ministry of National Defence | 6/30/2015 | Active |
| BHR | Bahrain | Bahrain | 5/5/2015 | Not Active |
| PHANTOM | Chile | Policia de Investigation | 12/10/2018 | Delivery scheduled (end of november) |
import re
from urllib.parse import parse_qs

regex = re.compile(r'^(?P<ip>\S+)\s+-\s*(?P<userid>\S+)\s+\[(?P<datetime>[^\]]+)\]\s+"(?P<method>[A-Z]+)\s*(?P<request>[^ "]+)?\s*(HTTP/(?P<http_version>[0-9.]+))?"\s+(?P<status>[0-9]{3})\s+(?P<size>[0-9]+|-)\s+"(?P<referer>[^"]*)"\s+"(?P<user_agent>[^"]*)"')

def parse_log(log):
    # Parse one combined-format access log line into a dict of named fields
    res = regex.match(log)
    if not res:
        raise ValueError('Invalid log format')
    return res.groupdict()
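The regex above can be exercised on a sample combined-format line. The log line below is invented for illustration; the pattern is repeated here so the example is self-contained:

```python
import re

# Same combined-log regex as above, split across lines for readability
regex = re.compile(
    r'^(?P<ip>\S+)\s+-\s*(?P<userid>\S+)\s+\[(?P<datetime>[^\]]+)\]\s+'
    r'"(?P<method>[A-Z]+)\s*(?P<request>[^ "]+)?\s*(HTTP/(?P<http_version>[0-9.]+))?"\s+'
    r'(?P<status>[0-9]{3})\s+(?P<size>[0-9]+|-)\s+"(?P<referer>[^"]*)"\s+"(?P<user_agent>[^"]*)"'
)

# Invented sample line in Apache combined log format
line = '127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] "GET /apache_pb.gif HTTP/1.0" 200 2326 "-" "Mozilla/4.08"'
fields = regex.match(line).groupdict()
print(fields['ip'], fields['method'], fields['status'])  # 127.0.0.1 GET 200
```

Every field is captured as a named group, so the result is immediately usable as a dict.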
import requests
from bs4 import BeautifulSoup

def get_info(key):
    # Returns the date and raw data
    if key.startswith('http'):
        r = requests.get(key)
    else:
        r = requests.get('https://pastebin.com/' + key)
    soup = BeautifulSoup(r.text, 'html.parser')