@dreizehnutters
Created February 27, 2024 14:58

Automating the First Hours of My Internal Vulnerability Assessments

In this post, I'll share how I automated the first hours of my internal vulnerability assessments using several handy scripts I have crafted over the years, and demonstrate how automation can streamline the initial phase of the assessment process.

The Need for Automation

Internal vulnerability assessments are crucial for identifying weaknesses within network infrastructure before they can be exploited by malicious actors. However, the manual effort required to initiate scans, gather results, and perform preliminary analysis is time-consuming, resource-intensive, and mostly repetitive. Automation offers a solution to this challenge, allowing me to kickstart the assessment process and focus my attention on critical analysis and remediation tasks.

Automation offers several benefits:

  • Time Savings: By automating repetitive tasks, I can kickstart the assessment process within minutes, saving valuable time.

  • Consistency: The scripts ensure consistent scanning parameters and methodologies, minimizing human error.

  • Resource Efficiency: Automation allows me to make optimal use of available resources, maximizing the effectiveness of the assessment.

  • Focus on Analysis: With the initial phase automated, I can focus my attention on in-depth analysis and prioritization of identified vulnerabilities.

snmap.sh

Automates network scanning tasks using Nmap, extracting information such as open ports and host statuses. It supports scanning a specified CIDR range or an input file and provides options for TCP, UDP, and version detection scans with configurable parameters. Scan results are written to organized directories for further analysis. With this staggered Nmap bash script I try to strike a balance between accuracy and speed: it first performs a full TCP SYN scan on all ports and only then starts a deep script/version scan on the ports found open.

#!/bin/bash

NET=$1
bold=$(tput bold)
normal=$(tput sgr0)
error="${bold}[!]${normal}"
if [ -z "${1}" ]; then
    echo "${0} <NET_IN_CIDR>|<FILE> [--check]"
    exit 1
fi

XMLS=/usr/bin/xmlstarlet
NMAP_BIN=/usr/bin/nmap
uID=1000
NMAP_MIN_RATE=500 # CHANGE ME
MIN_HOSTGROUP=16  # MIN_HOSTGROUP*16 ~= #scan targets
DEFAULT_OPTIONS="--privileged \
                -v \
                -d1 \
                -Pn \
                -T5 \
                --min-rate=${NMAP_MIN_RATE} \
                --min-hostgroup=${MIN_HOSTGROUP} \
                --stats-every=10 \
                --open"

# extract the unique open ports (comma separated) from all Nmap XML files in a directory
get_ports_from_XML() {
    local nmap_path="${1}"
    $XMLS sel -t -m '//port/state[@state="open"]/parent::port' \
        -v 'ancestor::host/address[@addrtype="ipv4"]/@addr' \
        -o : -v './@portid' -n "${nmap_path}"/*.xml | sort -u -V |
        cut -d ':' -f2- | sed ':a;N;$!ba;s/\n/,/g'
}

is_valid_cidr() {
    local cidr="${1}"
    local cidr_pattern='^([0-9]{1,3}\.){3}[0-9]{1,3}/[0-9]{1,2}$'
    if ! [[ $cidr =~ $cidr_pattern ]]; then
        echo "${error} Invalid CIDR notation: ${cidr}"
        exit 1
    fi
}

exit_fun() {
    sudo /usr/bin/chown ${uID}:${uID} -hR "${NET_PATH}"
    echo "$1" && exit 0
}

if [[ -f "${NET}" || -d "${NET}" ]]; then
    INPUT="-iL ${NET}"
    VERBOSE="$(cat ${NET} 2>/dev/null | tr '\n' ',')"
else
    is_valid_cidr $NET
    INPUT="${NET}"
    VERBOSE="${NET}"
fi

if [ "$2" == "--ports" ]; then
    echo "${bold}[[[[ grepping open ports ]]]]${normal}"
    get_ports_from_XML $1
    exit 0
fi

if [ "$EUID" -ne 0 ]; then
    echo "${error} Please run as root (or set capabilities)"
    echo "sudo setcap cap_net_raw,cap_net_admin,cap_net_bind_service+eip $(which nmap)"
    exit 1
fi

NET_PATH="${PWD}/nmap-$(echo ${NET} | tr '.' '_' | tr '\/' '-')-$(date +%s)"
mkdir -p "${NET_PATH}"

if [ "$2" == "--check" ]; then
    echo "${bold}[[[[ subnet scan of ${VERBOSE} to generate hosts.xml ]]]]${normal}"
    $NMAP_BIN \
        -v \
        -d1 \
        --stats-every=10 \
        -sn \
        -PE \
        -oA "${NET_PATH}/hosts" \
        $INPUT
    echo "${bold}[[[[ the following hosts are reachable ]]]]${normal}"
    $XMLS sel -t -m "//host[status/@state='up']" -v "concat(address[@addrtype='ipv4']/@addr, ' ', hostnames/hostname/@name)" -n "${NET_PATH}"/hosts.xml | tee "${NET_PATH}"/up_hosts.txt
    echo "${bold}[[[[ the following hosts are NOT reachable ]]]]${normal}"
    $XMLS sel -t -m "//host[status/@state='down']" -v "concat(address[@addrtype='ipv4']/@addr, ' ', hostnames/hostname/@name)" -n "${NET_PATH}"/hosts.xml | tee "${NET_PATH}"/down_hosts.txt
    exit_fun
fi

echo "${bold}[[[[ min tcp scan for ${VERBOSE} ]]]]${normal}"
$NMAP_BIN \
    $DEFAULT_OPTIONS \
    -p- \
    -sS \
    -oA "${NET_PATH}/init" \
    $INPUT
[ $? -eq 1 ] && exit_fun "${error} min tcp scan FAILED"

echo "${bold}[[[ checking version on ports: $(get_ports_from_XML "${NET_PATH}") on ${VERBOSE} ]]]${normal}"
$NMAP_BIN \
    $DEFAULT_OPTIONS \
    -p$(get_ports_from_XML "${NET_PATH}") \
    -sCV \
    -O \
    --script='discovery' \
    --version-all \
    -oA "${NET_PATH}/version" \
    $INPUT
[ $? -eq 1 ] && exit_fun "${error} version scan FAILED"

echo "${bold}[[[[ nmap min udp for ${VERBOSE} ]]]]${normal}"
$NMAP_BIN \
    $DEFAULT_OPTIONS \
    --top-ports=100 \
    -sUV \
    --version-intensity 1 \
    --open \
    -oA "${NET_PATH}/uinit" \
    $INPUT
[ $? -eq 1 ] && exit_fun "${error} UDP scan FAILED"

exit_fun "${bold}[[[[ finished ${NET_PATH} ]]]]${normal}"

Usage

  1. Host Discovery (if needed)
➜ sudo snmap.sh 192.168.0.16/24 --check           
[[[[ subnet scan of 192.168.0.16/24 to generate hosts.xml ]]]]
Starting Nmap 7.94SVN ( https://nmap.org ) at 2024-02-27 06:14 EST
--------------- Timing report ---------------
  hostgroups: min 1, max 100000
  rtt-timeouts: init 1000, min 100, max 10000
  max-scan-delay: TCP 1000, UDP 1000, SCTP 1000
  parallelism: min 0, max 0
  max-retries: 10, host-timeout: 0
  min-rate: 0, max-rate: 0
---------------------------------------------
Initiating ARP Ping Scan at 06:14
Scanning 255 hosts [1 port/host]
Packet capture filter (device wlan0): arp and arp[18:4] = 0x00C0CAB0 and arp[22:2] = 0x3944
Destroying timed-out global ping from 192.168.0.2.
Stats: 0:00:10 elapsed; 0 hosts completed (0 up), 255 undergoing ARP Ping Scan
ARP Ping Scan Timing: About 83.72% done; ETC: 06:14 (0:00:02 remaining)
Current sending rates: 44.52 packets / s, 1839.61 bytes / s.
Destroying timed-out global ping from 192.168.0.2.
[...]
➜ tree nmap-192_168_0_16-24-1709032454 
nmap-192_168_0_16-24-1709032454
├── down_hosts.txt
├── hosts.gnmap
├── hosts.nmap
├── hosts.xml
└── up_hosts.txt

➜ cat nmap-192_168_0_16-24-1709032454/up_hosts.txt
192.168.0.1
192.168.0.2
192.168.0.4
192.168.0.5
192.168.0.10
192.168.0.14
192.168.0.16
  2. Port Scan

A full port scan using all my favourite scripts, run only against sockets that are actually open (full SYN scan -> targeted script/version scan).

➜ sudo snmap.sh nmap-192_168_0_16-24-1709032454/up_hosts.txt 
[[[[ min tcp scan for 192.168.0.1 ,192.168.0.2 ,192.168.0.4 ,192.168.0.5 ,192.168.0.10 ,192.168.0.14 ,192.168.0.16 , ]]]]
Host discovery disabled (-Pn). All addresses will be marked 'up' and scan times may be slower.
Starting Nmap 7.94SVN ( https://nmap.org ) at 2024-02-27 06:15 EST
--------------- Timing report ---------------
  hostgroups: min 16, max 100000
  rtt-timeouts: init 250, min 50, max 300
  max-scan-delay: TCP 5, UDP 1000, SCTP 5
  parallelism: min 0, max 0
  max-retries: 2, host-timeout: 900000
  min-rate: 500, max-rate: 0
---------------------------------------------
Initiating ARP Ping Scan at 06:15
Scanning 6 hosts [1 port/host]
Packet capture filter (device wlan0): arp and arp[18:4] = 0x00C0CAB0 and arp[22:2] = 0x3944
Completed ARP Ping Scan at 06:15, 0.28s elapsed (6 total hosts)
Overall sending rates: 38.85 packets / s, 1631.87 bytes / s.
Initiating Parallel DNS resolution of 4 hosts. at 06:15
Completed Parallel DNS resolution of 4 hosts. at 06:15, 0.06s elapsed
DNS resolution of 4 IPs took 0.06s. Mode: Async [#: 4, OK: 0, NX: 4, DR: 0, SF: 0, TR: 4, CN: 0]
Initiating SYN Stealth Scan at 06:15
Scanning 4 hosts [65535 ports/host]
Packet capture filter (device wlan0): dst host 192.168.0.16 and (icmp or icmp6 or ((tcp) and (src host 192.168.0.1 or src host 192.168.0.4 or src host 192.168.0.10 or src host 192.168.0.14)))
Discovered open port 53/tcp on 192.168.0.1
Discovered open port 443/tcp on 192.168.0.1
Discovered open port 80/tcp on 192.168.0.1
Increased max_successful_tryno for 192.168.0.1 to 1 (packet drop)
Stats: 0:00:10 elapsed; 2 hosts completed (4 up), 4 undergoing SYN Stealth Scan
SYN Stealth Scan Timing: About 1.83% done; ETC: 06:24 (0:08:57 remaining)
[...]

This generated the following working directory.

➜  tree nmap-nmap-192_168_0_16-24-1709032454-up_hosts_txt-1709032501 
nmap-nmap-192_168_0_16-24-1709032454-up_hosts_txt-1709032501
├── init.gnmap
├── init.nmap
├── init.xml
├── uinit.gnmap
├── uinit.nmap
├── uinit.xml
├── version.gnmap
├── version.nmap
└── version.xml

1 directory, 9 files
  3. List all open ports (for the obligatory Nessus scan configuration to only use open ports!)
➜ snmap.sh nmap-nmap-192_168_0_16-24-1709032454-up_hosts_txt-1709032501/ --ports        
[[[[ grepping open ports ]]]]
53,80,443,2222,9090,25801,10074,22,3389
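
If xmlstarlet is not at hand, the same extraction can be approximated in a few lines of Python. The following is a rough, hypothetical equivalent of the --ports mode; it assumes the same results directory containing the Nmap .xml output files:

#!/usr/bin/python3
# Rough, hypothetical Python equivalent of the --ports helper in snmap.sh:
# collects the unique open ports from all Nmap XML files in a results directory.
import sys
import xml.etree.ElementTree as ET
from glob import glob

ports = set()
for xml_file in glob(f"{sys.argv[1]}/*.xml"):
    root = ET.parse(xml_file).getroot()
    for port in root.findall("host/ports/port"):
        state = port.find("state")
        if state is not None and state.get("state") == "open":
            ports.add(int(port.get("portid")))

print(",".join(str(p) for p in sorted(ports)))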

nmap2csv.sh

Next, I like to aggregate the information gathered by Nmap into a table, which helps to track the progress of the audit. The following script automates importing the Nmap scan results into the Metasploit database and exporting the relevant information to CSV files. The generated CSV files are also easy for clients to read and are often better suited for reports than listing 100+ tables of Nmap data, one per host.

#!/bin/bash

if [ -z "$1" ]; then
   echo "$0 <PATH TO NMAP SCAN RESULTS>"
   exit 1
fi

AUDIT_RESULTS=$1
PREFIX=$2
DB_HOST="127.0.0.1"
DB_PORT="5432"
DB_USER="msf"
DB_NAME="msf"
DB_CONFIG_PATH="/usr/share/metasploit-framework/config/database.yml"
CSV_OUT="WITH (FORMAT CSV, DELIMITER ';', HEADER TRUE, FORCE_QUOTE *)"
PSQL="psql -h ${DB_HOST} -p ${DB_PORT} -U ${DB_USER} -d ${DB_NAME}"

PGPASSWORD=$(cat $DB_CONFIG_PATH | grep password | cut -d ' ' -f4 | head -n1)
[ "$?" != "0" ] && echo "[!] Failed to grep password from metasploit-framework database.yml" && exit 1

PGPASSWORD=$PGPASSWORD $PSQL -c "SELECT 1;" >/dev/null 2>&1
[ "$?" != "0" ] && echo "[!] msfdb not running -> \`msfdb init\`" && exit 1

echo "[*] clearing workspace & metasploit import..."
msfconsole -q -x "workspace -D Default; db_import ${AUDIT_RESULTS}/*.xml; exit"
[ "$?" != "0" ] && echo "[!] ERROR: Failed to import data to metasploit" && exit 1

# each entry is "<SQL query>#<output CSV name>"
declare -a qs=(
   "(select address, mac, name, os_name, os_flavor, os_sp from hosts)"#hosts
   "(select address, mac, HOSTS.name as host_name, port, proto, SERVICES.state, SERVICES.name, SERVICES.info, os_name, os_flavor, os_sp \
      from \
         services \
      INNER JOIN \
         hosts \
      ON hosts.id = services.host_id)"#service)

for q in "${qs[@]}"; do
   OUTPUT_CSV="${PWD}/$(echo $q | cut -d "#" -f2)${PREFIX}.csv"
   echo -e "\t[-] exporting to ${OUTPUT_CSV}"
   QUERY="\copy $(echo $q | cut -d "#" -f1) TO '${OUTPUT_CSV}' $CSV_OUT;"
   PGPASSWORD=$PGPASSWORD $PSQL -A -F ';' -P footer=off -R "\\n" -c "${QUERY}" >/dev/null 2>&1
   [ "$?" != "0" ] && echo "[!] export failed" && exit 1
done

echo "[*] done" && exit 0

Usage

➜ nmap2csv.sh nmap-nmap-192_168_0_16-24-1709032454-up_hosts_txt-1709032501 
[*] clearing workspace & metasploit import...
[*] Deleted workspace: default
[*] Recreated the default workspace
[*] Importing 'Nmap XML' data
[*] Import: Parsing with 'Nokogiri v1.13.10'
[*] Importing host 192.168.0.1
[*] Importing host 192.168.0.4
[*] Importing host 192.168.0.14
[*] Importing host 192.168.0.16
[*] Successfully imported /tmp/ws/nmap-nmap-192_168_0_16-24-1709032454-up_hosts_txt-1709032501/init.xml
[*] Importing 'Nmap XML' data
[*] Import: Parsing with 'Nokogiri v1.13.10'
[*] Importing host 192.168.0.1
[*] Importing host 192.168.0.2
[*] Importing host 192.168.0.10
[*] Importing host 192.168.0.14
[*] Importing host 192.168.0.16
[*] Successfully imported /tmp/ws/nmap-nmap-192_168_0_16-24-1709032454-up_hosts_txt-1709032501/uinit.xml
[*] Importing 'Nmap XML' data
[*] Import: Parsing with 'Nokogiri v1.13.10'
[*] Importing host 192.168.0.1
[*] Importing host 192.168.0.4
[*] Importing host 192.168.0.14
[*] Importing host 192.168.0.16
[*] Successfully imported /tmp/ws/nmap-nmap-192_168_0_16-24-1709032454-up_hosts_txt-1709032501/version.xml
        [-] exporting to /tmp/ws/hosts.csv
        [-] exporting to /tmp/ws/service.csv
[*] done

The generated CSV files can be used to sort and filter for different kinds of information.
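
To give an idea of what that filtering can look like, here is a minimal, hypothetical sketch that reads the semicolon-delimited service.csv from above and prints every socket whose service name contains a given string (the column names follow the SQL query in nmap2csv.sh):

#!/usr/bin/python3
# Minimal, hypothetical sketch: filter the semicolon-delimited service.csv
# produced by nmap2csv.sh for a given service name.
import csv
import sys

csv_path = sys.argv[1]                                  # e.g. service.csv
needle = sys.argv[2] if len(sys.argv) > 2 else "http"   # service name to filter for

with open(csv_path, newline="", encoding="utf-8") as fd:
    for row in csv.DictReader(fd, delimiter=";"):
        if needle in (row.get("name") or ""):
            print(f"{row['address']}:{row['port']}/{row['proto']} {row['name']} {row['info']}")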

nmap2svc.py

At this point I usually move on to compliance checks of the cryptographic configuration of SSH and SSL/TLS services. For that, I use a script that extracts the relevant host:port sockets for each service type from the Nmap script scan results.

#!/usr/bin/python3

from os import listdir, path
from argparse import ArgumentParser
import xml.etree.ElementTree as ET

__version__ = 1.0

def scan(in_file, search_pattern):
    systems = {}
    try:
        root = ET.parse(in_file).getroot()
        for cur_host in root.findall('host'):
            ipv4_addr = [h.attrib['addr'] for h in cur_host.findall('address') if h.attrib['addrtype'] == 'ipv4'][0]
            systems[ipv4_addr] = set()
            for cur_xml_port in cur_host.findall('ports/port'):
                port_id = cur_xml_port.attrib['portid']
                systems[ipv4_addr].update(port_id for tag in cur_xml_port if search_pattern in str(tag.attrib))
    except Exception as err:
        print(err)
        raise Exception(f"[!] Can not parse '{in_file}'")
    return systems

def write_file(systems_dict, output):
    try:
        with open(output, 'w+', encoding='utf-8') as fd:
            for key in systems_dict.keys():
                for val in systems_dict[key]:
                    loot = f"{key}:{val}"
                    print(loot)
                    fd.write(f"{loot}\n")
    except Exception as err:
        print(err)
        raise Exception(f"[!] Can not write '{output}'")

def arguments_parser():
    parser = ArgumentParser(
        description="Creates an input file for service-scans (ssl, ssh) based on nmap script scan results.",
        epilog="Example usage: ./%(prog)s -i nmap-dir ssl")
    parser.add_argument('-i', '--input', help="nmap input directory or file (containing script scan .xml)", type=str, required=True)
    parser.add_argument('-o', '--output', help="output file to write found sockets", type=str, default=None)
    parser.add_argument('service', help="service type to scan for (ssl, ssh, ftp, ...)", type=str)
    return parser.parse_args()

if __name__ == "__main__":
    args = arguments_parser()
    print(f"Input: {args.input}")
    args_output = f"{args.service}-services.txt" if args.output is None else args.output
    print(f"Output: {args_output}\n")
    print(f"[*] scanning for service: '{args.service}'")
    results = {}
    try:
        if not path.isdir(args.input):
            if args.input.endswith('.xml'):
                results = scan(args.input, args.service)
            else:
                print("Input is not a xml file.")
        else:
            for file in [path.join(args.input, f) for f in listdir(args.input) if f.endswith('.xml')]:
                result_dict = scan(file, args.service)
                for ip_key in result_dict:
                    try:
                        results[ip_key].update(result_dict[ip_key])
                    except KeyError:
                        results[ip_key] = result_dict[ip_key]
        write_file(results, args_output)
    except Exception as err:
        print(err)

Usage

➜ nmap2svc.py -i nmap-nmap-192_168_0_16-24-1709032454-up_hosts_txt-1709032501 ssl

Input: nmap-nmap-192_168_0_16-24-1709032454-up_hosts_txt-1709032501
Output: ssl-services.txt

[*] scanning for service: 'ssl'
192.168.0.1:443
192.168.0.4:25801
192.168.0.16:3389

vide.sh

Now we come to my latest "script" (get your own copy of vide.sh here), which focuses on deeper service enumeration. I mainly use it to identify, crawl, and screenshot HTTP/HTTPS web servers within reach, and then run templated nuclei scans or start working with Burp to poke further at a target.
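
vide.sh itself relies on httpx for the web server identification step; purely to illustrate the idea, here is a stripped-down, hypothetical probe over a list of host:port sockets (this is not part of vide.sh):

#!/usr/bin/python3
# Hypothetical illustration only; vide.sh itself uses httpx for this step.
# Probes a list of host:port sockets for HTTP/HTTPS and reports responders.
import ssl
import sys
import urllib.error
import urllib.request

def probe(host_port):
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # self-signed certificates are common internally
    ctx.verify_mode = ssl.CERT_NONE
    for scheme in ("https", "http"):
        url = f"{scheme}://{host_port}"
        try:
            with urllib.request.urlopen(url, timeout=5, context=ctx) as resp:
                print(f"{url} [{resp.status}]")
                return
        except urllib.error.HTTPError as err:   # 4xx/5xx still means a web server answered
            print(f"{url} [{err.code}]")
            return
        except Exception:
            continue
    print(f"{host_port} [no web server]")

if __name__ == "__main__":
    # expects a file with one host:port socket per line
    with open(sys.argv[1], encoding="utf-8") as fd:
        for line in fd:
            if line.strip():
                probe(line.strip())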

  1. Identify web servers

This can help to narrow the scope and identify assets that should not be there.

➜  vide.sh nmap-nmap-192_168_0_16-24-1709032454-up_hosts_txt-1709032501 -sc
[?] working dir -> '/tmp/ws/vide_runs/vide_27.02_06461709034417'
        _______________
    ==c(___(o(______(_()
              \=\
               )=\    ┌───────────────────────────~vide~──┐
              //|\\   │ attack surface enumeration        │
             //|| \\  │ version: 2.2                      │
            // ||. \\ └──────────────────@dreizehnutters──┘
          .//  ||   \\ .
          //  .      \\ 

[*] grepping open ports per host from 'nmap-nmap-192_168_0_16-24-1709032454-up_hosts_txt-1709032501/*.xml'

    __    __  __       _  __
   / /_  / /_/ /_____ | |/ /
  / __ \/ __/ __/ __ \|   /
 / / / / /_/ /_/ /_/ /   |
/_/ /_/\__/\__/ .___/_/|_|
             /_/

                projectdiscovery.io

[INF] Current httpx version v1.3.7 (outdated)
http://192.168.0.14:10074 [FAILED]
http://192.168.0.14:10074 [FAILED]
http://192.168.0.14:10074 [FAILED]
http://192.168.0.14:10074 [FAILED]
https://192.168.0.14:10074 [FAILED]
https://192.168.0.14:10074 [FAILED]
https://192.168.0.14:10074 [FAILED]
http://192.168.0.14:10074 [FAILED]
http://192.168.0.14:10074 [FAILED]
http://192.168.0.14:10074 [FAILED]
https://192.168.0.14:10074 [FAILED]
https://192.168.0.16:3389 [SUCCESS] [200] [GET] [7293] [updog - /tmp/ws] [Werkzeug/3.0.1 Python/3.11.8]

[...]

[*] found 3 targets
https://192.168.0.16:3389 [Directory: /tmp/ws Choose a file… Upload Name Size Last Modified nmap-nmap-192_168_0_16-24-170903]                                                                                                             
https://192.168.0.1 []
https://192.168.0.1 []
[$] enjoy

This will generate the following working directory:

➜  ws tree vide_runs
vide_runs
└── vide_27.02_06481709034510
    ├── host_port.txt
    ├── http_servers.txt
    ├── https_servers.txt
    ├── httpx
    │   └── scan.log
    ├── vide.log
    └── vide_targets.txt

4 directories, 7 files
  2. Running testssl.sh

Starting testssl.sh with all my favourite options via vide.sh:

➜ vide.sh ssl-services.txt -el
[?] working dir -> '/tmp/ws/vide_runs/vide_27.02_06391709033955'
        _______________
    ==c(___(o(______(_()
              \=\
               )=\    ┌───────────────────────────~vide~──┐
              //|\\   │ attack surface enumeration        │
             //|| \\  │ version: 2.2                      │
            // ||. \\ └──────────────────@dreizehnutters──┘
          .//  ||   \\ .
          //  .      \\ 

[*] working on 3 targets
[*] ssl.sh:
[?] no protocol handler found defaulting to http
        [1/3] testssl.sh scan 192.168.0.1:443

###########################################################
    testssl.sh       3.2rc3 from https://testssl.sh/dev/

      This program is free software. Distribution and
             modification under GPLv2 permitted.
      USAGE w/o ANY WARRANTY. USE IT AT YOUR OWN RISK!

       Please file bugs @ https://testssl.sh/bugs/

###########################################################

[...]

[!] some errors while scanning targets were encountered (/tmp/ws/vide_runs/vide_27.02_06521709034770/error_on_testssl_scan.txt).
192.168.0.4:25801
[$] enjoy

If any SSL/TLS targets throw an error (hello, STARTTLS), you will get notified.

  3. Getting a picture of all web servers

This lets me view each website without having to manually browse to each one.

➜ cat vide_runs/*/htt*_servers.txt > webservers.txt
➜ vide.sh webservers.txt -sc -sp -es
[?] working dir -> '/tmp/ws/vide_runs/vide_27.02_06491709034593'
        _______________
    ==c(___(o(______(_()
              \=\
               )=\    ┌───────────────────────────~vide~──┐
              //|\\   │ attack surface enumeration        │
             //|| \\  │ version: 2.2                      │
            // ||. \\ └──────────────────@dreizehnutters──┘
          .//  ||   \\ .
          //  .      \\ 

[*] working on 5 targets
[*] screenshot.sh gathering:
[$] enjoy
➜ firefox vide_runs/vide_27.02_06491709034593/screenshots/screenshot/screenshot.html
  4. Running nuclei scans

Getting more low-hanging fruit with templated scans.

➜ cat vide_runs/vide_27.02_06481709034510/http_servers.txt
http://192.168.0.1
http://192.168.0.4:9090
➜ vide.sh vide_runs/vide_27.02_06481709034510/http_servers.txt -sp -sc -eu
[?] working dir -> '/tmp/ws/vide_runs/vide_27.02_09191709043587'
        _______________
    ==c(___(o(______(_()
              \=\
               )=\    ┌───────────────────────────~vide~──┐
              //|\\   │ attack surface enumeration        │
             //|| \\  │ version: 2.2                      │
            // ||. \\ └──────────────────@dreizehnutters──┘
          .//  ||   \\ .
          //  .      \\ 

[*] working on 2 targets
[*] nuclei.sh scans:
        [1/2] full template scan of http://192.168.0.1

                     __     _
   ____  __  _______/ /__  (_)
  / __ \/ / / / ___/ / _ \/ /
 / / / / /_/ / /__/ /  __/ /
/_/ /_/\__,_/\___/_/\___/_/   v3.1.3

                projectdiscovery.io

[INF] Your current nuclei-templates v9.7.5 are outdated. Latest is v9.7.6
[http-missing-security-headers:strict-transport-security] [http] [info] http://192.168.0.1
[http-missing-security-headers:x-permitted-cross-domain-policies] [http] [info] http://192.168.0.1
[http-missing-security-headers:referrer-policy] [http] [info] http://192.168.0.1
[http-missing-security-headers:cross-origin-embedder-policy] [http] [info] http://192.168.0.1
[...]

Conclusion

Automation plays a vital role in enhancing the efficiency and effectiveness of internal vulnerability assessments. By automating the first steps of my assessment process with these scripts, I've streamlined the initial phase and freed up time to focus on critical analysis and remediation tasks. The next logical step would be to pour the highlighted steps themselves into a single script that orchestrates the whole run. Depending on your reporting framework, the gathered information should also be automatically digested/imported to cut down the time spent writing the report.
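
As a rough idea of what such an orchestration could look like, here is a hypothetical driver; the script paths, flags, and directory-naming assumptions are taken from the usage examples above and may need adjusting to your environment:

#!/usr/bin/python3
# Hypothetical orchestration sketch chaining the scripts described above.
# Script paths, flags and the directory naming scheme are assumptions based
# on the usage examples and are not a drop-in solution.
import subprocess
import sys
from glob import glob

def run(cmd):
    print(f"[>] {' '.join(cmd)}")
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    target = sys.argv[1]  # CIDR range or file with target hosts

    run(["sudo", "./snmap.sh", target, "--check"])        # host discovery
    up_hosts = sorted(glob("nmap-*/up_hosts.txt"))[-1]    # newest discovery run
    run(["sudo", "./snmap.sh", up_hosts])                 # staggered TCP/UDP/version scans
    scan_dir = sorted(glob("nmap-nmap-*"))[-1]            # newest scan directory
    run(["./nmap2csv.sh", scan_dir])                      # CSV export via msfdb
    run(["./nmap2svc.py", "-i", scan_dir, "ssl"])         # collect SSL/TLS sockets
    run(["./vide.sh", scan_dir, "-sc"])                   # identify web servers
    run(["./vide.sh", "ssl-services.txt", "-el"])         # testssl.sh compliance scans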
