
@mbrownnycnyc
mbrownnycnyc / blazor.md
Created Dec 19, 2020
Blazor tutorial notes

blazor

why?

I know C# but always had trouble investing the time to learn a full web framework and all the internals of building a web site. That didn't stop me from having ideas for sites, even simple ones.

This is my write-up of notes extending the MSFT Blazor tutorial: https://dotnet.microsoft.com/learn/aspnet/blazor-tutorial/run

cautions

  • when attempting to `dotnet run` the Blazor site, I discovered Kestrel (ASP.NET Core's built-in web server) wouldn't bind to TCP port 5000, the port configured in ./BlazorApp/Properties/launchSettings.json.
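Until the port conflict is sorted out, Kestrel's listen URL can be overridden at launch instead of editing launchSettings.json; a minimal sketch (port 5001 is just an arbitrary free port, not anything the tutorial prescribes):

```shell
# Override the applicationUrl from launchSettings.json at run time;
# Kestrel binds to 5001 instead of the conflicting 5000.
dotnet run --urls "http://localhost:5001"
```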
@mbrownnycnyc
mbrownnycnyc / kali_wsl_2.md
Last active Oct 17, 2020
a working Kali in WSL. Why? Kali in vmware player took three times as long to complete a `-p- -A -sC` nmap scan of vulnversity on tryhackme. I'm hoping this WSL 2 is faster. If it isn't, then I will try WSL 1, and if that fails, then I will just build a box on Digital Ocean.
mbrownnycnyc / splk_app_inspect_api.ps1
$body = @{
    "username" = "splksso@login.com"
    "password" = "password"
}
$LoginResponse = Invoke-WebRequest 'https://api.splunk.com/2.0/rest/login/splunk' -SessionVariable 'Session' -Body $body -Method 'POST'
$Session
$headers = @{
mbrownnycnyc / splk_packaging_toolkit_build.md
yum -y update
reboot
yum -y install wget
wget https://bootstrap.pypa.io/get-pip.py
python get-pip.py
pip install virtualenv
mkdir ~/virtualenvs
cd ~/virtualenvs
virtualenv slim
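The commands above create the `slim` virtualenv but never activate it; a sketch of the remaining steps, assuming the Splunk packaging toolkit archive has already been downloaded (the tarball filename below is a placeholder):

```shell
# Activate the virtualenv so pip installs into it rather than the system Python.
source ~/virtualenvs/slim/bin/activate
# Install the packaging toolkit from the downloaded archive
# (splunk-packaging-toolkit.tar.gz is a placeholder filename).
pip install splunk-packaging-toolkit.tar.gz
# Leave the virtualenv when done.
deactivate
```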
mbrownnycnyc / container.json
{"phases": [], "container": {"node_guid": null, "in_case": false, "sensitivity": "amber", "create_time": "2019-06-13T18:28:36.836633Z", "tenant_id": 0, "role_id": null, "id": 105, "custom_fields": {}, "asset_id": null, "close_time": null, "open_time": "2019-06-13T18:30:53.840601Z", "status_id": 2, "container_type": "default", "closing_owner_id": null, "current_phase_id": null, "due_time": "2019-06-14T06:27:45.276000Z", "version": 1, "workflow_name": "", "owner_id": 1, "status": "open", "owner_name": null, "hash": "9e4458b9791d28101e5b3c1788fce582", "description": "A file download has been detected by network scan", "tags": [], "start_time": "2019-06-13T18:28:36.846066Z", "severity_id": "medium", "kill_chain": null, "artifact_update_time": "2019-06-13T18:32:15.408501Z", "artifact_count": 5, "parent_container_id": null, "data": {}, "name": "File Downloaded by HTTP", "ingest_app_id": null, "label": "events", "source_data_identifier": "e76431b6-c725-4981-9703-d27e0374693c", "end_time": null, "closing_rule_run_id"
@mbrownnycnyc
mbrownnycnyc / splk_resetpassword.sh
Created May 26, 2020
reset a local user password for Splunk (quick ref for sysadmin class)
export SPLUNK_HOME=/opt/home/matt_b/splunkforwarder
$SPLUNK_HOME/bin/splunk stop
rm -f $SPLUNK_HOME/etc/passwd
echo "[user_info]" > $SPLUNK_HOME/etc/system/local/user-seed.conf
echo "USERNAME = admin" >> $SPLUNK_HOME/etc/system/local/user-seed.conf
echo "PASSWORD = NEW_PASSWORD" >> $SPLUNK_HOME/etc/system/local/user-seed.conf
$SPLUNK_HOME/bin/splunk start
# then change the password, authenticating with the seeded one:
$SPLUNK_HOME/bin/splunk edit user admin -password 'splunk3du' -auth admin:NEW_PASSWORD
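The three echo lines can also be collapsed into a single heredoc, which sidesteps the quoting pitfalls of unquoted echo arguments; a sketch using a scratch SPLUNK_HOME so the paths are illustrative:

```shell
# Scratch path for illustration; use the real install path
# (e.g. /opt/splunkforwarder) on an actual forwarder.
SPLUNK_HOME=/tmp/splunkforwarder
mkdir -p "$SPLUNK_HOME/etc/system/local"
# Write the whole user-seed.conf stanza in one shot.
cat > "$SPLUNK_HOME/etc/system/local/user-seed.conf" <<'EOF'
[user_info]
USERNAME = admin
PASSWORD = NEW_PASSWORD
EOF
```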
@mbrownnycnyc
mbrownnycnyc / splk_backpressure.md
Last active May 27, 2020
info on the backpressure mechanism for Splunk forwarders. You should _NOT_ usually change these settings, but may consider it when dealing with extremely high-volume data sources (such as a UF on a syslog server).

https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Protectagainstlossofin-flightdata

How can data loss be avoided?

UDP data sources should be converted to TCP so that transport-level delivery is reliable. Additionally, forwarders and indexers may be configured to exchange application-level ACKs.

splunkd delivers data as follows:

  • Data is sent in blocks of 64 KB.
  • By default the forwarder does not wait for, and the indexer is not signaled to send, an ACK on block receipt.
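Those application-level ACKs are enabled with useACK on the forwarder's tcpout group in outputs.conf; a minimal sketch (the group name and indexer addresses are placeholders):

```
# outputs.conf on the forwarder
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
# ask the indexer to acknowledge each 64 KB block
useACK = true
```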
@mbrownnycnyc
mbrownnycnyc / dfs_backlog.wsf
Created Apr 10, 2020
DFS backlog checker from an old blog post I wrote (combining two MSFT scripts into one)... I know, I know... I can get this info other ways, but not as
<?xml version="1.0" standalone="yes" ?>
<job id="GetBacklog">
<runtime>
<description>
This script uses the DFSR WMI provider to obtain
replication backlog information between two servers.
</description>
<named
name="ReplicationGroupName"