proc - a process information pseudo-file system which is used as an interface to kernel data structures
loadavg describes the system load: the first three fields are load averages over the last 1, 5, and 15 minutes, the fourth is the number of currently runnable kernel scheduling entities over the total number of entities, and the fifth is the PID of the most recently created process
> cat /proc/loadavg
0.01 0.06 0.11 2/442 4114
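The fields can be pulled apart in a few lines of shell. This sketch parses the sample line shown above; on a live Linux host you could pipe `cat /proc/loadavg` in instead:

```shell
# Parse the five fields of /proc/loadavg.
parse_loadavg() {
  read -r one five fifteen procs lastpid
  echo "load averages (1/5/15 min): $one $five $fifteen"
  echo "runnable/total entities: $procs"
  echo "most recently created PID: $lastpid"
}

# Using the sample line from above:
echo "0.01 0.06 0.11 2/442 4114" | parse_loadavg
```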
Common Vulnerabilities and Exposures (CVEs) are publicly disclosed security vulnerabilities found in common packages and operating systems used by our Docker images
When a CVE is disclosed, we need to fix the respective Dockerfile so that it uses an up-to-date version of the package/OS in which the vulnerability has been patched
CVEs are discovered constantly, which means patching them in our Dockerfiles will be a routine task
Fortunately, the fix may be as simple as rebuilding the image from the respective Dockerfile so that updated base layers and packages are pulled in
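For illustration, the fix often amounts to bumping the base image to a patched tag and refreshing OS packages at build time; the image name and tag below are hypothetical:

```dockerfile
# Hypothetical fix: move to a base tag that ships the patched package,
# and upgrade OS packages during the build.
FROM debian:12.5-slim
RUN apt-get update \
    && apt-get upgrade -y \
    && rm -rf /var/lib/apt/lists/*
```

Rebuilding with `docker build --pull --no-cache .` ensures the updated base layers are actually fetched rather than served from the local cache.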
Trufflehog searches through git repositories for high-entropy strings and secrets, digging deep into commit history
How it works: Trufflehog walks the entire commit history of each branch and checks every diff of every commit for secrets, both by regex and by entropy. For the entropy checks, Trufflehog computes the Shannon entropy, over both the base64 and hexadecimal character sets, of every blob of text longer than 20 characters that is composed of those character sets in each diff. If a high-entropy string over 20 characters is detected, it is printed to the screen
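The entropy half of that check can be sketched in a few lines of shell. This computes the Shannon entropy of a string in bits per character using awk — a simplification of what Trufflehog does internally, not its actual implementation:

```shell
# Shannon entropy of a string, in bits per character.
entropy() {
  printf '%s\n' "$1" | awk '
    {
      # Tally how often each character occurs.
      n = length($0)
      for (i = 1; i <= n; i++) count[substr($0, i, 1)]++
      total += n
    }
    END {
      # H = -sum(p * log2(p)) over the observed character frequencies.
      h = 0
      for (c in count) {
        p = count[c] / total
        h -= p * log(p) / log(2)
      }
      printf "%.2f\n", h
    }'
}

entropy "aaaaaaaaaaaaaaaaaaaa"   # a uniform string scores 0.00
entropy "AKIAIOSFODNN7EXAMPLE"   # a mixed string scores noticeably higher
```

A random base64 secret approaches 6 bits per character, while English prose sits far lower, which is why an entropy threshold separates the two reasonably well.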
+ Effective at finding secrets accidentally committed
+ Relatively easy to shove into a devops pipeline
+ Custom regexes can be added (things like s3 bucket detection)
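For the custom regexes, v2-era Trufflehog accepts a JSON file of named patterns via a `--rules` flag; a hypothetical rules file for S3 bucket detection might look like:

```json
{
  "S3 bucket URL": "[a-z0-9.-]+\\.s3\\.amazonaws\\.com"
}
```

It would then be passed on the command line, e.g. `trufflehog --regex --rules rules.json <git-url>`.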
I hereby claim:
To claim this, I am signing this object:
I took the following steps to enable HTTPS on stage.100.ucla.edu via SSL certificate installation. Use this guide as a reference for doing the same on the production server
Step 1: Place 100_ucla_edu_cert.cer and 100_ucla_edu_interm.cer on the production server (via SFTP, SCP, etc.) in any location
Step 2: Generate the chained certificate. Note the order here: the intermediate needs to come second!
$ cat 100_ucla_edu_cert.cer 100_ucla_edu_interm.cer >> certbundle.pem
Note: you will need to make sure certbundle.pem contains a line break between the two concatenated certificates. If the first file does not end with a newline, add one manually so the `END CERTIFICATE` and `BEGIN CERTIFICATE` markers land on separate lines
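One way to avoid that manual fix-up is to emit the separating newline explicitly while bundling. A sketch (the helper name is ours, not part of any tool):

```shell
# Bundle a certificate and its intermediate, guaranteeing a newline
# between them even if the first file lacks a trailing one.
bundle_certs() {
  cat "$1"
  printf '\n'
  cat "$2"
}

# Usage:
# bundle_certs 100_ucla_edu_cert.cer 100_ucla_edu_interm.cer > certbundle.pem
```

The extra blank line between two PEM blocks is harmless, whereas a missing newline corrupts the bundle.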
We need to reliably handle new, dynamic data that is generated on a continual basis and present it in real-time. A scooter's location could change in a matter of seconds - we need to be able to detect that change with minimal latency in order to provide accurate asset tracking.
Time gives our data meaning. Therefore, the raw data needs to be processed sequentially and incrementally over sliding time windows.
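As a toy illustration of windowed processing — using tumbling 10-second windows as a simplification of sliding windows, and a made-up input format of `<epoch_seconds> <scooter_id>` lines — events can be bucketed per window with awk:

```shell
# Count events per 10-second window. Each input line is
# "<epoch_seconds> <scooter_id>" (a hypothetical format).
window_counts() {
  awk '{ w = int($1 / 10) * 10; count[w]++ }
       END { for (w in count) print w, count[w] }' | sort -n
}

# Three pings: two fall in the [100,110) window, one in [110,120).
printf '%s\n' "100 s1" "104 s2" "112 s1" | window_counts
```

A real deployment would do this incrementally in a stream processor rather than batch-wise over a file, but the bucketing logic is the same.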
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "<AWS_ACCOUNT_ID>.dkr.ecr.<AWS_REGION>.amazonaws.com/<NAME>:<TAG>",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "<CONTAINER_PORT>"
    }
  ]
}