This is the reference point. All the other options are based on this.
|-- app
|   |-- controllers
|   |   |-- admin
#!/usr/bin/env python
import threading
import subprocess
import traceback
import shlex


class Command(object):
    """
    Run subprocess commands in a separate thread with a TIMEOUT option.
    """
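The rest of the class is cut off above. Below is a minimal sketch of how such a runner might be completed, assuming a run(timeout) method that launches the process in a worker thread and terminates it when the timeout expires; the method and attribute names are illustrative, not the original author's.

import shlex
import subprocess
import threading
import traceback


class Command(object):
    """Run a command in a worker thread; terminate it if it exceeds the timeout."""

    def __init__(self, command):
        # Accept either a string or an already-split argument list.
        self.command = shlex.split(command) if isinstance(command, str) else command
        self.process = None
        self.output = None
        self.error = None

    def run(self, timeout=None):
        def target():
            try:
                self.process = subprocess.Popen(
                    self.command, stdout=subprocess.PIPE, stderr=subprocess.PIPE
                )
                self.output, self.error = self.process.communicate()
            except Exception:
                self.error = traceback.format_exc()

        thread = threading.Thread(target=target)
        thread.start()
        thread.join(timeout)
        if thread.is_alive():
            # Timed out: kill the child and wait for the worker thread to finish.
            self.process.terminate()
            thread.join()
        return self.process.returncode if self.process else None


# Example: the sleep is killed after one second and a non-zero return code comes back.
# print(Command("sleep 5").run(timeout=1))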
# This is a very crude example of using the Repository Pattern with SQLAlchemy. It allows me to completely ignore interactions with
# the database. This is only pulled in whenever I need to persist or retrieve an object from the database. The domain/business
# logic is entirely separated from persistence, and I can have true unit tests for those.
# The tests for persistence are then limited to very specific cases of persisting and retrieving instances, and I can do those
# independent of the business logic. They also tend to be fewer tests, since I only need to test them once.
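As a minimal, illustrative sketch of that split (assuming SQLAlchemy 1.4+; the User and UserRepository names are mine, not the original code): the domain object knows nothing about the database, and only the repository touches the session.

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()


class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String, nullable=False)


class UserRepository(object):
    """All persistence for User goes through here; business logic never sees the session."""

    def __init__(self, session):
        self.session = session

    def add(self, user):
        self.session.add(user)
        self.session.commit()

    def get_by_name(self, name):
        return self.session.query(User).filter_by(name=name).first()


if __name__ == "__main__":
    engine = create_engine("sqlite:///:memory:")
    Base.metadata.create_all(engine)
    repo = UserRepository(sessionmaker(bind=engine)())
    repo.add(User(name="alice"))
    print(repo.get_by_name("alice").name)

Business-logic tests can then pass in a fake repository, while the few persistence tests exercise the real one against an in-memory SQLite engine.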
git add HISTORY.md
git commit -m "Changelog for upcoming release 0.1.1."
bumpversion patch
#!/bin/bash
# generate new personal ed25519 ssh key
ssh-keygen -o -a 100 -t ed25519 -f ~/.ssh/id_ed25519 -C "rob thijssen <rthijssen@gmail.com>"

# generate new host cert authority (host_ca) ed25519 ssh key
# used for signing host keys and creating host certs
ssh-keygen -t ed25519 -f manta_host_ca -C manta.network

eval "$(ssh-agent -s)"
Working with SCAP is daunting. I'm in the "Shu" stage of "Shu Ha Ri": SCAP is running, but only because I am following specific directions. There are hundreds of selected controls for SSG and STIG using SCAP. The basic run only passes about half of the tests, and many tests are not even selected.
Breaking the STIG down would be helpful. For example, there are only 17 "Severity: High" tests. Wouldn't it make sense to have a test file that tests only for those 17?
What I'm trying to do is create a simpler version of a STIG: one that tests only a single control, or only the 17 "high" severity tests. I could of course manually pull out these tests, and I may do that. But a smarter approach would be to programmatically build a small subset from the published source material. That way, I'm extracting the controls with code rather than by hand.
I'm committed to acceptance test driven development. So I want the extracted controls. I need to write code to extract the controls. I need a way of testing my extraction.
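As a sketch of what that programmatic extraction could look like, assuming the published content is an XCCDF 1.2 benchmark in which each Rule element carries a severity attribute (the input filename below is only an example, not a reference to a specific SSG file):

# Sketch: pull the "high" severity rule ids out of an XCCDF benchmark.
import xml.etree.ElementTree as ET

XCCDF_RULE = "{http://checklists.nist.gov/xccdf/1.2}Rule"


def high_severity_rule_ids(xccdf_path):
    # Walk every Rule in the benchmark and keep only the high-severity ones.
    tree = ET.parse(xccdf_path)
    return [
        rule.get("id")
        for rule in tree.getroot().iter(XCCDF_RULE)
        if rule.get("severity") == "high"
    ]


if __name__ == "__main__":
    for rule_id in high_severity_rule_ids("example-xccdf.xml"):
        print(rule_id)

The returned rule ids could then feed a tailoring file or a trimmed benchmark containing only those controls.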
# A simple generator wrapper, not sure if it's good for anything at all.
# With basic python threading
from threading import Thread
try:
    from queue import Queue          # Python 3
except ImportError:
    from Queue import Queue          # Python 2 fallback
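The wrapper itself is missing above. A minimal sketch of what a threaded generator wrapper along these lines might look like, reusing the Thread and Queue imported above (the class name and queue size are illustrative): a background thread eagerly pulls items from the wrapped iterator into a bounded queue, and a sentinel marks the end.

class ThreadedGenerator(object):
    """Consume an iterator in a background thread, handing items to the caller via a queue."""

    _SENTINEL = object()

    def __init__(self, iterator, queue_size=16):
        self.queue = Queue(maxsize=queue_size)
        self.thread = Thread(target=self._fill, args=(iterator,))
        self.thread.daemon = True

    def _fill(self, iterator):
        # Producer: runs in the worker thread.
        for item in iterator:
            self.queue.put(item)
        self.queue.put(self._SENTINEL)

    def __iter__(self):
        # Consumer: runs in the calling thread.
        self.thread.start()
        while True:
            item = self.queue.get()
            if item is self._SENTINEL:
                break
            yield item
        self.thread.join()


# Example: the range is produced in the worker thread while the caller iterates.
# for item in ThreadedGenerator(range(5)):
#     print(item)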
You might want to read this to get an introduction to armel vs armhf.
If the below is too much, you can try Ubuntu-ARMv7-Qemu, but note that it contains non-free blobs.
First, cross-compile user programs with the GCC-ARM toolchain. Then install qemu-arm-static
so that you can run ARM executables directly on Linux.
import shutil, tempfile
from os import path
import unittest


class TestExample(unittest.TestCase):

    def setUp(self):
        # Create a temporary directory
        self.test_dir = tempfile.mkdtemp()

    def tearDown(self):
        # Remove the directory after the test
        shutil.rmtree(self.test_dir)