Robots policy for GitHub projects

This is a sketch of a proposal for a "robots.txt for GitHub" -- a policy that defines what actions automated tooling may take against a given repository.

Identification

Bots self-identify, and use project/repo-style naming. So code that lives at https://github.com/jacobian/coolbot identifies as jacobian/coolbot. Forks should generally use the upstream identifier until/unless they become different enough to warrant new names. This is a matter of judgement.

Policy file location: .github/robots.yml
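
To make this concrete, here's a hypothetical sketch of what .github/robots.yml could look like. Every key name below is invented for illustration; the proposal doesn't define a schema yet.

# .github/robots.yml -- hypothetical sketch; all key names are invented
bots:
  # bots identify with project/repo-style names, as described above
  jacobian/coolbot:
    allow: [open-pull-requests, comment]
  "*":
    deny: [all]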

jacobian / datasette-to-heroku-docker.sh
Created March 15, 2021 13:00
How to deploy Datasette to Heroku using Docker
# create a Heroku app on the container stack
heroku create {app} --stack container
# build a Docker image of the Datasette app (pass your .db files to `datasette package`)
datasette package --tag registry.heroku.com/{app}/web
# push the image (requires `heroku container:login` first), then release it
docker push registry.heroku.com/{app}/web
heroku container:release -a {app} web
"I feel like I belong on this team."
"On this team, I can voice a contrary opinion without fear of negative consequences."
"On this team, perspectives like mine are included in decision making."
"On this team, administrative or clerical tasks that don’t have a specific owner are fairly divided."
"People on this team accept others who are different."
"It is easy to ask other members of this team for help."
"On this team, messengers are not punished when they deliver news of failures or other bad news."
"On this team, responsibilities are shared."
"On this team, cross-functional collaboration is encouraged and rewarded."
"On this team, failure causes inquiry."
"""
Take 2 - trying to minimize jump stitches
Stitch a row \ / \ /, then back
"""
import itertools
from collections import namedtuple

import click
import pyembroidery as em
pattern = em.EmbPattern()
# units are in 1/10mm
# max size in DST is 12mm so if we go bigger need to fuck with max_stitch
SIZE = 32
# start with a stitch at the origin to get the needle down (see the docs)
pattern.stitch_abs(0, 0)
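
A hedged continuation of the sketch: one guess at the "\ / \ /" row the docstring describes, walking back along the baseline to avoid a long jump. COLS and the output filename are made up.

# hypothetical continuation -- COLS and the filename are invented
COLS = 10
for i in range(1, COLS + 1):
    # alternate between the baseline and SIZE to get the \ / \ / row
    pattern.stitch_abs(i * SIZE, SIZE if i % 2 else 0)
# walk back along the baseline in SIZE-length steps, avoiding a long jump
for i in range(COLS - 1, -1, -1):
    pattern.stitch_abs(i * SIZE, 0)
pattern.end()
em.write_dst(pattern, "zigzag.dst")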
# https://stackoverflow.com/questions/53139643/django-postgres-array-field-count-number-of-overlaps
# !!! DOESN'T WORK but might with some more poking?
from django.contrib.postgres.fields import ArrayField
from django.db import models


class Article(models.Model):
    keywords = ArrayField(models.CharField(max_length=100))

    def __str__(self):
        return f"<Article {self.id} keywords={self.keywords}>"
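
One avenue for the "more poking", sketched with RawSQL: the SQL and the intersect-and-count approach are my assumption, not a known-working answer to the Stack Overflow question.

from django.db.models.expressions import RawSQL

# hypothetical query: count how many of the given keywords each Article
# shares, by intersecting the unnested arrays in Postgres
wanted = ["django", "postgres"]  # illustrative keywords
articles = Article.objects.annotate(
    overlap=RawSQL(
        "cardinality(ARRAY(SELECT unnest(keywords) "
        "INTERSECT SELECT unnest(%s::varchar[])))",
        (wanted,),
    )
).order_by("-overlap")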
<html>
<head>
<link rel="stylesheet" href="reveal.js/css/reveal.css">
<link rel="stylesheet" href="reveal.js/css/theme/white.css">
</head>
<body>
<div class="reveal">
<div class="slides">
<section data-markdown="slides.md"
data-charset="utf-8">
</section>
</div>
</div>
<!-- closing tags and init reconstructed; assumes a reveal.js 3.x checkout -->
<script src="reveal.js/js/reveal.js"></script>
<script>
Reveal.initialize({
  dependencies: [
    { src: "reveal.js/plugin/markdown/marked.js" },
    { src: "reveal.js/plugin/markdown/markdown.js" }
  ]
});
</script>
</body>
</html>
jacobian / security hardness 2.md
Last active December 3, 2016 05:10
Security Hardness - another idea

This is a draft "security hardness scale", designed to roughly quantify the level of effort of a penetration test -- since simply measuring "how many vulns did you find" is a terrible measure of success. The goal is to measure the "hardness" of the system under test in a way that's a bit quantitative.

The result is a score from 1-10. The scale is inspired by the Mohs Hardness Scale in that it's simply an ordinal scale, not an absolute one. That is, the "gap" between 3 and 4 doesn't have to be the same "difficulty increase" as the gap between 5 and 6. It's simply a way of rating that one pentest was "harder" than another. (This is in lieu of being able to measure "hardness" in any truly quantitative way.)

Instructions:

jacobian / security hardness.md
Created December 2, 2016 21:56
Security Hardness Scale

This is a draft "security hardness scale", designed to somewhat roughly quantify the level of effort of a penetration test -- since simply measuring "how many vulns did you find" is a terrible measurement of success.

The scale is similar to the Mohs Hardness Scale in that it's simply an ordinal scale, not an absolute one. That is, the "gap" between 3 and 4 doesn't have to be the same "difficulty increase" as the gap between 5 and 6. It's simply a way of rating that one pentest was "harder" than another. (This is in lieu of being able to measure "hardness" in any truly quantitative way.)