
pdxjohnny / 2ndparty.diff
Created Apr 22, 2021
dffml: 2ndparty: Start on CI for main package
diff --git a/.ci/run.sh b/.ci/run.sh
index d056bdb78..3ea8962ec 100755
--- a/.ci/run.sh
+++ b/.ci/run.sh
@@ -194,8 +194,36 @@ function run_docs() {
export GIT_SSH_COMMAND='ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no'
cd "${SRC_ROOT}"
+ # TODO Stop using ~/.local here, make all pip related commands use venv
"${PYTHON}" -m pip install --prefix=~/.local -U -e "${SRC_ROOT}[dev]"
pdxjohnny / README.rst
Created Apr 16, 2021
consoletest Sphinx extension talk demo file

consoletest Sphinx Extension Demo
=================================

Show that we're in a temporary directory

pdxjohnny / a.py
Created Apr 2, 2021
DFFML Issue 831. Need to make it so that config properties can be saved / loaded from file
import json
from dffml import *
from dffml.noasync import train, accuracy
SLRModel = Model.load("slr")
LinearRegressionModel = Model.load("scikitlr")
model1 = SLRModel(
    features=Features(Feature("Years", int, 1)),
    predict=Feature("Salary", int, 1),
pdxjohnny / dffml_source_csv.diff
Last active Mar 23, 2021
source: csv: Combine multiple columns into Record.key (base commit 4db8d9da1fa11845c9522dc6ae8dc3d646de03dc)
diff --git a/dffml/source/csv.py b/dffml/source/csv.py
index e5f54f75..165acdc1 100644
--- a/dffml/source/csv.py
+++ b/dffml/source/csv.py
@@ -14,7 +14,7 @@ from contextlib import asynccontextmanager
from ..record import Record
from .memory import MemorySource
from .file import FileSource, FileSourceConfig
-from ..base import config
+from ..base import config, field
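The diff adds a `field` import so the CSV source's config can grow an option naming the columns to combine. A rough sketch of the underlying idea in plain Python (this is not the real `CSVSource` code; the function and variable names are illustrative):

```python
import csv
import io

# Sketch of the idea in the diff: join the values of several configured
# columns to form each Record's key, leaving the rest as features.
def records_with_combined_key(text, key_columns, sep="."):
    for row in csv.DictReader(io.StringIO(text)):
        key = sep.join(row.pop(column) for column in key_columns)
        yield key, row

data = "state,city,population\nOR,Portland,652503\nOR,Salem,175535\n"
for key, features in records_with_combined_key(data, ["state", "city"]):
    print(key, features)
```

Popping the key columns out of the row keeps the key components from being duplicated as features.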
pdxjohnny / us_census_oregon.py
Created Mar 23, 2021
start on dataset_source() for ice cream demo / census data by city
"""
This file is an example of how one might use the dataset_source() decorator to
create a new cached dataset as a source.
Whenever you see BEGIN, that's meant to be a new section, which could be a new
file. You can split them into their own files if you want, just make sure to
import from other files as needed.
"""
# BEGIN
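The caching behavior the docstring describes can be sketched with only the standard library. This is a hypothetical stand-in, not the real DFFML `dataset_source()` API; every name, path, and the CSV payload here are illustrative:

```python
import functools
import pathlib
import tempfile

# Hypothetical stand-in for the dataset_source() idea: cache a generated
# dataset on disk and hand back the cached path on later calls.
def dataset_source(name):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(cache_dir=None):
            cache_dir = pathlib.Path(cache_dir or tempfile.gettempdir())
            cache_file = cache_dir / f"{name}.csv"
            if not cache_file.exists():
                # First call: generate (or download) and cache the dataset
                cache_file.write_text(func())
            return cache_file
        return wrapper
    return decorator

@dataset_source("us_census_oregon_demo")
def fetch():
    return "city,population\nPortland,652503\n"

path = fetch(cache_dir=tempfile.mkdtemp())
print(path.read_text().splitlines()[0])  # city,population
```

On the second call with the same cache directory, the decorated function is not invoked again; the cached file is returned directly.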
pdxjohnny / python.rst
Last active Mar 16, 2021
Updating HTTP service example to consoletest

Python
======

This is an example of how you can use the web API from Python.

To test this:

$ python -m dffml.util.testing.consoletest docs/python.rst
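As a stand-in for the service the gist targets, here is a minimal, runnable sketch of calling a web API from Python with only the standard library (the `/status` endpoint and its JSON payload are made up for illustration, not part of the DFFML HTTP service):

```python
import http.server
import json
import threading
import urllib.request

# Tiny stand-in server so the client call below has something to talk to.
class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep request logging quiet

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client side: plain urllib, no third-party dependencies.
with urllib.request.urlopen(
    f"http://127.0.0.1:{server.server_port}/status"
) as response:
    result = json.load(response)

print(result)  # {'status': 'ok'}
server.shutdown()
```

Binding to port 0 lets the OS pick a free port, which keeps the example safe to run repeatedly.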
pdxjohnny / gist:08370e7339f9da096a8acdbecbf7aa8f
Last active Mar 8, 2021
linux: storage: Adding new luks disk to LVM root
pdxjohnny@rza ~ $ sudo pvdisplay -v -m
Password:
Wiping internal VG cache
Wiping cache of LVM-capable devices
--- Physical volume ---
PV Name /dev/mapper/luks-3dc70587-e217-4c0c-ad62-a66602be2cf8
VG Name SolusSystem
PV Size 465.28 GiB / not usable 4.74 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
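For context, the usual sequence for adding a new LUKS-encrypted disk to an existing LVM volume group looks roughly like the following dry-run sketch. The device `/dev/sdb`, the mapper name, the root LV path, and the ext4 assumption are all illustrative; check each command against your own layout before running anything as root:

```shell
# Dry-run sketch: print each step instead of running it.
# VG name "SolusSystem" comes from the pvdisplay output above.
STEPS=(
  "cryptsetup luksFormat /dev/sdb"
  "cryptsetup open /dev/sdb luks-newdisk"
  "pvcreate /dev/mapper/luks-newdisk"
  "vgextend SolusSystem /dev/mapper/luks-newdisk"
  "lvextend -l +100%FREE /dev/SolusSystem/Root"
  "resize2fs /dev/SolusSystem/Root"
)
printf '%s\n' "${STEPS[@]}"
```

A matching entry in `/etc/crypttab` (and an initramfs rebuild) is also needed so the new disk unlocks at boot.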
pdxjohnny / gist:8046d912a8f914210c254155ababad83
Created Mar 7, 2021
httptests.Server as context manager with unittest.mock
def wrap_source_dataset_base_dataset_source(state):
    # Read the data from the CSV file from the function's docstring
    node = [
        node
        for node in parse_nodes(inspect.getdoc(state["obj"]))
        if node.options.get("filepath", "") == "my_training.csv"
    ][0]
    # Contents of the file to send to HTTP client
    contents = "\n".join(node.content).encode()
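A rough sketch of the pattern the title describes, with `unittest.mock` standing in for `httptests.Server`: a context manager that patches `urllib.request.urlopen` so code "downloading" the file during a test receives the docstring-derived bytes instead of hitting the network (the helper name `serve_contents` and the URL are illustrative):

```python
import contextlib
import io
import unittest.mock
import urllib.request

# The CSV contents that would come from the docstring in the snippet above
contents = b"Years,Salary\n1,40\n2,50\n"

@contextlib.contextmanager
def serve_contents(data):
    # While the patch is active, any urlopen() call returns these bytes
    with unittest.mock.patch.object(
        urllib.request, "urlopen", return_value=io.BytesIO(data)
    ):
        yield

with serve_contents(contents):
    downloaded = urllib.request.urlopen(
        "http://example.com/my_training.csv"
    ).read()

print(downloaded == contents)  # True
```

Patching instead of running a real server keeps the test hermetic; a real `Server` context manager would exercise the HTTP stack as well.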
pdxjohnny / gist:07ee65368fae1bf9d721f617b339aa09
Created Mar 7, 2021
Python create context manager from lambda
def wrap_source_dataset_base_dataset_source(state):
    # Read the data from the CSV file from the function's docstring
    node = [
        node
        for node in parse_nodes(inspect.getdoc(state["obj"]))
        if node.options.get("filepath", "") == "my_training.csv"
    ][0]
    # Contents of the file is a newline join of the list of lines in the file
    contents = "\n".join(node.content).encode()
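A minimal sketch of the technique in the title: since a `lambda` cannot contain `yield`, a tiny class can turn plain callables into a context manager instead (the class and argument names here are illustrative):

```python
# Sketch: enter/exit behavior is supplied as plain callables, so a lambda
# becomes a context manager without writing a full class each time.
class LambdaContextManager:
    def __init__(self, enter, exit=lambda: None):
        self.enter = enter
        self.exit = exit

    def __enter__(self):
        # Whatever the enter lambda returns becomes the "as" target
        return self.enter()

    def __exit__(self, exc_type, exc, traceback):
        self.exit()
        return False  # do not swallow exceptions

with LambdaContextManager(lambda: "resource", lambda: print("cleaned up")) as r:
    print(r)  # prints "resource", then "cleaned up" on exit
```

Returning `False` from `__exit__` makes sure exceptions raised inside the `with` block still propagate after the exit lambda runs.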
pdxjohnny / foremodel.py
import pathlib
from typing import AsyncIterator, Type
from sktime.datasets import load_airline
from sktime.forecasting.base import ForecastingHorizon
from sktime.forecasting.exp_smoothing import ExponentialSmoothing
from sktime.performance_metrics.forecasting import sMAPE, smape_loss
import pandas as pd
import numpy as np