Andrew Butcher (abutcher)

  • Red Hat
  • Raleigh, NC
CL-USER> (normalize-mod (weather-numerics))
#S(TABLE
   :NAME WEATHER-NUMERICS
   :COLUMNS (#S(DISCRETE
                :NAME FORECAST
                :CLASSP NIL
(defun xindex-tot (tbl)
  "Consolidates xindex hash tables into a single table."
  (let ((ht (make-hash-table :test #'equal)))
    (dolist (attr (table-columns (xindex tbl)))
      (if (numeric-p attr)
          (dohash (key value (numeric-f attr))
            (push (numeric-name attr) key)       ; prefix the key with the column name
            (setf (gethash key ht) value))
          (dohash (key value (discrete-f attr))
            (setf key (reverse key))
            ;; The gist preview cuts off here; by symmetry with the numeric
            ;; branch, the discrete branch presumably also prefixes the key
            ;; with the column name and stores the value (assumption):
            (push (discrete-name attr) key)
            (setf (gethash key ht) value))))
    ht))
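
DOHASH is not part of standard Common Lisp; the code above presumably relies on a small utility macro. A minimal sketch of such a macro, wrapping MAPHASH (the actual definition in the original codebase may differ):

;; Hypothetical helper, assumed by the snippet above: iterate over a hash
;; table, binding KEY and VALUE for each entry.
(defmacro dohash ((key value hash-table) &body body)
  `(maphash (lambda (,key ,value) ,@body) ,hash-table))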
The score being reported is the f-measure. The more rounds Which2 runs, the worse the f-measure, but the more stable the results.
The table below shows that the number of "lives" and the sample size do not matter, while a smaller beam size yields a better median f-measure (a sketch of the f-measure itself follows the table).
As is the case for most classifiers, the median performance varies greatly.
Rounds (Sorted by round)
1 Rounds, 3.0, 10.0, 16.0, 36.0, 100.0, [---- | +++++++++++++++++++++++++++++++++]
2 Rounds, 3.0, 8.0, 14.0, 34.0, 100.0, [--- | ++++++++++++++++++++++++++++++++++]
3 Rounds, 3.0, 7.0, 13.0, 32.0, 100.0, [--- | +++++++++++++++++++++++++++++++++++]
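
For reference, the f-measure reported above is the standard harmonic mean of precision and recall. A minimal sketch (not from the original gist) of how it could be computed from true-positive, false-positive, and false-negative counts:

;; Hypothetical helper: f-measure as the harmonic mean of precision and recall.
;; TP, FP and FN come from comparing Which2's predictions to the actual classes.
(defun f-measure (tp fp fn)
  (let ((precision (/ tp (+ tp fp)))
        (recall    (/ tp (+ tp fn))))
    (/ (* 2 precision recall)
       (+ precision recall))))

;; e.g. (f-measure 8 2 4) => 8/11, roughly 0.73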
CL-USER> (let ((heat-this-bitch-up (my-random-int most-positive-single-float)))
           (dotimes (n 1000000000)
             (setf heat-this-bitch-up
                   (* heat-this-bitch-up (my-random-int most-positive-single-float))))
           heat-this-bitch-up)
import csv
import random
import sys


class Node:
    """Binary tree node: children plus the data and variance stored here."""

    def __init__(self, data=None, variance=None):
        self.left = None          # left child, a Node or None
        self.right = None         # right child, a Node or None
        self.data = data          # payload held at this node
        self.variance = variance  # variance associated with this node's data
CL-USER> (load "compass")
T
CL-USER> (run-tests (remove 'china *datasets*) :normalize? T)
ALBRECHT
BestK WIN: 58 TIE: 0 LOSS: 42 MDMRE: 2.1433
K=16 WIN: 0 TIE: 100 LOSS: 0 MDMRE: 4.1794
K=8 WIN: 49 TIE: 0 LOSS: 51 MDMRE: 2.5077
K=4 WIN: 20 TIE: 25 LOSS: 55 MDMRE: 3.3546
K=2 WIN: 74 TIE: 0 LOSS: 26 MDMRE: 1.5181
K=1 WIN: 99 TIE: 0 LOSS: 1 MDMRE: .9564
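
MDMRE above is the median magnitude of relative error over the test cases. A minimal sketch of that metric, assuming parallel lists of actual and predicted efforts (not the original compass code):

;; Hypothetical sketch: MRE for one case is |actual - predicted| / actual;
;; MdMRE is the median of those values over all test cases.
(defun mdmre (actuals predicteds)
  (let* ((mres (mapcar (lambda (a p) (/ (abs (- a p)) a))
                       actuals predicteds))
         (sorted (sort (copy-list mres) #'<))
         (n (length sorted)))
    (if (oddp n)
        (nth (floor n 2) sorted)
        (/ (+ (nth (1- (/ n 2)) sorted) (nth (/ n 2) sorted)) 2))))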
; SLIME 2010-01-11
CL-USER> (load "compass")
T
CL-USER> (run-tests (remove 'china *datasets*) :distance-func 'euclidean-distance :normalize? T)
ALBRECHT
BestK WIN: 58 TIE: 0 LOSS: 42 MDMRE: 2.1433
K=16 WIN: 0 TIE: 100 LOSS: 0 MDMRE: 4.1794
K=8 WIN: 49 TIE: 0 LOSS: 51 MDMRE: 2.5077
K=4 WIN: 20 TIE: 25 LOSS: 55 MDMRE: 3.3546
K=2 WIN: 74 TIE: 0 LOSS: 26 MDMRE: 1.5181
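
The :distance-func and :normalize? arguments suggest rows are min-max normalized before distances are computed. A minimal sketch of a euclidean distance over equal-length numeric lists, plus the normalization it assumes (hypothetical, not the original compass implementation):

;; Hypothetical sketch of a euclidean distance, as might be passed via :distance-func.
(defun euclidean-distance (xs ys)
  (sqrt (reduce #'+ (mapcar (lambda (x y) (expt (- x y) 2)) xs ys))))

;; Min-max normalization of one column of numbers to [0,1], as :normalize? T implies.
(defun normalize-column (values)
  (let ((lo (reduce #'min values))
        (hi (reduce #'max values)))
    (mapcar (lambda (v) (/ (- v lo) (max (- hi lo) 1e-32))) values)))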
//
// dllist.m
// doublylinkedlist
//
// Created by Andrew Butcher on 3/20/10.
// Copyright 2010 West Virginia University. All rights reserved.
//
#import "dllist.h"
#import <Foundation/Foundation.h>
#import "dllist.h"
int main (int argc, const char * argv[]) {
NSAutoreleasePool * pool = [[NSAutoreleasePool alloc] init];
dllist *list = [[dllist alloc] init];
[list setData: 5];
Muenster and Hot Sausage Lasagna
--------------------------------
2 Rolls of hot sausage
2-3 Blocks of muenster cheese
1 Bag of mozzarella
1 Large Container of sauce w/ the mushrooms and peppers
1 Small ricotta cheese container
2 Eggs
1 Box of those no boil lasagna noodles