arxiv-ml

I have tuples of 3 numbers from which I would like to predict a final value. The first row is the one I would like to complete (the value marked ?); the other two rows are training data. Could you find the missing value, or a way to approximate it?

(2607, 2671, 1975) = ?
(2495, 2488, 1879) = 28644
(2269, 2263, 1597) = 26513

ChatGPT-4 estimated the following number of papers for 2023
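With only two labeled rows and three inputs, any model is badly underdetermined, so the best we can do is a rough extrapolation. A minimal sketch (my own approach, not something from the gist or from ChatGPT): assume the target is roughly proportional to the sum of the three numbers and scale the query by the average ratio seen in the training rows.

```python
# Training rows: (x1, x2, x3) -> y, taken from the gist above.
rows = [
    ((2495, 2488, 1879), 28644),
    ((2269, 2263, 1597), 26513),
]

# Assume y ~ k * (x1 + x2 + x3) and estimate k as the mean observed ratio.
k = sum(y / sum(x) for x, y in rows) / len(rows)

query = (2607, 2671, 1975)
estimate = k * sum(query)  # a ballpark under the proportionality assumption
print(round(estimate))
```

The two training ratios (about 4.17 and 4.33) are close but not equal, so treat the result as a ballpark figure, not a fit.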

class StackModule:
    """A minimal LIFO stack backed by a Python list."""

    def __init__(self):
        self.items = []

    def __repr__(self):
        return repr(self.items)

    def push(self, value):
        self.items.append(value)

    def pop(self):
        return self.items.pop()
@seanjensengrey
seanjensengrey / bitter_lesson.md
Created September 11, 2021 20:15
computation advances faster than heuristics

source: http://www.incompleteideas.net/IncIdeas/BitterLesson.html

The Bitter Lesson

Rich Sutton

March 13, 2019

The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. The ultimate reason for this is Moore's law, or rather its generalization of continued exponentially falling cost per unit of computation. Most AI research has been conducted as if the computation available to the agent were constant (in which case leveraging human knowledge would be one of the only ways to improve performance) but, over a slightly longer time than a typical research project, massively more computation inevitably becomes available. Seeking an improvement that makes a difference in the shorter term, researchers seek to leverage their human knowledge of the domain, but the only thing that matters in the long run is the leveraging of computation. These two need not run counter to each other, but in practice they tend to.

apt-get remove docker docker-engine docker.io containerd runc
 
apt-get install \
  apt-transport-https \
  ca-certificates \
  curl \
  gnupg \
  lsb-release

Instance-level metadata URLs

Both AWS and GCP (and probably Azure as well) let the guest (and containers running in it) query instance-level metadata from inside the VM.

Both clouds serve these requests from the same link-local IP address (169.254.169.254).

tl;dr: issue a GET request against the metadata URL and inspect the response headers.
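A minimal probe along those lines, assuming the documented header behavior (GCP's metadata server returns a `Metadata-Flavor: Google` header; EC2's server header contains `EC2ws`). Run from anywhere other than a cloud VM, the address is unreachable and the sketch reports unknown:

```python
import urllib.request

METADATA_URL = "http://169.254.169.254/"

def detect_cloud(timeout=1.0):
    """Probe the link-local metadata address and classify by response headers."""
    try:
        with urllib.request.urlopen(METADATA_URL, timeout=timeout) as resp:
            headers = resp.headers
    except Exception:
        return "unknown"  # no metadata service reachable (or it refused us)
    if headers.get("Metadata-Flavor") == "Google":
        return "gcp"
    if "EC2ws" in (headers.get("Server") or ""):
        return "aws"
    return "unknown"

print(detect_cloud())
```

Note that newer EC2 setups may enforce IMDSv2, which requires a session token before answering; the header check above is only the simplest-case sketch.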

# https://stopa.io/post/243
from collections import Counter
import random

# Two lists of 100 draws each: one almost all 'r', one almost all 'w'.
a = ['r'] * 99 + ['w']
b = ['w'] * 99 + ['r']
t = [a, b]
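The snippet stops after building t; one plausible continuation that actually uses the Counter and random imports (my guess at the post's direction, not its actual code) picks one of the two lists at random and tallies draws from it:

```python
from collections import Counter
import random

# Same setup as the snippet above, repeated so this runs standalone.
a = ['r'] * 99 + ['w']  # almost all 'r'
b = ['w'] * 99 + ['r']  # almost all 'w'
t = [a, b]

random.seed(0)                       # deterministic for the example
urn = random.choice(t)               # pick one of the two lists at random
draws = [random.choice(urn) for _ in range(10)]
counts = Counter(draws)              # tally; heavily skewed toward the majority letter
print(counts)
```

After a handful of draws the majority letter dominates, which is what makes inferring which list was chosen easy.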