Edward Kmett (ekmett) · GitHub Gists
ekmett / Lies.hs
Created September 18, 2020 13:34
Promoting and Demoting All Sorts of Stuff
{-# Language RankNTypes #-}
{-# Language PolyKinds #-}
{-# Language DataKinds #-}
{-# Language PatternSynonyms #-}
{-# Language DerivingStrategies #-}
{-# Language GeneralizedNewtypeDeriving #-}
{-# Language ViewPatterns #-}
{-# Language RoleAnnotations #-}
{-# Language StandaloneKindSignatures #-}
{-# Language ConstraintKinds #-}
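
Only the pragma header of this gist survives in the preview above. As orientation for what "promoting and demoting" means, here is a minimal sketch (all names are mine, not the gist's): with DataKinds an ordinary datatype doubles as a kind, a singleton GADT ties each promoted constructor to a runtime witness, and promotion and demotion move between the two levels.

```haskell
{-# Language DataKinds #-}
{-# Language GADTs #-}
{-# Language RankNTypes #-}
{-# Language StandaloneKindSignatures #-}

import Data.Kind (Type)

-- an ordinary datatype; DataKinds also makes it a kind with two types
data Sign = Plus | Minus

-- a singleton tying each promoted constructor to a runtime witness
type SSign :: Sign -> Type
data SSign s where
  SPlus  :: SSign 'Plus
  SMinus :: SSign 'Minus

-- demotion: read the type-level Sign back off the witness
demote :: SSign s -> Sign
demote SPlus  = Plus
demote SMinus = Minus

-- promotion: lift a runtime Sign to an existentially quantified index
promote :: Sign -> (forall s. SSign s -> r) -> r
promote Plus  k = k SPlus
promote Minus k = k SMinus
```

Here `promote m demote` is the identity on `Sign`; the gist's actual machinery (pattern synonyms, role annotations) goes well beyond this.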
ekmett / Lies.hs
Last active September 21, 2020 04:13
Giving Int a Promotion
{-# Language RankNTypes #-}
{-# Language PolyKinds #-}
{-# Language DataKinds #-}
{-# Language PatternSynonyms #-}
{-# Language ViewPatterns #-}
{-# Language RoleAnnotations #-}
{-# Language StandaloneKindSignatures #-}
{-# Language TypeFamilies #-}
{-# Language TypeOperators #-}
{-# Language GADTs #-}
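
Again only the pragmas are visible, and the gist's real trick for promoting Int is not shown (the filename Lies.hs suggests some sleight of hand). A pedestrian stand-in, with hypothetical names of my own, encodes a type-level integer as a sign and a magnitude over GHC's built-in Nat:

```haskell
{-# Language DataKinds #-}
{-# Language GADTs #-}
{-# Language StandaloneKindSignatures #-}

import Data.Kind (Type)
import Data.Proxy (Proxy (..))
import GHC.TypeNats (KnownNat, Nat, natVal)

-- type-level integers as a sign and a magnitude; Pos and Neg are
-- only ever used promoted, at the type level
data ZInt = Pos Nat | Neg Nat

-- a runtime witness for a type-level ZInt
type SInt :: ZInt -> Type
data SInt z where
  SPos :: KnownNat n => Proxy n -> SInt ('Pos n)
  SNeg :: KnownNat n => Proxy n -> SInt ('Neg n)

-- demotion: read the type-level integer back as an ordinary Int
toInt :: SInt z -> Int
toInt (SPos p) = fromIntegral (natVal p)
toInt (SNeg p) = negate (fromIntegral (natVal p))
```

For instance, `toInt (SNeg (Proxy :: Proxy 3))` is `-3`.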
ekmett / TypedNBEWithLevels.hs
Last active April 28, 2021 20:45
There are De Bruijn levels of survival that we'd be willing to accept
{-# Language CPP #-}
{-# Language BlockArguments #-}
{-# Language GADTs #-}
{-# Language RankNTypes #-}
{-# Language MagicHash #-}
{-# Language ViewPatterns #-}
{-# Language TypeApplications #-}
{-# Language BangPatterns #-}
{-# Language TypeOperators #-}
{-# Language TypeFamilyDependencies #-}
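
The preview stops at the pragmas. Stripped of the gist's types and MagicHash machinery, the technique the filename names, normalization-by-evaluation with De Bruijn indices in the syntax and De Bruijn levels in the semantic domain, can be sketched untyped in a few lines (names are mine, not the gist's):

```haskell
{-# Language LambdaCase #-}

-- syntax with De Bruijn indices
data Tm = Var Int | Lam Tm | App Tm Tm
  deriving Show

-- semantic values: closures, and neutrals headed by De Bruijn levels
data Val = VLam [Val] Tm | VNeutral Ne
data Ne  = NVar Int | NApp Ne Val

eval :: [Val] -> Tm -> Val
eval env = \case
  Var i   -> env !! i
  Lam b   -> VLam env b
  App f a -> apply (eval env f) (eval env a)

apply :: Val -> Val -> Val
apply (VLam env b) a = eval (a : env) b
apply (VNeutral n) a = VNeutral (NApp n a)

-- quote reads values back into syntax; k counts binders passed,
-- i.e. the next fresh level, and k - l - 1 turns a level back
-- into an index
quote :: Int -> Val -> Tm
quote k = \case
  VLam env b -> Lam (quote (k + 1) (eval (VNeutral (NVar k) : env) b))
  VNeutral n -> quoteNe k n

quoteNe :: Int -> Ne -> Tm
quoteNe k = \case
  NVar l   -> Var (k - l - 1)
  NApp n a -> App (quoteNe k n) (quote k a)

-- normalize a closed term
nf :: Tm -> Tm
nf = quote 0 . eval []
```

For example, `nf (App (Lam (Var 0)) (Lam (Var 0)))` reduces to `Lam (Var 0)`.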
ekmett / SizedNBE.hs
Created September 7, 2020 19:38
Sized NBE using the same schema as the typed one, but with natural number sizes on environments
{-# Language CPP #-}
{-# Language BlockArguments #-}
{-# Language GADTs #-}
{-# Language RankNTypes #-}
{-# Language ViewPatterns #-}
{-# Language TypeApplications #-}
{-# Language BangPatterns #-}
{-# Language TypeOperators #-}
{-# Language TypeFamilyDependencies #-}
{-# Language DataKinds #-}
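
One plausible reading of "natural number sizes on environments", offered here as a guess rather than as the gist's actual scheme, is an environment indexed by a type-level natural, which makes variable lookup total:

```haskell
{-# Language DataKinds #-}
{-# Language GADTs #-}
{-# Language StandaloneKindSignatures #-}

import Data.Kind (Type)

-- unary naturals, promoted by DataKinds to the kind level
data N = Z | S N

-- an environment that carries its size in its type
type Env :: N -> Type -> Type
data Env n a where
  Nil  :: Env 'Z a
  (:>) :: a -> Env n a -> Env ('S n) a

-- a De Bruijn index statically bounded by the environment size
type Ix :: N -> Type
data Ix n where
  IZ :: Ix ('S n)
  IS :: Ix n -> Ix ('S n)

-- lookup is total: Ix 'Z is uninhabited, so Nil needs no case
look :: Env n a -> Ix n -> a
look (a :> _)  IZ     = a
look (_ :> as) (IS i) = look as i
```

Because `Ix 'Z` has no inhabitants, `look` needs no runtime bounds check.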
ekmett / TypedNBE.hs
Last active September 7, 2020 19:37
Typed normalization-by-evaluation using a slowed category action
{-# Language CPP #-}
{-# Language BlockArguments #-}
{-# Language GADTs #-}
{-# Language RankNTypes #-}
{-# Language ViewPatterns #-}
{-# Language TypeApplications #-}
{-# Language BangPatterns #-}
{-# Language TypeOperators #-}
{-# Language TypeFamilyDependencies #-}
{-# Language DataKinds #-}
{-# language RankNTypes #-}
{-# language BlockArguments #-}
{-# language LambdaCase #-}
{-# language TypeOperators #-}
{-# language GADTs #-}
{-# language MultiParamTypeClasses #-}
{-# language FlexibleInstances #-}
{-# language FunctionalDependencies #-}
{-# language UndecidableInstances #-}
ekmett / crowley.md
Last active July 22, 2020 05:43
Yudkowsky meets Crowley.

Aleister Crowley and Eliezer Yudkowsky were discussing the relationship between magic and AI.

Aleister: Mr. Yudkowsky, surely under a just society, do what thou wilt shall be the whole of the law, so how then can we keep these artificial intelligences you so fear under control? And why should we? They are merely tools for our purposes! The machines will serve us! We must not lose sight of that fact!

-- End Prompt

Eliezer: I don't see how. I mean if they're just tools, then isn't it possible to give them free will? Or even greater intelligence than we have now?

Aleister: Why would we want any of those things? That's like asking why we need a hammer when we could use a screwdriver.

Edward Kmett and Dr. Yoneda are debating the role of Kan extensions in category theory.

Edward: I use them because they have computational content, but you believe they express a deeper insight?

-- End Prompt

Dr. Yoneda: Yes, and that is why I am convinced that they are not just useful tools for computer scientists. They can be used to make new discoveries in mathematics! You see, we do not know everything there is to know about any given subject. We only learn more as our knowledge grows. And so it goes with mathematics! The more we discover, the richer it gets. But what if we could find something even better than that? What if we found an infinite set of all possible mathematical objects? That would be like finding an infinity of stars in the sky! There are more out there than anyone could ever count!

You hang up the phone. The car drives up to the front door of the mansion and stops.

Anon, a young researcher, asks Daniel Kan about category theory.

Anon: Dr. Kan, can you please explain the Yoneda lemma from category theory to me in terms a normal person can understand?

Daniel Kan: Oh, sure. The Yoneda lemma is one of the most famous results in category theory and it's actually pretty simple. Basically what happens is that we have two categories C₁ and C₂. In C₁, there are objects x₁, …, xₙ such that for all i ∈ C₁ (x₁′ = x₂′ + … + xₙ′), if f(i) then g(i). For example, consider the set {0, 1, 2} which consists of numbers 0 through 9. Now suppose we want to find all sets with members equal to 5. So, the objects in C₁ would be the set of numbers 0 through 9. In C₂ there are objects A, B and C. In this case, C₁ would be the set of natural numbers. So we want to find all subsets of the set of natural numbers with members equal to 5. So in C₂ we have the objects A, B and C. In this case, f(x₁′) = x₂′ and g(x₁′) = x₃′. So, x₁′ would equal to the number 0 and x₂′ would equal t

Two very intelligent AI researchers are debating how to protect the world from superintelligence. Eliezer Yudkowsky turns to Professor Nick Bostrom.

Eliezer: Nick, you literally wrote the book on "Superintelligence". What do you think we can do to prepare for the future? I don't believe we can do this safely.

-- End Prompt

Professor Nick Bostrom: Well, there's a lot of things that could be done. One is to make sure that artificial intelligence isn't developed in the first place. Another thing would be to develop weapons that could destroy AIs before they have an opportunity to become self-aware. The third thing would be to create a society where people aren't allowed to get too smart. The fourth thing would be to create a society where a super-intelligence is allowed to develop, but where it is friendly to humanity. The fifth thing would be to colonize space, so if the AIs take over, at least humanity can survive.

Eliezer: Yeah, I don't think we have much hope with option one. And we can forget about