- Blue noise in the game INSIDE (dithering, ray marching, reflections)
- Dithering, ray marching, shadows, etc.
- Moments In Graphics (void-and-cluster)
- 2D
- 3D and 4D
- Bart Wronski's implementation of the Solid Angle technique
- Forced Random Dithering matrices
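The dithering item above can be illustrated with a short sketch: ordered dithering quantizes an image by comparing each pixel against a tiled threshold texture. In this hypothetical snippet the `blue_noise` array is plain white noise standing in for a real blue-noise texture (e.g. one produced by void-and-cluster); the function itself is the same either way.

```python
import numpy as np

def dither(image, threshold):
    """Quantize image (values in [0, 1]) to {0, 1} by comparing each
    pixel against a threshold texture tiled across the image."""
    h, w = image.shape
    th, tw = threshold.shape
    tiled = np.tile(threshold, (h // th + 1, w // tw + 1))[:h, :w]
    return (image > tiled).astype(np.uint8)

rng = np.random.default_rng(0)
# Stand-in for a precomputed blue-noise texture (white noise here).
blue_noise = rng.random((64, 64))
# A horizontal grayscale gradient as a test image.
gradient = np.linspace(0.0, 1.0, 256).reshape(1, -1).repeat(256, axis=0)
binary = dither(gradient, blue_noise)
```

Swapping the white-noise texture for a genuine blue-noise one changes the spatial distribution of the quantization error, not the average brightness.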
universe variables u v u1 u2 v1 v2

set_option pp.universes true

open smt_tactic

meta def blast : tactic unit := using_smt $ intros >> add_lemmas_from_facts >> repeat_at_most 3 ematch

notation `♮` := by blast

structure semigroup_morphism { α β : Type u } ( s : semigroup α ) ( t : semigroup β ) :=
  (map : α → β)
meta def blast : tactic unit := using_smt $ return ()

structure Category :=
  (Obj : Type)
  (Hom : Obj → Obj → Type)

structure Functor (C : Category) (D : Category) :=
  (onObjects : C^.Obj → D^.Obj)
  (onMorphisms : Π { X Y : C^.Obj },
    C^.Hom X Y → D^.Hom (onObjects X) (onObjects Y))
-- Return the maximum unsigned number of a given width.
def max_unsigned : ℕ → ℕ
| 0            := 0
| (nat.succ x) := 2 * max_unsigned x + 1

open tactic nat

/- Function for converting a nat into a Lean expression using nat.zero and nat.succ.
   Remark: the standard library already contains a tactic for converting a nat into a binary encoding. -/
meta def nat.to_unary_expr : nat → tactic expr
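The recursion in `max_unsigned` closes to 2^n − 1 (the largest value an n-bit register can hold), which a quick Python transliteration makes easy to spot-check. This helper is hypothetical, written only to mirror the Lean equations above:

```python
def max_unsigned(n):
    """Python transliteration of the Lean definition:
    max_unsigned 0 = 0; max_unsigned (n+1) = 2 * max_unsigned n + 1."""
    return 0 if n == 0 else 2 * max_unsigned(n - 1) + 1

# Doubling and adding 1 appends a 1-bit, so n steps build n ones: 2**n - 1.
```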
This gist shows how formal conditions of Solidity smart contracts can be automatically verified even in the presence of potential re-entrant calls from other contracts.
Solidity already supports formal verification of some contracts that do not make calls to other contracts. This of course excludes any contract that transfers Ether or tokens.
The Solidity contract below models a crude crowdfunding contract that can hold Ether and lets a person withdraw Ether according to their shares. It is missing the actual access control, but the point being made
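The hazard such verification guards against can be modeled outside Solidity too. The Python sketch below (hypothetical names, not the contract from this gist) shows why updating the share balance *before* the external transfer matters: a re-entrant call made during the transfer would otherwise still see the stale balance and withdraw twice.

```python
class Crowdfund:
    """Toy model of withdraw-by-shares using checks-effects-interactions:
    the share balance is zeroed before the external (re-entrant-capable) call."""
    def __init__(self, shares):
        self.shares = dict(shares)  # address -> amount owed
        self.paid = []              # record of outgoing transfers

    def withdraw(self, addr, transfer):
        amount = self.shares.get(addr, 0)   # checks
        self.shares[addr] = 0               # effects: update state first
        if amount:
            transfer(addr, amount)          # interactions: external call last
        return amount

c = Crowdfund({"alice": 5})

def reentrant_transfer(addr, amount):
    c.paid.append((addr, amount))
    # A malicious recipient re-enters withdraw during the transfer.
    c.withdraw(addr, lambda a, n: c.paid.append((a, n)))

c.withdraw("alice", reentrant_transfer)
# The re-entrant call sees a zero balance, so only one payment is recorded.
```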
module NatInd

import Language.Reflection.Elab
import Language.Reflection.Utils

%default total

trivial : Elab ()
trivial = do compute
             g <- snd <$> getGoal
# -*- coding: utf-8 -*-
import numpy as np
import matplotlib.pyplot as plt
import scipy.integrate as integrate

# Dimension of image in pixels
N = 256
# Number of samples to use for integration
M = 32
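Since the fragment above stops right after defining `M`, here is a minimal sketch of the kind of experiment that setup supports: estimate an integral with `M` samples and compare against scipy's quadrature. The integrand `sin(pi x)` is an assumption chosen for illustration; stratified (jittered) sampling is included because it is the usual baseline improvement over plain Monte Carlo in this context.

```python
import numpy as np
import scipy.integrate as integrate

M = 32  # number of samples, as above

# Reference value of the integral of sin(pi*x) over [0, 1] via quadrature.
reference, _ = integrate.quad(lambda x: np.sin(np.pi * x), 0.0, 1.0)

rng = np.random.default_rng(0)

# Plain Monte Carlo: M uniform samples over [0, 1].
mc = np.sin(np.pi * rng.random(M)).mean()

# Stratified: one jittered sample per 1/M-wide stratum; typically lower error.
strata = (np.arange(M) + rng.random(M)) / M
stratified = np.sin(np.pi * strata).mean()
```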
-- http://www.cs.bham.ac.uk/~mhe/papers/exhaustive.pdf
-- https://vimeo.com/32811801

type N = Int
type Cantor = N -> Bool

(#) :: Bool -> Cantor -> Cantor
(x # a) 0 = x
(x # a) n = a (n - 1)

epsilon' :: (Cantor -> Bool) -> Cantor
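The `(#)` operator above prepends a bit to a point of Cantor space (an infinite binary sequence represented as a function). A Python transliteration of `cons`, plus a *bounded* stand-in for `epsilon'` that brute-forces the first `k` bits, conveys the interface; the remarkable part of Escardó's Haskell version is precisely that it needs no such bound:

```python
from itertools import product

def cons(x, a):
    """Prepend bit x to the sequence a : N -> bool (Haskell's (#))."""
    return lambda n: x if n == 0 else a(n - 1)

def epsilon_bounded(p, k):
    """Find a sequence satisfying p by brute force over the first k bits.
    Bits beyond k are fixed to False; the real epsilon' has no such limit."""
    for bits in product([False, True], repeat=k):
        a = lambda n, bits=bits: bits[n] if n < k else False
        if p(a):
            return a
    return None

# Example: find a sequence whose bits 1 and 3 are both True.
found = epsilon_bounded(lambda a: a(1) and a(3), 5)
```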
-- Matthew Brecknell @mbrcknl
-- BFPG.org, March-April 2015
--
-- With thanks to Conor McBride (@pigworker) for his lecture series:
-- https://www.youtube.com/playlist?list=PL44F162A8B8CB7C87
-- from which I learned most of what I know about Agda, and
-- from which I stole several ideas for this talk.

open import Agda.Primitive using (_⊔_)
""" Trains an agent with (stochastic) Policy Gradients on Pong. Uses OpenAI Gym. """ | |
import numpy as np | |
import cPickle as pickle | |
import gym | |
# hyperparameters | |
H = 200 # number of hidden layer neurons | |
batch_size = 10 # every how many episodes to do a param update? | |
learning_rate = 1e-4 | |
gamma = 0.99 # discount factor for reward |
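The `gamma` hyperparameter above is used to turn an episode's raw rewards into discounted returns. A minimal sketch of that step, including the Pong-specific detail that the running sum resets at every nonzero reward (each point ends a rally, so credit should not leak across rallies):

```python
import numpy as np

gamma = 0.99  # discount factor, as above

def discount_rewards(r, gamma=gamma):
    """Compute discounted returns, resetting the running sum whenever
    a nonzero reward appears (a point was scored in Pong)."""
    discounted = np.zeros_like(r, dtype=np.float64)
    running = 0.0
    for t in reversed(range(len(r))):
        if r[t] != 0:
            running = 0.0  # Pong-specific: rally boundary
        running = running * gamma + r[t]
        discounted[t] = running
    return discounted

out = discount_rewards(np.array([0.0, 0.0, 1.0]))
# Each earlier step receives the final reward scaled by a power of gamma.
```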