
“Worry about data quality: Everything rests on the data.” ― David Spiegelhalter, The Art of Statistics: How to Learn from Data

“What am I not being told? This is perhaps the most important question of all.” ― David Spiegelhalter, The Art of Statistics: How to Learn from Data

“The numbers have no way of speaking for themselves. We speak for them. We imbue them with meaning.” — Nate Silver, The Signal and the Noise (quoted in David Spiegelhalter, The Art of Statistics: Learning from Data)

"To ask the right question is harder than to answer it" Georg Cantor

"A mathematician is a person who can find analogies between theorems; a better mathematician is one who can see analogies between proofs and the best mathematician can notice analogies between theories. One can imagine that the ultimate mathematician is one who can see analogies between analogies." "Stefan Banach"

David Spivak: "Category theory is a gateway to pure maths ... If you were going to start learning pure maths I would recommend you start with category theory and then see how it differentiates into all these different places." https://youtu.be/eIjPxaFbEeg?t=471

Category theory systematises analogy, making it precise; it allows us to transfer knowledge from one domain to another, as well as to see the connections between things.

"I hope mathematicians and other scientists hurry up and realize that there’s a glittering array of applications of mathematics in which non-traditional areas of mathematics are applied to non-traditional problems. It does no one any favours to keep using the term 'applied mathematics' in its current overly narrow sense." Tom Leinster

My thoughts

A language is a set of abstractions which can be combined. What if being "articulate" is really about finding the right abstractions, linguistically, in much the same way that maths is carried out? People often say definitions are uninteresting, but perhaps they are interesting in that they are linguistic models of concepts. This may be why different linguistic models shape our thoughts differently.

There is more future time than present

https://www.codesimplicity.com/post/the-primary-law-of-software-design/

specs

Code without a spec cannot be wrong; it can only be surprising

Writing a specification serves three main purposes:

  1. It provides clear documentation of the system requirements, behavior, and properties.

  2. It clarifies your understanding of the system.

  3. It finds really subtle, dangerous bugs.

(3) is the most unique benefit and arguably the one that delivers the most obvious business value. But all of these are very valuable. The easiest ways to start applying specifications mostly give you (1) and (2). Since people don't notice those benefits as much, it makes them think they have to dive into the deep end to get any use out of specs.

Specs are not tests

There are two kinds of system errors: implementation errors and design errors. Tests are good for showing your code matches your expectations but very bad for showing your expectations match your needs. Specs are the opposite. You need both. Specs are also not documentation, code review, code static analysis, or post-release analytics. It might make it easier to do all of them, but it does not remove the need for them.

On why formal specification isn't more common

So design verification is easier and faster than code verification and has a lot of spectacular successes. Then why don’t people use it? The problem with DV is much more insidious. While code verification is a technical problem, design verification is a social problem: people just don’t see the point.

Much of this is a consequence of the fact that designs are not code. With most design languages, there is no automatic way to generate code, nor is there a way to take existing code and verify it matches a design. Programmers tend to mistrust software artifacts that aren't code or forcibly synced with code. It's the same reason documentation, comments, diagrams, wikis, and commit messages are often neglected. https://www.hillelwayne.com/post/why-dont-people-use-formal-methods/

On the importance of design skill as a programmer

Design as the focus. We believe that the central intellectual challenge of programming is design: grappling with the vagueness and complexity of engineering by articulating a clear problem, inventing the key elements of a solution, and assembling and evaluating an embodiment. Design problems, unlike algorithmic problems, do not tend to have easily comparable solutions, so recognizing different metrics and making appropriate tradeoffs between them is essential. In industrial practice, design is central; the best software systems are distinguished by their designs, and development failures are inevitably due to design-level problems (whether in formulating requirements, structuring the implementation, or specifying components). In addition to being the most important skill for students to acquire, design is also perhaps the hardest. The ability to complete the code of a specified procedure comes easily to most students, and seems to need little tutoring. But the ability to articulate the essence of a problem, to invent robust and appropriate abstractions, to recognize tradeoffs, and – perhaps most importantly – the recognition of the value of simplicity (and the extraordinary effort needed to achieve it) come far less easily. Focusing on design is attractive for another reason too: while some of the skills imparted in a programming course are rather specialized, design skills are applicable in many fields.

on testing strategies

http://blog.ezyang.com/2011/12/bugs-and-battleships/

Importance of models in programming/cs

"“Science” in its modern sense means trying to reconcile phenomena into models that are as explanatory and predictive as possible." — Alan Kay https://tekkie.wordpress.com/2018/11/09/alan-kays-advice-to-computer-science-students/

"In math we deal in abstractions, and abstractions are best understood from multiple perspectives." http://rationalexpressions.blogspot.com/2013/11/exponents-without-repeated.html

Sandy Maguire: "However, there is an interesting philosophical takeaway here—dynamically typed languages are merely strongly typed languages with a single type."
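A minimal Haskell sketch of that takeaway, with invented names (Value, addDyn): the whole "dynamic" language lives inside one big sum type, and what would be static type errors become runtime checks.

    -- Every "dynamically typed" value inhabits the single type Value.
    data Value
      = VInt Int
      | VString String
      | VBool Bool
      deriving Show

    -- Operations take and return the one type; a runtime type error is
    -- represented here with Either.
    addDyn :: Value -> Value -> Either String Value
    addDyn (VInt a) (VInt b) = Right (VInt (a + b))
    addDyn a b = Left ("type error: cannot add " ++ show a ++ " and " ++ show b)

    main :: IO ()
    main = do
      print (addDyn (VInt 1) (VInt 2))       -- Right (VInt 3)
      print (addDyn (VInt 1) (VString "x"))  -- Left "type error: ..."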

<zincy_> How is that not the fooling model?

<monochrom> Other programmers have to resort to fooling because they're too dumb for such bridging proofs. It's also why they're stuck as programmers. Those who can do these proofs enjoy much better careers than being code monkeys.

<zincy_> What is an example of a bridging proof? Difference being you kind of know how to prove it vs you can't even imagine?

<monochrom> "full abstraction", which means that every equation provable by an operational semantics is also provable by a denotational semantics, and the denotational semantics is regarded as higher level.

<monochrom> Proofs that your compiler is correct, i.e., you want "(\x -> x+1) 2 --> 2+1" and you prove that the compiler generates asm code that corroborates with that story.

<zincy_> 1. How can I learn that? 2. What would that look like for the IO magic fooling?

<monochrom> Grad school?

<zincy_> Since for 2 I can't imagine proofs that would help with it

<zincy_> monochrom: Maybe it is time, the economy has died

<zincy_> What would help, fooling? monochrom: what would you recommend to people who are inexperienced programmers who want to be experts, not just code monkeys?

<monochrom> Take all of the more theoretical courses in CS, "discrete math" to start with, so you become comfortable with proofs and a few "common sense" things that are not so common. Then it will be some programming language theory courses (those that mention "operational semantics" and "denotational semantics").

<monochrom> Harper's Practical Foundations for Programming Languages is an inexpensive book for that. But it's pretty dense. I understand it because, only because, I've already learned much stuff elsewhere. I have not read the Software Foundations series or the Wadler et al. PLFA. This is only because they came out after I had already learned the stuff. But perhaps they are really good places to start today. They both teach PL theory as well as formal verification.

<monochrom> And then do some formal methods aka formal verification courses. In the case of IO and GHC, no one has done it. But there have been little bits discussed.

<monochrom> SPJ's "awkward squad" long paper has an operational semantics of an important part of IO. One could start from there. Working upwards, you can use the operational semantics to work out what it predicts about small programs. Working downwards, you can follow through GHC or another compiler to see it generates asm code that agrees with the operational semantics, at least for a small program of your choice. Combining the two, you now know that your small program is compiled to asm code that really says "read a line, print its length".

Haskell and purity

Why is the IO monad pure? It quite simply comes down to extensional equality:

If you were to call getLine twice, then both calls would return an IO String which would look exactly the same on the outside each time. If you were to write a function that takes two IO Strings and returns a Bool to signal a detected difference between them, it would not be possible to detect any difference from any observable properties. It could not ask any other function whether they are equal, and any attempt at using >>= must also return something in IO, all of which are equal externally.

Being pure means that the result of any function call is fully determined by its arguments. Procedural entities like rand() or getchar() in C, which return different results on each call, are simply impossible to write in Haskell.
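A small sketch of the point, using nothing beyond the standard Prelude: getLine is an ordinary value of type IO String, so binding it to a new name changes nothing about the program. The effect only happens when the runtime executes the action, not when the value is mentioned.

    main :: IO ()
    main = do
      let readLine = getLine   -- 'readLine' and 'getLine' denote the same pure value
      a <- readLine
      b <- getLine             -- executing the same value twice may read different input
      putStrLn (a ++ " / " ++ b)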

The semantics of a program is a model of its behaviour. The Haskell language has its own semantics which has a particular model.

This brilliant Stack Overflow answer explains how the model of a program determines what side effects are. In Haskell, impure things are those outside the semantic model that the Haskell language uses. For example, Haskell's semantics ignores the nuances of file systems and considers memory to be infinite.

In Haskell, the things which are impure are those outside the boundaries of the Haskell language's model, e.g. interacting with the file system.

https://cstheory.stackexchange.com/questions/21257/what-exactly-does-semantically-observable-side-effect-mean/21265#21265 "A semantics of a program is a model of its behavior which, like any scientific model, ignores aspects that you don't want to study.

An extremely detailed model of the execution of a program would model the physical behavior of the computer that executes it, including the execution time, power consumption, electromagnetic radiation, etc. Such aspects are very rarely taken into account because they are very rarely relevant. Nonetheless they do matter sometimes: a useful model of an airplane autopilot needs to include runtime information, a useful model of a credit card's security needs to include electromagnetic radiation, ...

In typical semantics, side effects such as timing and power consumption are ignored. Even in a mundane setting where you type an expression at a Haskell interpreter prompt, the printing of the result is a side effect (if you try to print out an infinite object, it matters). If the Haskell interpreter runs out of memory, this is also an observable side effect in a “real-world” model, but not in an idealized model of Haskell that effectively allows unbounded computations.

An observable side effect is one which is modeled in the semantics. In typical models of programming languages, memory consumption is not modeled, so a computation that requires 1TB of storage can be pure, even though if you try to run it on your PC it would observably fail.

Another kind of non-observable side effect is one that is internal to the function. This is, I think, what most semanticists would think of when talking about non-observable side effects. Consider a computation that uses mutable data internally, but does not share this mutable data with any other part of the program. For example, a list sorting function which builds an array with the same elements as the list, sorts the array in place, and returns a list containing the elements of the array in their final order: a semantic model of subexpressions of this function exhibits side effects (modifications of the array), but the function itself has no external side effect, so it is pure.

For a more subtle example, consider a function that writes some data to a temporary file and cleans up after itself. In a semantics where there is always enough room for temporary files and programs do not share temporary files, the function has no side effect; the temporary file acts as extra memory used by the function. In a semantics which takes filesystem full conditions into account, the function has a side effect — it may fail due to external circumstances. In a semantics that allows the machine to crash, the function has a side effect: if there is a crash during the execution of the function, the temporary file may be left behind. In a semantics that allows concurrently executed programs to see and maybe modify the temporary file, the function has a side effect."

Polymorphism and "Are data constructors functions"?

<zincy__> ski: Maybe I don't understand what polymorphism is, but when I see a type variable I just think "oh look, polymorphic". Is that misguided?

<ski> (parametric types beget polymorphic operations on such types, so there is a relation) `length' is polymorphic. Its type `[a] -> Int' is not polymorphic (nor is the type variable `a', in that type, polymorphic).

<zincy__> Oh, so polymorphism refers to operations on values of different types, whereas a type variable is just about representation.

<ski> The explicit type of `length' is `forall a. [a] -> Int'. Haskell allows you to leave out the `forall' in source, and it'll be inserted implicitly by the language, but it's always, conceptually, there. A value is polymorphic if and only if it has a type of general shape `forall a. ..a..'.

<zincy__> Ah, thanks!

<ski> Just like a value is a list if and only if it has a type of general shape `[...]', or is a function if and only if it has a type of general shape `... -> ...'.

<zincy__> So just functions can be polymorphic?

<ski> No. E.g. `Nothing' is not a function, but is still polymorphic. It has type `forall a. Maybe a'.

<zincy__> Is Nothing not a data constructor, and data constructors are functions?

<ski> Not all data constructors are functions, no. `Nothing' doesn't have a type that looks like `... -> ...', hence it's not a function.

<zincy__> So only data constructors which are parameterised by at least one other value are functions?

<ski> In fact, strictly speaking, `Just' isn't a function either. It has type `forall a. a -> Maybe a'. It's a "polymorphic value (that when specialized, will become a function)". Yes, data constructors which take arguments, which "pack data fields", are functions (or, in this case, "polymorphic functions", meaning "polymorphic value, that when specialized, will become a function").

<zincy__> So `id :: forall a. a -> a' isn't a function? Until you parameterise the `a'?

<ski> Strictly speaking, it's not a function. But if we take `id :: forall a. a -> a' and specialize this, replacing `a' by `Bool' say, we get `id :: Bool -> Bool', which is a function.

<zincy__> Thanks, I have upgraded my thinking.

<ski> One perhaps confusing part here is that the specializing of a polymorphic value is written as nothing, in the syntax. We still write `id', but conceptually it's an operation. (With a language extension, you can actually write `id @Bool' for this.) You can think of it as being a kind of expression node in the abstract syntax tree in the implementation; it's just that it's (usually) written as nothing, in the source code.

<zincy__> And to think I understood data constructors and Maybe :D
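A small sketch of ski's distinction, using the TypeApplications extension mentioned above (the binding names are invented for illustration): specialization is invisible in ordinary source, but can be written explicitly.

    {-# LANGUAGE TypeApplications #-}

    nothingBool :: Maybe Bool
    nothingBool = Nothing @Bool   -- a polymorphic value, specialized; still not a function

    justInt :: Int -> Maybe Int
    justInt = Just @Int           -- once specialized, Just becomes an actual function

    idBool :: Bool -> Bool
    idBool = id @Bool             -- id specialized at Bool is a function

    main :: IO ()
    main = print (nothingBool, justInt 3, idBool True)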

Variable names

A variable name's length should depend on its scope: the shorter the scope, the shorter the name.
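A tiny illustration of the guideline (the names are hypothetical): a one-letter name suits the one-expression scope of a lambda, while a top-level binding, visible module-wide, earns a descriptive name.

    totalAbsoluteAmount :: [Double] -> Double
    totalAbsoluteAmount amounts = sum (map (\a -> abs a) amounts)

    main :: IO ()
    main = print (totalAbsoluteAmount [1.5, -2.5])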

Dependency injection

Dependency injection is what you use in object-oriented programming when you want impure functions to behave like pure functions so you can test them. You inject/stub the dependency so that it always returns the same value (e.g. mocking a query result from a DB), so you can test the otherwise impure function.
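A hedged sketch of the same idea in Haskell rather than an OO language (UserId and fetchUserName are invented names): pass the effectful lookup in as an argument, so a test can inject a pure, deterministic stub.

    type UserId = Int

    -- The DB query is a parameter, not a hard-wired call.
    greetUser :: Monad m => (UserId -> m String) -> UserId -> m String
    greetUser fetchUserName uid = do
      name <- fetchUserName uid
      pure ("Hello, " ++ name)

    main :: IO ()
    main =
      -- production would pass a real DB query; the test injects a stub
      print (greetUser (\_ -> Just "Ada") 42)   -- Just "Hello, Ada"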

What is wrong with NULL

You end up NULL chasing. If you see a NULL you have no idea where it came from, and it will be really hard to debug.

NULL subverts types: it can appear anywhere, and you don't know where it came from. Once one function returns NULL, because any value can be NULL, any function which uses the original function can also yield NULL, and then you have to litter those functions with NULL checks too. The result is a case explosion of NULL checks.

zincy: Why is NULL bad generally? I know it is, I just don't know how to articulate it.

Rembane: It never goes away. If you have one NULL it will poison the whole program, and it can show up anywhere and you're never sure.

Rembane: Compare that with Maybe, where you can see where you have to care about them; you can't do that with NULL.

zincy__: Ah ok, so you don't know which kind of value was expected because NULL can be of any type?

Rembane: zincy__: Yup

When you know a value shouldn't ever be NULL, the language doesn't know it, and you have to litter the code with checks.
On the other hand, when it genuinely can be NULL, you aren't forced to check for it.

The good thing about a Maybe/Option type is that you are forced to handle the Nothing case!

ski: when you know it can't be NULL (or shouldn't be allowed to be NULL), you'd like the language to know that as well, so you don't have to add redundant checks (which might not be redundant if there's a bug somewhere: someone violating a precondition, or not satisfying a postcondition)

ski: and the other side is that when it could be NULL, you'd like the language to remind you of that, so that you do check

ski: the basic problem is that every pointer, or every object reference, can be NULL/null
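A minimal sketch of the contrast, using Data.Map from the containers package that ships with GHC: with Maybe, absence is part of the type, so the compiler forces the check exactly (and only) where absence can actually occur.

    import qualified Data.Map as Map

    lookupAge :: String -> Map.Map String Int -> String
    lookupAge name ages =
      case Map.lookup name ages of   -- returns Maybe Int: absence is in the type
        Nothing  -> name ++ ": unknown"
        Just age -> name ++ ": " ++ show age

    main :: IO ()
    main = putStrLn (lookupAge "ada" (Map.fromList [("ada", 36)]))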

Types

"Polymorphism abstracts types, just as functions abstract values. Higher-kinded polymorphism takes things a step further, abstracting both types and type constructors, just as higher-order functions abstract both first-order values and functions." https://www.cl.cam.ac.uk/~jdy22/papers/lightweight-higher-kinded-polymorphism.pdf

computational effects

"In my opinion, the most useful definition of an “effect” is a computation that alters its environment. " https://www.quora.com/What-are-computational-effects-and-algebraic-effects

co-effects

"Comonads can be described as capturing input impurity, input effects, or context-dependent notions of computation... "coeffect systems have been introduced as the comonadic analogues of effect systems for analysing resource usage and context-dependence in programs" https://arxiv.org/pdf/1202.2922.pdf

Functors

Functorial computations look only at values https://arxiv.org/pdf/1202.2922.pdf

Monads

"Moggi called monads notions of computation, because they describe computational effects in a way that abstracts from the type of values produced by a computation." https://arxiv.org/pdf/1202.2922.pdf

free monads

"free monads ... have the ability to represent the very structure of monadic computations." https://arxiv.org/pdf/1202.2922.pdf

"A free monad satisfies all the Monad laws, but does not do any collapsing (i.e., computation). It just builds up a nested series of contexts. The user who creates such a free monadic value is responsible for doing something with those nested contexts, so that the meaning of such a composition can be deferred until after the monadic value has been created." https://stackoverflow.com/a/13388966

"The term “free” comes from the fact that for any valid functor this particular monad construction necessarily exists. In other words, you get it “for free.”

To understand why the free monad is different from other monads, one must first understand that monad instances are not surjective with regard to functors. Most of the time there are multiple monads that can be constructed from a given functor. This explains why monad instances cannot be derived and must be written explicitly." https://www.quora.com/What-is-a-Free-Monad
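A minimal sketch of the construction the quotes describe, using only standard Haskell (TeletypeF, printLine and run are invented names): the Monad instance only builds up a tree of instructions, and meaning is assigned later by a separate interpreter.

    {-# LANGUAGE DeriveFunctor #-}

    -- Free builds a nested series of contexts; nothing is collapsed here.
    data Free f a = Pure a | Free (f (Free f a))

    instance Functor f => Functor (Free f) where
      fmap g (Pure a)  = Pure (g a)
      fmap g (Free fa) = Free (fmap (fmap g) fa)

    instance Functor f => Applicative (Free f) where
      pure = Pure
      Pure g  <*> x = fmap g x
      Free fg <*> x = Free (fmap (<*> x) fg)

    instance Functor f => Monad (Free f) where
      Pure a  >>= g = g a
      Free fa >>= g = Free (fmap (>>= g) fa)

    -- An illustrative instruction set, plus the deferred interpretation in IO.
    data TeletypeF next = PrintLine String next deriving Functor

    printLine :: String -> Free TeletypeF ()
    printLine s = Free (PrintLine s (Pure ()))

    run :: Free TeletypeF a -> IO a
    run (Pure a)                  = pure a
    run (Free (PrintLine s next)) = putStrLn s >> run next

    main :: IO ()
    main = run (printLine "hello" >> printLine "free monads")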

Floating point

'Floating point arithmetic confuses a lot of people, but it is easy when you realise it is just scientific notation in base 2; we use significant figures just like back in school.' https://youtu.be/PZRI1IfStY0?t=502

Depending on the base, some numbers are impossible to represent exactly in floating point. Think about base-2 floating point: how would you express 1/10 (i.e. 10^-1)? You can't, because it has no finite binary expansion.
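A quick check of the claim in GHC: since 1/10 has no finite base-2 expansion, the stored Double is only the nearest representable value, and the rounding shows up in arithmetic.

    main :: IO ()
    main = do
      print (0.1 + 0.2 == (0.3 :: Double))   -- False
      print (0.1 + 0.2 :: Double)            -- 0.30000000000000004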

Formal Methods

Notes (not a quote, just my conjecture): formal verification allows us to bridge levels of abstraction through proofs. It allows us to connect layers in a robust way rather than by "gut feel". For example, we might say this high-level code is an abstraction of the following machine code. So formal methods allow you to prove that the operational semantics of different layers are equivalent.

Benefits of a good type system in a programming language

The types are there whether you think about them or not, and you will have to think about types whether you have a static or a dynamic type system. If you don't have a compiler which catches type errors effectively, then you will need to recreate this checking manually in your unit tests.

General

" It is worth trying to understand what “science” means in “Computer Science” and what “engineering” means in “Software Engineering.

“Science” in its modern sense means trying to reconcile phenomena into models that are as explanatory and predictive as possible. There can be “Sciences of the Artificial” (see the important book by Herb Simon). One way to think of this is that if people (especially engineers) build bridges, then these present phenomena for scientists to understand by making models. The fun of this is that the science will almost always indicate new and better ways to make bridges, so friendly collegial relationships between scientists and engineers can really make progress."

Alan Kay https://qr.ae/pNrqH1

"Science makes mathematical models of reality" Prof. Leslie Lamport

"First, I shall make a preliminary comment that simplifies matters: we are seldom interested in explaining or predicting phenomena in all their particularity; we are usually interested only in a few properties abstracted from the complex reality. Thus, a NASA-launched satellite is surely an artificial object, but we usually do not think of it as "simulating" the moon or a planet. It simply obeys the same laws of physics, which relate

only to its inertial and gravitational mass, abstracted from most of its other properties. It is a moon. Similarly electric energy that entered my house from the early atomic generating station at Shipping port did not "simulate" energy generated by means of a coal plant or a windmill. Maxwell's equations hold for both."

Simon herbert -sciences of the artificial p.15-16

What most programmers aren't taught is that in CS we use models just like any other science. What is the difference between the stack and the heap from the CPU's perspective? Well, it doesn't care. But we care, and we distinguish between them in our mental model. Our model could just as well ignore the distinction, but it is a useful distinction, in the same way that abstractions are useful.

"My policy on projects that I control is that we never add a feature unless the design can support it simply. This drives some people crazy, notably people who have no concept of the future. They start to foam at the mouth and say things like, “We can’t wait! This feature is so important!” or “Just put it in now and we’ll just clean it up later!” They don’t realize that this is their normal attitude. They’re going to say the same thing about the next feature. If you give in to them, then all of your code will be poorly designed and much too complex. It’ll be Frankenstein’s monster, jammed together out of broken parts. And just like the friendly green giant, it’ll be big, ugly, unstable, and harmful to your health." https://www.codesimplicity.com/post/design-from-the-start/

Boolean Blindness

As Conor McBride puts it, to make use of a Boolean you have to know its provenance so that you can know what it means. see - http://web.archive.org/web/20170710072419/https://existentialtype.wordpress.com/2011/03/15/boolean-blindness/

"In fact, Either is really nothing more than a plain old Haskell sum type - there is absolutely no magic, it is literally defined as something like this:

data Either a b = Left a | Right b

That's almost literally how it's written in the base library.

The only issue with plain old sum types is that they're not extensible. That is, you cannot generalize a type to "any sum type, as long as it has a constructor named Foo that takes one argument of x". If you want to model extensible sum types, you will have to resort to nested Eithers, or something isomorphic to those. Then again, if you want extensible sum types, Either is arguably a bit too crippled to get you good ergonomics there; what you really want is some kind of type-level set, rather than type-level cons lists (which is what Either is, essentially).

So why have Either in the first place? Well; arguably, we shouldn't, for the same reasons we "shouldn't" have booleans: the concept is known as "boolean blindness", and it hints at the fact that booleans have no semantic meaning attached to them, "truthiness" is rarely, if ever, a concept that is useful on its own. The argument against booleans says that whenever you use booleans, you should instead use domain-specific types isomorphic to Bool. So not: doesFileExist :: FilePath -> IO Bool, but: data FileExists = FileDoesNotExist | FileExists; doesFileExist :: FilePath -> IO FileExists. And a similar argument can be made against Either: it's too abstract, and doesn't really capture a meaningful concept, instead you should be using domain-specific types - and in fact, many libraries do exactly that, e.g. by defining a Result type like so: data Result a = ErrorResult Error | SuccessResult a (which is isomorphic with Either Error a).

But Bool is a thing, and people use it. A lot. Why is that? Because the moral correctness of those domain-specific types comes at a practical cost - you have to create those types, there is a maintenance burden, users of your API have to know about them, and when writing more complex logic that involve these types, some boilerplate is usually required to marshal from those domain-specific types to something that the logic operators understand.

And for similar reasons, Either exists. Think of it as the lingua franca of error handling and decision flow forking and merging, in much the same way as booleans are the lingua franca of predicate logic. Domain-specific types would be morally correct, but the required boilerplate / hassle / practical shenanigans associated with them isn't always worth it." tdammers https://www.reddit.com/r/haskell/comments/gdv76j/hierarchical_free_monads_the_most_developed/
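A sketch of the isomorphism the quote mentions (the Error type here is just a placeholder for illustration): the domain-specific Result and the generic Either carry exactly the same information, converted losslessly in both directions.

    newtype Error = Error String deriving Show

    data Result a = ErrorResult Error | SuccessResult a deriving Show

    toEither :: Result a -> Either Error a
    toEither (ErrorResult e)   = Left e
    toEither (SuccessResult a) = Right a

    fromEither :: Either Error a -> Result a
    fromEither (Left e)  = ErrorResult e
    fromEither (Right a) = SuccessResult a

    main :: IO ()
    main = print (toEither (SuccessResult (42 :: Int)))   -- Right 42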

Env vars vs config files

"Basically the problem with config files is they are messy: you need to "manage" them, they're kind of action-at-a-distance, they're easy to leak, easy to accidentally commit to source control, etc." -tdammers

Having only env vars gives you granular control over config without having to add another config file; this scales cleanly.

Managing the "groupings" of envs into config files gets messy

Config files as a method, however, do not scale cleanly: as more deploys of the app are created, new environment names become necessary, such as staging or qa. As the project grows further, developers may add their own special environments like joes-staging, resulting in a combinatorial explosion of config which makes managing deploys of the app very brittle.

Jimmy Koppel

The power of a design lies not in what it can do but rather in what it can't do.

The use of more precise types is a solution to many problems.

Making code more robust does not have to mean more work or being more careful. It just means being more precise.

Distributed Systems

“A distributed system is one in which the failure of a computer you didn't even know existed can render your own computer unusable.” Leslie Lamport

Learning to program with dependent types shows you how useful/pervasive abstract algebra is in programming.

"the experience of learning to program with dependent types is in large measure coming to grips with how comprehensively algebra pervades computer programming."

What are models

Models are abstractions of real-world messy phenomena, giving us a useful albeit simplified representation of something. The details we leave out depend on the purpose of the model: for example, a model of a rocket will ignore its colour if we wish to understand the inertia and forces acting on it. This is where the quote "all models are wrong, some are useful" comes in, as no model is a one-to-one reflection of every detail of the phenomenon. Perhaps the brain experiences the world through model building and simulation to make sense of raw sensory input. Analogies can be seen as cross-models: you abstract similar features from different things and explore them through a different medium, so analogy is a generalised method of connecting things through intermediate representations.

(I wrote this.)

Passing tests doesn't mean you don't have bugs or that things definitely work; it only distinguishes "obviously wrong" from "not obviously wrong".

“Program testing can be used to show the presence of bugs, but never to show their absence!” ― Edsger W. Dijkstra

Models of computation / electronics vs computation

You should understand that computation and electronic computers are not the same thing. Electronics is only the implementation of computation. We spend far too much time trying to make electronics work rather than do computation.

Understand the pure computation models - particularly of Church and Turing.

Turing machines, note, have one level of memory: one kind, one speed. No mention of registers, cache, disk, SSD, etc. It is all unified; that is what virtual memory gives you. Also, because memory is infinite, programmers do not have to think about memory management, and that is what high-level languages are about. We know that electronic memory is a limited resource and needs to be managed, but that is implementation and should not be done at high levels.

Programs as trees

Expressions are trees and programs are expressions. So programs are just trees (ASTs) https://aphyr.com/posts/301-clojure-from-the-ground-up-welcome
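A tiny sketch of the point: an expression language is literally a tree, and evaluation is just a fold over that tree.

    data Expr = Lit Int | Add Expr Expr | Mul Expr Expr

    eval :: Expr -> Int
    eval (Lit n)   = n
    eval (Add a b) = eval a + eval b
    eval (Mul a b) = eval a * eval b

    main :: IO ()
    main = print (eval (Add (Lit 1) (Mul (Lit 2) (Lit 3))))   -- 1 + (2 * 3) = 7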

on numbers

When you have mastered numbers, you will in fact no longer be reading numbers, any more than you read words when reading books. You will be reading meanings. (Harold Geneen, “Managing”)

Cobra effect

https://sketchplanations.com/the-cobra-effect The story goes something like this. Back in colonial India, the top Brit in charge decided there were too many cobras around Delhi. To reduce the population, they put in place a cash reward, or bounty, for anyone who brought in a dead cobra. The intention was clear. Legend has it that people did bring in cobras reliably, because some enterprising souls had started breeding cobras for the very purpose of collecting the bounty. When the authorities realised this they scrapped the scheme, the cobra farms closed, and the bred cobras were released into the wild, increasing the cobra population by a few orders of magnitude. Hence the cobra effect: when a well-intentioned measure has the opposite effect to that desired.

on goodhart's law

https://sketchplanations.com/goodharts-law

Goodhart’s Law: when a measure becomes a target, it ceases to be a good measure. In other words, if you pick a measure to assess people’s performance, they will find a way to game it. I like the illustration of a nail factory that sets the number of nails produced as its measure of productivity, and the workers figure out they can make tons of tiny nails. If it's switched to the weight of nails made, they make a few giant heavy nails. Or the story of measuring fitness by steps from a pedometer, only to find the pedometer gets put on the dog.

Software design

Most programmers would define a module by listing the names, parameters, and return values–the operation signatures–of its subprograms. Parnas again focuses on the programmers rather than the programs. He defines the “interface between two programs” to consist “of the set of assumptions that each programmer needs to make about the other program in order to demonstrate the correctness of his own program.” [Parnas et al. 1985]

The interface of an information-hiding module must enable programmers to replace one implementation of the module by another without affecting other modules. This is called an abstract interface because it represents the assumptions common to all implementations of the module [Britton et al. 1981; Parnas 1978]. It reveals the module’s unchanging aspects but obscures aspects that may vary among implementations.

When to panic - discussion with Simon Farnsworth

Is this the gist of when to panic? Panicking is permissible whenever the system gets into a state where recovery is not possible without rebooting.

Simon Farnsworth

Panicking means that the programmer wasn't sure what to do at this stage.

I am really glad you raised this, because never panicking feels weird: panics exist for a reason. I guess the advantage of panicking is that you get a nice stack trace and are pointed to the part of the logs where we blew up and panicked. If the system limps on, it would be much harder to debug.

Simon Farnsworth: Something has gone wrong, and there are two options. Return Result::Err(…): something is wrong, the programmer knows it's wrong, and the next layer out can try to fix it. Or panic! (or its friends expect, unwrap, todo!, unreachable! and unimplemented!), all of which mean that something is wrong and the programmer did not know what to do.

Exactly - you are almost always better off panicking (especially via one of the things that isn't spelt panic!) and returning to a known-good state than trying to soldier on in the face of a bad state. Fundamentally, though, a panic should mean "something has happened, and you need a programmer to fix it", while Result::Err means "something is wrong, and the program can either autofix it, or report back".

Simon Farnsworth: So, we should never panic in production - because we should have found all the failure cases before we get that far, and taught the program how to cope with them. E.g. in config.rs: .expect("Could not find config file"); (lunar-energy/lunar-edge)

But as guidance, any panic means "the programmers haven't seen this error condition often enough to understand what state we're in and how the program should continue". And panics are good if they all mean "the programmer needs to think about this error condition and find a real fix".

Tom Chambrier: Ah, I have seen that then - sorry, I thought it was something else. That makes sense. expect and friends are more expressive ways of failing. If you can handle the failure, you graduate to Result::Err.

expect is a way to convert a Result or an Option into a panic.

Simon Farnsworth: A panic happening (via expect, or panic! or todo!) means that the programmer didn't know the correct way to handle this error condition. A Result means the programmer knows something's gone wrong, and there are multiple ways to deal with it depending on what you're doing.

And a good strategy for reducing panics to the minimum is to iterate:

  1. Identify a place that panics.

  2. If you can handle the error intelligently at this location, handle it sensibly (e.g. missing user settings file => use defaults instead, or running without the right hardware underneath you => print error to user and exit with a failure code), and you're done

  3. If you can't handle it sensibly, return a Result::Err out one level (e.g. missing certificate file => return Err(Error::NoCertificates))

  4. At the next level out, handle the new error via expect, causing it to panic one level out. You now have more places that panic, to feed into step 1.

Modularity

Great discussion on what modularity means.

https://groups.csail.mit.edu/sdg/pubs/2020/demystifying_dependence_published.pdf

On good communication for code reviews

https://mtlynch.io/human-code-reviews-1/

baez inversion

A lot of mathematicians ask, "What is an example of this?" But category theorists instead ask, "What is this an example of?" This inversion is an important part of the category-theoretic aesthetic, where category theorists focus more on finding out why things work. It is a more conceptual focus than just tricks for pulling things off. Conceptual mathematics is more about understanding the why: why and how things work as they do, and what makes X work. https://youtu.be/6eWn9nG5d7o?t=2439

The usefulness of learning linear algebra

For programmers getting into AI, strongly recommend taking a linear algebra course. Yes, it’s two months of hard (and fun!) work. But it will easily put you in the top 1% of talent pool, which is probably the best ROI of all time in industry history.

https://twitter.com/spakhm/status/1617948174391607296

'Interconnectedness makes big programs eventually crumble under their own weight.' -- Simon Peyton Jones

Programs are just generalized polynomials. — Bob Harper, famous type theorist

The Sixth Moral: As datatype definitions get more complicated, so do the functions over them. —Matthias Felleisen and Daniel P. Friedman, "The Little MLer"

On using a library where possible over shelling out to a script

tldr - because we want to remove the extra set of assumptions around adding another executable alongside your program.

This is the scenario where you have a script and you are thinking about calling another script versus importing that script's code as library code. The latter is generally better, as you get rid of a host of assumptions that need to be made about running an additional script as part of your logic: the path of the script, file permissions, the Python version, etc.

Tom Chambrier: I appreciate this is a basic question, but it isn't something I have thought about much. When should one use a library versus shelling out to run a script? This question was triggered upon reading this comment.

Simon Farnsworth: Use a library whenever possible - shelling out has a whole pile of costs to it that you don't want to pay. Script if, and only if, the script is a customization point for a bigger system.

Notably, shelling out involves setting up a suitable environment for the thing you're shelling out to, and trusting that nothing goes wrong while setting up that environment - e.g. running out of PIDs. Facebook used to have a system that spawned a script to restart the host if things went horrifically wrong, which got changed to a library and native code, because one of the ways things go horrifically wrong at scale is "can't launch new scripts" 🙂 But just looking at that one example, a simple set of questions: Is poetry installed? Is poetry the correct way to invoke it, or should it be poetry3.10 or something else? Is IFS correctly set? Is python3 the correct interpreter name to pass to poetry (and not, say, python3.10)? What happens if the script is removed or moved? How are you checking that your CLI arguments are in line with the script's expectations?

Simon Farnsworth on alarms (AWS, cloud, etc.)

A useful rule of thumb for writing alarms: "when this triggers, what would I do if I received the alarm?" If the answer isn't "investigate and fix", the alarm is broken.

Simon Farnsworth on tests

If a test doesn't run in CI, it doesn't exist.

On flaky tests, unit tests and PBT - randomness over inputs we care about in tests is not bad!

haskell irc

zincy: Since property-based testing is non-deterministic, does that pose a problem for CI? People don't like flaky tests, so how should I respond if someone complains PBTs are flaky? I don't know where to stand on this. Like, is there good flaky and bad flaky? ;) Any failure is a failure, no?

reply: Yes :) Either you have not constrained your valid inputs enough for testing, or you have failing cases that shouldn't be failing.

zincy: Thanks. So randomness over inputs we care about is fine even in a unit test?

reply: Yes.

zincy: Maybe people get confused about determinism vs randomness. I know I do.

reply: Test results you can't reliably reproduce are always a problem, but that's why we have seeds. Decent property-based testers will tell you what inputs failed, so you should be able to reproduce immediately. QuickCheck definitely does this.

Perhaps, at the end of the day, think of it like this: if the non-determinism in your tests causes your CI to find valid bugs (i.e. the input range is correct), then surely that is a good thing?
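A minimal sketch using the QuickCheck library (assuming it is installed): the inputs are random, but a failing property is reported with its counterexample and a replayable seed, so a failure is reproducible rather than "flaky".

    import Test.QuickCheck

    -- Reversing twice should give back the original list, for any input.
    prop_reverseInvolutive :: [Int] -> Bool
    prop_reverseInvolutive xs = reverse (reverse xs) == xs

    main :: IO ()
    main = quickCheck prop_reverseInvolutive   -- prints a counterexample on failure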

PRs

As an author you should read the diff thoroughly before sending it out for review.

https://artsy.github.io/blog/2021/03/09/strategies-for-small-focused-pull-requests/ Generally, PRs with a diff of 50-200 LoC get merged twice as fast as a PR with a 600+ diff. Smaller PRs also get picked up dramatically faster, and are therefore merged far sooner. Very roughly, anything above a diff of 500 is a big PR. LoC diff is an arbitrary measure though, so don't get hung up on it.

Derek Briggs

Quantitude podcast, S3E12: 'In the future we may well look back on data mining and AI and ask, "What were we doing?" We have a bunch of people bred on the notion that data speaks for itself: data scientists who lack the theoretical background to realise that all conclusions come from subjective inquiry. Think about 100 years ago, when people were using stats for bad things; we may be repeating that mistake.'
