Also known as: “The Boolean Trap”, “Boolean parameters are wrong”
Consider a trivial software interface with one entry point, `proc`:
// Transforms a value (type T0) into another value (type T1) with effect E0
T1 proc(T0);
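A minimal sketch of the trap the aliases above refer to (all names hypothetical): a bare boolean argument is opaque at the call site, while a dedicated enum documents itself.

```c
/* Hypothetical API for illustration. A boolean version would read as
   set_widget_shown(&w, false) at the call site -- false *what*? */
typedef enum { VISIBILITY_HIDDEN, VISIBILITY_SHOWN } Visibility;

typedef struct { Visibility visibility; } Widget;

/* The enum version makes the call site self-describing:
   widget_set_visibility(&w, VISIBILITY_HIDDEN). */
void widget_set_visibility(Widget *w, Visibility v)
{
    w->visibility = v;
}
```

The cost is one extra type per flag; the payoff is that every call site reads like documentation.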
// Passing the container explicitly to all its functions, and using typed cursors (i.e. indices)
// rather than all-knowing iterators (i.e. pointers), allows using the same return type for
// iteration, look-up, etc., regardless of what's going to happen next.
//
// Downsides:
// A cursor requires an indirection when looking at the data in the debugger.
// I.e. a `char*` member shows you directly what you should be looking at,
// whereas if you only have the pos, you also need the container and debugger expressions
// to see the content.
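The idea above can be sketched as follows (container and function names are hypothetical): the container is passed explicitly, and look-up returns the same cursor type that iteration would use.

```c
#include <stddef.h>

/* A typed cursor is just an index tied to a container type,
   not a raw pointer into its storage. */
typedef struct { int *items; size_t count; } IntArray;
typedef struct { size_t pos; } IntArrayCursor;

/* Look-up returns a cursor; iteration, erasure, etc. could all
   take and return the same type. */
IntArrayCursor int_array_find(IntArray const *a, int value)
{
    for (size_t i = 0; i < a->count; i++) {
        if (a->items[i] == value) return (IntArrayCursor){ i };
    }
    return (IntArrayCursor){ a->count }; /* one-past-the-end: not found */
}

int int_array_valid(IntArray const *a, IntArrayCursor c)
{
    return c.pos < a->count;
}

int int_array_at(IntArray const *a, IntArrayCursor c)
{
    return a->items[c.pos];
}
```

Note the debugger downside in action: `IntArrayCursor` holds only `pos`, so inspecting the pointed-at element requires evaluating `a->items[c.pos]` by hand.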
Here’s an example of a command line that pushes symbols to a store using symstore [fn::1]:
symstore add /f DebuggingSeries.* /s \\camerons4\Symbols\MySymbols /t "My Version 1" /v "1.0.0.0" /c "Manually adding"
This simply takes the exe and the PDB from my output directory (the directory I ran symstore in) and copies the symbols to the specified UNC folder.
#pragma once
/* @language: c11 */
#include <stddef.h>
#include <stdint.h>
#if defined(__cplusplus)
#define ARRAY_ALIGNAS alignas(8)
#else
#define ARRAY_ALIGNAS _Alignas(8)
#endif
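One plausible use of such a macro, sketched here with both branches spelled out so the example is self-contained (C11's `_Alignas` is the C-side equivalent of C++'s `alignas`):

```c
#include <stdint.h>

/* Self-contained sketch of an alignment macro like the one above. */
#if defined(__cplusplus)
#define ARRAY_ALIGNAS alignas(8)
#else
#define ARRAY_ALIGNAS _Alignas(8)
#endif

/* Force 8-byte alignment on a raw byte buffer, e.g. so it can back
   any element type with alignment up to 8. */
typedef struct {
    ARRAY_ALIGNAS uint8_t bytes[24];
} Block;
```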
Note: a lot of programmers talk about UI without mentioning the user even once, as if it were entirely a programming problem. I wonder what we’re leaving off the table when we do that.
Asynchronous updates are somewhat useful for distributing computations.
However, this makes behavior composition hard (callbacks, promises, etc.), and call stacks start losing their effectiveness when a crash occurs, since the originating scope is unclear.
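A tiny sketch of the call-stack problem (all names hypothetical): once the work is queued, the frame that requested it is gone, so a crash inside the callback would only show the dispatch loop.

```c
#include <stddef.h>

typedef void (*Callback)(int result, void *user);

typedef struct { Callback fn; void *user; int result; } Pending;

static Pending queue[8];
static size_t queue_len;

/* The requester's stack frame no longer exists by the time
   the callback finally runs. */
void request_async(int input, Callback fn, void *user)
{
    queue[queue_len++] = (Pending){ fn, user, input * 2 };
}

/* A crash inside fn() points back only here, not to whoever
   called request_async(). */
void run_pending(void)
{
    for (size_t i = 0; i < queue_len; i++) {
        queue[i].fn(queue[i].result, queue[i].user);
    }
    queue_len = 0;
}

void store_result(int result, void *user)
{
    *(int *)user = result;
}
```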
// @url: https://www.cs.utexas.edu/~EWD/transcriptions/EWD03xx/EWD340.html
// @url: https://www.cs.utexas.edu/~EWD/transcriptions/EWD03xx/EWD361.html
// @quote{
// For besides the need of precision and explicitness, the programmer is faced with a problem of size
// that seems unique to the programmer profession. When dealing with "mastered complexity", the idea
// of a hierarchy seems to be a key concept. But the notion of a hierarchy implies that what at one
// level is regarded as an unanalyzed unit, is regarded as a composite object at the next lower level
// of greater detail, for which the appropriate grain (say, of time or space) is an order of magnitude
// smaller than the corresponding grain appropriate at the next higher level. As a result the number
// of levels that can meaningfully be distinguished in a hierarchical composition is kind of
I find that certain problems attract the creation of many solutions. We are overwhelmed with slightly similar yet incompatible and potentially incomplete solutions.
Why is that so? Which domains show this pattern?
My hypothesis is that these problems seem easy to approach from one idiosyncratic perspective while at the same time being hard to complete. Therefore no one is satisfied with or able to judge the existing solutions, and ends up creating yet another one.
In certain domains, incumbent solutions may also appear bloated, making it easy to think one can do better, because the 50% solution appears leaner. The problem is, the remaining 50% is where the necessary risk mitigation and adaptation live. An example is the appearance of lean but poorly made databases in the NoSQL era, which claimed to be leaner only because they had not yet discovered all the things their historical competitors had learned they had to do.