Dilemma solutions.
Chicken and egg dilemma solution:
---
Description of the dilemma:
Which came first, the chicken or the [chicken] egg? (Eggs in general certainly existed before chickens did.)
Statistical approach:
Almost all chickens lay eggs, but most eggs do not hatch into chickens (most are never fertilized, or it is a rooster that hatches). Thus it is far more likely for an egg to come from a chicken (so the chicken came first) than for a chicken to come from an egg.
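A minimal sketch of that comparison, with purely illustrative numbers (the fertilization and hatch rates below are assumptions, not measured data):

```python
# Compare the two directions of the chicken/egg relation.
# All rates are assumptions, chosen only to show the shape of the argument.

p_egg_laid_by_chicken = 0.99  # nearly every chicken egg was laid by a chicken
p_egg_fertilized = 0.10       # assumed: most eggs are never fertilized
p_hatch_is_hen = 0.50         # assumed: roughly half of hatched chicks are roosters

# Probability that a given egg goes on to become an egg-laying chicken.
p_egg_becomes_chicken = p_egg_fertilized * p_hatch_is_hen

print(f"P(egg comes from a chicken) ~ {p_egg_laid_by_chicken:.2f}")
print(f"P(egg becomes a chicken)    ~ {p_egg_becomes_chicken:.2f}")
# The first probability is far larger, which is the sense in which
# "egg from chicken" is the more likely direction, i.e. chicken first.
```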
Nomenclatural approach:
The egg that the first chicken came out of was laid by a proto-chicken. Since it could not be known ahead of time what would emerge from the egg, it is logical to say that a proto-chicken lays proto-chicken eggs, even if a chicken emerges from one. Here as well the chicken came first (as it emerged from a proto-chicken egg).
Meta approach:
In the question / statement of the dilemma, the word "chicken" comes before the word "egg", so the solution is the same here as with the others. :)
TLDR: The chicken.
Trolley problem solution:
---
Description of the problem:
https://en.wikipedia.org/wiki/Trolley_problem#Implications_for_autonomous_vehicles
There are three options: do nothing and let many people die (the default option); do something so that only one person dies, but your choice and action led to that death; or time the switch so that everybody dies, which is unfair to no one (the preferred solution of five-year-olds and demons).
The problem is that each of these potential solutions exists in its own (moral) dimension, one that does not intersect or overlap any of the others, so any attempt to integrate them all results in something undefined, like trying to multiply oranges by cows.
The dimensions are:
Number optimization: Try to kill as few people as possible, as death is undesirable.
Accountability: Responsibility for one's actions; however, inaction is also an action.
Fairness: How do you rank one person's value against another's? If a person's value is infinite, then the value of one is the same as the value of many. (Same cardinality; see the sketch after this list.)
And an extra dimension, feasibility: Cars that do not prioritize their passengers will not be bought (or will be modded), and thus cannot exist (at volume).
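A small illustration of the fairness point, using floating-point infinity as a convenient stand-in for "infinite value" (the choice of Python's math.inf is just for demonstration):

```python
import math

value_of_a_person = math.inf  # treat a single person's value as infinite

one_person = 1 * value_of_a_person
many_people = 5 * value_of_a_person

# One infinite value compares equal to many of them,
# so counting heads no longer distinguishes the options.
print(one_person == many_people)  # True
```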
Having the cars make complex, situation-based evaluations of the moral choice is a bad idea: other AI agents' perspectives need to be taken into consideration, and the whole situation becomes a game-theoretic problem whose calculation may take longer than the time available.
Having all agents be easily predictable may help a higher-order AI (one with more context, like the city control system) make some optimizations to the situation.
Allowing AI agents to be led down a gradient of moral choices is also a bad idea, and situations that can steer an AI agent toward a dubious choice can be exploited to nefarious ends.
When optimization of a situation is not possible, it is best to stay with the default option (on principle) and relinquish agency and responsibility to a higher system.
TLDR: Don't do anything.