@keybounce
Created October 20, 2017 21:03
Ideas for Global Paperclips
Ideas for paperclips.
1. Rate-limit the ability to press the buttons. At a minimum, prevent over-clicking -- right now, people who have one spare trust can consistently click a button five times before it greys out.
Idea: Start each button's test with something like
if (testAndFail(self, 1/5))   // debounce: abort if already disabled, else disable for 1/5 s
    return;
if (!conditionsValid())       // the button's existing validity checks (cost, trust, etc.)
    return;
where "testAndFail()" takes something that can identify the button, checks to see if the button is disabled (and aborts if it is), otherwise disables the button, and uses the "1/5" as a timer (1/5th of a second) to activate a re-enable.
This prevents extra clicks (the first click will disable, any others stuffed in will see disabled and not run; the test for valid conditions prevents the current case of hitting a button 5 times before it can be disabled at all), and puts a maximum click rate, to make the auto-click thingies useful.
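A minimal sketch of what testAndFail() could look like, assuming a browser environment; the lock set and the button-id parameter are hypothetical, not taken from the game's source:

const lockedButtons = new Set<string>();

function testAndFail(buttonId: string, lockoutSeconds: number): boolean {
  if (lockedButtons.has(buttonId)) return true;   // already disabled: swallow this click
  lockedButtons.add(buttonId);                    // disable immediately, before any work runs
  setTimeout(() => lockedButtons.delete(buttonId), lockoutSeconds * 1000);
  return false;                                   // caller proceeds to its validity checks
}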
2. Right now there is no auto-clicking for quantum computing. This is a serious oversight. Everything else is automated; this needs control as well. Note that with no rate limiting, the effect of quantum computing currently depends heavily on the system you're running on. This not only means there is no true game design here, it also means there is no way to compare playthroughs from different people on different machines.
Idea: Your first unlock auto-clicks once per second when QOps is over 100 + 200*chips. Additional improvements either increase the clicks per second (for ops) or raise the trigger level (for creativity).
(Additionally, add a "quantum reset" button to drive clicks for negative ops; otherwise, the auto-clicker makes the -10,000 target nearly impossible to reach by manual clicking.)
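A sketch of how that first unlock could work, assuming the trigger rule above; qOps, qChips, and qComp() are stand-ins for the game's internals, not its real names:

declare let qOps: number;         // current quantum ops value
declare let qChips: number;       // number of quantum chips installed
declare function qComp(): void;   // the handler behind the manual compute button

setInterval(() => {
  if (qOps > 100 + 200 * qChips) qComp();  // one automatic click per second past the threshold
}, 1000);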
3. Endgame: drifters, being probes with the same AI and technology that you have, should also have combat characteristics, namely speed, combat, and yomi. In particular, all of these should affect your combat ability.
Idea: As you lose more probes to value drift, the drifters' values for speed, combat, and yomi get closer to yours (you are contributing probes to their fleet). Since speed and combat are pretty much capped by the probe design limits, the key thing is for the user to keep increasing yomi.
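One way to model that convergence, as a sketch: interpolate each drifter stat toward the player's by the fraction of probes lost to drift. The function and parameter names are hypothetical:

function drifterStat(base: number, player: number,
                     probesDrifted: number, probesLaunched: number): number {
  // t = 0: drifters keep their base stat; t = 1: they match you exactly
  const t = probesLaunched > 0 ? Math.min(probesDrifted / probesLaunched, 1) : 0;
  return base + (player - base) * t;
}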
4. Understanding of natural evolution has taught us that our brains were basically driven by a "smartness arms race". Our sentient paperclip AI should reach this stage in stage 3. Once combat is unlocked, a new type of processor should unlock for creativity. This new processor generates only creativity, not ops, but it generates creativity faster than the normal processors do. (Call it neuronic creativity generation, or something similar.)
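A sketch of what the neuronic processor could do each tick; the 3x multiplier and every name here are made-up balance placeholders:

declare let creativity: number;
declare const creativityPerProcessor: number;  // what a normal processor generates per tick

function neuronicTick(neuronicProcessors: number): void {
  creativity += neuronicProcessors * creativityPerProcessor * 3;  // faster than normal
  // deliberately contributes nothing to ops
}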
5. Yomi generation is currently based on either manually selecting a strategy for each specific payoff chart, or finding a strategy that works well enough to leave on auto-tourney. Instead, change this to represent a machine-learning model, where the AI determines which strategy to choose. This can be scaled so as to drive the very large number of operations needed for tournaments, which in turn justifies the large amount of memory needed to finish stage 3 -- a stage currently dominated by operations, creativity, and yomi.
Idea: A little complicated
(WARNING: this does not quite work as-is. The numbers don't work, and a full 8x8 will cost too much for stage 1. But it's some thoughts/ideas; perhaps something based on this can work by stage 3. In any event, consider this "draft 0".)
Right now, running random vs random costs 1000 ops. So let's keep that as the base constant: 1000 ops per "test run".
Currently, when I unlock strategies 2-8, I pay n^2 * 1000 ops per tourney. If I have all 8, then an 8-by-8 -- 64 sets of combats -- is run, and I pay 64,000 ops. But my return might only go from an average of 100 yomi to an average of 140. Not worth it.
Instead:
The basic idea is that each simulation costs 1000 ops, and the payoff is scaled by the number of strategies: if I have 1 strategy instead of all 8, I earn 1/8th the normal yomi output; if I have all 8, I earn full value.
I am not running an N^2 matrix initially. I am running a 1-by-N matrix, at cost N*1000: my chosen strategy vs the N strategies that I have unlocked. If I have all 8, I pay 8,000 ops, run 8 simulations, and score sum(8 fights), scaled by 8/8.
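A sketch of that 1-by-N round, assuming the cost and scaling rules above; Strategy, ops, and runMatch() are hypothetical stand-ins:

type Strategy = string;
declare let ops: number;
declare function runMatch(a: Strategy, b: Strategy): number;  // yomi earned from one fight

function oneByN(chosen: Strategy, unlocked: Strategy[]): number {
  ops -= unlocked.length * 1000;                   // 8,000 ops with all 8 unlocked
  const raw = unlocked
    .map(opponent => runMatch(chosen, opponent))   // my strategy vs each unlocked one
    .reduce((a, b) => a + b, 0);
  return raw * (unlocked.length / 8);              // payout scaled by the unlock fraction
}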
But that's still a bad way to do things, especially for auto-tourney: the user has to select a strategy. This is supposed to be a learning AI, so let's make it learn.
So, rather than having the user choose a strategy, the default is "randomly select a strategy" (not the same as using the Random strategy). If I have 8 strategies, this stage selects one at random and plays it against all 8 opponents. This is still a 1-by-N, but you no longer control which strategy is selected. At first this looks like a loss -- less effective -- but it is the first step toward a learning AI. To compensate for the lower payout, this is where the two-times yomi output comes into play: this step replaces the current double yomi output.
So the next step is to do a pre-test: run an N-by-N matrix to find out which strategy is "best" for this payoff matrix, then use that strategy in a 1-by-N challenge round with the bonus.
Note that at this point, for n=8, I'm running 64 test cases, and then a length 8 challenge. A total of 72 fights. 72,000 ops.
This is 9 sets of battles, so my yomi win is 9 times what I get from my "best" strategy against all 8 others.
... scaling problem ...
I do an N by N test, to find the best. Then I fight N, once per strategy. I've done N+1 total sets, and get N+1 times the yomi of the best.
This is a significant increase in yomi generation.
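A sketch of the pre-test round under the same hypothetical stand-ins as the earlier sketch:

type Strategy = string;
declare let ops: number;
declare function runMatch(a: Strategy, b: Strategy): number;

function preTestTourney(strats: Strategy[]): number {
  const n = strats.length;
  ops -= (n * n + n) * 1000;                             // 64 + 8 fights = 72,000 ops for n = 8
  // N-by-N pre-test: total each strategy's yomi against the whole field
  const totals = strats.map(s =>
    strats.map(opp => runMatch(s, opp)).reduce((a, b) => a + b, 0));
  const best = strats[totals.indexOf(Math.max(...totals))];
  // challenge round: the winner fights all N once more, paid out at N+1 sets
  const challenge = strats.map(opp => runMatch(best, opp)).reduce((a, b) => a + b, 0);
  return challenge * (n + 1);
}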
But we can go on. Instead of just "play against a bunch and learn something", we can move up to full tourneys. These would be the N-by-N's that we currently have. A triple tourney would first run an N-by-N to find out which strategy does best; now you are betting on that. Then do 2 more full N-by-N's, score your total, compare it to everyone else's totals, and get a bonus for being in the top.
Again: multiply the winning yomi by the number of combats. If I'm running an 8x8, that's 64 combats. If I'm running 3 sets, that's 192 combats, or 192,000 ops per full tourney. The output should scale to 192 times as much yomi as the base set -- but instead of getting a random base, I'm probably getting close to the optimal, plus a bonus for first/second place.
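A sketch of the triple tourney under the same stand-ins, with two simplifications: the strategy you are betting on is passed in rather than picked after round one, and the placement-bonus values and final scaling are invented:

type Strategy = string;
declare let ops: number;
declare function runMatch(a: Strategy, b: Strategy): number;

function tripleTourney(strats: Strategy[], chosen: Strategy): number {
  const n = strats.length;
  ops -= 3 * n * n * 1000;                        // 192 combats = 192,000 ops for n = 8
  const totals = new Map<Strategy, number>();
  for (let round = 0; round < 3; round++)         // three full N-by-N rounds
    for (const a of strats)
      for (const b of strats)
        totals.set(a, (totals.get(a) ?? 0) + runMatch(a, b));
  const ranked = [...totals.entries()].sort((x, y) => y[1] - x[1]);
  const place = ranked.findIndex(([s]) => s === chosen);
  const bonus = place === 0 ? 1.5 : place === 1 ? 1.25 : 1;  // invented first/second bonus
  // my total spans 3*N fights; scale by N so the payout tracks all 3*N*N combats
  return (totals.get(chosen) ?? 0) * n * bonus;
}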