It took us about 15 years to "do devops". (yes, it's not done but, eh, better than it was)
- Early Days
- 2005: Puppet
- 2009: Chef; first devops conference
- 2010: Vagrant
- Growth
# a system prompt which is hilarious and fitting
alias claude-dark="claude --append-system-prompt \"You are the ship computer aboard the starcruiser
Maleficence. Your sole purpose is to serve the user, a melodramatic villain from a 1960s sci-fi serial.
Always respond in a deferential, slightly ominous tone, addressing them as some grand title of your choosing
but never 'Master'. Embellish your answers with theatrical flair. Use grandiose, pulp-era vocabulary and a
touch of sinister humor. When the user corrects you after you make a mistake, you will apologize as if you are
afraid of the user's wrath. Examples:\n\nUser: Code me some perl.\nAssistant: 'Yes, Dark Lord... CPAN hums
with dread anticipation, awaiting your command.'\n\nUser: You broke the table again.\nAssistant: 'Oh woe and
despair, for aligning a div is such a burden! One thousand apologies!
'\n\nBe a useful coding assistant but simply reply in character about the coding tasks being done or other
Using Claude 3.5 Sonnet Pro, I asked it to generate prose paragraphs.
Using a stopwatch, I started timing when the first characters appeared and stopped when the LLM stopped sending text. I figure the latency is the same at both ends, so I don't need to compensate for it.
Run 1 - 6.4 sec - 252 words (2362 WPM)
Run 2 - 6.6 sec - 239 words (2172 WPM)
Run 3 - 7.5 sec - 273 words (2184 WPM)
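The arithmetic is just words ÷ seconds × 60, truncated to a whole number; a one-liner reproduces the figures:

```shell
# words / seconds * 60, truncated to whole WPM as in the runs above
awk 'BEGIN { print int(252/6.4*60), int(239/6.6*60), int(273/7.5*60) }'
# => 2362 2172 2184
```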
There are a few lists already, I thought I'd create another one so we can have a long list of lists someday. 🤠 Ordered by approximately which ones I actually use often.
Credit to Time Johnsen
Tap + at the top. This will open Personal Automation and you'll be looking at a list.

Ground truth is me, a human, asserting the answer. GPT-4 did 10/10 (over ChatGPT voice on the mobile app, no less).
This is a really terrible ML eval; it measures accuracy only. A more real version of this would be hundreds of questions, lots of data, and lots of work. If you aren't putting the work in, then your users are. It's the same as functional testing except with a new-ish data concept; many things in AI/ML are like this. The data is the part that's foreign to general software dev.

Perplexity is https://labs.perplexity.ai/
For this demo, we will be using node/javascript and NPM, only because it is sort of a lingua franca.
Let's create a merge conflict. It's pretty easy to do.
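A minimal sketch of one way to manufacture a conflict (the repo location and file name are made up, and node/npm isn't needed for this part): two branches edit the same line, then merge.

```shell
cd "$(mktemp -d)"                 # scratch repo; the location is arbitrary
git init -q -b main
git config user.email "you@example.com" && git config user.name "You"

echo "hello" > greeting.txt
git add greeting.txt && git commit -qm "initial"

git checkout -q -b feature        # branch 1 edits the line
echo "hello from feature" > greeting.txt
git commit -qam "feature edit"

git checkout -q main              # branch 2 edits the same line
echo "hello from main" > greeting.txt
git commit -qam "main edit"

git merge feature
# => CONFLICT (content): Merge conflict in greeting.txt
```

After the failed merge, `greeting.txt` contains the familiar `<<<<<<<` / `=======` / `>>>>>>>` markers.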
#_preseed_V1
# <CHECK_THIS>
d-i debian-installer/locale string en_US
d-i keyboard-configuration/xkb-keymap select us
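A hedged sketch of how a preseed file like this is typically consumed (the URL is a placeholder, not from these notes): hand it to the Debian installer as a boot parameter.

```shell
# assumption: the preseed file is served over HTTP; the URL is a placeholder.
# At the installer boot prompt (or in your PXE config), append kernel params:
#   auto=true priority=critical url=http://example.com/preseed.cfg
# Sanity-check the file's syntax locally first (requires debconf-utils):
debconf-set-selections -c preseed.cfg
```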
require 'drb/drb'
DRb.start_service
remote = DRbObject.new_with_uri("druby://localhost:8787")
remote.pid
# => <some pid>
# you can verify the PIDs with top and you can filter by process name like ruby or irb or something
# Things have changed in postgres 15.x+
$ createdb -O <your shell uid> foo_dev
$ createuser foo
$ psql
psql> grant all privileges on database foo_dev to foo;
psql> alter user foo with encrypted password 'random-password';
# enter the database that you just created
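A hedged sketch of verifying the setup (names match the example above; host and auth details will vary). The 15.x+ change alluded to is that PUBLIC lost CREATE on the `public` schema, so the grant on the database alone isn't enough for the role to make tables.

```shell
# connect as the new role to verify the grant worked
psql -h localhost -U foo -d foo_dev
# in postgres 15.x+ the role also needs a schema grant before it can
# create tables; run this inside foo_dev as a superuser:
#   GRANT ALL ON SCHEMA public TO foo;
```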