In order of first appearance in The Morning Paper.
This howto describes installing Entware for the Tomato open-source router firmware. You will need:

- a USB stick, 1 GB or more in size
- a USB-capable router running TomatoUSB
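As a rough sketch of the bootstrap procedure (the installer URL and the `mipselsf-k3.4` architecture below are assumptions — pick the installer matching your router's CPU from the Entware wiki), the install typically looks like this from the router's shell:

```sh
# Assumes the USB stick is already formatted ext2/ext3 (Tomato's GUI can
# do this) and that the router is a little-endian MIPS model -- adjust
# the architecture path below to match your hardware.

# Mount the stick at /opt, where Entware expects to live.
mount /dev/sda1 /opt

# Fetch and run Entware's generic installer for this architecture.
wget -O - http://bin.entware.net/mipselsf-k3.4/installer/generic.sh | sh

# Sanity check: refresh the package index and install something small.
opkg update
opkg install htop
```

Tomato can also be configured to run the mount automatically when the USB stick is attached, so Entware packages survive a reboot.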
This NixOS code ensures that the system provides version-specific $LOCALE_ARCHIVE environment variables to mitigate the effects of NixOS/nixpkgs#38991.
To deploy it, copy the file into your /etc/nixos folder using a file name like multi-glibc-locale-paths.nix. Then edit your configuration.nix file to contain the attribute:

imports = [ ./multi-glibc-locale-paths.nix ];
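The module itself is not reproduced above. As a minimal sketch of the idea — the variable name LOCALE_ARCHIVE_2_27 follows glibc's versioned archive lookup, and using pkgs.glibcLocales as the archive source is an assumption, not necessarily the exact file — it might look like:

```nix
# multi-glibc-locale-paths.nix -- a minimal sketch, not the exact module.
# NixOS-patched glibc >= 2.27 consults LOCALE_ARCHIVE_2_27 before the
# generic LOCALE_ARCHIVE, so exporting a version-specific variable lets
# binaries linked against that glibc find a compatible locale archive.
{ config, pkgs, ... }:

{
  environment.sessionVariables = {
    LOCALE_ARCHIVE_2_27 = "${pkgs.glibcLocales}/lib/locale/locale-archive";
  };
}
```

A fuller version would additionally provide an archive for binaries linked against pre-2.27 glibc (e.g. a LOCALE_ARCHIVE_2_11 variable built from an older nixpkgs), since the two archive formats are incompatible.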
#!/bin/bash
# Copyright © 2017 Google Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
- Simplest intro to Git, by GitHub and Code School: Try Git
- Intro to GitHub
Yoav Goldberg, April 2023.
With the release of the ChatGPT model and follow-up large language models (LLMs), there was a lot of discussion of the importance of "RLHF training", that is, "reinforcement learning from human feedback". I was puzzled for a while as to why RL (Reinforcement Learning) is better than learning from demonstrations (a.k.a. supervised learning) for training language models. Shouldn't learning from demonstrations (or, in language-model terminology, "instruction fine-tuning", learning to imitate human-written answers) be sufficient? I came up with a theoretical argument that was somewhat convincing. But I came to realize there is an additional argument which not only supports the case for RL training, but also requires it, in particular for models like ChatGPT. This additional argument is spelled out in (the first half of) a talk by John Schulman from OpenAI. This post pretty much