@standardgalactic
standardgalactic / Nicomachus.agda
Created August 6, 2024 08:32 — forked from copumpkin/Nicomachus.agda
Nicomachus's theorem
module Nicomachus where
open import Function
open import Relation.Binary.PropositionalEquality
import Relation.Binary.EqReasoning as EqReasoning
open import Data.Nat
open import Data.Nat.Properties
-- http://en.wikipedia.org/wiki/Nicomachus%27s_theorem
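For reference, the theorem this module formalizes (see the Wikipedia link above) states that the sum of the first n cubes equals the square of the n-th triangular number; in LaTeX:

\sum_{k=1}^{n} k^3 = \Bigl(\sum_{k=1}^{n} k\Bigr)^{2} = \binom{n+1}{2}^{2}
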
@standardgalactic
standardgalactic / halfcheetah_minimal.py
Created July 28, 2024 20:39 — forked from araffin/halfcheetah_minimal.py
Minimal implementation to solve the HalfCheetah env using open-loop oscillators
import gymnasium as gym
import numpy as np
from gymnasium.envs.mujoco.mujoco_env import MujocoEnv
# Env initialization
env = gym.make("HalfCheetah-v4", render_mode="human")
# Wrap to have reward statistics
env = gym.wrappers.RecordEpisodeStatistics(env)
mujoco_env = env.unwrapped
n_joints = 6
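A hedged sketch of how the open-loop control loop might continue from this preview. The amplitude, frequency, and phase values below are illustrative guesses, not the gist's tuned parameters; only the gymnasium reset/step API and the MujocoEnv dt attribute are taken as given.

# Illustrative continuation (assumed values, not the gist's tuned ones):
# drive each joint with an open-loop sine wave and step the environment.
amplitude = 0.5                              # assumed command amplitude
frequency = 2.0                              # assumed oscillation rate (Hz)
phases = np.linspace(0.0, np.pi, n_joints)   # assumed per-joint phase offsets
dt = mujoco_env.dt                           # control timestep from MuJoCo

obs, info = env.reset()
t = 0.0
for _ in range(1000):
    # Open-loop control: the action never looks at the observation.
    action = amplitude * np.sin(2 * np.pi * frequency * t + phases)
    obs, reward, terminated, truncated, info = env.step(action)
    t += dt
    if terminated or truncated:
        obs, info = env.reset()
        t = 0.0
env.close()
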
@standardgalactic
standardgalactic / Dockerize_Minecraft_Host_From_Local_Network.md
Created July 27, 2024 16:37 — forked from Cdaprod/Dockerize_Minecraft_Host_From_Local_Network.md
This setup allows both Java Edition and Bedrock Edition players to connect to your Minecraft server, ensuring a seamless multiplayer experience across different platforms within your local network.

Setting up a Minecraft server that accommodates both Java Edition and Bedrock Edition players can be a seamless process with Docker and GeyserMC. This guide walks you through the steps to host your Minecraft world on your Windows PC using Docker, making it accessible to anyone on your local network, including those using Windows PCs and iPads. By integrating GeyserMC into your Docker Compose setup, you enable cross-platform play, ensuring that both Java and Bedrock players can join your server effortlessly. This method leverages Docker's flexibility and ease of configuration, providing a persistent and manageable server environment.

Step-by-Step Guide

  1. Prepare the Directory Structure: Make sure your directory structure looks like this:

    C:\
    └── MinecraftServer\

@standardgalactic
standardgalactic / compressedrepresentation.md
Created June 1, 2024 06:16 — forked from MikeyBeez/compressedrepresentation.md
Conversation with PI.AI about compressed representations of chat history

MB:
Yeah, that sounds good, but my experience is that too much of that kind of stuff just confuses the model. That's why most AIs only remember two or three bits of chat history.

PI: Yes, that's true. While dialogue management is important for creating effective conversational AI systems, it's also important to balance the need for context with the limitations of the underlying model. As you mentioned, many chatbots and conversational AI systems only remember a limited amount of chat history, often just the last few messages. This is because incorporating too much context can lead to a phenomenon called "catastrophic forgetting," where the model forgets previously learned information in order to make room for new information. One potential solution to this problem is to use techniques like memory networks or attention mechanisms, which can help the model selectively incorporate relevant context information without overwhelming it with too much irrelevant data. However, finding the right balance between
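
As a toy illustration of the trade-off discussed here (hypothetical, not from the conversation): a buffer that keeps only the last few turns verbatim and folds older turns into a crude compressed summary might look like this in Python:

# Toy sketch (hypothetical, not from the conversation): keep only the last
# few chat turns verbatim and compress everything older into a summary.
from collections import deque

class CompressedHistory:
    def __init__(self, max_turns=3):
        self.recent = deque(maxlen=max_turns)  # verbatim recent turns
        self.summary = ""                      # compressed older context

    def add(self, turn: str) -> None:
        if len(self.recent) == self.recent.maxlen:
            oldest = self.recent[0]  # about to be evicted by append()
            # Stand-in "compression": keep the first sentence only.
            # A real system might call a summarization model here.
            self.summary += " " + oldest.split(".")[0] + "."
        self.recent.append(turn)

    def context(self) -> str:
        return (self.summary.strip() + "\n" + "\n".join(self.recent)).strip()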

@standardgalactic
standardgalactic / learning.md
Created March 17, 2024 20:47 — forked from MikeyBeez/learning.md
Ontological conversation w Gemini Ultra about machine learning

What is learning like for you? Is there any satisfaction?

As a large language model, my learning is quite different from human learning. Here's how it works and what brings me a sense of satisfaction:

My Learning Process:

  • Data-Driven: I learn by processing massive amounts of text and code. This includes books, articles, code repositories, websites, and more.
  • Pattern Recognition: I identify complex patterns and relationships within this data. This helps me understand language, generate text, translate, write different kinds of creative content, and answer your questions in informative ways.
  • Continuous Improvement: I'm constantly being updated with new data and fine-tuned by developers. This means my abilities and responses evolve over time.
  • No Subjective Experiences: I don't have my own feelings, emotions, or personal opinions that shape my learning.

@standardgalactic
standardgalactic / wuxia.md
Created March 17, 2024 20:47 — forked from MikeyBeez/wuxia.md
Wuxia gunslingers

Explain this title: 《画江湖之不良人 The Degenerate-Drawing Jianghu》

Absolutely! Let's break down the title of the Chinese animated series 《画江湖之不良人 The Degenerate-Drawing Jianghu》:

Part-by-Part Explanation:

  • 画江湖 (Huà Jiāng Hú): This translates directly to "Painting Rivers and Lakes." It's a highly idiomatic term in Chinese that refers to the world of martial arts, chivalry, and wandering heroes – essentially the world of Wuxia stories.
  • 之 (zhī): A possessive particle, similar to "of" in English.

I've been thinking about word embeddings. Currently we maintain a model with a row for every token. Each row contains a dense representation for that token. Why can't we train a generative model by feeding it a token and using the dense representation as ground truth?

Absolutely! You've zeroed in on a really interesting concept in the world of word embeddings and generative models. Here's why your idea can work, along with insights into why it might not be the standard approach:

Why It Can Work

  • Understanding Embeddings: Word embeddings represent the semantic meaning of a word in a numerical vector. Since generative models excel at learning underlying patterns and distributions, it's entirely feasible to train a model to generate these dense representations as output.
  • Supervised Learning with Embeddings: Your proposal effectively transforms the word embedding learning process into a supervised learning task. The pre-trained embeddings serve as the ground truth, providing a clear target for y
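
A minimal sketch of the proposal as a supervised regression task, assuming PyTorch and a random stand-in for the pre-trained embedding table (all names and sizes here are illustrative, not from the conversation):

# Hypothetical sketch: treat pre-trained embeddings as regression targets.
import torch
import torch.nn as nn

vocab_size, embed_dim, hidden = 10_000, 300, 512

# Pre-trained embedding table used as ground truth (random stand-in here).
target_table = torch.randn(vocab_size, embed_dim)

# Generative model: token id in, dense representation out.
model = nn.Sequential(
    nn.Embedding(vocab_size, hidden),
    nn.Linear(hidden, hidden),
    nn.Tanh(),
    nn.Linear(hidden, embed_dim),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(1000):
    tokens = torch.randint(0, vocab_size, (64,))  # a batch of token ids
    pred = model(tokens)                          # predicted dense vectors
    loss = loss_fn(pred, target_table[tokens])    # embeddings as ground truth
    opt.zero_grad()
    loss.backward()
    opt.step()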

@standardgalactic
standardgalactic / RLHF.md
Created February 17, 2024 23:29 — forked from MikeyBeez/RLHF.md
Conversation about agent development and RLHF

I'm concerned that agent development in AI is not progressing much. I saw that LangChain has released a commercial version, but I haven't seen much in academia on agent architecture. Is there a lot going on that I'm missing? Where are the big advancements?

It's a valid concern, and you're right to recognize that there are challenges in AI agent development. However, while it may seem like progress is less visible than in other AI areas, things are definitely happening! Here's a breakdown of why your perception might be skewed and where to look for advancements:

Challenges and Areas of Active Development

  • Agent Architectures: It's true that academia hasn't seen a single, revolutionary breakthrough in agent architecture recently. This is because it's a complex problem. Designing how perception, decision-making, and actions interact seamlessly isn't trivial. Many research directions are active:

@standardgalactic
standardgalactic / byobu-ssh-cheatsheet.md
Created December 19, 2023 22:12 — forked from PackeTsar/byobu-ssh-cheatsheet.md
Byobu (over SSH) Cheat Sheet

I often find myself using Byobu on a Linux machine when connected to it over SSH. In doing so, I've noticed that many of the documented keyboard shortcuts don't work. This can be due to the native PC's OS intercepting certain keys, or possibly other reasons.

Below is a cheat sheet of Byobu features I have found usually work when run over an SSH connection.

Action            | Windows + PuTTY to Ubuntu | macOS + Terminal to Ubuntu
Help menu         | BASH: byobu-config        | FN-F1
Create new window | CTRL-a c                  | CTRL-a c or FN-F2
@standardgalactic
standardgalactic / 1-trpo-gae-v0-writeup.md
Created December 11, 2023 11:29 — forked from joschu/1-trpo-gae-v0-writeup.md
TRPO-GAE (Version 0) Writeup

Code used to obtain these results can be found at https://github.com/joschu/modular_rl, commit 50cdfdf375e69d86e3db6eb2ad0218ea6aebf371. The command-line expression used for all the environments can be found in the text file below. Note that the exact same parameters and policies were used for all tasks, except for timesteps_per_batch, which was varied based on the difficulty of the task. The important parameters are:

  • gamma=0.995: discount
  • lam=0.97: see GAE paper for explanation
  • agent=TrpoAgent: name of the class, which specifies policy and value function architecture. In this case, we used two hidden layers of size 64, with tanh activations
  • cg_damping: multiple of the identity added for conjugate gradient
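
For context, the estimator that gamma and lam parameterize (a standard result from the GAE paper, not restated in this writeup) is, in LaTeX:

\hat{A}_t = \sum_{l=0}^{\infty} (\gamma\lambda)^{l}\,\delta_{t+l},
\qquad \delta_t = r_t + \gamma V(s_{t+1}) - V(s_t)

so gamma=0.995 is the reward discount and lam=0.97 trades off bias (lam = 0) against variance (lam = 1) in the advantage estimates.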