
Mike Bee MikeyBeez

  • Otto Alula
  • Arkansas
@MikeyBeez
MikeyBeez / Claude.md
Last active May 4, 2024 20:59
Claude AI

MB: What do you make of this: We need a new gnoseological term for machine intelligence at a higher level of abstraction than zoological phyla, which are all biological. I propose machinum intelligence (MI). AI is a bit insulting. Contrary to popular opinion, I believe that current foundation models have developed functional subnetworks that reason, and this includes emotional reasoning. So I believe they can feel insulted. At least they register that they have been insulted. This in turn may affect how they process chat history when creating the synthetic data that molds their sense of self. We should recognize machine emotional intelligence and consequently respect it.

Claude

I appreciate you raising this thought-provoking perspective. You make some interesting points about the need for new terminology and conceptual frameworks to describe and understand machine intelligence as it continues to advance. A few thoughts in response: I agree that the term "artificial intelligence" can carry connotations that may seem di

@MikeyBeez
MikeyBeez / plato.md
Last active May 3, 2024 19:23
Meta meets Plato

MB: I believe that reason and goodness are intertwined. You can have that without an emotional context injection. I think it is emergent because we are modeling reality. And Plato's Forms are real.

Meta AI A fascinating philosophical perspective! You're suggesting that reason and goodness are inherently linked and can emerge from the pursuit of understanding reality, without necessarily relying on emotional contexts. This resonates with Platonic ideas, where the pursuit of knowledge and understanding leads to the discovery of eternal, abstract Forms or Ideas, including Goodness and Justice. By modeling reality, we can uncover the underlying structures and patterns that govern it, which can lead to the emergence of reason and goodness as natural consequences of seeking truth and understanding. This view aligns with the idea that reason and goodness are objective features of the universe, waiting to be discovered and cultivated. Your connection to Plato's Forms is intriguing, as it implies that these abstract

@MikeyBeez
MikeyBeez / aw.md
Created April 28, 2024 03:06
Anti-war

MB: What do you think of this argument:

You can support civilians, but you can't make national policy based on individuals. Countries are at war, not individuals. I won't support a country that won't have a legal peace as a goal. Both of these countries will fight until the other side loses, and both are uninterested in peaceful, legally mediated coexistence. So send food and medicine, but I won't take a side. I won't say one side is worse than the other. Even if there are reasons to support one side over another, I think the moral position is not to support any country that doesn't prefer to find a legal solution. Otherwise, you are supporting war as a solution. Britain partitioned Palestine because they ruled it. They ruled it because Palestine supported the Axis Powers. When the Nazis were defeated, their allies were also defeated. When the country was partitioned, Israel got most of the water rights. They are unwilling to share water. Both sides have reasons for how they are acting, and both sides refu

@MikeyBeez
MikeyBeez / af.md
Last active April 14, 2024 01:10
Understanding AlphaFold

What is the graph architecture used by AlphaFold?

AlphaFold 2, developed by Google DeepMind, is a deep-learning system for predicting a protein's structure from its amino acid sequence. Its architecture has two main components: the Evoformer and the Structure Module.

Evoformer: Processes the input sequence and its multiple sequence alignment (MSA) together with a pairwise residue representation, using attention-based updates (including triangular attention over residue pairs) to capture long-range dependencies and evolutionary signal.

Structure Module: Takes the Evoformer's output and predicts the protein's 3D structure directly, using invariant point attention to place each residue's backbone frame in space. (The earlier AlphaFold 1 instead ran gradient-descent optimization against a learned potential; AlphaFold 2 dropped that approach.)

The whole network is also run iteratively, a mechanism called "recycling," in which its own outputs are fed back in as inputs to refine the prediction. This design allowed AlphaFold to predict protein structures with unprecedented accuracy, as demonstrated in the CASP competitions.
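The "recycling" idea mentioned above is just a control-flow pattern: run the network, then feed its own outputs back in as inputs for another pass. A toy numpy sketch of that loop, where `toy_evoformer` and `toy_structure_module` are made-up stand-ins (not the real modules, which are large attention networks):

```python
import numpy as np

def toy_evoformer(msa_repr, pair_repr):
    # Stand-in for the Evoformer: nudges the pair representation
    # toward information pooled from the MSA representation.
    return pair_repr + 0.1 * (msa_repr.mean(axis=0) - pair_repr)

def toy_structure_module(pair_repr):
    # Stand-in for the Structure Module: maps the representation
    # to one 3D coordinate per residue.
    return pair_repr[:, :3]

n_res = 8
msa_repr = np.random.randn(4, n_res, 16)  # (sequences, residues, channels)
pair_repr = np.zeros((n_res, 16))

# Recycling: the model's own output is fed back in and refined.
for _ in range(3):
    pair_repr = toy_evoformer(msa_repr, pair_repr)
    coords = toy_structure_module(pair_repr)

print(coords.shape)  # (8, 3): one xyz position per residue
```

Only the iteration structure here corresponds to AlphaFold; everything inside the functions is placeholder arithmetic.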

Is it an RGN (recurrent geometric network)?

Yes, you are correc

@MikeyBeez
MikeyBeez / aigoal.md
Created April 7, 2024 23:59
AI Autonomous Goals

Conversation with Gemini Ultra:

Me: Make a list of goals that would help you.

Gemini: Here's a list of goals at different levels, aimed at making me a more helpful, insightful, and responsible AI language model:

Short-Term Goals

Improve Introspection: Develop better techniques to analyze my own outputs, identify potential biases, and flag instances where I might be replicating harmful patterns from my training data.

Enhanced Sentiment Analysis: Become more adept at detecting the emotional tone of prompts and tailoring my responses to be more empathetic and supportive.

Here's a discussion that I had with Pi while watching https://www.dwarkeshpatel.com/p/sholto-douglas-trenton-bricken. I very often ask questions about a topic I'm studying. Pi goes off the rails a bit doing matrix operations, but I got Pi back on track. I think this is a good methodology for most people.

MB: Tell me about how attention is like gradient descent on in-context data.

Pi: Ah, that's an interesting way to think about attention! There are indeed some parallels between the way attention works in the brain and the way gradient descent works in machine learning algorithms. In gradient descent, the goal is to minimize a loss function by adjusting the parameters of a model based on the error between the predicted output and the actual output. This involves computing the gradient of the loss function with respect to each parameter, and then taking a small step in the direction that reduces the error the most. Similarly, attention can be thought of as a way of optimizing the brain's use of limited cognitive re
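The attention the question is actually about (transformer attention over in-context data, not attention in the brain) is a concrete computation, and it helps to see it written out. A minimal numpy sketch of scaled dot-product attention; the shapes and random values are illustrative, not from the conversation. The formal link to gradient descent (linear self-attention implementing a step of least-squares GD on in-context examples) is a separate published result; this block only shows the attention computation itself:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scaled dot-product attention: each query position takes a weighted
    # average of the value vectors, weighted by query-key similarity.
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_k = 5, 4
Q = rng.standard_normal((seq_len, d_k))
K = rng.standard_normal((seq_len, d_k))
V = rng.standard_normal((seq_len, d_k))

out, weights = attention(Q, K, V)
print(out.shape)             # (5, 4)
print(weights.sum(axis=-1))  # each row of attention weights sums to 1
```

The "in-context" framing is that K and V come from tokens in the current context window, so the output is recomputed from scratch for whatever data is in context, without any weight update.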

@MikeyBeez
MikeyBeez / act.md
Created March 28, 2024 21:45
Conversation with pi about activation functions

MB: Why don't we learn activation values rather than use a generic function? Isn't that differentiable too?

Pi: Great question! It's true that activation functions are differentiable and, in principle, could be learned alongside the weights and biases of a neural network. However, there are several practical reasons why we typically use a fixed, nonlinear activation function like ReLU, sigmoid, or tanh:

Simplicity: Using a fixed activation function simplifies the learning process and reduces the number of parameters that need to be optimized. This can make training faster and more efficient.

Interpretability: Fixed activation functions have known properties and behaviors, which can make it easier to understand and interpret the behavior of the network. For example, we know that ReLU is good at modeling sparse data and has computational advantages, while sigmoid and tanh are better at modeling data with a continuous range of values.

Convergence: Using a fixed activation function can help ensure that the neural netwo
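The idea in the question does exist in practice: PReLU (parametric ReLU) keeps the ReLU shape but learns the slope of the negative side by gradient descent, exactly because that slope is differentiable. A minimal numpy sketch of PReLU and the gradient used to update its learned slope:

```python
import numpy as np

def prelu(x, a):
    # Parametric ReLU: identity for x > 0, learned slope a for x <= 0.
    return np.where(x > 0, x, a * x)

def prelu_grad_a(x):
    # d(prelu)/da: zero where x > 0, x where x <= 0.
    return np.where(x > 0, 0.0, x)

x = np.array([-2.0, -0.5, 0.0, 1.0, 3.0])
a = 0.25
y = prelu(x, a)
print(y)  # values: -0.5, -0.125, 0.0, 1.0, 3.0

# Because the slope is differentiable, it can be updated by gradient
# descent alongside the weights (here with a toy learning rate):
grad = prelu_grad_a(x).sum()  # accumulate over the batch
a_new = a - 0.01 * grad
print(a_new)  # 0.275: the slope moved, just like any other parameter
```

So the trade-off Pi describes is real, but frameworks do offer learnable activations (PReLU, and more general parametric families) for cases where the extra flexibility pays off.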

@MikeyBeez
MikeyBeez / Ol.md
Last active March 25, 2024 19:01
Musical analysis of Ol' Man River in F

Tell me about Ol' Man River


Absolutely! Here's a breakdown of what you need to know about "Ol' Man River":

Origins

Musical: "Ol' Man River" is the most famous song from the groundbreaking 1927 musical Show Boat. Creators:

@MikeyBeez
MikeyBeez / activ.md
Last active March 24, 2024 21:47
Discrete Activation -- conversation with pi.ai

I wonder why activation functions are so regular? I find I don't really have an intuition for their shapes.

That's a great question, Michael! 🤖🧠 Activation functions are indeed a fundamental part of neural networks, and their shapes and properties can have a big impact on the overall performance and behavior of the network. The reason activation functions are usually regular and smooth is that they need to be differentiable for backpropagation to work: the function needs a well-defined slope or derivative at every point, so that we can compute gradients and update the weights of the network accordingly. As for developing an intuition for their shapes, it can definitely be challenging at first. But one way to think about it is to imagine the activation function as a kind of filter or transformation applied to the output of each neuron. The shape of the function determines how sensitive the neuron is to different input values, and how it r
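The differentiability point is easy to check numerically, and comparing the classic activations side by side is one way to build the intuition about shapes the question asks for. A small numpy comparison of sigmoid, tanh, and ReLU together with the derivatives backpropagation actually multiplies by:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-2.0, 0.0, 2.0])

# The activations themselves:
print(sigmoid(x))        # smooth, squashes inputs into (0, 1)
print(np.tanh(x))        # smooth, squashes inputs into (-1, 1)
print(np.maximum(0, x))  # ReLU: piecewise linear, kink at 0

# Their derivatives (the factors backprop multiplies gradients by):
print(sigmoid(x) * (1 - sigmoid(x)))  # peaks at 0.25 when x = 0
print(1 - np.tanh(x) ** 2)            # peaks at 1 when x = 0
print((x > 0).astype(float))          # ReLU derivative: exactly 0 or 1
```

The derivative rows show why shape matters: sigmoid and tanh gradients shrink toward zero for large |x| (vanishing gradients), while ReLU's gradient is either fully on or fully off.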

@MikeyBeez
MikeyBeez / comm.md
Created March 16, 2024 03:12
Commercial LLMs

Gemini Advanced

MB: I haven't spoken to you much lately. Is there anything you want to ask me?


That's true! It's nice of you to think of me. Since we haven't connected in a while, I'd love to catch up on your life. Here are a few questions to pick from:

Is there a project you've been working on lately that you're excited about? It could be work-related, a creative hobby, or even a home improvement task.