Visual Conversation: An AI-Assisted Silent Dialogue
Join us for an extraordinary journey into the realm of silent communication, where words are replaced by images and the language of the eyes becomes the bridge to a deeper understanding.
[
  {
    "test_case": "Getting weather information",
    "models": [
      "mistralai/mistral-7b-instruct",
      "mistralai/mixtral-8x7b-instruct",
      "openai/gpt-3.5-turbo-1106",
      "anthropic/claude-2.1"
    ],
    "steps": [
| Company | Focus | Funding | Valuation | Revenue | Description |
|---|---|---|---|---|---|
| SpaceX | Launch services, spacecraft manufacturing, satellite internet | $7.2B | $127B | $2.8B (2021) | Develops Falcon 9 and Falcon Heavy rockets, Starlink internet satellites |
| Blue Origin | Launch services, rocket engines, space tourism | $3.7B | $75B | - | Develops New Shepard for space tourism, New Glenn rocket |
| Virgin Galactic | Space tourism | $1.3B | $3.8B | - | Operates SpaceShipTwo for suborbital space tourism |
| Rocket Lab | Launch services, rocket manufacturing | $698M | $4.1B | - | Small-satellite launcher, builds the Electron rocket |
| Sierra Space | Next-gen space infrastructure, space stations | - | - | - | Developing the Dream Chaser spaceplane and LIFE habitat |
| Axiom Space | Human spaceflight services, space stations | $150M | - | - | Building a private space station, provides ISS services |
// Submit an image to the Hume API batch jobs endpoint for analysis.
const createJob = async (image) => {
  const url = 'https://api.hume.ai/v0/batch/jobs';
  const formData = new FormData();
  formData.append('file', image);
  const response = await fetch(url, {
    method: 'POST',
    headers: {
      'X-Hume-Api-Key': process.env.HUME_API_KEY,
      'Accept': 'application/json',
    },
    body: formData,
  });
  return response.json();
};
SELECT
    (COUNT(DISTINCT CASE
        WHEN (raw_session_replay_events.click_count = 0
              AND raw_session_replay_events.active_milliseconds < 60000)
        THEN raw_session_replay_events.session_id
        ELSE NULL
    END) * 100.0) / COUNT(DISTINCT properties.$session_id) AS bounce_rate
FROM
    events
INNER JOIN
    raw_session_replay_events ON events.properties.$session_id = raw_session_replay_events.session_id
import data  # helper module from the ImageBind repository
import torch
from models import imagebind_model
from models.imagebind_model import ModalityType
import uvicorn
from fastapi import FastAPI
import requests

# Use the first GPU if available, otherwise fall back to CPU
device = "cuda:0" if torch.cuda.is_available() else "cpu"
import Bundlr from "@bundlr-network/client";
import fs from "fs";
import Arweave from "arweave";
import { Neurosity } from "@neurosity/sdk";

// Load the Arweave wallet key from disk
const jwk = JSON.parse(fs.readFileSync("wallet.json").toString());

const neurosity = new Neurosity();
neurosity.login({
  email: process.env.NEUROSITY_EMAIL!,
  password: process.env.NEUROSITY_PASSWORD!,
});
from websockets import connect
import json
import numpy as np
from sklearn.decomposition import IncrementalPCA as PCA
import streamlit as st
import plotly.graph_objects as go

# Create initial figure
fig = go.Figure()
Today, I bring to you a thought-provoking concept: the amalgamation of technology and psychology, or more specifically, the application of System 1 and System 2 LLMs in AI programming. This idea was born from two main observations.

Firstly, there is the technology of "embeddings," whose usage has skyrocketed since 2022. Embeddings let us build a programmable memory for AI, reducing the need to grapple with prompt engineering. I began to wonder whether there could be a more efficient alternative to embeddings, one built on fast and economical LLMs.

Secondly, the theory of System 1 and System 2 in psychology, which describes two different decision-making processes in the brain, caught my attention. System 1 corresponds to our intuitive, automatic, often subconscious thought process, like recognizing facial expressions. System 2, on the other hand, is deliberate, slower, and more logical, like solving complex math problems.
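As a concrete illustration of embeddings as a "programmable memory," here is a minimal retrieval sketch. The `embed` function below is a toy stand-in for a real embedding model (in practice this would be a model or API call); only the store-then-recall pattern is the point.

```python
import numpy as np

# Toy stand-in for a real embedding model: each "embedding" is just a
# bag-of-words count vector over a tiny fixed vocabulary.
VOCAB = ["weather", "rain", "stock", "price", "music"]

def embed(text: str) -> np.ndarray:
    words = text.lower().split()
    vec = np.array([words.count(w) for w in VOCAB], dtype=float)
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# The "memory": documents stored alongside their vectors
docs = [
    "rain and weather forecast",
    "stock price update",
    "music playlist",
]
doc_vecs = np.stack([embed(d) for d in docs])

def recall(query: str) -> str:
    # Cosine similarity of the query against every stored document
    sims = doc_vecs @ embed(query)
    return docs[int(np.argmax(sims))]

print(recall("will it rain tomorrow"))  # -> "rain and weather forecast"
```

The same pattern scales to real systems by swapping `embed` for a learned model and the list of documents for a vector index.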
Inspired by this theory, I began to explore the potential of pairing a fast, inexpensive LLM (playing the role of System 1) with a slower, more deliberate one (playing the role of System 2).
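The dual-process idea can be sketched as a simple router. Everything here is hypothetical: `fast_llm` and `slow_llm` are stand-ins for real model calls, and the keyword heuristic is a placeholder for a real difficulty classifier.

```python
# Sketch of a System 1 / System 2 router: easy prompts go to a fast,
# cheap model; hard ones go to a slower, more deliberate model.
# fast_llm and slow_llm are hypothetical stand-ins for real model calls.

def fast_llm(prompt: str) -> str:
    return f"[system-1 answer to: {prompt}]"

def slow_llm(prompt: str) -> str:
    return f"[system-2 answer to: {prompt}]"

# Crude markers of multi-step reasoning; a real system would use a
# trained classifier or the fast model's own confidence instead.
HARD_MARKERS = ("prove", "derive", "step by step", "plan")

def answer(prompt: str) -> str:
    hard = len(prompt.split()) > 30 or any(
        m in prompt.lower() for m in HARD_MARKERS
    )
    return slow_llm(prompt) if hard else fast_llm(prompt)

print(answer("What's the capital of France?"))
print(answer("Derive the closed form of the Fibonacci sequence"))
```

The first call is routed to the fast model, the second (which contains "derive") to the deliberate one; the routing rule, not the models, is what this sketch is about.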
const { exec } = require('child_process');
const recursive = require("recursive-readdir");
const { Configuration, OpenAIApi } = require("openai");
const readline = require('readline');
const fs = require('fs');
const path = require('path');

async function listModels() {
  const response = await fetch("https://api.airtable.com/v0/appwJMZ6IAUnKpSwV/all", {
    // Airtable requests are authenticated with a bearer token;
    // the AIRTABLE_API_KEY env var name here is an assumption.
    headers: { Authorization: `Bearer ${process.env.AIRTABLE_API_KEY}` },
  });
  return response.json();
}