@gladiopeace
gladiopeace / vm2_3.9.17_sandbox_escape.md
Created January 16, 2024 20:48 — forked from arkark/vm2_3.9.17_sandbox_escape.md
Sandbox Escape in vm2@3.9.17 - CVE-2023-32314

Sandbox Escape in vm2@3.9.17

A sandbox escape vulnerability exists in vm2 for versions up to and including 3.9.17. The exploit abuses an unexpected creation of a host object, arising from the Proxy specification, and achieves RCE via the `Function` constructor in the host context.

Impact

A threat actor can bypass the sandbox protections and gain remote code execution on the host running the sandbox.

PoC

0.414       |      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
0.414 /home/arch/docker-eyeos/darwin-xnu/bsd/sys/kpi_socket.h:46:30: note: in expansion of macro '__API_DEPRECATED'
0.414    46 | #define __NKE_API_DEPRECATED __API_DEPRECATED("Network Kernel Extension KPI is deprecated", macos(10.4, 10.15))
0.414       |                              ^~~~~~~~~~~~~~~~
0.414 /home/arch/docker-eyeos/darwin-xnu/bsd/sys/kpi_socket.h:463:1: note: in expansion of macro '__NKE_API_DEPRECATED'
0.414   463 | __NKE_API_DEPRECATED;
0.414       | ^~~~~~~~~~~~~~~~~~~~
0.415 /home/arch/docker-eyeos/darwin-xnu/EXTERNAL_HEADERS/Availability.h:407:31: error: expected '=', ',', ';', 'asm' or '__attribute__' before '__
@gladiopeace
gladiopeace / openchat_3_5.preset.json
Created December 26, 2023 16:28 — forked from beowolx/openchat_3_5.preset.json
This is the prompt preset for OpenChat 3.5 models in LM Studio
{
  "name": "OpenChat 3.5",
  "load_params": {
    "n_ctx": 8192,
    "n_batch": 512,
    "rope_freq_base": 10000,
    "rope_freq_scale": 1,
    "n_gpu_layers": 80,
    "use_mlock": true,
    "main_gpu": 0,
@gladiopeace
gladiopeace / main.py
Created October 13, 2023 10:17 — forked from disler/main.py
vueGPT: Automatically Generate Vue 3 <script setup lang='ts'> components like a boss.
from vueGPT import prompt, make_client
from os import environ
from dotenv import load_dotenv

# load .env file
load_dotenv()

# get openai api key from the environment
OPENAI_API_KEY = environ.get('OPENAI_API_KEY')
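`environ.get` silently returns `None` when the variable is absent, which tends to surface later as a confusing API error. A minimal sketch of a fail-fast helper (`require_env` is a hypothetical name, not part of the vueGPT gist):

```python
import os

def require_env(name: str) -> str:
    """Return the value of an environment variable, or raise a clear error."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"{name} is not set; add it to your .env file")
    return value
```

Usage would be `OPENAI_API_KEY = require_env('OPENAI_API_KEY')`, so a missing key fails at startup instead of at the first API call.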
# once you have an app created and downloaded:
curl -sSL https://get.wasp-lang.dev/installer.sh | sh
cd {app_folder}
# to install NVM:
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.5/install.sh | bash
nvm install 18
nvm use 18
wasp db migrate-dev
wasp start
@gladiopeace
gladiopeace / pmxcfs.py
Created July 24, 2023 20:42 — forked from samicrusader/pmxcfs.py
Proxmox Virtual Environment config.db dump utility
#!/usr/bin/env python3
import os
import sqlite3
import sys
try:
    os.mkdir('config_restore')
except FileExistsError:
    pass
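The preview cuts off after creating the output directory. As a hedged sketch of what dumping such a database might look like, assuming pmxcfs stores entries in a `tree` table with `name` and `data` columns (an assumed schema for illustration, not confirmed by the snippet; a tiny in-memory database stands in for a real config.db):

```python
import os
import sqlite3

# Build a small in-memory stand-in for config.db using an ASSUMED schema:
# a `tree` table whose rows carry a file name and its raw contents.
db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE tree (inode INTEGER, parent INTEGER, type INTEGER, "
    "name TEXT, data BLOB)"
)
db.execute(
    "INSERT INTO tree VALUES (1, 0, 8, 'datacenter.cfg', ?)",
    (b"keyboard: en-us\n",),
)

# Dump every row that has file contents into config_restore/.
os.makedirs("config_restore", exist_ok=True)
for name, data in db.execute("SELECT name, data FROM tree WHERE data IS NOT NULL"):
    with open(os.path.join("config_restore", name), "wb") as f:
        f.write(data)
```

A real dump would open `config.db` directly and would also have to rebuild the directory hierarchy from the parent/inode relationships.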
@gladiopeace
gladiopeace / llama2-mac-gpu.sh
Created July 23, 2023 14:39 — forked from adrienbrault/llama2-mac-gpu.sh
Run Llama-2-13B-chat locally on your M1/M2 Mac with GPU inference. Uses 10GB RAM
# Clone llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
# Build it
LLAMA_METAL=1 make
# Download model
export MODEL=llama-2-13b-chat.ggmlv3.q4_0.bin
wget "https://huggingface.co/TheBloke/Llama-2-13B-chat-GGML/resolve/main/${MODEL}"
// Name: Midjourney Prompt
// Description: Generate a Random Midjourney Prompt
// Author: John Lindquist
// Twitter: @johnlindquist
import "@johnlindquist/kit"
let count = parseInt(
await arg({
placeholder: "How many prompts to generate and paste?",