andy108369 / api-error.md
Created July 28, 2022 10:17 — forked from aursu/api-error.md
ACPI Error: AE_NOT_EXIST, Evaluating _PMM (20190816/power_meter-325)

The same problem occurs on HP servers, and there is a smarter solution, found here: https://www.novell.com/support/kb/doc.php?id=7010449

Instead of disabling sensors in netdata altogether, you can just disable the acpi_power_meter kernel module, which doesn't work anyway on affected HP servers due to a BIOS bug. This way netdata can still get the temperature readings.

This is how you can disable the module immediately:

sudo modprobe -r acpi_power_meter

And this is how you make it permanent:
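The snippet is truncated here in this listing; a common way to make it permanent (a sketch using the usual modprobe.d blacklist mechanism, as described in the Novell KB article above) is:

echo "blacklist acpi_power_meter" | sudo tee /etc/modprobe.d/blacklist-acpi_power_meter.conf

After a reboot the module will no longer be loaded automatically.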

andy108369 / sleep-beta3.md
Created July 29, 2022 12:06
beta3 NVME persistent storage deployment manifest example

Persistent storage deployment manifest (SDL) example.

It deploys an app with 10GiB of persistent storage mounted at the /opt/data path.

The image used is ubuntu:22.04 running sleep infinity, so it does nothing but sleep. You can then akash provider lease-shell into your deployment and inspect the storage.
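The manifest itself is truncated in this listing; a minimal sketch of what it could look like, assuming the standard SDL v2 persistent-storage schema (the service name, CPU/memory sizes, and pricing below are assumptions; only the 10GiB volume, the /opt/data mount, the image, and the beta3 class come from the description):

version: "2.0"

services:
  app:
    image: ubuntu:22.04
    command:
      - "sleep"
      - "infinity"
    params:
      storage:
        data:
          mount: /opt/data

profiles:
  compute:
    app:
      resources:
        cpu:
          units: 1
        memory:
          size: 512Mi
        storage:
          - size: 512Mi
          - name: data
            size: 10Gi
            attributes:
              persistent: true
              class: beta3
  placement:
    akash:
      pricing:
        app:
          denom: uakt
          amount: 10000

deployment:
  app:
    akash:
      profile: app
      count: 1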

---
andy108369 / akash-recover-from-apphash-error.md
Last active November 12, 2023 01:11
Recovering from the AppHash error without having to use snapshot / state-sync


akash 0.26.2 (cosmos-sdk v0.45.16, tendermint v0.34.27)

Follow this doc https://gist.github.com/andy108369/da1279257c018be6370310c990c25738

akash 0.16.4 (cosmos-sdk v0.45.4, tendermint v0.34.19)

A create certificate TX from a 0.14.x client, sent over a 0.16.4 RPC node, was rejected by the majority of validators running 0.16.3; this in turn caused the minority of 0.16.4 validators to halt at block 6955214. The cause is a commit in 0.16.4 that allowed the old type of TX to be present in a block, while 0.16.3 did not.

Signed these providers' new attributes on July 31st 21:58 UTC:

"https://akash-01.crono.co:8443"
"https://d3akash.cloud:8443"
"https://provider.akash.rocks:8443"
"https://provider.akash.world:8443"
"https://provider.akt.computer:8443"
andy108369 / ubuntu-2204-remove-snap.md
Created August 17, 2022 07:30 — forked from allisson/ubuntu-2204-remove-snap.md
Ubuntu 22.04 remove snap

Remove snaps

sudo snap remove --purge firefox
sudo snap remove --purge snap-store
sudo snap remove --purge snapd-desktop-integration
sudo snap remove --purge gtk-common-themes
sudo snap remove --purge gnome-3-38-2004
sudo snap remove --purge core20
sudo snap remove --purge bare
sudo snap remove --purge snapd
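To keep snapd from being pulled back in, a common follow-up (an assumption here, not part of the snippet above) is to purge the package and pin it in apt:

sudo apt autoremove --purge -y snapd

cat <<EOF | sudo tee /etc/apt/preferences.d/nosnap.pref
Package: snapd
Pin: release a=*
Pin-Priority: -10
EOF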
andy108369 / sync-akashnet-2-from-height-0.md
Last active January 23, 2024 01:37
How to sync Akash Node from height=0 in akashnet-2 network


Important notes before you start

Make sure you have been running your node with pruning = nothing since height=0, so that it keeps all historic states (i.e. an archival node).

With akash 0.18.0 (aka mainnet4) you HAVE TO start the chain with AKASH_PRUNING=nothing set. (This is fixed in akash 0.20.0)

Do NOT change pruning between restarts, since this can corrupt the chain data (IAVL): cosmos/cosmos-sdk#6370 (comment)
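For reference, pruning can be set either in app.toml or via the environment; a sketch assuming the default node home at ~/.akash:

# ~/.akash/config/app.toml
pruning = "nothing"

# or via the environment, as required on akash 0.18.0:
export AKASH_PRUNING=nothing
akash start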

@andy108369
andy108369 / app.js
Created August 20, 2022 14:48 — forked from atmoner/app.js
Akash WebSocket
import WebSocket from 'ws';

const ws = new WebSocket('wss://rpc-akash-ia.notional.ventures/websocket');

ws.on('open', function open() {
  console.log('Connected on Akash blockchain from WebSocket');
  // Subscribe to Tendermint NewBlock events over the JSON-RPC websocket
  ws.send(JSON.stringify({
    "method": "subscribe",
    "params": ["tm.event='NewBlock'"],
    "id": "1",
    "jsonrpc": "2.0"
  }));
});

// Print every event the node pushes back
ws.on('message', function message(data) {
  console.log(data.toString());
});
Migration from akash-rook to the upstream rook-ceph Helm Charts

Impact: Akash deployments using persistent storage will temporarily stall, because their I/O to the RBD-mounted devices gets stuck.

1. Take a snapshot of Ceph Mon locations

This will be needed in later steps.

kubectl -n rook-ceph get pods -l "app=rook-ceph-mon" -o wide
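It can help to save both the pod-to-node mapping and the mon endpoints to files (the filenames here are arbitrary; the rook-ceph-mon-endpoints configmap is where Rook records the endpoints):

kubectl -n rook-ceph get pods -l "app=rook-ceph-mon" -o wide > ceph-mon-pods.txt
kubectl -n rook-ceph get configmap rook-ceph-mon-endpoints -o jsonpath='{.data.data}' > ceph-mon-endpoints.txt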

Ceph Service Recovery Procedure

I've been playing with Rook Ceph and have been able to helm uninstall it (all the K8s bits, including the Ceph CRDs) and install it back again without data loss, while Pods were still using the persistent storage (the RBD).
The impact: Akash deployments using persistent storage will hang until the Ceph services are restored.

The key locations that need to be preserved are listed below (/var/lib/rook/* isn't removed when you uninstall the akash-rook helm chart; one way to back them up is sketched after the list):

  • /var/lib/rook/mon-a;
  • /var/lib/rook/rook-ceph;
  • rook-ceph-mon secret;
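A minimal backup sketch for these locations (the backup paths and filenames are arbitrary):

sudo tar -czf /root/var-lib-rook-backup.tgz /var/lib/rook
kubectl -n rook-ceph get secret rook-ceph-mon -o yaml > rook-ceph-mon-secret.yaml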

Scaling OSD from 3 to 1 in rook-ceph

This procedure removes all OSDs from a selected storage node, in an environment with more than one storage node in the cluster and enough free disk space.
If you have a single storage node only, then you'll have to remove disk by disk.
If you have only a single disk, then you'll have to remove OSD by OSD, reclaiming the freed disk space in the VG (if that is your setup) using these hints, or waiting until rook-ceph supports this.

  • Scale osdsPerDevice from 3 to 1 and apply the rook-ceph-cluster helm chart;
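For that first step, the relevant part of the rook-ceph-cluster chart values could look like this (a sketch; the exact key path may differ between chart versions):

# values.yaml for the rook-ceph-cluster helm chart
cephClusterSpec:
  storage:
    config:
      osdsPerDevice: "1"   # was "3"

helm upgrade rook-ceph-cluster rook-release/rook-ceph-cluster -n rook-ceph -f values.yaml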