Hongbo Miao (hongbo-miao)
hongbo-miao / code_line_count.json
Last active June 13, 2024 10:00 code line count
{"label":"code lines","message":"190.9k","schemaVersion":1,"color":"blue","labelColor":"gray"}
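The JSON above follows the Shields.io endpoint-badge schema (`schemaVersion: 1` plus `label`/`message`/color fields). A minimal sketch of generating such a payload in Python (the function name `make_badge` is illustrative, not part of the gist):

```python
import json

def make_badge(label: str, message: str, color: str = "blue", label_color: str = "gray") -> str:
    # Shields.io endpoint badges require schemaVersion 1 alongside label/message.
    payload = {
        "label": label,
        "message": message,
        "schemaVersion": 1,
        "color": color,
        "labelColor": label_color,
    }
    return json.dumps(payload)

print(make_badge("code lines", "190.9k"))
```

Pointing `` at the raw gist URL then renders the badge.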
#if defined(ESP32)
#include <WiFiMulti.h>
WiFiMulti wifiMulti;
#define DEVICE "ESP32"
#elif defined(ESP8266)
#include <ESP8266WiFiMulti.h>
ESP8266WiFiMulti wifiMulti;
#define DEVICE "ESP8266"
hongbo-miao / hm-connect-cluster-connect-api.json
Created December 4, 2021 10:17
hm-connect-cluster-connect-api verify error
"name": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
"error_count": 3,
"groups": [
"Error Handling",
"Transforms: unwrap",
"Transforms: key",
hongbo-miao / docker-compose.yml
Created July 9, 2021 17:24 — forked from asafc/docker-compose.yml
OPAL example configuration with decision logs
version: "3.8"
# When scaling the opal-server to multiple nodes and/or multiple workers, we use
# a *broadcast* channel to sync between all the instances of opal-server.
# Under the hood, this channel is implemented by encode/broadcaster (see link below).
# At the moment, the broadcast channel can be either: postgresdb, redis or kafka.
# The format of the broadcaster URI string (the one we pass to opal server as `OPAL_BROADCAST_URI`) is specified here:
image: postgres:alpine
hongbo-miao /
Created July 9, 2021 17:24 — forked from asafc/
Example FastAPI server that accepts OPA decision logs and prints them to the console. You can run this example with uvicorn using the following command:
uvicorn opalogger:app --reload
import gzip
from typing import Callable, List
from fastapi import Body, FastAPI, Request, Response
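OPA uploads decision logs as gzip-compressed JSON arrays, so the core of such a server is decompressing the request body before parsing it. A stdlib-only sketch of that decoding step, independent of FastAPI (the function name `decode_decision_logs` is illustrative, not from the original gist):

```python
import gzip
import json
from typing import Any, List

def decode_decision_logs(body: bytes) -> List[Any]:
    # OPA gzip-compresses its decision-log uploads; fall back to plain JSON
    # in case compression is disabled on the OPA side.
    try:
        raw = gzip.decompress(body)
    except OSError:
        raw = body
    return json.loads(raw)

# Simulate what OPA would POST: a gzipped JSON array of decision events.
events = [{"decision_id": "abc", "result": True}]
payload = gzip.compress(json.dumps(events).encode())
print(decode_decision_logs(payload))
```

In the FastAPI version, `body` would come from `await request.body()` inside the route handler.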
hongbo-miao / Count lines in Git repo
Created June 18, 2021 18:07 — forked from mandiwise/Count lines in Git repo
A command to calculate lines of code in all tracked files in a Git repo
// Reference:
$ git ls-files | xargs wc -l
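The pipeline asks Git for its tracked files and sums their line counts with `wc -l`. The summing half can be sketched in Python (pure illustration, not part of the forked gist; `count_lines` is a hypothetical helper):

```python
from pathlib import Path
from typing import Iterable

def count_lines(paths: Iterable[str]) -> int:
    # Matches what `wc -l` totals: the number of newline characters per file.
    total = 0
    for p in paths:
        total += Path(p).read_bytes().count(b"\n")
    return total
```

Feeding it the output of `git ls-files` (e.g. via `subprocess`) reproduces the shell one-liner's total.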
hongbo-miao /
Created May 4, 2021 13:30 — forked from karpathy/
Minimal character-level language model with a Vanilla Recurrent Neural Network, in Python/numpy
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy)
BSD License
import numpy as np
# data I/O
data = open('input.txt', 'r').read() # should be simple plain text file
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
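The preview stops right after building the character set; the next step in the min-char-rnn gist is mapping characters to integer indices and back, so text can be fed to the network as class indices. A sketch of that vocabulary construction (the `data` string here stands in for the contents of `input.txt`):

```python
# Build bidirectional lookups between characters and integer class indices.
data = "hello world"            # stands in for the contents of input.txt
chars = sorted(set(data))       # sorted for a deterministic ordering
vocab_size = len(chars)
char_to_ix = {ch: i for i, ch in enumerate(chars)}
ix_to_char = {i: ch for i, ch in enumerate(chars)}

# Round-trip the text through the integer encoding.
encoded = [char_to_ix[ch] for ch in data]
decoded = "".join(ix_to_char[i] for i in encoded)
```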
from math import sqrt
import torch
from tqdm import tqdm
import matplotlib.pyplot as plt
import networkx as nx
from torch_geometric.nn import MessagePassing
from import Data
from torch_geometric.utils import k_hop_subgraph, to_networkx
import numpy as np
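The imports above pull in `k_hop_subgraph`, which extracts the nodes reachable within k edges of a seed node. Without installing PyTorch Geometric, the idea can be sketched as a plain breadth-first search over an adjacency list (a minimal sketch: `k_hop_nodes` is an illustrative stand-in, not the library API):

```python
from collections import deque
from typing import Dict, List, Set

def k_hop_nodes(adj: Dict[int, List[int]], seed: int, k: int) -> Set[int]:
    # BFS outward from the seed, stopping after k hops; this mirrors the node
    # set that torch_geometric.utils.k_hop_subgraph returns for an undirected graph.
    seen = {seed}
    frontier = deque([(seed, 0)])
    while frontier:
        node, depth = frontier.popleft()
        if depth == k:
            continue
        for nbr in adj.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, depth + 1))
    return seen

# Path graph 0-1-2-3: one hop from node 0 reaches {0, 1}.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(k_hop_nodes(adj, 0, 1))
```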
hongbo-miao /
Last active February 18, 2021 12:09
Health Form

Version 2/18/2021

The script is for IT employees who work in the Shanghai office.

Feel free to contact me if you have any questions.

How to use the script

Step 1

<link rel="shortcut icon" width="32px">
<canvas style="display: none" id="loader" width="16" height="16"></canvas>
class Loader {
constructor(link, canvas) { = link;
this.canvas = canvas;
this.context = canvas.getContext('2d');
this.context.lineWidth = 2;