Xiong Jie xiong-jie-y

  • Tokyo, Japan
@xiong-jie-y
xiong-jie-y / outputs.py
Last active December 1, 2018 03:45
Architecture search of image classifier for cifar10.
import argparse
import os
import pickle
from keras.datasets import cifar10
from autokeras.image.image_supervised import ImageClassifier, PortableImageSupervised
import autokeras.constant
import torch
from sklearn.metrics import accuracy_score
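The gist's imports end with sklearn's `accuracy_score`, which the truncated script presumably uses to score the searched classifier's predictions on the CIFAR-10 test set. A minimal, self-contained illustration of that scoring step (the labels and predictions here are fabricated, not from the gist):

```python
from sklearn.metrics import accuracy_score

# Ground-truth CIFAR-10 class indices and hypothetical model predictions.
y_true = [3, 8, 8, 0, 6]
y_pred = [3, 8, 1, 0, 6]

# Fraction of exactly matching labels: 4 correct out of 5.
print(accuracy_score(y_true, y_pred))  # → 0.8
```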
@xiong-jie-y
xiong-jie-y / gist:7099829b1797d6867eb928264597979d
Created April 2, 2019 02:23
GDB stack trace of polynomial SVM classification hang
#0 0x00007fffe6a70bb8 in svm::Solver::Solve(int, svm::QMatrix const&, double const*, signed char const*, double*, double const*, double, svm::Solver::SolutionInfo*, int, int) ()
from /home/yusuke/miniconda3/envs/py37_automl_examples2/lib/python3.7/site-packages/sklearn/svm/libsvm.cpython-37m-x86_64-linux-gnu.so
#1 0x00007fffe6a7271b in svm::svm_train_one(svm_problem const*, svm_parameter const*, double, double, int*) ()
from /home/yusuke/miniconda3/envs/py37_automl_examples2/lib/python3.7/site-packages/sklearn/svm/libsvm.cpython-37m-x86_64-linux-gnu.so
#2 0x00007fffe6a74207 in svm_train ()
from /home/yusuke/miniconda3/envs/py37_automl_examples2/lib/python3.7/site-packages/sklearn/svm/libsvm.cpython-37m-x86_64-linux-gnu.so
#3 0x00007fffe6a579c9 in __pyx_pf_7sklearn_3svm_6libsvm_fit.isra.21 ()
from /home/yusuke/miniconda3/envs/py37_automl_examples2/lib/python3.7/site-packages/sklearn/svm/libsvm.cpython-37m-x86_64-linux-gnu.so
#4 0x00007fffe6a5f471 in __pyx_pw_7sklearn_3svm_6libsvm_1fit ()
fr
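The trace shows libsvm spinning inside `Solver::Solve` while fitting a polynomial kernel. A common mitigation (an assumption on my part, not something stated in the gist) is to standardize the features and cap libsvm's iteration count via `SVC(max_iter=...)`, so a badly conditioned problem raises a convergence warning instead of hanging indefinitely:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.rand(100, 5) * 1e3          # badly scaled features aggravate libsvm convergence
y = (X[:, 0] > 500).astype(int)

# Scaling keeps the polynomial kernel well conditioned; max_iter bounds the
# solver instead of the default -1 (unbounded), which is what can hang.
clf = SVC(kernel="poly", degree=3, max_iter=100_000)
clf.fit(StandardScaler().fit_transform(X), y)
```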
/**
* (C) Copyright IBM Corp. 2015, 2020.
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
@xiong-jie-y
xiong-jie-y / stream_window_to_virtual_camera.py
Created April 12, 2020 08:07
Virtual Character Streaming in Ubuntu
import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gdk
from gi.repository import GdkPixbuf
import numpy
def get_window_screen(window_id):
window = Gdk.get_default_root_window()
screen = window.get_screen()
typ = window.get_type_hint()
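The truncated `get_window_screen` presumably grabs the window into a GdkPixbuf; the step a virtual-camera pipeline then needs is converting the pixbuf's raw bytes into a numpy frame. A sketch of that conversion with synthetic data (the `width`/`height`/`rowstride` names mirror GdkPixbuf's accessors, but the helper and its input are fabricated for illustration):

```python
import numpy as np

def pixbuf_bytes_to_rgb(data, width, height, rowstride, n_channels=3):
    """Reshape raw pixbuf bytes into an (height, width, channels) array.

    GdkPixbuf pads each row to `rowstride` bytes, so every row must be
    sliced down to width * n_channels before the final reshape.
    """
    flat = np.frombuffer(data, dtype=np.uint8)
    rows = flat.reshape(height, rowstride)
    return rows[:, : width * n_channels].reshape(height, width, n_channels)

# Synthetic 2x2 RGB image whose rows are padded to 8 bytes (rowstride > width*3).
data = bytes([255, 0, 0,   0, 255, 0,   0, 0,    # row 0 + 2 padding bytes
              0, 0, 255,   255, 255, 0, 0, 0])   # row 1 + 2 padding bytes
frame = pixbuf_bytes_to_rgb(data, width=2, height=2, rowstride=8)
print(frame.shape)  # → (2, 2, 3)
```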
@xiong-jie-y
xiong-jie-y / HelloClient.cs
Last active April 13, 2020 13:31
Virtual Character Streaming in Ubuntu
using UnityEngine;
using VRM;
using AsyncIO;
using NetMQ;
using NetMQ.Sockets;
using MessagePack;
using System;
@xiong-jie-y
xiong-jie-y / subscribe_to_detection.py
Created April 19, 2020 10:15
Detection Subscriber
import zmq
import sys
context = zmq.Context()
subscriber = context.socket(zmq.SUB)
subscriber.connect("tcp://localhost:5555")
# receive only messages whose topic matches the given prefix
topic = sys.argv[1] if len(sys.argv) > 1 else "Detection"
subscriber.setsockopt(zmq.SUBSCRIBE, topic.encode('utf-8'))
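A matching publisher, sketched here in-process over the inproc transport so the topic filter can be seen end to end (the socket endpoint and the "Detection person …" payload are illustrative, not from the gist):

```python
import time
import zmq

context = zmq.Context()

# Publisher and subscriber share one context, so inproc transport works.
publisher = context.socket(zmq.PUB)
publisher.bind("inproc://detections")

subscriber = context.socket(zmq.SUB)
subscriber.connect("inproc://detections")
subscriber.setsockopt(zmq.SUBSCRIBE, b"Detection")

time.sleep(0.1)  # give the subscription time to propagate

publisher.send(b"Detection person 0.93")
publisher.send(b"Log this one is filtered out")  # non-matching prefix is dropped

msg = subscriber.recv()
print(msg)  # → b'Detection person 0.93'
```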
@xiong-jie-y
xiong-jie-y / hand_face_tracking_desktop.pbtxt
Created April 21, 2020 15:32
Multi face and multi hand tracking using mediapipe on Ubuntu.
# MediaPipe graph that performs multi-hand tracking with TensorFlow Lite on GPU.
# Used in the examples in
# mediapipe/examples/android/src/java/com/mediapipe/apps/multihandtrackinggpu.
# Images coming into and out of the graph.
input_stream: "input_video"
output_stream: "output_video"
# Collection of detected/processed faces, each represented as a list of
# landmarks. (std::vector<NormalizedLandmarkList>)
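The preview above only shows the graph's stream declarations; MediaPipe graphs wire their calculators together with `node` blocks in the same pbtxt syntax. An illustrative node (an assumption about the graph's contents, not copied from this gist) that uploads the incoming CPU frame to GPU memory:

```pbtxt
node {
  calculator: "ImageFrameToGpuBufferCalculator"
  input_stream: "input_video"
  output_stream: "input_video_gpu"
}
```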
@xiong-jie-y
xiong-jie-y / gist:debd100f71f550707897872774f5c0cd
Created April 21, 2020 15:34
Multi face and multi hand tracking with Mediapipe on Ubuntu