
Wei Ting GOH goweiting

  • Edinburgh
@goweiting
goweiting / main.cpp
Created April 27, 2018 17:13
LUT transfer function
#include <stdio.h>
#include <stdlib.h>
#include <GL/glut.h>
#define GLUT_KEY_ESCAPE 27
#ifndef GLUT_WHEEL_UP
#define GLUT_WHEEL_UP 3
#define GLUT_WHEEL_DOWN 4
#endif
@goweiting
goweiting / longjob-on-DICE.md
Last active November 20, 2017 16:17
Running longjob on DICE machines

A quick overview of how to run longjob on DICE, focusing on keeping a Jupyter notebook running for a long period of time (28 days)

Setup

  1. ssh through the network gateway: ssh sXXX@student.ssh.inf.ed.ac.uk, and from there into any compute server you wish to use.
  2. create a screen session so that the process isn't killed after you log out:
$ screen -S <session-name>  # name the screen session
$ screen -S mlp             # e.g.

This opens a new screen terminal (see man screen for more)
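The note above stops at creating the session; the usual screen workflow from there (standard screen key bindings, not specific to this gist) is to start the job, detach, and reattach later:

```shell
# start the long-running job inside the screen session
$ jupyter notebook --no-browser   # example command; any long-running job works

# detach from the session (the job keeps running): press Ctrl-a, then d

# later, after logging back in to the same machine:
$ screen -ls        # list existing sessions
$ screen -r mlp     # reattach to the session named "mlp"
```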

@goweiting
goweiting / install.conda.md
Last active January 7, 2021 17:05
Installing graph-tool while keeping all other packages in a virtual env (conda)

There is no guarantee that this will work; I wrote this up in case anyone else has the same goal in mind. Using: Ubuntu 16.04; miniconda2

What I want to achieve:

  1. use graph-tool
  2. all other packages should fall in a virtual environment

Failed attempts:

  1. I attempted to build graph-tool from scratch, but it did not work and I couldn't seem to pull everything together.
  2. I attempted to use the graph-tool packages on Anaconda Cloud... (I couldn't make sense of them)
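For comparison, a sketch of one commonly suggested route: installing graph-tool from the conda-forge channel into its own environment. The channel and package names below are assumptions about current conda-forge packaging, not something this gist confirms; check the graph-tool installation docs for your setup.

```shell
# create an isolated env and install graph-tool from conda-forge
# (assumes a working conda install; names/channels may change over time)
$ conda create --name gt -c conda-forge graph-tool
$ conda activate gt   # older conda versions use: source activate gt
$ python -c "import graph_tool; print(graph_tool.__version__)"
```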
@goweiting
goweiting / config2a.sh
Last active March 23, 2017 20:31
network assignment bash script
#!/bin/sh
# IPFW Configuration file for part2a:
pwd
echo 'Creating networks for task 2A'
echo 'For each propagation delay (5, 25, 100ms), pipe config with plr 0.005 and bw 10MBit/s'
ipfw flush
ipfw add pipe 100 in
ipfw add pipe 200 out
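The preview cuts off before the pipes are configured. Going by the parameters stated in the echo lines above, the dummynet configuration would look something like this (a sketch only, shown for the 5 ms case; the 25 ms and 100 ms runs would repeat it with a different delay):

```shell
# configure each pipe with the stated loss rate and bandwidth
ipfw pipe 100 config delay 5ms plr 0.005 bw 10Mbit/s
ipfw pipe 200 config delay 5ms plr 0.005 bw 10Mbit/s
```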
@goweiting
goweiting / scrapper.py
Created January 14, 2017 22:26
Web scraper for downloading documents from a webpage
from bs4 import BeautifulSoup
import urllib
import os
# define the link to scrape from:
mainLink = 'http://www.cs.cmu.edu/~aarti/Class/10701_Spring14/lecs.html'
link = urllib.urlopen(mainLink)
# ext = '.pdf'
# use bs4 to download all the files with .pdf extension
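The preview ends before the download loop. As a hedged Python 3 sketch of the same idea using only the standard library (the gist itself uses Python 2's urllib plus bs4; the parser class below is a stand-in for illustration, not the gist's code), one can collect the .pdf hrefs and resolve them against the page URL:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class PdfLinkParser(HTMLParser):
    """Collect absolute URLs of <a href="...pdf"> links on a page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.pdf_links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        for name, value in attrs:
            if name == "href" and value and value.lower().endswith(".pdf"):
                # resolve relative links against the page URL
                self.pdf_links.append(urljoin(self.base_url, value))

# tiny inline sample standing in for the fetched page
html = '<a href="lec1.pdf">L1</a> <a href="notes.html">N</a>'
parser = PdfLinkParser('http://www.cs.cmu.edu/~aarti/Class/10701_Spring14/lecs.html')
parser.feed(html)
print(parser.pdf_links)
# → ['http://www.cs.cmu.edu/~aarti/Class/10701_Spring14/lec1.pdf']
```

Each collected URL could then be fetched with urllib.request.urlretrieve and written under a local directory.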