@e96031413
e96031413 / screening.py
Created January 12, 2020 07:42
Taking a screenshot of your PC with Python (Pillow, NumPy, OpenCV)
'''
original from https://clay-atlas.com/blog/2019/08/16/python-screening-tutorial/
'''
from PIL import ImageGrab
import numpy as np
import cv2

image = ImageGrab.grab()   # capture the full screen as a PIL image
width, height = image.size
frame = cv2.cvtColor(np.array(image), cv2.COLOR_RGB2BGR)  # PIL (RGB) -> OpenCV (BGR)
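The RGB-to-BGR channel conversion that OpenCV needs can also be done with plain NumPy slicing. A minimal sketch using a synthetic array (a real `ImageGrab.grab()` requires a display, so a hand-made pixel grid stands in for the screenshot here):

```python
import numpy as np

# A synthetic 2x2 "screenshot" in RGB order; the values are assumptions
# chosen so the channel swap is easy to see.
rgb = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)

# OpenCV expects BGR order; reversing the last axis converts RGB -> BGR.
bgr = rgb[:, :, ::-1]

print(bgr[0, 0].tolist())  # the pure-red pixel becomes [0, 0, 255] in BGR
```

Reversing the last axis is equivalent to `cv2.cvtColor(..., cv2.COLOR_RGB2BGR)` for 3-channel images, without needing OpenCV installed.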
e96031413 / cython_usage.py
Created January 12, 2020 08:19
Save time with Cython!
'''
Simple Cython Usage
'''
''' Step 0: Install Cython '''
# sudo pip3 install cython
''' Step 1: Create "a.py" '''
import sys
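The preview cuts off before the body of `a.py`. A hypothetical example of the kind of nested hot loop that typically benefits from Cython compilation (shown in pure Python here, since compiling requires the Cython toolchain; the function name and logic are assumptions, not the gist's actual code):

```python
# A CPU-bound nested loop: the classic candidate for moving into a .pyx
# file, adding C type declarations, and compiling with cythonize.
def pairwise_sum(n):
    total = 0
    for i in range(n):
        for j in range(n):
            total += i * j
    return total

print(pairwise_sum(10))  # 2025
```

In Cython, declaring `i`, `j`, and `total` with `cdef int`/`cdef long` lets the loop run as native C arithmetic instead of Python object operations, which is where the speedup comes from.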
e96031413 / 104_views.py
Created January 13, 2020 10:28
Demo views.py of my "104-Django-APP" project
from django.shortcuts import render
from django.http import HttpResponse
from django.views.decorators import csrf
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
def main(request):
    return render(request, 'index.html')
e96031413 / Heroku操作指令.sh
Created January 16, 2020 12:40
Notes on Heroku commands and troubleshooting
# Initial setup
$ heroku login                          # log in to Heroku
$ git init                              # initialize the git repository
$ heroku git:remote -a your-app-name    # use the app name configured on Heroku
$ heroku config:set DISABLE_COLLECTSTATIC=1
# Push to Heroku
$ git add .
$ git commit -am "make it better"
$ git push heroku master
e96031413 / file_input&output.py
Created January 16, 2020 12:48
Reading and writing files with Python
with open('fileName') as f:      # open the file; it closes automatically when the block ends
    f.read()                     # read the whole file as one string
    f.readline()                 # read one line (each call returns the next line)
    f.readlines()                # read every line into a list

# The write methods need the file opened in write ('w') or append ('a') mode:
with open('fileName', 'w') as f:
    f.write('some text')         # write a string
    f.writelines(['a\n', 'b\n']) # write a list of strings
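The methods above can be combined into a full write-then-read round trip. A minimal sketch using a temporary file (the file name is an arbitrary choice for the demo):

```python
import os
import tempfile

# Create a scratch file in a temporary directory so nothing is clobbered.
path = os.path.join(tempfile.mkdtemp(), "demo.txt")

with open(path, "w") as f:
    f.write("first\n")                      # write a single string
    f.writelines(["second\n", "third\n"])   # write a list of strings

with open(path) as f:
    lines = f.readlines()                   # read every line back into a list

print(lines)  # ['first\n', 'second\n', 'third\n']
```

Note that `writelines` does not add newlines; each string in the list must carry its own `\n`.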
e96031413 / SeleniumOnLinuxVM.sh
Created January 18, 2020 06:21
How to install Selenium and ChromeDriver on Linux
sudo apt-get update
sudo apt-get install -y unzip openjdk-8-jre-headless xvfb libxi6 libgconf-2-4
# Install Chrome.
curl -sS https://dl-ssl.google.com/linux/linux_signing_key.pub | sudo apt-key add -
echo "deb http://dl.google.com/linux/chrome/deb/ stable main" | sudo tee -a /etc/apt/sources.list.d/google-chrome.list
sudo apt-get -y update
sudo apt-get -y install google-chrome-stable
# Install ChromeDriver.
e96031413 / split_data.py
Created January 22, 2020 11:19
How to split YOLOv3 data set to training set and validation set?
import glob
import os
import numpy as np
import sys
current_dir = "./data/dataset/images"
split_pct = 10  # percentage of the images to reserve for validation
file_train = open("data/dataset/train.txt", "w")
file_val = open("data/dataset/val.txt", "w")
counter = 1
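The preview stops before the split loop. A sketch of the usual rule such scripts apply: with `split_pct = 10`, every 10th image path goes to the validation list and the rest to training (the file names below are assumptions for illustration, not the gist's actual loop):

```python
# Hypothetical image list standing in for the glob over ./data/dataset/images.
split_pct = 10
images = [f"img_{i}.jpg" for i in range(1, 21)]

interval = round(100 / split_pct)   # with 10%, route 1 in every 10 files to val
train, val = [], []
for counter, name in enumerate(images, start=1):
    if counter % interval == 0:
        val.append(name)            # every `interval`-th image -> validation
    else:
        train.append(name)          # everything else -> training

print(len(train), len(val))  # 18 2
```

In the real script the `train`/`val` lists would be written line by line into `train.txt` and `val.txt` for YOLOv3 to consume.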
e96031413 / geocoder.py
Created January 29, 2020 02:26
How to find location info with geocoder
#pip install geocoder
import geocoder
g = geocoder.ip('me')
g.latlng
#Out[1]: [22.0063, 120.3933]
g.address
#Out[2]: 'Kaohsiung, Takao, TW'
g.geojson
e96031413 / 104_crawler.py
Created January 30, 2020 06:15
104 job-listing crawler (customizable keywords, work location, full-time/unrestricted, ...)
# Code from: https://tlyu0419.github.io/2019/04/18/Crawl-JobList104/
# The final export-to-Excel step requires the xlrd package (pip install xlrd)
import pandas as pd
import re, time, requests
from selenium import webdriver
from bs4 import BeautifulSoup
# Add user-agent info (browser, operating system, etc.) to mimic a real browsing session
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.103 Safari/537.36'}
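One way to check what these headers look like on the wire, without hitting the real site, is to build a `requests` request and prepare it instead of sending it. A minimal sketch (the URL is a placeholder, not the actual 104 endpoint):

```python
import requests

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                         'AppleWebKit/537.36 (KHTML, like Gecko) '
                         'Chrome/73.0.3683.103 Safari/537.36'}

# prepare() builds the outgoing request object without any network I/O,
# so the headers the server would receive can be inspected locally.
req = requests.Request('GET', 'https://example.com/jobs', headers=headers).prepare()
print(req.headers['User-Agent'][:11])  # Mozilla/5.0
```

Sending a desktop-browser `User-Agent` like this is what keeps many sites from serving the crawler a stripped-down or blocked response.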
e96031413 / FB_FanPageCrawler.py
Created January 30, 2020 06:32
Facebook fan-page crawler, using the Taiwan Mobile fan page as an example
# Code from: https://tlyu0419.github.io/2019/05/01/Crawl-Facebook/
# The final export-to-Excel step requires the xlrd package (pip install xlrd)
import pandas as pd
import re, time, requests
from selenium import webdriver
from bs4 import BeautifulSoup
def FindLinks(url, n):
    Links = []
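The body of `FindLinks` is cut off in the preview. A generic link-extraction sketch with BeautifulSoup, the kind of step such a crawler performs on each fetched page (this is an illustration parsing inline HTML, not the gist's actual implementation):

```python
from bs4 import BeautifulSoup

# Stand-in HTML for a fetched page; a real crawler would get this from
# the selenium driver's page_source or a requests response.
html = '<html><body><a href="/post/1">one</a><a href="/post/2">two</a></body></html>'

soup = BeautifulSoup(html, 'html.parser')
links = [a['href'] for a in soup.find_all('a', href=True)]

print(links)  # ['/post/1', '/post/2']
```

Filtering `find_all('a', href=True)` skips anchor tags without an `href`, which avoids `KeyError` on decorative links.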