aconanlai / node-nfs.txt
Created June 27, 2019 01:31
node on nearlyfreespeech
Since mopsled didn't renew their domain, this is from their original blog post, which was a useful reference:
https://web.archive.org/web/20180220190550/http://www.mopsled.com/2015/run-nodejs-on-nearlyfreespeechnet/
NearlyFreeSpeech.net (NFSN) is a very inexpensive web host, DNS provider, and domain registrar. In 2014, NFSN added support for Node.js, Django, and many other languages through persistent processes. This guide, based on the Django tutorial provided by NFSN, demonstrates the setup I used to create a Node.js daemon.
NFSN Configuration
If you’re creating a new website, select the Custom domain type:
[Image: Choose custom domain option for new site]
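The rest of the guide is cut off here; it went on to run a Node.js daemon as an NFSN persistent process. As a rough sketch of the kind of server such a daemon runs (not the guide's actual code), something like the following would do — the server.js filename and the 3000 fallback are assumptions, and the port has to match whatever NFSN is configured to forward requests to:

// server.js - minimal sketch of a long-running Node.js HTTP server
// (filename and the 3000 fallback are assumptions, not from the original guide;
// the port should match the one NFSN forwards traffic to)
const http = require('http');

const port = process.env.PORT || 3000;

http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from NearlyFreeSpeech\n');
}).listen(port, () => {
  console.log('listening on ' + port);
});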
;; clj-youtube-server.core: namespace declaration and database connection spec
(ns clj-youtube-server.core
  (:require [clojure.java.jdbc :as sql]
            [compojure.core :refer :all]
            [compojure.handler :as handler]
            [ring.middleware.json :as middleware]
            [ring.adapter.jetty :as ring]
            [compojure.route :as route]))

;; use DATABASE_URL when set, otherwise fall back to a local Postgres database
(def spec (or (System/getenv "DATABASE_URL")
              "postgresql://localhost:5432/youtuber"))
// for each digit count i from 1 to 9, sum 9 * 10^j * (140 - (j + i + 3))
// over j = 0..i-1 and collect the totals in hua
let hua = [];
for (let i = 1; i < 10; i++) {
  let num = 0;
  for (let j = 0; j < i; j++) {
    num += (9 * Math.pow(10, j)) * (140 - (j + i + 3));
  }
  hua.push(num);
}
<?php
require(dirname(__FILE__) . '/wp-load.php');
require(dirname(__FILE__) . '/wp-content/plugins/sitepress-multilingual-cms/sitepress.php');

// delete the meta field when the value is empty, otherwise update it
// (the else branch completes the truncated original, following the function's name)
function __update_post_meta( $post_id, $field_name, $value = '' )
{
    if ( empty( $value ) )
    {
        delete_post_meta( $post_id, $field_name );
    }
    else
    {
        update_post_meta( $post_id, $field_name, $value );
    }
}
/*
SQL queries for matricules
*/
-- latest revision (highest vid) for each node in content_type_content_document
-- (the join back to the table completes the truncated original query)
SELECT m.*
FROM (
  SELECT nid, MAX(vid) AS mvid
  FROM content_type_content_document
  GROUP BY nid
) latest
JOIN content_type_content_document m
  ON m.nid = latest.nid AND m.vid = latest.mvid;
// pick one of two dryers at random: returns 0 or 1
function pickDryer() {
  return Math.floor(Math.random() * 2);
}

// build num empty pairs (two-slot arrays); the original gist is cut off
// just after the results array is declared
function placePairs(num) {
  var pairs = [];
  for (var l = 0; l < num; l++) {
    pairs[l] = new Array(2);
  }
  var results = [];
# items.py: fields collected for each Craigslist posting
from scrapy.item import Item, Field

class CraigslistItem(Item):
    title = Field()
    link = Field()
    description = Field()

# spider (older Scrapy import paths, as in the original gist)
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from craig.items import CraigslistItem
from scrapy.http import Request

class MySpider(CrawlSpider):
    name = "craig"
    allowed_domains = ["craigslist.org"]
    start_urls = ["https://cleveland.craigslist.org/search/mis"]
    # the original gist is truncated here (no rules or parse callback)