Since mopsled didn't renew their domain, this is from their original blog post, which was a useful reference:

https://web.archive.org/web/20180220190550/http://www.mopsled.com/2015/run-nodejs-on-nearlyfreespeechnet/

NearlyFreeSpeech.net (NFSN) is a very inexpensive web host, DNS provider, and domain registrar. In 2014, NFSN added support for Node.js, Django, and many other languages through persistent processes. This guide, based on the Django tutorial provided by NFSN, demonstrates the setup I used for creating a Node.js daemon.

NFSN Configuration

If you're creating a new website, select the Custom domain type:

(Screenshot: choose the custom domain option for the new site)
(ns clj-youtube-server.core
  (:require [clojure.java.jdbc :as sql]
            [compojure.core :refer :all]
            [compojure.handler :as handler]
            [ring.middleware.json :as middleware]
            [ring.adapter.jetty :as ring]
            [compojure.route :as route]))

;; Database connection: prefer DATABASE_URL, fall back to a local Postgres instance.
(def spec (or (System/getenv "DATABASE_URL")
              "postgresql://localhost:5432/youtuber"))
let hua = [];
for (let i = 1; i < 10; i++) {
  let num = 0;
  for (let j = 0; j < i; j++) {
    // 9 * 10^j numbers have (j + 1) digits; weight each group.
    num += 9 * Math.pow(10, j) * (140 - (j + i + 3));
  }
  hua.push(num);
}
<?php
require(dirname(__FILE__) . '/wp-load.php');
require(dirname(__FILE__) . '/wp-content/plugins/sitepress-multilingual-cms/sitepress.php');

// Helper: delete the meta field when the value is empty, otherwise update it.
// (The original `empty( $value ) OR ! $value` test was redundant; empty() covers both.)
function __update_post_meta( $post_id, $field_name, $value = '' )
{
    if ( empty( $value ) )
    {
        delete_post_meta( $post_id, $field_name );
    }
    else
    {
        update_post_meta( $post_id, $field_name, $value );
    }
}
/*
  SQL queries for matricules
*/
-- Latest revision of each document node: the subquery finds the max vid
-- per nid; joining it back fetches the full row. (The join below completes
-- the truncated original query.)
SELECT m.*
FROM (
    SELECT nid, MAX(vid) AS mvid
    FROM content_type_content_document
    GROUP BY nid
) latest
JOIN content_type_content_document m
    ON m.nid = latest.nid
   AND m.vid = latest.mvid;
// Randomly pick one of two dryers (returns 0 or 1).
function pickDryer() {
  return Math.floor(Math.random() * 2);
}

function placePairs(num) {
  var pairs = [];
  for (var l = 0; l < num; l++) { // declare l to avoid an implicit global
    pairs[l] = new Array(2);
  }
  var results = [];
from scrapy.item import Item, Field

class CraigslistItem(Item):
    title = Field()
    link = Field()
    description = Field()
# Note: the scrapy.contrib paths are from pre-1.0 Scrapy; newer versions
# use scrapy.spiders and scrapy.linkextractors instead.
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from craig.items import CraigslistItem
from scrapy.http import Request

class MySpider(CrawlSpider):
    name = "craig"
    allowed_domains = ["craigslist.org"]
    start_urls = ["https://cleveland.craigslist.org/search/mis"]