Jordan Schatz (shofetim) — GitHub gists
<!--# include file="/partials/reseller-header.html" -->
<div class="container-fluid">
  <div class="row">
    <div class="col-sm-12 main">
      <center>
        <img src="/partials/dashboard.png">
      </center>
    </div>
  </div>
</div>
-- Signups by week and site (most recent week only, because of LIMIT 1)
SELECT
  DATE_TRUNC('week', created_at)::date AS "date",
  COUNT(CASE WHEN site = 'bobosales.com' THEN 1 END) AS "Bobosales",
  COUNT(CASE WHEN site = 'shops.ksl.com' THEN 1 END) AS "Shops",
  COUNT(CASE WHEN site IS NULL THEN 1 END) AS "Uncategorized"
FROM users
GROUP BY 1
ORDER BY 1 DESC
LIMIT 1;
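The `COUNT(CASE WHEN … THEN 1 END)` pivots above can also be written with PostgreSQL's aggregate `FILTER` clause (available since 9.4), which is equivalent but reads more directly; this is a variant sketch, not part of the original gist:

```sql
-- Same report using FILTER (PostgreSQL 9.4+)
SELECT
  DATE_TRUNC('week', created_at)::date AS "date",
  COUNT(*) FILTER (WHERE site = 'bobosales.com') AS "Bobosales",
  COUNT(*) FILTER (WHERE site = 'shops.ksl.com') AS "Shops",
  COUNT(*) FILTER (WHERE site IS NULL) AS "Uncategorized"
FROM users
GROUP BY 1
ORDER BY 1 DESC
LIMIT 1;
```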
#!/usr/bin/env node
'use strict';
/*
I am like git, except that I act on 'projecttown' repos that share a
directory with me, and I add an extra command, `recent`, which
displays each repo's branches and the last commit time for each branch.
*/
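A minimal sketch of how the `recent` subcommand might gather per-branch commit times. The helper and the sample data here are hypothetical (the body of the original script is not shown); in the real tool the raw string would come from running `git for-each-ref --format="%(refname)|%(committerdate:iso8601)" refs/heads` in each repo directory via `child_process.execSync`:

```javascript
'use strict';

// Parse `git for-each-ref` output of the form
// "refs/heads/<branch>|<iso8601 committer date>", one ref per line.
function parseBranches(raw) {
  return raw.trim().split('\n').filter(Boolean).map(function (line) {
    var parts = line.split('|');
    return {
      branch: parts[0].replace('refs/heads/', ''),
      lastCommit: parts[1]
    };
  });
}

// Hypothetical sample of what git would emit for one repo:
var sample = 'refs/heads/master|2016-05-01 10:00:00 -0600\n' +
             'refs/heads/dev|2016-04-28 09:30:00 -0600';

parseBranches(sample).forEach(function (b) {
  console.log(b.branch + '\t' + b.lastCommit);
});
```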
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJOPbAlvLKk6uLkcvQATL7XCKTWBBh55uXH0hYdrkfT1NELxLum6/gNJl3leKmxabnroq6d3XG6FsMDXZC9RfaXjUtuLoyyTDXm1t5UxOWDJmrHLCFWQO+ojMu8u7zTRkpA5++hJpVpVnR3zN6Sde6EZeYDZwTMRfv0pQ+bhxNlb2kEIes6gKDOFajrvtn7T7dWi8au6nQOm2Out1Fy/rcD+xeLYudcXEgPxwJqljieNJMPJO+1cv6rK91GsIGCPahYaCZMP4Vc1IZiGyuDlF/wYchp5vx4/QOK4y6LovDjwIQaq9n8MrupJo4MM7XQVWnWMaYKQRkO+rh7PglBua3 jordans@manifestwebdesign.com
'use strict';
var env = process.env;
var fs = require('fs');
var path = require('path');
var models = require('./models.js');
var Sequelize = require('sequelize');
var parse = require('csv-parse');
var log = console.log.bind(console);
'use strict';
var Sequelize = require('sequelize');
var env = process.env;
var log = console.log.bind(console);
var sequelize = new Sequelize(env.META_DB_NAME, env.META_DB_USER, env.META_DB_PASS, {
  host: env.META_DB_HOST,
  port: env.META_DB_PORT,
  dialect: 'mssql'
});
$ cat /etc/issue
Ubuntu 16.04 LTS
$ uname -a
Linux conn 4.4.0-22-generic #40-Ubuntu SMP Thu May 12 22:03:46 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
$ lspci -vvn | grep 43 -A7
04:00.0 0280: 14e4:4331 (rev 02)
Subsystem: 14e4:4331
Control: I/O- Mem+ BusMaster- SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
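PCI ID `14e4:4331` is a Broadcom BCM4331 wireless adapter. The usual fix on Ubuntu 16.04 is Broadcom's proprietary `wl` driver from the restricted repositories; these commands are a suggested remedy, an assumption about this machine's setup rather than part of the original notes:

```shell
# Install Broadcom's proprietary wl driver for the BCM4331 (14e4:4331).
# Requires the restricted/multiverse repositories to be enabled.
sudo apt-get update
sudo apt-get install bcmwl-kernel-source
# Load the driver without rebooting
sudo modprobe wl
```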
shofetim / README.md
Created March 24, 2016 23:06 — forked from dannguyen/README.md
Using Google Cloud Vision API to OCR scanned documents to extract structured data

Using Google Cloud Vision API's OCR to extract text from photos and scanned documents

Just a quickie test in Python 3 (using Requests) to see if Google Cloud Vision can be used to effectively OCR a scanned data table and preserve its structure, in the way that products such as ABBYY FineReader can OCR an image and provide Excel-ready output.

The short answer: No. While Cloud Vision provides bounding polygon coordinates in its output, it doesn't provide them at the word or region level, which would be needed to calculate the data delimiters.
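For reference, the request shape being tested looks roughly like this. This is a minimal reconstruction in Python 3, not dannguyen's actual script; the endpoint URL and `GOOGLE_API_KEY` handling are assumptions:

```python
import base64

def build_ocr_request(image_bytes):
    """Build the JSON body for Cloud Vision's images:annotate endpoint,
    asking for OCR (TEXT_DETECTION) on a single base64-encoded image."""
    return {"requests": [{
        "image": {"content": base64.b64encode(image_bytes).decode("ascii")},
        "features": [{"type": "TEXT_DETECTION"}],
    }]}

# Hypothetical usage with requests (network call not made here):
#   import os, requests
#   endpoint = ("https://vision.googleapis.com/v1/images:annotate?key="
#               + os.environ["GOOGLE_API_KEY"])
#   resp = requests.post(endpoint,
#                        json=build_ocr_request(open("scan.png", "rb").read()))
```

Each `textAnnotations` entry in the response carries a `boundingPoly`, which is where the coordinate limitation described above shows up.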

On the other hand, the OCR quality is pretty good if you just need to identify text anywhere in an image, without regard to its physical coordinates. I've included two examples:

### 1. A low-resolution photo of road signs