Praveen Yadav (pyadav)
🐢 learning new stuff
@pyadav
pyadav / Event-stream based GraphQL subscriptions.md
Created March 29, 2017 16:32 — forked from OlegIlyenko/Event-stream based GraphQL subscriptions.md
Event-stream based GraphQL subscriptions for real-time updates

In this gist I would like to describe an idea for GraphQL subscriptions. It was inspired by conversations about subscriptions in the GraphQL Slack channel and by several GitHub issues, like #89 and #411.

Conceptual Model

At the moment GraphQL allows two types of operations:

  • query
  • mutation

The reference implementation also adds a third operation type: subscription. It does not have any defined semantics yet, so here I would like to propose one possible interpretation of those semantics and the reasoning behind it.
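To make this concrete, here is a small illustrative sketch (not part of the original gist) that treats a subscription as an event stream mapped to results pushed to the client; the event name, field names, and payload shape are all hypothetical:

// Hypothetical sketch: a subscription such as
//   subscription { commentAdded { id text } }
// modeled as an event stream whose events are turned into pushed GraphQL-style results.
const EventEmitter = require('events');
const events = new EventEmitter();

// Whenever the underlying event fires, resolve the subscription's selection
// set against the event payload and push a result downstream to the subscriber.
function subscribe(onResult) {
  events.on('commentAdded', payload => {
    onResult({ data: { commentAdded: { id: payload.id, text: payload.text } } });
  });
}

subscribe(result => console.log(JSON.stringify(result)));
events.emit('commentAdded', { id: '1', text: 'First comment' });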

@pyadav
pyadav / amazon.md
Created December 30, 2016 19:21 — forked from anonymous/amazon.md
Amazon Noncompete and Invention Assignment

I recently received a job offer from Amazon AWS. The pay and benefits are quite decent. I do really like the role and it would be work that interests me. The people that I have met so far in interviews are great.

What I do not like is the documents I have to sign to get started. They include stuff like this:

"ATTENTION AND EFFORT. During employment, Employee will devote Employee’s entire productive time, ability, attention, and effort to

@pyadav
pyadav / Middleware.js
Created December 23, 2016 06:00 — forked from unbug/Middleware.js
Powerful JavaScript middleware pattern implementation; apply middleware to any object.
'use strict';
/* eslint-disable consistent-this */
let applyMiddlewareHash = [];
/**
 * Composes single-argument functions from right to left. The rightmost
 * function can take multiple arguments as it provides the signature for
 * the resulting composite function.
 *
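// (Sketch, an assumption: the gist preview is cut off above; a typical compose
// matching the comment would look roughly like the following.)
function compose(...funcs) {
  if (funcs.length === 0) return arg => arg;
  if (funcs.length === 1) return funcs[0];
  return funcs.reduce((a, b) => (...args) => a(b(...args)));
}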
@pyadav
pyadav / README.md
Created December 14, 2016 11:15 — forked from sterlingwes/README.md
Converting a scanned PDF to EPUB ebook (or other format)

Caveat

You're not going to get a beautiful EPUB out the other end - if that's what you're looking for, expect to do some manual clean-up work yourself.

Basic order of operations (a rough script sketch follows the list):

  • Convert your PDF to an OCR-friendly format
  • OCR that shit into plaintext
  • Convert that plaintext into your format of choice (in this case, an EPUB)
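For illustration only, a rough Node.js sketch of that pipeline, assuming pdftoppm (Poppler), tesseract, and Calibre's ebook-convert are installed; the tool choices and flags are my assumptions, not taken from this README:

// Hypothetical pipeline sketch; the tools and flags below are assumptions.
const { execFileSync } = require('child_process');
const fs = require('fs');

// 1. Convert the PDF into per-page PNG images (300 dpi), named page-*.png.
execFileSync('pdftoppm', ['-png', '-r', '300', 'book.pdf', 'page']);

// 2. OCR each page image into plaintext (tesseract writes page-N.txt next to it).
const pages = fs.readdirSync('.')
  .filter(f => /^page-\d+\.png$/.test(f))
  .sort((a, b) => parseInt(a.match(/\d+/)[0], 10) - parseInt(b.match(/\d+/)[0], 10));
pages.forEach(png => execFileSync('tesseract', [png, png.replace(/\.png$/, '')]));

// 3. Concatenate the page text and convert it into an EPUB.
const text = pages.map(p => fs.readFileSync(p.replace(/\.png$/, '.txt'), 'utf8')).join('\n');
fs.writeFileSync('book.txt', text);
execFileSync('ebook-convert', ['book.txt', 'book.epub']);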
@pyadav
pyadav / gist:1f19aaea62fb3b0ccbdb07add6cc9278
Created December 9, 2016 05:17 — forked from Devko/gist:1064553
Node.JS + Redis = Link Crawling
/*
 * Crawling the Web (http://bitbucket.org/feuervogel/nodejs-crawler)
 *
 * 1. Start with a non-empty list of URLs
 * 2. For each URL:
 * 2. a) Download the HTML
 * 2. b) Extract the hyperlinks
 * 2. c) Append the hyperlinks to a new list
 * 3. Merge both lists
 * 4. GOTO 1.
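// Minimal in-memory sketch of the loop described above (an assumption: the
// original gist keeps its URL lists in Redis and its code is truncated in this preview).
// Uses Node's built-in fetch and a naive href regex purely for illustration.
async function crawl(startUrls, maxPages) {
  let queue = [...startUrls];                   // 1. start with a non-empty list of URLs
  const visited = new Set();
  while (queue.length > 0 && visited.size < maxPages) {
    const found = [];
    for (const url of queue) {                  // 2. for each URL
      if (visited.has(url) || visited.size >= maxPages) continue;
      visited.add(url);
      const html = await (await fetch(url)).text();                                  // 2. a) download the HTML
      const links = [...html.matchAll(/href="(https?:\/\/[^"]+)"/g)].map(m => m[1]); // 2. b) extract the hyperlinks
      found.push(...links);                                                          // 2. c) append them to a new list
    }
    queue = found.filter(u => !visited.has(u)); // 3. merge both lists, 4. GOTO 1
  }
  return [...visited];
}

crawl(['https://example.com'], 20).then(urls => console.log(urls));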
@pyadav
pyadav / webcrawler.js
Created December 9, 2016 05:15 — forked from amoilanen/webcrawler.js
Simple PhantomJS-based web crawler library
// PhantomJS (http://phantomjs.org/) based web crawler. Anton Ivanov, anton.al.ivanov@gmail.com, 2012
// UPDATE: This gist has been made into a Node.js module and can now be installed with "npm install js-crawler";
// the Node.js version does not use PhantomJS, but the API available to the client is similar to the present gist.
(function(host) {
  function Crawler() {
    this.visitedURLs = {};
  };
@pyadav
pyadav / crawler.js
Created December 9, 2016 05:15 — forked from evgeny-t/crawler.js
Crawler for Stitcher podcasts
'use strict';
/*
npm install cheerio promise request request-promise
*/
var fs = require('fs');
var rp = require('request-promise');
var request = require('request');
var cheerio = require('cheerio');
var WebSiteHistory = [{
   Id: 1038984165156974140  /* Will repeat a lot (for each WebSite_Id x History_Id); it is a hash of the Href. */
  ,WebSite_Id: 542          /* Id of the crawled site */
  ,History_Id: 6            /* Id of the date-time when it was crawled */
  ,Data: new Date()
  ,Page: [{
     Id: 3125795879564125365
    ,Href: "http://www.terra.com.br" /* Will repeat a lot (for each WebSite_Id x History_Id). */
@pyadav
pyadav / package.json
Created December 9, 2016 05:08 — forked from tpai/package.json
A web crawler sample based on nightwatch.
{
  "name": "nightwatch_webcrawler",
  "description": "A web crawler sample based on nightwatch.",
  "version": "0.0.1",
  "author": {
    "name": "tonypai",
    "email": "tony77794@gmail.com"
  },
  "homepage": "http://github.com/tpai",
  "dependencies": {
@pyadav
pyadav / Contract
Created December 8, 2016 04:48
Contract
# Website Contract

## [DATE]

## Summary:

I’ll always do my best to fulfil your needs and meet your expectations, but it’s important to have things written down so that we both know what’s what, who should do what and when, and what will happen if something goes wrong. In this contract you won’t find any complicated legal terms or long passages of unreadable text. I’ve no desire to trick you into signing something that you might later regret. What I do want is what’s best for both parties, now and in the future.

So, in short:

You, [CLIENT NAME], are hiring me, [YOUR NAME], trading as nothingrandom, located at [ADDRESS], to [THE JOB YOU ARE DOING] for the estimated total price of [ESTIMATED PRICE], as outlined in our previous correspondence. Of course it’s a little more complicated, but I’ll get to that.