- arrays: https://gist.github.com/ourmaninamsterdam/1be9a5590c9cf4a0ab42
- questions: https://gist.github.com/ourmaninamsterdam/800ea80d463a72711adf#html
- random str: https://gist.github.com/ourmaninamsterdam/122d97d549e609e458fa
- async wait https://ponyfoo.com/articles/understanding-javascript-async-await
- es6 https://ponyfoo.com/articles/es6
- Business Cheat Sheet https://gist.github.com/eugeneiiim/6322717 (also remember that most successful businesses didn't use a cheat sheet or guide to achieve success, or maybe I'm wrong ¯\(°_o)/¯ )
- All of 'em https://duck.co/ia?repo=goodies&topic=programming
- Another one https://gist.github.com/blagoeres/f02c68d6a8c51d067a68
- Some french guy https://gist.github.com/erkobridee/3794134
- Moar https://gist.github.com/daveamato/27f2e5b7c0614cf0719d0aa39b278843
- Most starred repos https://gist.github.com/kaizensoze/00ccfb395ec8410daec2
- Mixins https://gist.github.com/andreascarpello/ac769d14588b52950d16
- *nix reference https://gist.github.com/pebreo/fe254c364abfd958d5d8
- Wiki all the things https://gist.github.com/demidovakatya/3caab70db1b23716ad09
- Dictionary of the most common english words https://gist.github.com/krig/4590005 (so imagine a text editor that does not allow writing of any words except these ... or a database-driven application that does not allow any entries except these ... or a markov chain that asks "what did you mean by X?" if any words not in this list are used)
- all the functional programming https://gist.github.com/sadcitizen/7d656e0778cd2d1ce10efed4374d83cb
- testing (specifically: capybara vim markdown github git siteprism tmux) https://gist.github.com/kingslayer/d1ff88249fbc4d0d4381
- accessibility guide https://gist.github.com/blzaugg/9813439
- dev tools https://gist.github.com/Demwunz/7468194
- links to live by https://gist.github.com/patrickbrandt/2d3bee7466dc06bb7264
- Security resources https://gist.github.com/0xf165/f3ebce17db0c836449dc32001ad12a89
- SEO https://gist.github.com/dypsilon/8275167
- awesome python https://github.com/vinta/awesome-python#web-content-extracting
- tradecraft https://grugq.github.io/resources/some_elements_of_intelligence_work-dulles.txt
- decent microdata implementation https://schema.org/Recipe
- needful https://github.com/novnc/websockify
- http layering https://www.npmjs.com/package/dualapi
because it doesn't fucking matter anymore — focus on what gets the job done
- gulp-angular https://github.com/Swiip/generator-gulp-angular
- angularAMD https://github.com/marcoslin/generator-angularAMD
- angularAMD working with ui-router http://stackoverflow.com/a/27466890
- get functional with videos https://gist.github.com/bryanhunter/c670233f7b4a942b9c8e
- daily challenges https://www.reddit.com/r/dailyprogrammer
- cs https://gist.github.com/alexkuhl/a62c2a798c89080528ac
- algos https://gist.github.com/erikgrueter/af5bb427a45c3548874f and https://gist.github.com/billhance/5e83b0c673f7ed8812e4271756a010bc
- tech https://gist.github.com/TSiege/cbb0507082bb18ff7e4b
- JS https://gist.github.com/monkecheese/adc88c0213f708bb9f22
- JS https://gist.github.com/rasheedamir/cfe7fc29103f408e13e8
- Python https://gist.github.com/denhartog/0d975d787f2806fce044
- DataFrames in Spark https://databricks.com/blog/2015/02/17/introducing-dataframes-in-spark-for-large-scale-data-science.html
- Cheat sheet for Spark with Python https://gist.github.com/evenv/b4d5f3054d7260e6c3d3
- Long Guide https://gist.github.com/gpfreitas/334dc2a6c0bac16a71f6
- data.table in R https://gist.github.com/TSiege/cbb0507082bb18ff7e4b
- Needful https://gist.github.com/crockettcobb/7094b1ea2932da1c9f8e
- Probability Cheat sheet http://www.wzchen.com/probability-cheatsheet/
- Calculus https://www.quora.com/What-are-the-best-resources-for-mastering-multivariable-calculus
- Linear Algebra https://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/
- Data Blogs https://www.quora.com/What-are-the-best-blogs-about-data
- public data sets http://www.datasciencecentral.com/profiles/blogs/great-github-list-of-public-data-sets?overrideMobileRedirect=1
- https://github.com/toddmotto/public-apis
- password construction guidelines https://www.sans.org/security-resources/policies/general/pdf/password-construction-guidelines
- niceware https://diracdeltas.github.io/niceware/
- big-o cheat sheet http://bigocheatsheet.com/
- cheat sheet http://www.datavizcatalogue.com/
- cheat sheet https://gist.github.com/emekankurumeh/49f7701171a773b74954
- bootcamp http://gribblelab.org/CBootcamp/
- nyancat https://github.com/klange/nyancat.git
- https://gist.github.com/sethbergman/b1102dba03b0e25679f2
- https://gist.github.com/lin/9edf4e4ab0d9351ebe96
- https://gist.github.com/theoretick/6033200
- https://gist.github.com/Algogator/f309eb93abee91b72f52
- https://gist.github.com/Coolagin/2990403
- https://gist.github.com/kenrett/7553278
- https://gist.github.com/carols10cents/f505ed97f495ea37a4b4
fs, os, net, http, events
- Complete Cheat sheet https://gist.github.com/LeCoupa/985b82968d8285987dc3
- Cheat sheet https://gist.github.com/5310/68ac677e968eb6d0cfb54913ab975b8d
- IBM quiz https://www.ibm.com/developerworks/library/j-nodejsquiz/sidefile.html
var events = require('events')
var emitter = new events.EventEmitter()
emitter.on('knock', function () {
  console.log('Who\'s there?')
})
emitter.on('knock', function () {
  console.log('Go away!')
})
// both listeners fire on every emit, so this logs four lines in total
emitter.emit('knock')
emitter.emit('knock')
var express = require('express')
var app = express()
// respond with "Hello World!" on the homepage
app.get('/', function (req, res) {
  res.send('<h1>Hello JavaScript!</h1>');
});
// accept POST request on the homepage
app.post('/', function (req, res) {
  res.send('Got a POST request');
});
// accept GET request at /user
app.get('/user', function (req, res) {
  res.send('<h1>Hello John Doe!</h1>');
});
// accept PUT request at /user
app.put('/user', function (req, res) {
  res.send('Got a PUT request at /user');
});
// accept DELETE request at /user
app.delete('/user', function (req, res) {
  res.send('Got a DELETE request at /user');
});
var server = app.listen(3000, function () {
  var host = server.address().address;
  var port = server.address().port;
  console.log('Example app listening at http://%s:%s', host, port);
});
const Seneca = require('seneca')
const SenecaWeb = require('seneca-web')
const Express = require('express')
const seneca = Seneca()
seneca.use(SenecaWeb, {
  context: Express(),
  adapter: require('seneca-web-adapter-express')
})
seneca.ready(() => {
  const app = seneca.export('web/context')()
  app.listen(3000)
})
'use strict'
let defer = require('promise-defer')
let request = require('request')
let docopt = require('docopt')
let cli = __parser__(function () {/*!
Usage:
  node index.js all
  node index.js get <id>
  node index.js -h | --help
  node index.js --version
*/});
function __parser__ (f) {
  /// @inner
  /// @description
  /// Simple comment-based usage document parser.
  return f.toString().
    replace(/^[^\/]+\/\*!?/, '').
    replace(/\*\/[^\/]+$/, '');
}
function __cli__ (config) {
  let API = {
    baseUrl: 'http://jsonplaceholder.typicode.com/',
    all () {
      this.request('posts').then(function (result) {
        console.log(result)
      }, function (error) {
        // rejection handlers receive only the error
        console.error(error)
      })
    },
    get (id) {
      this.request('posts/' + id).then(function (result) {
        console.log(result)
      }, function (error) {
        console.error(error)
      })
    },
    request (resource) {
      let def = defer()
      let content = ''
      request.get(this.baseUrl + resource)
        .on('error', function (error) {
          def.reject(error.message)
        })
        .on('data', function (chunk) {
          content += chunk
        })
        .on('end', function () {
          let result = JSON.parse(content)
          def.resolve(result)
        })
      return def.promise
    }
  }
  if (config['all']) {
    API.all()
  } else if (config['get']) {
    API.get(config['<id>'])
  } else {
    console.log('No command provided.')
  }
}
let initConfig = docopt.docopt(cli, { version: '0.0.1' })
module.exports = __cli__(initConfig)
HTTP is commonly associated with REST, which uses "resources" as its core concept. In contrast, GraphQL's conceptual model is an entity graph. As a result, entities in GraphQL are not identified by URLs.
GraphQL is often described as more efficient than REST because it lets clients ask for multiple resources in one request, which saves round trips, and also lets clients filter down to only the fields they actually need. The transport ends up looking similar, but the more powerful query language allows the client to get exactly the data it needs and no more.
https://stackoverflow.com/questions/40669050/is-graphql-stateless?rq=1
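The field-selection point can be illustrated with a toy sketch (not a real GraphQL engine; the user object and the select helper are invented for illustration): the client names exactly the fields it wants and gets nothing else back.

```javascript
// Toy illustration of GraphQL-style field selection (not a real GraphQL
// engine): the client asks for specific fields and receives only those.
const user = { id: 1, name: 'Ada', email: 'ada@example.com', bio: '...' }

// Hypothetical helper: copy only the requested fields from an entity.
function select (entity, fields) {
  const result = {}
  for (const f of fields) result[f] = entity[f]
  return result
}

select(user, ['name', 'email'])
// → { name: 'Ada', email: 'ada@example.com' }
```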
- https://www.nginx.com/blog/introduction-to-microservices/
- https://www.martinfowler.com/articles/microservices.html
- http://www.nearform.com/nodecrunch/microservices-software-components-work/
- http://www.cmswire.com/digital-experience/microservices-make-inroads-replacing-the-cms-monolith/
- https://www.ics.uci.edu/~fielding/pubs/dissertation/rest_arch_style.htm
REST provides access to a specific resource, e.g. a user or a product. The response embodies an assumption about what data you will want: it is probably everything about that resource, whether or not you use all of it.
There is also the N+1 problem. As an example, take a user who has and belongs to many relationships: with a RESTful API you would make a request for the user, e.g. /users/:id, then a request for all their relationships, e.g. /users/:id/relationships, so that's two requests already. The relationships endpoint might be designed to embed both the relationship (friend, family member, etc.) and the related user in the resulting array, but if the API doesn't make that assumption, you're going to have to make a request to each user endpoint to get the data on each user in each relationship.
https://stackoverflow.com/questions/40671105/projects-where-rest-is-more-suitable-over-graphql
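A minimal sketch of the N+1 pattern described above, with mock functions standing in for real /users/:id and /users/:id/relationships endpoints (all names and data here are hypothetical):

```javascript
// Mock "endpoints" that count how many requests a client would make.
let requests = 0
const users = { 1: { id: 1 }, 2: { id: 2 }, 3: { id: 3 } }
const relationships = { 1: [{ type: 'friend', userId: 2 }, { type: 'family', userId: 3 }] }

const getUser = id => { requests++; return users[id] }                 // GET /users/:id
const getRelationships = id => { requests++; return relationships[id] || [] } // GET /users/:id/relationships

function loadUserWithRelationships (id) {
  const user = getUser(id)                          // 1 request
  const rels = getRelationships(id)                 // +1 request
  const partners = rels.map(r => getUser(r.userId)) // +N requests, one per related user
  return { user, rels, partners }
}

loadUserWithRelationships(1)
// requests is now 4: one user, one relationship list, two partner lookups
```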
Also worth noting: GraphQL shifts responsibility onto the client, because the backing API is reduced to a datastore that just needs to be queried. REST, on the other hand, dictates the behaviour of the client and therefore reduces its responsibility; the client becomes something similar to a browser.
https://stackoverflow.com/questions/41141577/graphql-or-rest
Falcor lets you represent all your remote data sources as a single domain model via a virtual JSON graph. You code the same way no matter where the data is, whether in memory on the client or over the network on the server.
The Router is appropriate as an abstraction over a service layer or REST API. Using a Router over these types of APIs provides just enough flexibility to avoid client round-trips without introducing heavy-weight abstractions. Service-oriented architectures are common in systems that are designed for scalability. These systems typically store data in different data sources and expose them through a variety of different services. For example, Netflix uses a Router in front of its Microservice architecture.
It is rarely ideal to use a Router to directly access a single SQL Database. Applications that use a single SQL store often attempt to build one SQL Query for every server request. Routers work by splitting up requests for different sections of the JSON Graph into separate handlers and sending individual requests to services to retrieve the requested data. As a consequence, individual Router handlers rarely have sufficient context to produce a single optimized SQL query. We are currently exploring different options for supporting this type of data access pattern with Falcor in future.
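A rough sketch of the routing idea above, assuming nothing about the real Falcor Router API: requested paths are split by their first key and dispatched to the handler that owns that section of the graph (the handler names and services are made up):

```javascript
// Toy path router (not the real Falcor API): each top-level key of the
// JSON graph is owned by a separate handler, standing in for a service.
const routes = {
  users: path => ({ path, value: 'user-service(' + path.join('.') + ')' }),
  titles: path => ({ path, value: 'title-service(' + path.join('.') + ')' })
}

// Dispatch each requested path to the handler that owns its first key.
function route (paths) {
  return paths.map(path => routes[path[0]](path))
}

route([['users', 1, 'name'], ['titles', 42, 'rating']])
// each path is answered by its own handler, mirroring how a router has
// only per-section context rather than the whole query
```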
- CMS + Web Application http://demo.keystonejs.com/
npm install -g generator-keystone-react
- JSON Web Tokens can be used for CSRF and service endpoints http://www.keycloak.org/keycloak-nodejs-auth-utils/
- Inspired by Ember Data, JSData is the model layer you've been craving. It consists of a convenient framework-agnostic, in-memory store for managing your data, which uses adapters to communicate with various persistence layers. http://www.js-data.io/docs/home
- Lovefield is a relational database for web apps. Written in JavaScript, works cross-browser. Provides SQL-like APIs that are fast, safe, and easy to use. https://google.github.io/lovefield/
- OnionRM is an ORM for node.js targeting Postgres. OnionRM owes its existence to node-orm2, from which it was forked. https://github.com/ScoutGroup/onionrm
- Restmod creates objects that you can use from within Angular to interact with your RESTful API. https://github.com/platanus/angular-restmod
- A hypermedia client for AngularJS applications. Supports relations in HTTP Link headers, JSON properties and JSON HAL, and resource profiles. https://github.com/jcassee/angular-hypermedia
npm install seneca-web
- Seneca with React https://github.com/vtardia/seneca-varo-react-example.git
- Data Entities http://senecajs.org/docs/tutorials/understanding-data-entities.html
- Props vs State https://github.com/uberVU/react-guide/blob/master/props-vs-state.md
- ReactJS http://reactcheatsheet.com/
- Raj's app is good https://github.com/brainix/cassette
- Cheatsheet https://gist.github.com/marcj/dda218b489cedc5cc3e8
- Testing cheat sheet https://github.com/sydcanem/angularjs-testing-cheat-sheet
- Good cheat sheet https://gist.github.com/hofmannsven/d67f0cb2f67911a438ed
- angular all the things https://gist.github.com/swashcap/91407687f21725fbee32
Model, Collection, View, Event, Router
- Backbone Fundamentals https://github.com/addyosmani/backbone-fundamentals/raw/gh-pages/backbone-fundamentals.pdf
- Marionette https://gist.github.com/cmcculloh/35e6c6c7408cf8d45899
- cordova, etc. https://gist.github.com/Frulko/744bc8e0eb4177dc1f40
- ionic cheat sheet https://gist.github.com/jspenc72/a65724542ebb0047a69338a8f31626e5
- remote debugging with weinre https://gist.github.com/KDawg/4029505
- cheat sheet https://github.com/audreyr/favicon-cheat-sheet/
- Advanced SASS http://12devs.co.uk/articles/handy-advanced-sass/
- Organic CSS https://github.com/krasimir/organic-css
- Critical CSS https://github.com/addyosmani/critical
- Cheat sheet https://gist.github.com/AllThingsSmitty/3bcc79da563df756be46
- Cheat Sheet https://gist.github.com/SPJPGRD/6765e93fa6403220b78eef7a6fcd58dc
- A (stupid) "declarative" SASS framework https://github.com/nerdfiles/grammuelle
- Needful https://gist.github.com/IschaGast/0c46a1b5a36cbc920ca7c2ce19b4bea4
- Button Gradient https://gist.github.com/victorshkoda/7be41d0f4ff432c0adf6d298535b4c71
- Moar needful https://gist.github.com/pvrt/4e1d0be591c82e4d7191
- Animation focus https://gist.github.com/abods/ae2afec6fbecd8fe01e8
- UI mixins https://gist.github.com/andreascarpello/ac769d14588b52950d16
- Has a triangle https://gist.github.com/tylerkidd/9464da07c396827c6706
- Principles of Design https://gist.github.com/dnieh/7c2a8d7eac7ef63d7515#archetypes
- Visual Grammar - Christian Leborg https://drive.google.com/file/d/0BwLXUGQklH50NWJlMWY3NGQtYTRmZi00NGUzLWI5NzMtMjM2NjI3ZjQ1MWZl/view?usp=sharing
- Cheat sheet https://gist.github.com/bartholomej/8415655
- Cheat sheet https://github.com/AndreLion/mediaquery
- Guides https://gist.github.com/rafszul/b9433d404a941ed68b5d
- CSS properties https://gist.github.com/legomushroom/7397561
- Intense http://tholman.com/intense-images/
- Straightforward responsive images with grunt build https://gist.github.com/anumsh/54eb199059cfff2638a4d0183a4b7592
- Accessibility and Dev Checklist http://webdevchecklist.com/
- Moar https://gist.github.com/maxbrockman453/0bb92c9ac5e39c771a71
- online tools https://gist.github.com/robertpateii/1566445
- ARIA template https://gist.github.com/kristindiannefoss/8aee1b56bafe5bce14d404908b831f6c
- ARIA summary https://gist.github.com/domenic/ae2331ee72b3847ce7f5
- ARIA scenarios https://gist.github.com/domenic/bc8a36d9608d65bd7fa9
- Creating ARIA accessibility with JS https://gist.github.com/domenic/8ae33f320b856a9aef43
- Landmarks https://gist.github.com/hmig/050d5b7e4f9b8b60d560
- Implementation details https://gist.github.com/onsa/15bd66a75c2bd3b6e73d7ef68db7714e
- A good menu https://gist.github.com/Melindrea/6329779
- ARIA tabs with d3 https://gist.github.com/shawnbot/eb40c7801a527e1949e6
- Bookmarklet for testing ARIA support https://gist.github.com/nfreear/7770852 and https://gist.github.com/nfreear/eeea0dc0f4b127b8547189cc1d464be2
- microformats https://gist.github.com/daviddarnes/cc14d353bc9ee557b7d9
- Accessible <input> with AngularJS Directive https://gist.github.com/peterkc/e59ba160d06f49123e69496f3845bcb5
- Using ngAria https://gist.github.com/marcysutton/8e4f6e51d82cb36710c4 and https://gist.github.com/marcysutton/367fe823e0a48eee2351
- moar with tools https://gist.github.com/JoniWeiss/759b257e8553448de58546d7a827f3c1
- Consider that users who use assistive tech think differently about the Who, What, When, Where, Why, and How of content. Screen readers do not always make these questions obvious; that's what semantics are for
- captchas: ask if "X" is "R" with respect to "Y", e.g., is a car bigger than a cat?
- Use alt and longdesc.
- Think about <noscript>
- Think about logical order of content
- Think about navigational content and what should appear first or last
- Consider if something actually should be a <dl> versus something else
- Try to emulate known standards like File, Edit, Help menus with
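The relational-captcha bullet above can be sketched like this (the size table and helper names are invented for illustration):

```javascript
// Hypothetical fact table: relative sizes used to generate "is X bigger
// than Y?" challenges, per the relational-captcha idea.
const sizes = { car: 3, cat: 1, house: 4 }

function isBigger (x, y) {
  return sizes[x] > sizes[y]
}

// Produce the question text along with the expected answer.
function ask (x, y) {
  return { question: 'Is a ' + x + ' bigger than a ' + y + '?', answer: isBigger(x, y) }
}

ask('car', 'cat')
// → { question: 'Is a car bigger than a cat?', answer: true }
```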
- jasmine https://jasmine.github.io/
npm install --save-dev cucumber selenium-webdriver@3.0.1 chromedriver@2.25.1
- Protractor http://www.protractortest.org/#/
- CasperJS https://www.npmjs.com/package/casper-test-runner
- Nightwatch http://nightwatchjs.org/
- Istanbul with Karma https://github.com/gotwarlost/istanbul
- Chai http://chaijs.com/api/assert/
- Swagger http://swagger.io/
- APIDOC http://apidocjs.com/
npm install --save-dev mochawesome
- Plato on JS https://github.com/es-analysis/plato
- Settings file with Mongoose https://stackoverflow.com/questions/25471339/how-to-integration-test-nodejs-mongo
- Visual regression testing lib https://github.com/uberVU/mugshot
npm install jenkins
- NightmareJS http://www.nightmarejs.org/
- node-scrapy https://github.com/eeshi/node-scrapy
- artoo https://medialab.github.io/artoo/
- PhearJS https://davidwalsh.name/run-scraping-api-phearjs
- https://zapier.com + https://dexi.io
- https://parsehub.com
- pattern https://github.com/clips/pattern
- Cheat sheet https://gist.github.com/gnakan/5137a1128f9ed8b9aa41c4c2ccbd5110
- algos cheat sheet http://eferm.com/wp-content/uploads/2011/05/cheat3.pdf
- in a week https://medium.com/learning-new-stuff/machine-learning-in-a-week-a0da25d59850#.p6n5geisf
- neural networks https://medium.com/learning-new-stuff/how-to-learn-neural-networks-758b78f2736e#.jwmcbp1nb
- for investing https://www.youtube.com/playlist?list=PLQVvvaa0QuDd0flgGphKCej-9jp-QdzZ3
- support vector machines, support vector regressions https://github.com/jeff1evesque/machine-learning
- The Idea That Eats Smart People http://idlewords.com/talks/superintelligence.htm
- Part III: On the Origin and Nature of the Emotions https://en.wikisource.org/wiki/Ethics_(Spinoza)/Part_3
- Cheat sheet https://gist.github.com/turingbirds/3df43f1920a98010667a
- resource description framework for web of trust http://xmlns.com/wot/0.1/index.rdf
- what is tcp/ip? https://www.concise-courses.com/what-is-tcp-ip/
- Commands https://gist.github.com/tappoz/847229167d9c031ab6815fd72c0cb509
- Commands https://gist.github.com/drorm/321a41d2d89bc772a9fb64d3c20f4514
- Commands https://gist.github.com/lexsys27/4239ac157f80ed967044
- Commands https://gist.github.com/apolloclark/b3f60c1f68aa972d324b
- Cheat sheet https://github.com/wsargent/docker-cheat-sheet
- zerotomulti https://gist.github.com/hangtwenty/817940df693816f52919
On a high level, both fulfill similar goals: orchestration of applications inside the datacenter / cloud.
Marathon is a cluster-wide init and control system for running Linux services in cgroups and Docker containers. Marathon has a number of different canary deploy features and is a very mature project.
Marathon runs on top of Mesos and is a "native" Mesos framework. Mesos is a highly scalable, battle-tested and flexible resource manager. Marathon is proven to scale and runs in many production environments.
The Mesos and Mesosphere technology stack provides a cloud-like environment for running existing Linux workloads, but it also provides a native environment for building new distributed systems - this is a big differentiator, as Mesos is the "native" platform for datacenter services such as Spark. Hadoop can also run on top of Mesos, allowing users of the Mesos ecosystem to share their resources amongst all these datacenter services.
Mesos is a distributed systems kernel, with a full API for programming directly against the datacenter. It abstracts underlying hardware (e.g. bare metal or VMs) away and just exposes the resources. It contains primitives for writing distributed applications (e.g. Spark was originally a Mesos App, Chronos, etc.) such as Message Passing, Task Execution, etc. Thus, entirely new applications are made possible. Apache Spark is one example for a new (in Mesos jargon called) framework that was built originally for Mesos. This enabled really fast development - the developers of Spark didn't have to worry about networking to distribute tasks amongst nodes as this is a core primitive in Mesos.
To my knowledge, Kubernetes is not used inside Google in production deployments today. For production, Google uses Omega/Borg, which is much more similar to the Mesos/Marathon model. However the great thing about using Mesos as the foundation is that both Kubernetes and Marathon can run on top of it.
In my opinion you should always use Mesos as the base and then decide if you prefer Marathon or Kubernetes - installing either one on top of Mesos isn't much work.
https://www.quora.com/What-is-the-difference-between-Googles-Kubernetes-and-Mesospheres-Marathon
- Cheat sheet https://gist.github.com/johndstein/9cc6d22c392533a5a8f0
- Cheat sheet https://gist.github.com/filipefigcorreia/3db4c7e525581553e17442792a2e7489
- Cheat sheet https://gist.github.com/miharp/10280061
- AngularJS + ElasticSearch https://www.sitepoint.com/building-recipe-search-site-angular-elasticsearch/
- Consider Raj's ReactJS Search strategy https://github.com/brainix/cassette/blob/master/src/Search.jsx
- Cheat Sheet https://gist.github.com/vkroz/5c9d589cbb62a1ac77d7cd2ef8fe471e
- Some CRUD-like stuff for Recipe and RecipeCategory https://gist.github.com/piotrgrundas/b543540f1bd3cf0943932ae1bdc9c68e
- Just use serverless.com https://github.com/serverless/serverless
- Event Scheduling https://gist.github.com/wakproductions/048418c1a7892b28eb65958b224e4f5f
- Cheat sheet https://github.com/jalateras/aws-lambda-cheatsheet
If you do not have a search requirement, go with MongoDB; but generally, use NoSQL for known document lookups and index with Solr.
Also please note that some people have integrated Solr/Lucene into Mongo by having all indexes be stored in Solr and also monitoring oplog operations and cascading relevant updates into Solr.
With this hybrid approach you can really have the best of both worlds with capabilities such as full text search and fast reads with a reliable datastore that can also have blazing write speed.
https://stackoverflow.com/questions/3215029/nosql-mongodb-vs-lucene-or-solr-as-your-database
- High-level https://lucidworks.com/blog/2010/04/30/nosql-lucene-and-solr/
- High-level https://lucidworks.com/blog/2010/04/29/for-the-guardian-solr-is-the-new-database/
- If you just want to store data in key-value format, Lucene is not recommended, because its inverted index wastes too much disk space.
- And because the data is saved on disk, its performance is much slower than NoSQL databases such as Redis, which keeps data in RAM.
- Lucene's biggest advantage is its broad query support, so things like fuzzy queries are possible.
- However we observe that query performance of Solr decreases when index size increases.
- We realized that the best solution is to use both Solr and Mongo DB together.
- Then, we integrate Solr with MongoDB by storing contents into the MongoDB and creating index using Solr for full-text search.
- We only store the unique id for each document in Solr index and retrieve actual content from MongoDB after searching on Solr.
- Getting documents from MongoDB is faster than from Solr because there is no analysis, scoring, etc. [...]
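The hybrid approach in the notes above can be sketched with mocks (neither object is a real Solr or MongoDB client; the ids, documents, and method names are invented): Solr returns only matching ids, and the actual content comes from MongoDB.

```javascript
// Mocked hybrid search: Solr holds only the full-text index and document
// ids; MongoDB holds the actual content.
const solr = {
  // pretend full-text search: returns matching ids only
  search: term => (term === 'pasta' ? ['a1', 'b2'] : [])
}
const mongo = {
  docs: { a1: { _id: 'a1', title: 'Pasta alla Norma' }, b2: { _id: 'b2', title: 'Pasta e fagioli' } },
  findByIds: ids => ids.map(id => mongo.docs[id])
}

function hybridSearch (term) {
  const ids = solr.search(term) // fast full-text lookup, ids only
  return mongo.findByIds(ids)   // fetch real content from the datastore
}

hybridSearch('pasta').map(d => d.title)
// → ['Pasta alla Norma', 'Pasta e fagioli']
```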
- All the databases http://nosql-database.org/
- Another DB comparison http://db-engines.com/en/system/Elasticsearch%3BMongoDB%3BSolr
- Split brain problem https://discuss.elastic.co/t/need-more-clarity-and-understanding-on-split-brain-problem/53184/
- Solr 4+ does support partial updates, and soft commits / near-real-time search do away with most of the issues of "old-style" Solr commits.
- Solr supports both schema and schemaless modes!
- MongoDB is schema-less.
- Use Mongoose
npm i -g mongoui
- cheat sheet on optimizin' https://www.sisense.com/blog/8-ways-fine-tune-sql-queries-production-databases/
- Define Business Requirements before Beginning
- Define SELECT Fields instead of SELECT *
- Select More Fields to Avoid SELECT DISTINCT
- Create Joins with INNER JOIN Rather than WHERE
- Use WHERE instead of HAVING to Define Filters
- Use Wildcards at the End of a Phrase Only
- Use LIMIT to Sample Query Results
- Run Analytical Queries During Off-Peak Times
A composite key consists of more than one attribute to uniquely identify an entity occurrence. This differs from a compound key in that one or more of the attributes that make up the key are not simple keys in their own right.
For example, you have a database holding your CD collection. One of the entities is called tracks, which holds details of the tracks on a CD. This has a composite key of CD name, track number.
CD name in the track entity is a simple key, linking to the CD entity, but track number is not a simple key in its own right.
Answer from http://stackoverflow.com/questions/12240280/what-is-a-composite-foreign-key-in-mysql
I'll attempt to explain normalization in layman's terms here. First off, it applies to relational databases in general (Oracle, Access, MySQL), so it is not only for MySQL.
Normalisation is about making sure each table has only the minimal fields and about getting rid of dependencies. Imagine you have an employee record, and each employee belongs to a department. If you store the department as a field along with the other data of the employee, you have a problem: what happens if a department is removed? You have to update all the department fields, and there's opportunity for error. And what if some employees do not have a department (newly assigned, perhaps)? Now there will be null values.
So normalisation, in brief, is about avoiding fields that would be null and making sure all the fields in a table belong to only one domain of the data being described. For example, in the employee table, the fields could be id, name, social security number, but those three fields have nothing to do with the department. Only the employee id describes which department the employee belongs to. This implies that which department an employee is in should be in another table.
Here's a simple normalization process.
EMPLOYEE ( < employee_id >, name, social_security, department_name)
This is not normalized, as explained. A normalized form could look like
EMPLOYEE ( < employee_id >, name, social_security)
Here, the Employee table is only responsible for one set of data. So where do we store which department the employee belongs to? In another table
EMPLOYEE_DEPARTMENT ( < employee_id >, department_name )
This is not optimal. What if the department name changes? (it happens in the US government all the time). Hence it is better to do this
EMPLOYEE_DEPARTMENT ( < employee_id >, department_id )
DEPARTMENT ( < department_id >, department_name )
There are first normal form, second normal form and third normal form. But unless you are studying a DB course, I usually just go for the most normalized form I could understand. Hope this helps.
Answer from http://stackoverflow.com/questions/1258743/normalization-in-mysql
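The normalized layout above can be mimicked with in-memory "tables" to show why it helps (the sample rows are made up): the department name lives in exactly one row, so renaming a department never touches the employee rows.

```javascript
// In-memory stand-ins for the three normalized tables from the example.
const departments = [{ department_id: 10, department_name: 'Payroll' }]
const employeeDepartment = [{ employee_id: 1, department_id: 10 }]
const employees = [{ employee_id: 1, name: 'Ada', social_security: '000-00-0000' }]

// "Join" EMPLOYEE_DEPARTMENT to DEPARTMENT to find an employee's department.
function departmentOf (employeeId) {
  const link = employeeDepartment.find(r => r.employee_id === employeeId)
  const dept = departments.find(d => d.department_id === link.department_id)
  return dept.department_name
}

departments[0].department_name = 'People Ops' // the rename happens in one place
departmentOf(1)
// → 'People Ops', with no employee row touched
```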
The idea behind partitioning isn't to use multiple servers but to use multiple tables instead of one table. You can divide a table into many tables so that you can have old data in one sub table and new data in another table. Then the database can optimize queries where you ask for new data knowing that they are in the second table. What's more, you define how the data is partitioned.
Simple example from the MySQL Documentation :
CREATE TABLE employees (
id INT NOT NULL,
fname VARCHAR(30),
lname VARCHAR(30),
hired DATE NOT NULL DEFAULT '1970-01-01',
separated DATE NOT NULL DEFAULT '9999-12-31',
job_code INT,
store_id INT
)
PARTITION BY RANGE ( YEAR(separated) ) (
PARTITION p0 VALUES LESS THAN (1991),
PARTITION p1 VALUES LESS THAN (1996),
PARTITION p2 VALUES LESS THAN (2001),
PARTITION p3 VALUES LESS THAN MAXVALUE
);
This allows you to speed things up, e.g. dropping old data with a simple:
ALTER TABLE employees DROP PARTITION p0;
The database can also speed up a query like this:
SELECT COUNT(*)
FROM employees
WHERE separated BETWEEN '2000-01-01' AND '2000-12-31'
GROUP BY store_id;
The database knows that all the matching data is stored only in the p2 partition.
Answer from http://stackoverflow.com/questions/1579930/what-is-mysql-partitioning
INSERT statements that use VALUES syntax can insert multiple rows. To do this, include multiple lists of column values, each enclosed within parentheses and separated by commas. Example:
INSERT INTO tbl_name
(a,b,c)
VALUES
(1,2,3),
(4,5,6),
(7,8,9);
Answer from http://stackoverflow.com/questions/6889065/inserting-multiple-rows-in-mysql
- 1) All changes you make are visible within the same transaction. If you do
START TRANSACTION; INSERT INTO MyTable VALUES ('Hi there'); SELECT * FROM MyTable;
your output will include the 'Hi there'. But if you start a second database connection, the new row won't be displayed until you commit your transaction from within the first connection. Try playing with this using two database connections on the command line. You're not seeing the effect in your website because you can't share the same transaction across two database connections (a new db connection is made at the beginning of each request).
- 2) All transactions that aren't committed will be rolled back when the connection with the database is closed. So if these are your only two queries, there is no difference. However, there is a difference between
START TRANSACTION;
INSERT INTO MyTable VALUES ('This one would be discarded on rollback');
ROLLBACK;
and
INSERT INTO MyTable VALUES ('This one will be permanent because not within transaction');
- Yes, these are all the same.
Answer from http://stackoverflow.com/questions/19890966/mysql-transaction-commit-and-rollback
DELETE
DELETE is a DML command. A DELETE statement is executed using a row lock: each row in the table is locked for deletion. We can specify filters in a WHERE clause, so it deletes only the specified data when a WHERE condition exists. DELETE activates triggers because the operations are logged individually. It is slower than TRUNCATE because it keeps those logs. Rollback is possible.
TRUNCATE
TRUNCATE is a DDL command. TRUNCATE TABLE always locks the table and page, but not each row. It cannot use a WHERE condition; it removes all the data. TRUNCATE TABLE cannot activate a trigger because the operation does not log individual row deletions. It is faster, because it doesn't keep any logs. In MySQL, TRUNCATE causes an implicit commit, so it cannot be rolled back; in SQL Server, both DELETE and TRUNCATE can be rolled back when used within a TRANSACTION.
If there is a PK with auto-increment, TRUNCATE will reset the counter.
http://beginner-sql-tutorial.com/sql-delete-statement.htm
Answer from http://stackoverflow.com/questions/20559893/comparison-of-truncate-vs-delete-in-mysql-sqlserver
You could use the DATE() function.
SELECT `tag`
FROM `tags`
WHERE DATE(`date`) = '2011-06-07'
However, for better performance (a range predicate can use an index on `date`, while `DATE(date)` applies a function to every row and cannot) you could use...
WHERE `date`
BETWEEN '2011-06-07'
AND '2011-06-07 23:59:59'
Answer from http://stackoverflow.com/questions/6273361/mysql-query-to-select-records-with-a-particular-date
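As a side note, a half-open range (`>=` the day, `<` the next day) also avoids silently dropping rows with fractional-second timestamps after 23:59:59. A small sqlite3 sketch of both predicates (the table and data are made up; SQLite's DATE() is assumed to behave like MySQL's here):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tags (tag TEXT, date TEXT)")
conn.executemany("INSERT INTO tags VALUES (?, ?)", [
    ("a", "2011-06-07 08:15:00"),
    ("b", "2011-06-07 23:10:00"),
    ("c", "2011-06-08 00:00:01"),
])

# DATE(`date`) = '2011-06-07' -- applies a function to every row.
rows = conn.execute("SELECT tag FROM tags WHERE DATE(date) = '2011-06-07'").fetchall()
print([r[0] for r in rows])  # ['a', 'b']

# Half-open range -- index-friendly and covers the whole day exactly.
rows = conn.execute(
    "SELECT tag FROM tags WHERE date >= '2011-06-07' AND date < '2011-06-08'"
).fetchall()
print([r[0] for r in rows])  # ['a', 'b']
```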
I've separated this answer into two methods. The first method will separate your fullname field into first, middle, and last names. The middle name will show as NULL if there is no middle name.
SELECT
SUBSTRING_INDEX(SUBSTRING_INDEX(fullname, ' ', 1), ' ', -1) AS first_name,
IF(LENGTH(fullname) - LENGTH(REPLACE(fullname, ' ', '')) > 1,
   SUBSTRING_INDEX(SUBSTRING_INDEX(fullname, ' ', 2), ' ', -1),
   NULL) AS middle_name,
SUBSTRING_INDEX(SUBSTRING_INDEX(fullname, ' ', 3), ' ', -1) AS last_name
FROM registeredusers
This second method considers the middle name as part of the lastname. We will only select a firstname and lastname column from your fullname field.
SELECT
SUBSTRING_INDEX(SUBSTRING_INDEX(fullname, ' ', 1), ' ', -1) AS first_name,
TRIM( SUBSTR(fullname, LOCATE(' ', fullname)) ) AS last_name
FROM registeredusers
There's a bunch of cool things you can do with substr, locate, substring_index, etc. Check the manual for some real confusion. http://dev.mysql.com/doc/refman/5.0/en/string-functions.html
Answer from http://stackoverflow.com/questions/14950466/how-to-split-the-name-string-in-mysql
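For comparison, here is the first method's logic sketched in plain Python, under the same assumption the SQL makes (single-space-separated names of at most "first middle last"):

```python
# Mirror of the SUBSTRING_INDEX logic above: first token, optional second
# token as middle name, last token as last name.
def split_name(fullname):
    parts = fullname.split(" ")
    first = parts[0]
    middle = parts[1] if len(parts) > 2 else None  # NULL when no middle name
    last = parts[-1]
    return first, middle, last

print(split_name("John Smith"))       # ('John', None, 'Smith')
print(split_name("John Paul Smith"))  # ('John', 'Paul', 'Smith')
```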
SELECT DISTINCT
flrhost_mls.mlsdata.MLS_LISTING_ID
FROM flrhost_mls.mlsdata
INNER JOIN flrhost_forms.ft_form_8
ON flrhost_mls.mlsdata.MLS_AGENT_ID = flrhost_forms.ft_form_8.nar_id
WHERE flrhost_mls.mlsdata.MLS_AGENT_ID = '260014126'
AND flrhost_forms.ft_form_8.transaction_type = 'listing'
AND flrhost_mls.mlsdata.MLS_LISTING_ID NOT IN (SELECT b.mls_id FROM flrhost_forms.ft_form_8 b)
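One caveat with the NOT IN subquery above: if b.mls_id is ever NULL, NOT IN matches nothing at all. A minimal sqlite3 sketch (hypothetical two-table schema, not the real one) shows the pitfall and the NOT EXISTS alternative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE listings (id TEXT)")
conn.execute("CREATE TABLE forms (mls_id TEXT)")
conn.executemany("INSERT INTO listings VALUES (?)", [("A",), ("B",)])
conn.executemany("INSERT INTO forms VALUES (?)", [("A",), (None,)])

# NOT IN against a set containing NULL yields no rows at all:
print(conn.execute(
    "SELECT id FROM listings WHERE id NOT IN (SELECT mls_id FROM forms)"
).fetchall())  # []

# NOT EXISTS is immune to the NULL problem:
print(conn.execute(
    "SELECT id FROM listings l WHERE NOT EXISTS "
    "(SELECT 1 FROM forms f WHERE f.mls_id = l.id)"
).fetchall())  # [('B',)]
```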
I just realized the query does not return results if there are no comments attached to the news table, here's the fix as well as an added column for the total # of posts:
SELECT news.*, comments.name, comments.posted, (SELECT count(id) FROM comments WHERE comments.parent = news.id) AS numComments
FROM news
LEFT JOIN comments
ON news.id = comments.parent
AND comments.id = (SELECT max(id) FROM comments WHERE parent = news.id)
Answer from http://stackoverflow.com/questions/469338/get-the-latest-row-from-another-table-in-mysql
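The greatest-row-per-group pattern in that query can be reproduced with sqlite3 (schema reduced to just the columns the query touches; data is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE news (id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("CREATE TABLE comments (id INTEGER PRIMARY KEY, parent INTEGER, name TEXT)")
conn.executemany("INSERT INTO news VALUES (?, ?)", [(1, "first"), (2, "no comments")])
conn.executemany("INSERT INTO comments VALUES (?, ?, ?)",
                 [(10, 1, "alice"), (11, 1, "bob")])

# LEFT JOIN keeps news rows without comments; the join condition picks only
# the latest (max id) comment per news row.
rows = conn.execute("""
    SELECT news.id, news.title, comments.name,
           (SELECT COUNT(id) FROM comments WHERE comments.parent = news.id) AS numComments
    FROM news
    LEFT JOIN comments
      ON news.id = comments.parent
     AND comments.id = (SELECT MAX(id) FROM comments WHERE parent = news.id)
    ORDER BY news.id
""").fetchall()
print(rows)  # [(1, 'first', 'bob', 2), (2, 'no comments', None, 0)]
```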
SELECT LEFT(field1,LOCATE(' ',field1) - 1)
Answer from http://stackoverflow.com/questions/3471199/get-all-characters-before-space-in-mysql
The best answer that does not need to hard-code the column names (note that this is SQL Server T-SQL, even though the question was about MySQL) is:
DECLARE @sqlStr VARCHAR(max) = (
SELECT stuff((
SELECT 'and ' + c.NAME + ' is null '
FROM sys.columns c
WHERE object_name(object_id) = 'yourtablename'
ORDER BY c.NAME
FOR XML PATH('')
), 1, 3, '')
)
SET @sqlStr = 'select * from yourtablename where ' + @sqlStr
PRINT @sqlStr
EXEC (@sqlStr)
Answer from http://stackoverflow.com/questions/14112211/mysql-selecting-rows-with-null-columns
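The same build-the-WHERE-clause-from-metadata idea, sketched with Python and sqlite3 (here PRAGMA table_info stands in for sys.columns; in MySQL you would read information_schema.columns instead; table t is made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b TEXT, c TEXT)")
conn.execute("INSERT INTO t VALUES (NULL, NULL, NULL)")
conn.execute("INSERT INTO t VALUES (2, 'x', 'y')")
conn.execute("INSERT INTO t VALUES (3, 'x', NULL)")

# Build "a IS NULL AND b IS NULL AND ..." from the column metadata,
# without hard-coding any column names.
cols = [row[1] for row in conn.execute("PRAGMA table_info(t)")]
where = " AND ".join(f"{c} IS NULL" for c in cols)
sql = f"SELECT rowid FROM t WHERE {where}"
print(sql)
print(conn.execute(sql).fetchall())  # [(1,)] -- only the all-NULL row
```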
I think you should consider that the guidelines you read apply to how an invoice should be displayed, and not how it should be stored in the database.
When a number is stored as an INT, it's a pure number. If you add zeros in front and store it again, it is still the same number.
You could select the NUMER field as follows, or create a view for that table:
SELECT LPAD(NUMER,6,'0') AS NUMER
FROM ...
Or, rather than changing the data when you select it from the database, consider padding the number with zeros when you display it, and only when you display it.
I think your requirement for historical data to stay the same is a moot point. Even for historical data, an invoice numbered 001203 is the same as an invoice numbered 1203.
However, if you absolutely must do it the way you describe, then converting to a VARCHAR
field may work. Converted historical data can be stored as-is, and any new entries could be padded to the required number of zeros. But I do not recommend that.
Answer from http://stackoverflow.com/questions/17612920/lpad-with-leading-zero
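Display-side padding, as recommended above, looks like this in Python (zfill and format specs are rough equivalents of LPAD(NUMER, 6, '0')):

```python
# Store the plain INT; pad only when rendering for display.
invoice_number = 1203
print(str(invoice_number).zfill(6))  # '001203'
print(f"{invoice_number:06d}")       # '001203'
```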
You are looking for the TRIM() function. Alright, here is your example:
SELECT TRIM(LEADING '0' FROM myfield) FROM table
Answer from http://stackoverflow.com/questions/96952/how-to-trim-leading-zeros-from-alphanumeric-text-in-mysql-function
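The same operation outside MySQL, for comparison (SQLite spells it LTRIM(x, '0'); note that an all-zero string strips down to an empty string in every variant):

```python
import sqlite3

# Leading-zero stripping: Python's lstrip and SQLite's LTRIM both take a
# set of characters to remove from the left.
print("00123".lstrip("0"))  # '123'

conn = sqlite3.connect(":memory:")
print(conn.execute("SELECT LTRIM('00123', '0')").fetchone()[0])  # '123'
```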
You can use GROUP_CONCAT.
As in: SELECT person_id, GROUP_CONCAT(hobbies SEPARATOR ', ') FROM peoples_hobbies GROUP BY person_id
Update: As Dag stated in his comment, there is a 1024-byte limit on the result. To solve this, run this query before your query:
SET group_concat_max_len = 2048
Of course, you can change 2048 according to your needs.
Answer from http://stackoverflow.com/questions/276927/can-i-concatenate-multiple-mysql-rows-into-one-field
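SQLite also has GROUP_CONCAT (with the separator as a second argument rather than the SEPARATOR keyword), so the answer can be sketched end-to-end; the table and rows here are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE peoples_hobbies (person_id INTEGER, hobbies TEXT)")
conn.executemany("INSERT INTO peoples_hobbies VALUES (?, ?)",
                 [(1, "chess"), (1, "golf"), (2, "reading")])

# One row per person, hobbies joined with ', '.
rows = conn.execute("""
    SELECT person_id, GROUP_CONCAT(hobbies, ', ')
    FROM peoples_hobbies
    GROUP BY person_id
    ORDER BY person_id
""").fetchall()
print(rows)
```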
I find this explanation quite clear (it's pure copy from Technet ): There are two types of temporary tables: local and global. Local temporary tables are visible only to their creators during the same connection to an instance of SQL Server as when the tables were first created or referenced. Local temporary tables are deleted after the user disconnects from the instance of SQL Server. Global temporary tables are visible to any user and any connection after they are created, and are deleted when all users that are referencing the table disconnect from the instance of SQL Server.
- A select with NOLOCK will complete faster than a normal select.
- A select with NOLOCK will allow other queries against the affected table to complete faster than a normal select.
NOLOCK typically (depending on your DB engine) means "give me your data, and I don't care what state it is in, and don't bother holding it still while you read from it." In SQL Server it is written as a table hint, e.g. SELECT * FROM MyTable WITH (NOLOCK). It is all at once faster, less resource-intensive, and very, very dangerous. You should never do an update from, or perform anything system-critical with, or anything where absolute correctness is required using, data that originated from a NOLOCK read. It is absolutely possible that this data contains rows that were deleted during the query's run or that have been deleted in other sessions that have yet to be finalized. It is possible that this data includes rows that have been partially updated. It is possible that this data contains records that violate foreign key constraints. It is possible that this data excludes rows that have been added to the table but have yet to be committed.
You really have no way to know what the state of the data is.
If you're trying to get things like a Row Count or other summary data where some margin of error is acceptable, then NOLOCK is a good way to boost performance for these queries and avoid having them negatively impact database performance. Always use the NOLOCK hint with great caution and treat any data it returns suspiciously.
Answer from http://stackoverflow.com/questions/210171/effect-of-nolock-hint-in-select-statements
- cheat sheet http://www.cheat-sheets.org/sites/sql.su/#database_manipulation
- SQL Languages https://gist.github.com/janikvonrotz/6e27788f662fcdbba3fb
- cheat sheet https://gist.github.com/Neceros/03021276eafa546d61e4
- 5.1 is out http://www.lesliesikos.com/what-are-the-differences-between-html5-and-html-5-1/
- cheat sheet https://gist.github.com/aldomendez/7a09d1327a1fe1c07379dd8aaa008ccf
- hfactor http://amundsen.com/hypermedia/hfactor/
- the good parts https://hackernoon.com/html5-tutorial-for-beginners-examples-features-list-review-901f3aea2386#.8xr8eyr5q
- Cheatsheet https://gist.github.com/anotheredward/850c944bca5b6db221730f93c4cd5f5f
- Cheat sheet https://www.owasp.org/index.php/Content_Security_Policy_Cheat_Sheet
- CSRF https://www.owasp.org/index.php/CSRF_Prevention_Cheat_Sheet
- Don't use innerHTML: https://gist.github.com/Incognito/2002949
- intrusion detection through full-packet capture https://github.com/google/stenographer
- Winpayloads (undetectable windows payloads) https://github.com/nccgroup/Winpayloads
- xss filter https://www.owasp.org/index.php/XSS_Filter_Evasion_Cheat_Sheet
- E-mail templates http://foundation.zurb.com/emails/email-templates.html
- SparkPost
- AWS SES https://github.com/nerdfiles/campaign/blob/master/README.md
- We do not have a server so would we have to use a service?
Ultimately almost everyone uses Cloud Services; the idea that anyone still rolls their own is more or less a myth of the technology industry. From SparkPost's report (https://www.sparkpost.com/sites/default/files/resources/downloads/sparkpost-wp-bigrewards.pdf):
SparkPost is the cloud solution from the world’s number one email infrastructure provider, whose customers—including Pinterest, Twitter, CareerBuilder, LinkedIn, Groupon, Salesforce, Marketo, Zillow, and Comcast—send over 3 trillion messages a year, over 25% of the world’s legitimate email.
It's a question of What-Is versus What-Does — without getting too philosophical... It's a question of server (What-Is) versus service (What-Does). Really, whether it's Cloud or Not-Cloud, everything boils down to: What's the level of support? Because if you spin up your own virtualized Linux super-sexy-awesome container on AWS, if things break, you're going to be wondering about something really low-level that breaks, when you'd think all I'm really doing is sending e-mails — what could break?!
From SparkPost:
SparkPost comes from the company whose infrastructure delivers 25% of the world’s legitimate email.
More importantly, it's best to understand that almost everyone uses "the Cloud" in some sense, and it's almost always been that way — it's only now that people are starting to use it in a home/everyday setting. This is actually all sort of connected with the Periodic Table of Information — that distinction of What-Is (Server) versus What-Does (Service) will only increasingly vanish to the point where we are literally drafting and serving services if you will directly from HTML! (That is ultimately what PTI is saying/describing.) And this gets to the heart of your question: What really is the thing that triggers getting blocked? — Domain Keys
More specifically, it's DKIM — the standard for e-mail authentication. Basically it's a similar concept to SSL — digital signatures are used to match domains with verified businesses. From SparkPost:
SparkPost implements and adheres to email authentication standards including DKIM. In fact, all email we deliver for our users is required to be authenticated. Configuring DKIM is an important step for verifying sending domains when you set up a new SparkPost account.
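For concreteness, DKIM boils down to publishing a public key in DNS under a selector, which receiving mail servers use to verify the signature header on each message. A hypothetical record (the selector, domain, and key below are made up for illustration):

```
; TXT record published at <selector>._domainkey.<sending-domain>
mail._domainkey.example.com. IN TXT "v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQ..."
```

The p= value is the (truncated) public key; the matching private key stays with whoever signs the outgoing mail (the sending service, in the hosted case).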
Public Key Infrastructure is less concerned with the distinction between Server (What-Is) and Service (What-Does); it all comes down to who owns the keys to the car. The timeless philosophical question of "is it a cloud car or a real car?" washes out. This is what we mean by abstracting away from the low-level stuff. The most important thing to understand is that it's almost always been this way; it's only now, as technologists, that we're able to market it better. That's why organizations like SparkPost support so many huge companies we all use already: only now is it safer, from a marketing standpoint, to expose the engineers behind the scenes, because instead of talking low-level assembly code they can talk about high-level code that's much, much easier to explain (and, quite frankly, to just print out).
That's why readability has become so important. The cloud has been the case all along; it's just that programmers haven't been the best at pointing to it when people want to make out what's what. At the end of the day, clouds or machines (servers), it's all about keys: Public Key Infrastructure, where we're creating public/private key pairings.
From SparkPost:
SparkPost is the industry’s leading email delivery service with nearly 98% inbox placement—15 points higher than the industry average, and 8% higher than the next best cloud infrastructure vendor.
You can see ZURB's Templates here: http://foundation.zurb.com/emails/email-templates.html.
The templates have the most basic styling to start, so you can choose your logo, etc. It's the difference between Boilerplate Template and Polished Template. The advantage here is that you can start from cross-email-client-compatible templates to achieve the look and feel you want with a polished template. ZURB Foundation is an industry leader in creating designs that achieve cross-browser and cross-email-client compatibility: https://litmus.com/checklist/emails/public/eb690d2 (to see their Basic template rendered across all sorts of e-mail clients).
The main part of this concern is DKIM (DomainKeys Identified Mail); it and a few other e-mail authentication standards help control that problem. SparkPost provides the infrastructure to know when you will likely get bounced from a list: reporting, metrics, and analytics, all in the account dashboard.
The other part is practices:
- Sending frequency — this is where testing and reporting help a lot, along with conditional logic to ensure that when you are getting close to a bad bounce-rate quota, you stop the program from continuing its campaign blast.
- Double opt-in to confirm email list subscribers — you've already done this.
- Email from a legitimate address — may want to consider hello@DOMAIN.COM or something inviting and warm, unlike "admin@DOMAIN.COM"
- Don’t use punctuation like "Read now!!!!"
- Don’t use too many images relative to the overall weight of the e-mail template. If your images total 5000% of the size of the e-mail itself, that's clearly going to get flagged. And if your e-mail has attachments and is non-transactional (i.e. part of an e-mail blast), that's going to get flagged by the ISP.
All of these little rules count. Writing good subject lines and engaging content, etc.
Guide https://gist.github.com/nbardiuk/7f23bc1e7e596dddade9f76bf8a67771
- Gamma or bust https://gist.github.com/denji/84b7b6a07b318ca89919
- General notions https://gist.github.com/grkvlt/3778834
- Ethereum White Paper https://gist.github.com/gtallen1187/46e72c673dc723327a917fd36292594d
- self-hosted bitcoin payment gateway https://github.com/Overtorment/Cashier-BTC
- p2p web https://elendirx.github.io/web2web-gateway/#/
- article on financial engineering at a high-level http://www.thedailyliberator.com/bitcoin-anarchist-financial-engineering/ (basically, bitcoin hoarders can pull the plug on bitcoin at any time)
- portfolio price matrix http://catx.io/
Randomization of list comps of word lists with randomized replacement of narratological constructs like nouns, satellites, etc. https://gist.github.com/wibbia/7aa10ad6181f187db57f
- super long read https://beej.us/guide/bgnet/output/html/singlepage/bgnet.html
- https://gist.github.com/vpayno/2286ebf81b39f18f38c5
- cheat sheet https://gist.github.com/reubenjohn/bab039ffa629f8bb53f86856c035f540
- Unittesting https://gist.github.com/mogproject/fc7c4e94ba505e95fa03
- Index https://gist.github.com/filipkral/740a11c827422264c757
- Decorators https://gist.github.com/hdemers/5357602 https://gist.github.com/xpostudio4/72fbae83f2a69dd8f69e
- pyenv https://gist.github.com/xnoder/a4aa1532c29c2e90a60a93fa9da8075f
- virtualenv(wrapper) https://gist.github.com/maxxst/a76b3b8888b4245c3b91 https://gist.github.com/whhone/a2b3baa132269483ae1eebcc4a83955c
- Needful https://gist.github.com/nicolasramy/5668610
- progress bar https://github.com/tqdm/tqdm
- cheat sheet for 2.7 http://www.astro.up.pt/%7Esousasag/Python_For_Astronomers/Python_qr.pdf
- google search http://www.catonmat.net/blog/python-library-for-google-search/
- python3 https://automatetheboringstuff.com/ (windows based guide, tho)
- Cheat sheet https://gist.github.com/schaitanya/5113345
- General stuff https://gist.github.com/misho-kr/9674524
- Linux Shell http://www.freeos.com/guides/lsst/index.html
- first 10 minutes on a server https://www.codelitt.com/blog/my-first-10-minutes-on-a-server-primer-for-securing-ubuntu/
Useful for IT Automation
- pip https://gist.github.com/nev3rm0re/1d5b05df9e5faf88711e67249102277b
- cheatsheet https://gist.github.com/revolunet/861775b516970267bbb0
- https://gist.github.com/rgaidot/2bfe5ab270fb4242049e
- Cheat sheet https://gist.github.com/perigee/19a927f60aaddb804925
- with EC2 https://gist.github.com/ryanmaclean/e49cbc421c88815aef88
- Deploy tools https://gist.github.com/denji/e721855c696006f15a65
- Huge list https://github.com/nerdfiles/douadevops
Also reading up on Hyperledger (to supplement my knowledge of Openchain)[0]. Use cases:
Business contracts can be codified to allow two or more parties to automate contractual agreements in a trusted way. [...]
Final assemblers, such as automobile manufacturers, can create a supply chain network managed by its peers and suppliers so that a final assembler can better manage its suppliers and be more responsive to events that would require vehicle recalls (possibly triggered by faulty parts provided by a supplier). [...]
Assets such as financial securities must be able to be dematerialized on a blockchain network so that all stakeholders of an asset type will have direct access to that asset, allowing them to initiate trades and acquire information on an asset without going through layers of intermediaries. [...]
From the time that a trade is captured by the front office until the trade is finally settled, only one contract that specifies the trade will be created and used by all participants. [...]
Company A announces its intention to raise 2 Billion USD by way of rights issue. [...]
Assets should always be owned by their actual owners, and asset owners must be able to allow third-party professionals to manage their assets without having to pass legal ownership of assets to third parties (such as nominee or street name entities). [...]
If an organization requires 20,000 units of asset B, but instead owns 10,000 units of asset A, it needs a way to exchange asset A for asset B. [...]
Hyperledger Fabric use-cases are inspiring, too: https://wiki.hyperledger.org/groups/requirements/use-case-inventory
Openchain, whose ledger module system I have been developing, can support many of these features as well, since X.509 PKI signatures can be used to sign payment messages to the blockchain[1]. So some of the "smart card" limitations envisioned years back can be resolved, since the Bitcoin network supports X.509 Digital Signatures à la BIP70[2].
The Periodic Table of Information allows one to describe the "statefulness" of gaming systems like Poker, Blackjack, etc. through hypermedia (as the engine of application state) and the expression of transmedia support criteria[3], such that transactions can be treated as modules which function as linked data to other transactions. So, transmedia means "modular transaction media" in my system. Hyperledger is building toward the same end. Ultimately what this boils down to is being able to take a Representation of a stateful game (like a slot machine), that is, a representation of the state it is currently in while a user is playing, and securely fit that game state into a transaction stream that will be encrypted and transferred or processed within a distributed network of nodes. Openchain enables us to link together thousands of transactions a second with arbitrary metadata (which can be stateful representations of the game application, whether it's on the internet or within an intranet). My background is in building REST APIs, and what I have been learning is microservices, so as to build the robust, antifragile software architecture that supports REST (representational state transfer). Microservices will eventually even replace Content Management Systems like WordPress[4].
__
[0]: https://hyperledger-fabric.readthedocs.io/en/latest/biz/usecases/ — a follow-up point is that "At the maximum resolution (one anchor per block), no more than 4,320 transactions per month (in average) will be committed into the blockchain, which will cost about $10 per month (as of October 2015), regardless of the number of transactions processed." https://docs.openchain.org/en/latest/general/anchoring.html
[1]: "Integrating Secure and Smart Card Technologies into On-line Cashless Gaming Solutions for Clubs and Casinos": "Our work with smart cards has centred on the telecommunications industry with the supply of SIM cards for GSM mobile phones. We are also working with the Financial industry, the Health Sector and with a range of specialist companies focused on Internet access control and the portability of X.509 Digital Signatures for secure Electronic Document Interchange and e-commerce. For these applications, smart cards provide a secure, cost effective solution to a number of complex issues. Common to all is the requirement for off-line, secure data processing." http://www.securitymagnetics.com.au/content/wmpaperCashlessGamingSols_full.html
[2]: https://github.com/bitcoin/bips/blob/master/bip-0070.mediawiki#certificates
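A quick check on the "4,320 transactions per month" figure quoted in [0]: it follows from Bitcoin's roughly ten-minute block interval and Openchain's maximum anchoring resolution of one anchor per block:

```python
# Bitcoin produces a block roughly every 10 minutes; at one anchor per
# block, that caps anchored transactions per (30-day) month.
blocks_per_day = 24 * 60 // 10      # 144 blocks per day
anchors_per_month = blocks_per_day * 30
print(anchors_per_month)  # 4320
```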