devender-yadav / genesis.json
Created November 13, 2017 11:50 — forked from dickolsson/genesis.json
Example genesis.json for an Ethereum blockchain
{
  "config": {
    "chainId": 33,
    "homesteadBlock": 0,
    "eip155Block": 0,
    "eip158Block": 0
  },
  "nonce": "0x0000000000000033",
  "timestamp": "0x0",
  "parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
devender-yadav / gist:444d7ae2082ac24e66173cbd6e34b6f8
Created December 25, 2018 12:35 — forked from rxaviers/gist:7360908
Complete list of github markdown emoji markup

People

:bowtie: :bowtie: 😄 :smile: 😆 :laughing:
😊 :blush: 😃 :smiley: ☺️ :relaxed:
😏 :smirk: 😍 :heart_eyes: 😘 :kissing_heart:
😚 :kissing_closed_eyes: 😳 :flushed: 😌 :relieved:
😆 :satisfied: 😁 :grin: 😉 :wink:
😜 :stuck_out_tongue_winking_eye: 😝 :stuck_out_tongue_closed_eyes: 😀 :grinning:
😗 :kissing: 😙 :kissing_smiling_eyes: 😛 :stuck_out_tongue:
devender-yadav / stackoverflow.sql
Created December 27, 2018 16:33 — forked from gousiosg/stackoverflow.sql
Script to import the stackexchange dumps into MySQL
# Copyright (c) 2013 Georgios Gousios
# MIT-licensed
create database stackoverflow DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
use stackoverflow;
create table badges (
  Id INT NOT NULL PRIMARY KEY,
  UserId INT,
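The preview stops partway through the badges table. In the StackExchange data dump, Badges.xml rows carry Id, UserId, Name, and Date attributes, so a completed definition plus a matching import statement would look roughly like the sketch below (the column size and the LOAD XML form are assumptions, not copied from the original script):

create table badges (
  Id INT NOT NULL PRIMARY KEY,
  UserId INT,
  Name VARCHAR(64),
  Date DATETIME
);

-- LOAD XML maps each <row> element's attributes to the columns with the same names.
load xml local infile 'Badges.xml'
into table badges
rows identified by '<row>';
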
devender-yadav / PySpark_to_Pandas_with_Arrow.ipynb
Created January 24, 2019 11:12 — forked from BryanCutler/PySpark_to_Pandas_with_Arrow.ipynb
Spark to Pandas Conversion with Arrow Example
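The notebook preview is not rendered in this listing. The conversion it demonstrates boils down to enabling Arrow in the Spark session and calling toPandas(); a minimal sketch assuming Spark 2.3/2.4 and sample data (the notebook's own code is not reproduced here):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("arrow-to-pandas").getOrCreate()

# Enable Arrow-backed columnar transfer for toPandas()
# (Spark 3.x renames the key to spark.sql.execution.arrow.pyspark.enabled).
spark.conf.set("spark.sql.execution.arrow.enabled", "true")

df = spark.range(1 << 20)   # sample DataFrame with a single "id" column
pdf = df.toPandas()         # data is shipped to the driver via Arrow when the flag is set
print(pdf.shape)

Without the Arrow flag, toPandas() falls back to row-at-a-time serialization, which is typically much slower for large DataFrames.
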
devender-yadav / spark-svd.scala
Created November 14, 2019 15:53 — forked from vrilleup/spark-svd.scala
Spark/mllib SVD example
import org.apache.spark.mllib.linalg.distributed.RowMatrix
import org.apache.spark.mllib.linalg._
import org.apache.spark.{SparkConf, SparkContext}
// To use the latest sparse SVD implementation, please build your spark-assembly after this
// change: https://github.com/apache/spark/pull/1378
// Input tsv with 3 fields: rowIndex(Long), columnIndex(Long), weight(Double), indices start with 0
// Assume the number of rows is larger than the number of columns, and the number of columns is
// smaller than Int.MaxValue
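The preview ends with the comments above; a minimal sketch of the flow they describe, reusing the imports shown and using an illustrative input path and value of k (none of this is taken from the original gist):

// Parse the tsv into (rowIndex, (columnIndex, weight)) entries.
val conf = new SparkConf().setAppName("spark-svd")
val sc = new SparkContext(conf)

val entries = sc.textFile("matrix.tsv").map { line =>
  val Array(r, c, w) = line.split("\t")
  (r.toLong, (c.toInt, w.toDouble))
}

// One sparse vector per row; the vector size must exceed the largest column index.
val numCols = entries.map(_._2._1).max() + 1
val rows = entries.groupByKey().map { case (_, cols) =>
  Vectors.sparse(numCols, cols.toSeq)
}

// Top-20 singular values/vectors; U stays distributed, s and V are returned to the driver.
val mat = new RowMatrix(rows)
val svd = mat.computeSVD(20, computeU = true)
println(svd.s)

Note that RowMatrix does not track row indices, so if U's rows must map back to the original rowIndex values, the rows should be sorted by index (or zipped with it) before the matrix is built.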