Eugene Melnikov (melnikaite)
@melnikaite
melnikaite / gist:f0730dc6ade93ec83527
Created October 3, 2014 11:24
create yml file from ENV variable
# remove all unused environments from config/application.yml and generate hash
ruby -e 'require "yaml"; puts YAML.load(File.read("config/application.yml"))'
# run script to create yml file
application_yml='{"test"=>{"protocol"=>"https", "host"=>"example.com"}}' ruby -e 'require "yaml"; File.open("config/application.yml", "w") {|f| f.write(eval(ENV["application_yml"]).to_yaml) }'
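Since `eval` on an environment variable executes arbitrary Ruby, a safer variant of the same trick is possible. This is a sketch (not part of the original gist) that uses JSON as the interchange format instead of a Ruby hash literal:

```ruby
# Sketch: build config/application.yml from an ENV variable without eval.
# The default JSON payload below is an example value for illustration.
require "json"
require "yaml"
require "fileutils"

json = ENV.fetch("application_yml", '{"test":{"protocol":"https","host":"example.com"}}')
config = JSON.parse(json)  # parses data only, never executes code

FileUtils.mkdir_p("config")
File.write("config/application.yml", config.to_yaml)
puts config.to_yaml
```

Because `JSON.parse` never executes code, a hostile value in `application_yml` can at worst produce a bad config file, not run Ruby.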
@melnikaite
melnikaite / bootstrap-fileupload.js
Created June 13, 2013 13:48
client-side-validations-formtastic jasny bootstrap fileupload image input
// conflicts with ClientSideValidations, so the name attribute is left untouched
// this.$input.attr('name', this.name)
| Benchmark (Eager Loading, Query Per Association) | activerecord-mysql | activerecord-postgresql | activerecord-sqlite | mongoid-mongodb | sequel-mysql | sequel-postgresql | sequel-sqlite |
|---|---|---|---|---|---|---|---|
| 1-1 records, 640 objects, 22 times, no transaction | 5.921149 | 6.284935 | 5.860397 | 1.314835 | 0.254055 | 0.283381 | 0.356529 |
| 1-1 records, 640 objects, 22 times, transaction | 6.012105 | 6.493957 | 5.738628 | — | 0.253842 | 0.282353 | 0.376487 |
| 1-32 records, 1024 objects, 22 times, no transaction | 1.087165 | 1.114392 | 1.037673 | 0.908334 | 0.190097 | 0.206294 | 0.273892 |
| 1-32 records, 1024 objects, 22 times, transaction | 1.085952 | 1.150046 | 1.015849 | — | 0.191712 | 0.212131 | 0.271282 |
| 1-32-32 records, 2048 objects, 9 times, no transaction | 0.848494 | 0.894651 | 0.810860 | 0.082494 | 0.168703 | 0.170906 | 0.239950 |
| 1-32-32 records, 2048 objects, 9 times, transaction | 0.835610 | 0.867175 | 0.787067 | — | 0.168840 | 0.171462 | 0. |
0x63CE9f57E2e4B41d3451DEc20dDB89143fD755bB
@melnikaite
melnikaite / enable.rb
Created October 13, 2016 07:31
enable JSON functions in SQLite
# To enable JSON functions in SQLite, run in irb
require 'sqlite3'

c = SQLite3::Database.new('database')
c.enable_load_extension(1)
# The path below is Homebrew- and version-specific; adjust it to your install.
c.load_extension('/usr/local/Cellar/sqlite/3.14.2/lib/libsqlitefunctions.dylib')
const bls = require('bls-lib');

bls.onModuleInit(() => {
  bls.init();
  const numOfPlayers = 7;
  const threshold = 3;
  const msg = 'hello world';
  console.log({ numOfPlayers, threshold, msg });
  // set up master key share
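The snippet above configures a 3-of-7 threshold; bls-lib implements this with BLS master key shares. As a language-agnostic illustration of the same t-of-n idea (a conceptual sketch, not bls-lib's API), here is Shamir secret sharing in Ruby, where any 3 of 7 shares recover the secret:

```ruby
# Shamir t-of-n secret sharing over a prime field: the same threshold
# structure bls-lib uses for its master key shares.
P = 2**127 - 1  # a Mersenne prime

# Evaluate a random degree-(threshold-1) polynomial with the secret as
# its constant term at x = 1..players.
def make_shares(secret, threshold, players)
  coeffs = [secret] + Array.new(threshold - 1) { rand(P) }
  (1..players).map do |x|
    y = coeffs.each_with_index.sum { |c, i| c * x.pow(i, P) % P } % P
    [x, y]
  end
end

# Lagrange interpolation at x = 0 recovers the constant term (the secret).
def recover(shares)
  shares.sum do |xj, yj|
    lj = shares.reduce(1) do |acc, (xm, _)|
      next acc if xm == xj
      acc * xm % P * ((xm - xj) % P).pow(P - 2, P) % P
    end
    yj * lj % P
  end % P
end

shares = make_shares(42, 3, 7)
puts recover(shares.sample(3))  # any 3 of the 7 shares recover 42
```

Fewer than 3 shares reveal nothing about the secret, which is what lets a threshold of the 7 players sign without ever reconstructing the master key in one place.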
@melnikaite
melnikaite / yandex.radio.bttpreset
Created July 16, 2019 15:26
adds Touch Bar buttons to skip to the next track and to like the current song on Yandex Radio in the Chrome browser
{
"BTTPresetName" : "Default",
"BTTPresetUUID" : "153FFA02-38FC-4862-B91A-24CB898C9B55",
"BTTPresetContent" : [
{
"BTTAppBundleIdentifier" : "BT.G",
"BTTAppName" : "Global",
"BTTAppAutoInvertIcon" : 1,
"BTTAppSpecificSettings" : {
# lib/capistrano/tasks/assets.rake
Rake::Task['deploy:assets:precompile'].clear

namespace :deploy do
  namespace :assets do
    desc 'Precompile assets locally and then rsync to remote servers'
    task :precompile do
      local_manifest_path = %x{ls public/assets/manifest*}.strip
const { Readable } = require('stream');

class RandomNumberStream extends Readable {
  constructor(options) {
    super(options);
    this.isTimeToEndIt = false;
    // flip the flag after 10 seconds
    setTimeout(() => {
      this.isTimeToEndIt = true;
    }, 10 * 1000);
I don't understand the purpose of running the EVM off-chain, or of constraining solutions to niche programming languages, when it all ends with publishing data to cheap storage plus the ability to validate proofs:
- Polygon's solution is limited to about 100 TPS
- zkSync and StarkWare can reach around 3k TPS but are capped by the L1 block size
- even my favorite, Avalanche subnets, report about 4.5k TPS, but are still bounded by the vertical scaling potential of the validators' hardware
I propose using the "Transparency protocol" to achieve the same result in a more flexible, efficient, and scalable way.
Your web 2.0 application simply keeps a changelog of its public data, shares that data, and periodically anchors a Merkle tree to a blockchain. This way we are not flooding L1 with application data; we store only small hashes, for example once a day.
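A minimal sketch of that anchoring step in Ruby (assumed details for illustration: SHA-256 leaves and duplication of the odd leaf out; the protocol itself may choose differently):

```ruby
# Hash a day's changelog entries into a single Merkle root; only this
# 32-byte root would be written to L1.
require "digest"

def merkle_root(leaves)
  layer = leaves.map { |e| Digest::SHA256.digest(e) }
  until layer.size == 1
    layer = layer.each_slice(2).map do |a, b|
      Digest::SHA256.digest(a + (b || a))  # duplicate the odd leaf out
    end
  end
  layer.first
end

changelog = ["2022-01-01 user#1 created", "2022-01-01 user#2 updated"]
puts merkle_root(changelog).unpack1("H*")  # hex root to anchor on-chain
```

However many changelog entries accumulate, only one hash reaches the chain, and a standard Merkle inclusion proof lets anyone check that a given entry is covered by that anchored root.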
In the "Transparency protocol" applications share only significant, anonymized pieces of data. Anyone can verify that their data is valid and in place via "read" endpoints. Anyone can make sure that history is va