Eugene Melnikov (melnikaite)

@melnikaite
melnikaite / readme.md
Created April 20, 2022 13:52
Decentralized microinsurance with DAO

Classic bike insurance is cheap, so I recommend using it if possible. But unfortunately, in Germany it is difficult to insure bikes that are older than 60 months, and if you bought a used bike and don't have a serial number, then insurance is not possible at all. This is where decentralized microinsurance with elements of a decentralized autonomous organization (DAO) comes to the rescue.

The first step to insure your bike is to apply to your local DAO. The application consists of the first payment, an indication of the current value of the bike in the network's native currency, and a photo. Next, you need to wait until the DAO allows you to join. You will be insured as long as you keep paying and remain a member of the DAO.

If your parking lot does not yet have a DAO, then you can just as easily create one and invite participants by printing a QR code.

This model has unique features that are not found in classic insurance. Your contributions are yours in full until an incident occurs. You can leave the organization…
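The preview cuts off above, but the flow it describes (apply with a first payment, the bike's value, and a photo; existing members approve the applicant; coverage lasts only while premiums keep coming) can be sketched in Solidity. The contract name, the 30-day premium period, and the simple-majority approval rule below are illustrative assumptions, not the author's implementation.

```solidity
// SPDX-License-Identifier: MIT
pragma solidity ^0.8.0;

// Hypothetical sketch of the membership flow described in the gist above.
contract BikeInsuranceDAO {
    struct Application {
        uint256 bikeValue;  // declared value in the network's native currency (wei)
        string photoURI;    // off-chain reference to the photo, e.g. an IPFS hash
        uint256 approvals;  // votes collected from existing members
        bool member;        // true once the DAO has admitted the applicant
        uint256 paidUntil;  // coverage lapses when premiums stop
    }

    mapping(address => Application) public applications;
    address[] public members;

    constructor() {
        // the creator of the DAO is its first member
        applications[msg.sender].member = true;
        members.push(msg.sender);
    }

    // Step 1: apply with the first payment, the bike's current value and a photo.
    function applyForCover(uint256 bikeValue, string calldata photoURI) external payable {
        require(msg.value > 0, "first payment required");
        applications[msg.sender] = Application(bikeValue, photoURI, 0, false, block.timestamp + 30 days);
    }

    // Step 2: existing members vote; a simple majority admits the applicant.
    function approve(address applicant) external {
        require(applications[msg.sender].member, "members only");
        Application storage a = applications[applicant];
        a.approvals += 1;
        if (!a.member && a.approvals * 2 > members.length) {
            a.member = true;
            members.push(applicant);
        }
    }

    // Step 3: you stay insured for as long as you keep paying and remain a member.
    function payPremium() external payable {
        require(applications[msg.sender].member, "not a member");
        require(msg.value > 0, "payment required");
        applications[msg.sender].paidUntil += 30 days;
    }

    function isInsured(address who) external view returns (bool) {
        Application storage a = applications[who];
        return a.member && a.paidUntil >= block.timestamp;
    }
}
```

Claims handling and the refund-on-leave behaviour mentioned above ("your contributions are yours in full until an incident occurs") would need additional voting and withdrawal paths; they are omitted from the sketch for brevity.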

@melnikaite
melnikaite / decentralized messenger.png
Last active April 6, 2022 18:01
Decentralized messenger
decentralized messenger.png
@melnikaite
melnikaite / contract.sol
Created February 27, 2022 12:59
Onchain Metadata
// SPDX-License-Identifier: MIT
pragma solidity 0.8;

contract owned {
    constructor() { owner = msg.sender; }
    address owner;

    // Restricts the decorated function to the account that deployed the contract.
    modifier onlyOwner {
        require(msg.sender == owner);
        _;
    }
}
I don't understand the purpose of running the EVM off-chain, or of limiting solutions to niche programming languages, when it all ends up with publishing data to cheap storage and the possibility to validate proofs:

- Polygon's solution is limited to 100 TPS
- zkSync and StarkWare can achieve 3k TPS but are limited by the L1 block size
- even my favorite Avalanche Subnet reports about 4.5k TPS, but it is also limited by the vertical scaling potential of the validators' hardware

I propose using the "Transparency protocol" to achieve the same, but in a more flexible, efficient, and scalable way. Your web 2.0 application just keeps a changelog of public data, shares that data, and periodically saves the Merkle tree root to a blockchain. This way we are not flooding L1 with application data; we save only small hashes, once a day.

In the "Transparency protocol" applications share only significant, anonymized pieces of data. Anyone can make sure their data is valid and in place via "read" endpoints. Anyone can make sure that history is valid…
const { Readable } = require('stream');
class RandomNumberStream extends Readable {
  constructor(options) {
    super(options);
    this.isTimeToEndIt = false;
    // flip the flag after 10 seconds so the stream can finish
    setTimeout(() => {
      this.isTimeToEndIt = true;
    }, 10 * 1000);
  }
  // _read is not in the truncated preview; a minimal version pushes random numbers until the deadline, then ends the stream
  _read() {
    this.isTimeToEndIt ? this.push(null) : this.push(`${Math.random()}\n`);
  }
}
@melnikaite
melnikaite / yandex.radio.bttpreset
Created July 16, 2019 15:26
Adds Touch Bar buttons to skip to the next track and to like the current song on Yandex Radio in the Chrome browser
{
  "BTTPresetName" : "Default",
  "BTTPresetUUID" : "153FFA02-38FC-4862-B91A-24CB898C9B55",
  "BTTPresetContent" : [
    {
      "BTTAppBundleIdentifier" : "BT.G",
      "BTTAppName" : "Global",
      "BTTAppAutoInvertIcon" : 1,
      "BTTAppSpecificSettings" : {
const bls = require('bls-lib');
bls.onModuleInit(() => {
  bls.init();
  // parameters for a 3-of-7 threshold BLS signature demo
  const numOfPlayers = 7;
  const threshold = 3;
  const msg = 'hello world';
  console.log({ numOfPlayers, threshold, msg });
  // set up master key share
0x63CE9f57E2e4B41d3451DEc20dDB89143fD755bB
@melnikaite
melnikaite / enable.rb
Created October 13, 2016 07:31
enable JSON functions in SQLite
# To enable JSON functions in SQLite run in irb
require 'sqlite3'
c = SQLite3::Database.new('database')
c.enable_load_extension(1)
c.load_extension('/usr/local/Cellar/sqlite/3.14.2/lib/libsqlitefunctions.dylib')
| Benchmark | activerecord-mysql | activerecord-postgresql | activerecord-sqlite | mongoid-mongodb | sequel-mysql | sequel-postgresql | sequel-sqlite |
|---|---|---|---|---|---|---|---|
| Eager Loading Query Per Association With 1-1 Records: 640 objects 22 times, No Transaction | 5.921149 | 6.284935 | 5.860397 | 1.314835 | 0.254055 | 0.283381 | 0.356529 |
| Eager Loading Query Per Association With 1-1 Records: 640 objects 22 times, Transaction | 6.012105 | 6.493957 | 5.738628 | – | 0.253842 | 0.282353 | 0.376487 |
| Eager Loading Query Per Association With 1-32 Records: 1024 objects 22 times, No Transaction | 1.087165 | 1.114392 | 1.037673 | 0.908334 | 0.190097 | 0.206294 | 0.273892 |
| Eager Loading Query Per Association With 1-32 Records: 1024 objects 22 times, Transaction | 1.085952 | 1.150046 | 1.015849 | – | 0.191712 | 0.212131 | 0.271282 |
| Eager Loading Query Per Association With 1-32-32 Records: 2048 objects 9 times, No Transaction | 0.848494 | 0.894651 | 0.810860 | 0.082494 | 0.168703 | 0.170906 | 0.239950 |
| Eager Loading Query Per Association With 1-32-32 Records: 2048 objects 9 times, Transaction | 0.835610 | 0.867175 | 0.787067 | – | 0.168840 | 0.171462 | 0.… |