
Mamy Ratsimbazafy (mratsim)

mratsim / cpu.nim
Created Aug 29, 2019
CPU info test
import std/cpuinfo
import macros

echo "CPU name: ", cpuName()
echo "CPUs: ", countProcessors()

macro checkFeatures(body: untyped) =
  result = newStmtList()
  for call in body:
mratsim / 20190829 - Eth 2 implementers call 24.md
Created Aug 29, 2019
20190829 - Eth 2 implementers call 24
mratsim / RFC - Project Picasso - Nim Multithreading runtime.md
Last active Aug 9, 2019
RFC proposal - Nim multithreading runtime

Project Picasso - a multi-threaded runtime for Nim

"Good artists borrow, great artists steal." -- Pablo Picasso

Introduction

Nim destructors and the new runtime were introduced to provide a GC-less path forward for Nim libraries and applications where it makes sense. One of their explicit use cases is making threading easier.

mratsim / threadpool_deadlock.nim
import
  # STD lib
  os, strutils, threadpool, strformat
  # bench
  # ../wtime

# Using Nim's standard threadpool
# Compile with "nim c --threads:on -d:release -d:danger --outdir:build benchmarks/fibonacci/stdnim_fib.nim"
#
# Note: it breaks at fib 16.
mratsim / 20190724 - ETH 2.0 Implementers Call 22.txt
20190724 - ETH 2.0 Implementers Call 22
https://github.com/ethereum/eth2.0-pm/issues/64
----------------------------------
Nimbus team
Core:
- Most of 0.8.1 implemented except SSZ
- On the verge of important changes:
mratsim / ssz_minimal_one_v7.1.yaml
title: ssz testing, with minimal config, randomized with mode one
summary: Test suite for ssz serialization and hash-tree-root
forks_timeline: testing
forks: [phase0]
config: minimal
runner: ssz
handler: static
test_cases:
- Attestation:
    value:
mratsim / bls_sign_msg_v8.1.yaml
title: BLS sign msg
summary: BLS sign a message
forks_timeline: mainnet
forks: [phase0]
config: mainnet
runner: bls
handler: sign_msg
test_cases:
- input: {privkey: '0x263dbd792f5b1be47ed85f8938c0f29586af0d3ac7b977f21c278fe1462040e3',
    message: '0x0000000000000000000000000000000000000000000000000000000000000000',
mratsim / libdispatch-efficiency-tips.md
Created Jul 15, 2019 — forked from tclementdev/libdispatch-efficiency-tips.md
Making efficient use of the libdispatch (GCD)

libdispatch efficiency tips

I suspect most developers are using libdispatch inefficiently, due to the way it was presented to us when it was introduced (and for many years after), and due to its confusing documentation and API. I realized this after reading the 'concurrency' discussion on the swift-evolution mailing list; the messages from Pierre Habouzit (the libdispatch maintainer at Apple) in particular are quite enlightening, and you can also find many of his tweets on the subject.

My take-aways are:

  • You should have very few queues that target the global pool. If all these queues are active at once, you will get as many threads running. These queues should be seen as execution contexts in the program (gui, storage, background work, ...) that benefit from executing in parallel.