@corollari
Last active December 8, 2020 12:45
Analysis of lightning node uptime

Date: 8/12/2020

Methodology

The collected data likely has some biases due to the source (the ACINQ explorer) and the filters applied to the node list. Here's the list of steps performed:

  1. Get the node list from the ACINQ explorer (6111 nodes)
  2. Filter out nodes for which ACINQ doesn't provide a URL (3092 nodes left)
  3. Filter out nodes that don't use the standard port 9735 (2589 nodes left). Reasoning: it's annoying to pass a different port for each IP to nmap
  4. Filter out nodes that use IPv6 addresses (2553 nodes left). Reasoning: again annoying to handle with nmap, since it requires passing the -6 flag and splitting the batches
  5. Ignore nodes whose port was closed on the first pass. Reasoning: these may be nodes that don't accept incoming TCP connections and instead initiate all connections themselves. A better solution would be to ping them through the p2p network, but I only wanted a rough estimate and setting that up is too much work.
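Steps 3 and 4 boil down to a small address filter; a minimal sketch (the function name and sample addresses are mine, not from the script):

```python
def keep_address(addr):
    """Keep only IPv4 addresses on the standard port 9735.

    A single ':' in the string rules out IPv6 addresses,
    which contain several colons.
    """
    return addr.endswith(':9735') and addr.count(':') == 1

addrs = ['1.2.3.4:9735', '5.6.7.8:9736', '2001:db8::1:9735']
# Strip the port suffix from the survivors, as the script does
ips = [a[:-len(':9735')] for a in addrs if keep_address(a)]
print(ips)  # ['1.2.3.4']
```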

Usage

npm i node-fetch
node script.js

The script also shells out to nmap and, at the end, to python (for transpose.py), so both need to be on the PATH.

Possible biases

I think the main bias is that, by selecting nodes with open ports, we get a subset in which a node is more likely than average to be intended as a routing node. These kinds of nodes are likely to strive for high uptime, since uptime can factor into the decision on whether to open channels to them. Our uptime statistics might therefore be skewed upwards and not be fully representative of all the nodes on the network.

// script.js
const { execSync } = require('child_process');
const fetch = require('node-fetch');
const { writeFileSync } = require('fs');

(async () => {
  // Steps 1-4: fetch the node list and keep IPv4 nodes on the standard port
  let nodes = await fetch('https://explorer.acinq.co/nodes').then(r => r.json());
  nodes = nodes.map(node => node.ip).filter(ip => ip !== undefined);
  nodes = nodes.filter(b => /.*:9735$/.test(b));
  nodes = nodes.filter(b => (b.match(/:/g) || []).length == 1); // a single ':' excludes IPv6
  nodes = nodes.map(b => b.substr(0, b.length - ':9735'.length));
  writeFileSync('ips.txt', nodes.join('\n'));

  execSync(`rm -f uptime.csv`);
  for (let i = 0; i < (60 * 24) / 5; i++) { // 24 hours of scans at 5-minute intervals
    // nmap reports port 9735's service as "unknown", so grepping for it keeps one line per host
    const output = execSync(`nmap -Pn -p 9735 -iL ips.txt | grep unknown`).toString();
    const states = output.split('\n').map(b => b.split(' ')[1]); // open/closed/filtered
    const results = [Date.now(), ...states].join(',') + '\n';
    writeFileSync('uptime.csv', results, { flag: 'a' });
    execSync(`sleep ${60 * 5}`); // wait 5 minutes between scans
  }
  execSync(`python transpose.py`);
})();
# transpose.py: turn uptime.csv (one row per scan) into one row per node
# (the first output row holds the scan timestamps)
import csv
rows = zip(*csv.reader(open("uptime.csv", "r")))
csv.writer(open("uptime-transposed.csv", "w")).writerows(rows)
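With the transposed file, the per-node uptime can be computed while applying step 5 of the methodology (ignoring nodes whose port was already closed on the first scan). A sketch of that analysis step — the function and the sample rows below are mine, not part of the gist's scripts:

```python
def uptime_fractions(rows):
    """Fraction of scans in which each node's port was open.

    rows[0] holds the scan timestamps; each later row holds one node's
    states over time. Nodes closed on the first scan are skipped (step 5).
    """
    results = []
    for states in rows[1:]:
        if states[0] != 'open':  # closed on the first pass: ignore
            continue
        results.append(sum(s == 'open' for s in states) / len(states))
    return results

# Hypothetical data in the shape produced by transpose.py
rows = [
    ['1607385600000', '1607385900000', '1607386200000'],  # timestamps
    ['open', 'open', 'closed'],    # counted: up 2 of 3 scans
    ['closed', 'open', 'open'],    # skipped: closed on the first scan
]
print(uptime_fractions(rows))  # [0.6666666666666666]
```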