slug | title |
---|---|
node-introduction | Introduction |
slug | title |
---|---|
node-docker | Local Docker Development |
In this section we will cover how to deploy a NEO•ONE Node locally using the NEO•ONE Node Docker image.
If you are unfamiliar with Docker or do not have it installed locally, visit their getting started page.
[[toc]]
- Docker
- Minimum: at least 2GB Memory, and 50GB Storage allocated
- Recommended: 4GB Memory, 60GB+ Storage allocated (if you plan to deploy to a cluster you will need this for each pod/container)
NEO•ONE uses Quay to automatically build the Docker image every time a new Node version is published.
After you have installed Docker, run the following in a terminal:
```bash
docker pull quay.io/neoone/node
docker run quay.io/neoone/node
```
Voila! You should now be running the most recent NEO•ONE Node in a local docker container and will see logs to confirm it has started.
There are several ways to configure the node; any `rc`-style configuration is accepted. As an example, we can set the monitor level of the node to `verbose` using either:

```bash
docker run quay.io/neoone/node --monitor.level=verbose
```

or

```bash
docker run -e neo_one_node_monitor__level=verbose quay.io/neoone/node
```
Additionally, you have the option of creating a `config` file (no extension) and mounting it directly to the container. By default the node will look for a config at `/etc/neo_one_node`. So if we have a config located at `/path/to/config`:

```json
## /path/to/config
{
  "monitor": {
    "level": "verbose"
  }
}
```

we could mount it to the default location as:

```bash
docker run -v /path/to:/etc/neo_one_node/ quay.io/neoone/node
```

(Note that you must mount the entire folder the config file is in.)
After running any of the above, you should see more logging on startup! For more configuration options, see the configuration reference.
Similarly to how we can mount a configuration folder to the container for local testing, we can also mount a folder for storing the blockchain data our node will collect. By default, the node will use `/root/.local/share/neo_one_node` as its storage. We can mount a local folder `/path/to/node-data/` using:

```bash
docker run -v /path/to/node-data:/root/.local/share/neo_one_node quay.io/neoone/node
```
This is helpful when testing locally as you won't have to re-sync your node-data on every restart.
By default the container will be able to access external resources, such as connecting and syncing with other relay nodes after setting `node.rpcURLs`. If you would like your local Docker container to be able to serve its own data, you'll need to publish the port using Docker commands. As an example, we can enable node metrics using the following command:

```bash
docker run -p 8001:8001 quay.io/neoone/node --environment.telemetry.port=8001
```

Upon visiting `localhost:8001/metrics` you should now see the node-metrics page.
::: warning
Note
By default metrics are disabled, so you must include the `--environment.telemetry.port=8001` argument or provide a telemetry port through other means of configuration (see above).
:::
The following configurations should be a solid jumping-off point for working with the node. For each of the three examples here, we will also show how to implement them using Docker Compose.
In all three examples we will use

```bash
docker run -v /node-config/:/etc/neo_one_node/ -v /node-data/:/root/.local/share/neo_one_node quay.io/neoone/node
```

to mount our configuration and local data folders before starting the node. Go ahead and create the two folders `node-config` and `node-data` if you would like to follow along.
To sync your node with other nodes on the network, you must specify them using the `options.node.rpcURLs` configuration setting. A list of current mainnet nodes can be found at http://monitor.cityofzion.io/.
```json
## /node-config/config
{
  "options": {
    "node": {
      "rpcURLs": [
        "http://seed6.ngd.network:10332",
        "http://node1.nyc3.bridgeprotocol.io:10332"
      ]
    }
  }
}
```
Now, if we apply this configuration we can begin to request block information from other nodes. After saving this to `node-config/config`, run the command listed above. Upon successfully starting the node, you should begin to see `relay_block` events!
::: warning
Note
It's worth mentioning that syncing the entire blockchain can take a very long time. We recommend restoring from a recent backup (described below) and then syncing.
:::
To download and extract a backup of the most recent blockchain data, you can configure the node to use NEO•ONE's public backup hosted on Google Cloud. We'll specify the bucket information and mark the `restore` option as `true`.
```json
## /node-config/config
{
  "backup": {
    "restore": true,
    "options": {
      "gcloud": {
        "projectID": "neotracker-172901",
        "bucket": "bucket-1.neo-one.io",
        "prefix": "node_0",
        "maxSizeBytes": 419430400
      }
    }
  }
}
```
This tells the node where we want to restore from. Assuming there is an available Google Cloud bucket to restore from (there will be for our example), it will download and extract the blockchain data to our defined `node-data` folder. This process can take multiple hours depending on network speed, as a fully synced backup is ~16GB in size. To restore and sync, simply combine the above configurations.
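A combined configuration merging the two examples above might look like the following sketch (the `backup` block sits at the top level here, matching the restore example):

```json
## /node-config/config
{
  "options": {
    "node": {
      "rpcURLs": [
        "http://seed6.ngd.network:10332",
        "http://node1.nyc3.bridgeprotocol.io:10332"
      ]
    }
  },
  "backup": {
    "restore": true,
    "options": {
      "gcloud": {
        "projectID": "neotracker-172901",
        "bucket": "bucket-1.neo-one.io",
        "prefix": "node_0",
        "maxSizeBytes": 419430400
      }
    }
  }
}
```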
slug | title |
---|---|
node-kubernetes | Kubernetes |
In this section we will cover how to deploy a NEO•ONE Node to a Kubernetes cluster.
If you are unfamiliar with Kubernetes, visit their getting started page; in particular, we will be implementing a StatefulSet locally using Kubernetes through Docker.
[[toc]]
- Docker
- Minimum: at least 2GB Memory, and 50GB Storage allocated
- Recommended: 4GB Memory, 60GB+ Storage allocated (you will need ~60GB of storage per pod)
- Kubectl
- You can enable kubectl through Docker; Docker >> Preferences... >> Kubernetes >> Enable Kubernetes
The following deployment spec will create a StatefulSet of `n` nodes defined by `spec.replicas`. Each requests 60GB of storage and 4GB of memory. If you do not have a default storage class set (Docker will automatically create one for local deployments), you will need to create one; see storage classes for more information. We include a headless service named `neo-one-service`; this is a requirement of StatefulSets.
```yaml
# node-spec.yml
apiVersion: v1
kind: Service
metadata:
  name: neo-one-service
  labels:
    app: neo-one-service
spec:
  ports:
    - port: 1443
      name: node
  clusterIP: None
  selector:
    app: neo-one-node
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: node
spec:
  serviceName: "neo-one-service"
  replicas: 1
  selector:
    matchLabels:
      app: neo-one-node
  template:
    metadata:
      labels:
        app: neo-one-node
    spec:
      containers:
        - name: neo-one-node
          image: quay.io/neoone/node
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 1443
              name: node
          volumeMounts:
            - name: node-data
              mountPath: /root/.local/share/neo_one_node
          args: [
            "--options.node.rpcURLs=http://seed6.ngd.network:10332",
            "--options.node.rpcURLs=https://seed1.red4sec.com:10332",
            "--options.backup.restore=true",
            "--options.backup.options.gcloud.projectID=neotracker-172901",
            "--options.backup.options.gcloud.bucket=bucket-1.neo-one.io",
            "--options.backup.options.gcloud.prefix=node_0",
            "--options.backup.options.gcloud.maxSizeBytes=419430400"
          ]
          resources:
            requests:
              memory: "4Gi"
              cpu: "1"
  volumeClaimTemplates:
    - metadata:
        name: node-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 60Gi
```

(Note: `serviceName` references the headless service defined above, `neo-one-service`, as StatefulSets require.)
Running this deployment with

```bash
kubectl create -f node-spec.yml
```

will start a single pod which:

- makes a persistent volume claim for 60GB of storage
- starts the node with this volume mounted to the default node-storage path
- restores from the public Google Cloud backup if node-data isn't present
- syncs the node using two seeds from http://monitor.cityofzion.io/
There are two main benefits to deploying the nodes this way. If a pod needs to restart for any reason it will always attempt to bind to the same persistent volume and will not start until it is scheduled on the same machine as that volume. It also makes it incredibly simple to scale the number of nodes you would like to run.
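Scaling up is then a single command; for example, assuming the StatefulSet name `node` from the spec above:

```bash
kubectl scale statefulset node --replicas=3
```

Each new pod receives its own persistent volume claim from `volumeClaimTemplates` and restores and syncs independently.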
To view the logs, assuming you are interested in the pod "node-x" of your deployment, you can run

```bash
kubectl logs node-x
```

or attach directly to the pod and see the logs as they come in with

```bash
kubectl attach node-x
```
slug | title |
---|---|
node-compose | Docker Compose |
slug | title |
---|---|
node-source | Building From Source |
In this brief walk-through we will show you how to build the NEO•ONE Node from source code.
This can be useful for local debugging and if you would like to make your own contribution to the node repository.
[[toc]]
- Node >= 8.9.0 (We recommend the latest version)
- Linux and Mac: We recommend using Node Version Manager.
- Windows: We recommend using Chocolatey.
- Yarn (recommended)
Once you have cloned the NEO•ONE repository (or preferably your own fork of the repository), you can run the following to build the node entry point:

```bash
cd neo-one
yarn install
yarn build:node
cd ./dist/neo-one/packages/neo-one-node-bin/bin/
```
`yarn build:node` will build a bin for the node as well as the `@neo-one` packages that it depends on. For this tutorial we will `cd` into the entry point's build directory to save time. Running the new node is then as simple as:

```bash
node neo-one-node
```
When running the node locally, it is quite easy to apply a configuration file compared to Docker, since we don't have to mount it to a container. An example configuration for syncing the node:

```json
## path/to/config.json
{
  "options": {
    "node": {
      "rpcURLs": [
        "http://seed6.ngd.network:10332",
        "http://seed10.ngd.network:10332"
      ]
    }
  }
}
```

can be run using:

```bash
node neo-one-node --config /path/to/config.json
```

Individual options can also be layered on top of our configuration:

```bash
node neo-one-node --config /path/to/config.json --monitor.level=verbose
```
Finally, you have the option of adding a `.neo_one_noderc` app configuration file anywhere in the app directory (recommended at `/neo-one/`) to apply your configuration by default; see rc.
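Since `rc` accepts JSON-formatted rc files, a minimal `.neo_one_noderc` sketch applying the same log level as the flag examples above might look like:

```json
{
  "monitor": {
    "level": "verbose"
  }
}
```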
slug | title |
---|---|
node-heroku | Heroku Deployment |
The NEO•ONE Node can be quickly deployed on Heroku using the deployment button below.
More information on Heroku can be found here.
[[toc]]
Upon successfully building the node container and launching your app, you should see the node monitor in your app's logs.
You can quickly apply environment-variable configuration options to the node using the Config Vars in App >> Settings >> Config Vars. As an example, we can set the monitor log level using a config var with key `neo_one_node_monitor__level` and value `verbose`. After applying, the node will restart and update its configuration.
::: warning
Note
Because of the environment-variable syntax `rc` expects, you must use the `neo_one_node_<parent>__<child>` syntax when applying a value.
:::
Currently it is not possible to enable two or more port-requiring processes simultaneously, because Heroku only allocates a single port to the app. By default the node's RPC server uses this port, so if you would like to enable telemetry through a config var you will also need to disable the RPC server.
Additionally, it is not possible right now to set environment-variable values for array config options. This should be addressed soon.
::: warning
Note
If you would like to see metrics or enable other features that require a port, you must assign the port to `$PORT`, the environment variable supplied by Heroku.
:::
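Following the documented `neo_one_node_<parent>__<child>` syntax, a config var assigning the telemetry port to Heroku's `$PORT` might look like the following sketch (the doubled underscores for the deeper nesting are an assumption extrapolated from that syntax):

```
neo_one_node_environment__telemetry__port $PORT
```

As noted above, you would also need to disable the RPC server so the two processes do not contend for the single port.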
slug | title |
---|---|
node-configuration | Configuration Reference |
This section will serve as a reference for the NEO•ONE Node's many configuration options.
[[toc]]
```json
{
  "environment": {
    "dataPath": string,
    "haltAndBackup?": boolean,
    "rpc": {
      "http?": {
        "port": number,
        "host": string
      },
      "https?": {
        "port": number,
        "host": string,
        "cert": string,
        "key": string
      }
    },
    "node": {
      "externalPort": number
    },
    "network": {
      "listenTCP": {
        "port": number,
        "host?": string
      }
    },
    "backup": {
      "tmpPath?": string,
      "readyPath?": string
    },
    "telemetry": {
      "port": number
    }
  }
}
```
defaults to the data path supplied by env-paths

`environment.dataPath` is the path used for storing chain data. In the local Docker example we could store chain data in a location other than `/root/.local/share/neo_one_node`; note that you will need to change the mount location as well.
defaults to false

`environment.haltAndBackup` enables a watcher which will halt the node when the RPC server's `readyHealthCheck` passes and begin a backup, if a location is specified which you are authorized to push to.
::: warning
Note
To properly halt and backup, you must also provide `options.backup` and `options.rpc.readyHealthCheck` configurations. In the case of Google Cloud you must also provide service credentials.
:::
by default only `http` is enabled on `localhost:8080`, or `localhost:$PORT` if you have set the environment variable `$PORT`

`environment.rpc` is used to configure the RPC server's host options. You do not need to specify both `http` and `https` options.
...
disabled by default

`environment.node.externalPort` allows you to enable an external port to communicate with the node. (When running in local Docker, you will also need to publish this port from the container, as shown in the Local Docker Development section.)
disabled by default

`environment.network.listenTCP`, when provided at least a `port`, allows other nodes to create TCP connections with this one over that port. `host` is optional and defaults to `localhost`.
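As a sketch, enabling TCP listening in a config file might look like the following (the port number is only an example, mirroring the port used in the Kubernetes spec):

```json
{
  "environment": {
    "network": {
      "listenTCP": {
        "port": 1443
      }
    }
  }
}
```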
defaults to `${environment.dataPath}/tmp` and `${environment.dataPath}/ready` respectively

`environment.backup.tmpPath` specifies the location of downloaded backup files. `environment.backup.readyPath` specifies the location of the `ready` file generated after successfully extracting the backup.
disabled by default

`environment.telemetry.port` specifies the port to use when serving node metrics. When enabled, you can visit `localhost:<port>/metrics` to view metrics.
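For example, the `--environment.telemetry.port=8001` flag from the Docker section expressed as a config file:

```json
{
  "environment": {
    "telemetry": {
      "port": 8001
    }
  }
}
```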
```json
{
  "settings": {
    "type": string,
    "privateNet": boolean,
    "address": string,
    "standbyValidators": string[]
  }
}
```
defaults to 'main'

`settings.type` specifies which NEO network we are connecting to. Supported types are `main` and `test`.
defaults to false

`settings.privateNet` specifies whether or not the node is connecting to a private network.
defaults to (???)
settings.address
(???)
(???)
```json
{
  "options": {
    "node": {
      "rpcURLs?": string[],
      "unhealthyPeerSeconds?": number,
      "consensus?": {
        "enabled": boolean,
        "options": {
          "privateKey": string,
          "privateNet": boolean
        }
      }
    },
    "network": {
      "seeds?": string[],
      "peerSeeds?": string[],
      "externalEndpoints?": string[],
      "maxConnectedPeers?": number,
      "connectPeersDelayMS?": number,
      "socketTimeoutMS?": number
    },
    "rpc": {
      "server": {
        "keepAliveTimeout": number
      },
      "liveHealthCheck": {
        "rpcURLs?": string[],
        "offset?": number,
        "timeoutMS?": number,
        "checkEndpoints?": number
      },
      "readyHealthCheck": {
        "rpcURLs?": string[],
        "offset?": number,
        "timeoutMS?": number,
        "checkEndpoints?": number
      },
      "tooBusyCheck": {
        "enabled": boolean,
        "interval?": number,
        "maxLag?": number
      },
      "rateLimit": {
        "enabled": boolean,
        "duration?": number,
        "max?": number
      }
    },
    "backup": {
      "restore": boolean,
      "backup?": {
        "cronSchedule": string
      },
      "options": {
        "gcloud?": {
          "projectID": string,
          "bucket": string,
          "prefix": string,
          "keepBackupCount?": number,
          "maxSizeBytes?": number
        },
        "mega?": {
          "download?": {
            "id": string,
            "key": string
          },
          "upload?": {
            "email": string,
            "password": string,
            "file": string
          }
        }
      }
    }
  }
}
```
by default none of these are defined

Unlike other configuration options, settings in `options` are watched and applied to the node immediately, without having to restart.

`options.node` controls connection and consensus options for connecting with other nodes.
`rpcURLs` specifies a list of known node RPC paths you would like to try to connect to. A list of public mainnet paths can be found at http://monitor.cityofzion.io/.

`unhealthyPeerSeconds` sets how long (in seconds) to wait for a peer response before deeming it 'unhealthy'. Defaults to 300 seconds.

`consensus` configures consensus participation: `enabled` turns it on, while `options.privateKey` and `options.privateNet` supply the validator's private key and the private-network flag (see the schema above).
`options.network` can be used to control seeds, endpoints, and socket-timeout defaults.

`seeds` specifies external seeds you would like to connect to.

`peerSeeds` specifies trusted seeds, typically ones run by yourself or on the same cluster.

`externalEndpoints` specifies specific external endpoints you would like to connect to.

`maxConnectedPeers` sets the maximum number of peers the node will attempt to hold a connection with at once. Defaults to 10.

`connectPeersDelayMS` sets the amount of time (in milliseconds) to wait after requesting a peer connection before requesting another. Defaults to 5000.

`socketTimeoutMS` sets the timeout of peer requests (in milliseconds). Defaults to 1 minute.
`options.rpc` configures the internal RPC server of the node. See `@neo-one/node-http-rpc`.

`server.keepAliveTimeout`: if you would like your server to close connections after a period without activity, set a timeout here (in milliseconds).

`liveHealthCheck` and `readyHealthCheck` share the same configuration:
- `rpcURLs`: a list of RPC URLs to compare our node to
- `offset`: the acceptable difference of blocks ahead/behind to count as `live` or `ready`
- `timeoutMS`: timeout for RPC connections
- `checkEndpoints`: the number of different endpoints to check against before passing `true`/`false`
`tooBusyCheck` (experimental): enables the tooBusy middleware, which throttles requests to the node when under significant load. Currently an experimental feature; see toobusy-js for more. Set `tooBusyCheck.enabled` to `true` if you would like to try it.

`rateLimit` (experimental): enables the rateLimiter middleware, which throttles requests to the node when too many have been made from the same address over a period of time. Currently an experimental feature; see koa-ratelimit-lru for more. Set `rateLimit.enabled` to `true` to experiment with it.
`restore`: `true` to attempt to pull the latest backup from your provider when starting the node; `false` to skip restoring and only back up.

`backup.cronSchedule`: set a schedule for when to stop the node and back up. See cron format for the syntax.
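As a sketch, a nightly backup schedule might look like the following (the cron expression `0 0 * * *`, i.e. daily at midnight, is only an example):

```json
{
  "options": {
    "backup": {
      "restore": false,
      "backup": {
        "cronSchedule": "0 0 * * *"
      }
    }
  }
}
```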
`options` is where you will specify either a `gcloud` or `mega` provider. NEO•ONE maintains a public Google Cloud repository of backups you may restore from using:
```json
{
  "backup": {
    "options": {
      "gcloud": {
        "projectID": "neotracker-172901",
        "bucket": "bucket-1.neo-one.io",
        "prefix": "node_0",
        "maxSizeBytes": 419430400
      }
    }
  }
}
```
where `maxSizeBytes` is the maximum size for a chunk of uploaded data.
```json
{
  "monitor": {
    "level": string
  }
}
```
`monitor.level` sets the node's logging level; for example, the Docker section above sets it to `verbose`.