Interview with Sasha regarding all the aspects and scope of the study.
I've received my testnet tokens and all the instructions.
I am going to use Digital Ocean, since I am already running the other PRE node on it. I used to work with AWS and Google Cloud, but Digital Ocean is usually cheaper. Anyway, I check the cost of running the node with different providers.
The recommended instance is c5.large. This confuses me a little at first, since I see that the minimum recommended for a self-hosted node is 2 vCPU / 2 GB RAM / 1 GiB persistent storage, and, although it is a shared vCPU, that can be achieved with a t4g.small. The difference in price per month is big:

- c5.large: $69.84 (recommended by the specs)
- t4g.small: $17.59 (2 shared vCPU)
In Google Cloud we can find cheaper options:

- n2-highcpu-2: $53.96 (recommended by the specs)
- e2-highcpu-2: $46.53 (other cheaper option with 2 dedicated vCPU)
- e2-small: $15.76 (2 shared vCPU option)
In Azure:

- Standard_F2s_v2: $61.66 (recommended by the specs)
- Standard_B2s: $30.37 (2 vCPU burstable option with a better performance score)
There are 2 main options in Digital Ocean:

- Basic 2 vCPU / 2 GB RAM: $18 (shared vCPU)
- CPU-Optimized 2 vCPU / 4 GB RAM: $42 (dedicated vCPU)
At first glance I would suggest either changing the instance recommendation, adjusting the requirements, or explaining why this instance is needed, in order to give a better user experience. I will revisit this later when I have the node up and running and can see the average CPU utilization, which I suppose is the main reason for the recommendation.
I take a first look at the documentation to get an overview of the whole process. In this first pass I see a lot of similarities between the Dashboards for PRE and tBTC staking, although the requirements for the node are a little higher.
I create the Droplet in Digital Ocean (I choose the Frankfurt datacenter to get a better response time). Since there is no information in the documentation about which droplet to use in Digital Ocean, I use the closest one to the requirements (2 vCPU / 2 GB RAM, 60 GB SSD).
I ssh into the node to start working on it.
First I install Geth on the machine and create the operator account, with public address:
0x72c782C5dD5Db5518Cf888689518Cf312CABFEe2
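For reference, the account can be created with geth's built-in account manager (the keystore directory here is just an example path):

```shell
# Create a new operator account; geth asks for a passphrase and then
# prints the public address of the new account.
geth account new --keystore /home/user/keep/keystore
```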
I continue following the instructions, funding the operator address with some Goerli ETH that has been provided to me. I check that the ETH is there and everything is fine.
I continue with the documentation and check that the ports are exposed. No firewall is active by default on the droplet, so there is no need to open any ports in the firewall.
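To double-check this from inside the droplet, one can list the listening sockets (once the client is running) and confirm the firewall state; this assumes the client uses ports 3919 and 9601, which come up later in this log:

```shell
# Show listening TCP sockets on the client's ports
sudo ss -tlnp | grep -E ':(3919|9601)'
# Confirm that ufw is inactive on the droplet
sudo ufw status
```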
The next step seems to be authorizing the tBTC app and the Random Beacon app. Even though the documentation gives you a screenshot of the application authorization, to reach that point you first need to go through step 1 of the staking process. I stake the minimum amount of 40,000 T so I can check the top-up method later.
I check both apps and choose the max amount of T.
I continue by mapping the operator address. The instructions are clear and easy to follow. Basically it is the same process as with the PRE operator.
Now I am going through the "folder structure" section. I see that regular backups are crucial. Since this is a test environment, I am not going to use any backup functionality. No problems with the commands provided.
I copy the operator keystore file following the instructions.
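This boils down to copying the UTC-named file that geth generated into the keep folder; both paths below are illustrative and depend on where the keystore was created:

```shell
# Copy the operator keystore file (the UTC--<date>--<address> file created
# by geth) into the keep config folder; both paths are illustrative.
cp /home/user/keep/keystore/UTC--*--72c782c5dd5db5518cf888689518cf312cabfee2 \
   /home/user/keep/config/operator-key-file
```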
Since it is a fresh droplet I don't have Docker installed, so I have to install it.
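On a fresh Ubuntu droplet this can be done with Docker's official convenience script (run as root or with sudo):

```shell
# Download and run Docker's official convenience install script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# Verify the installation
sudo docker --version
```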
I move on to creating the launch script. Although there is nothing about where to place it, I place it inside the keep folder to keep everything consistent. Pun intended.
I modify all the fields that need to be modified in this script. I use the Infura provider, and I realize that for this I have to use the WebSocket endpoint instead of HTTP.
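The launch script ends up looking roughly like this. This is a sketch based on the template in the documentation; the volume path, keystore path, password, and Infura project ID are placeholders, and flag names may differ between client versions:

```shell
#!/bin/bash
# Illustrative launch script -- paths, password, and Infura project ID
# are placeholders to be replaced with your own values.

# Password of the operator key file
export KEEP_ETHEREUM_PASSWORD="<operator key file password>"

docker run -d --restart on-failure \
  --volume /home/user/keep:/mnt \
  --env KEEP_ETHEREUM_PASSWORD \
  -p 3919:3919 \
  -p 9601:9601 \
  keepnetwork/keep-client:latest \
  start \
  --ethereum.url wss://goerli.infura.io/ws/v3/<YOUR_PROJECT_ID> \
  --ethereum.keyFile /mnt/config/operator-key-file \
  --storage.dir /mnt/storage
```

Note the `wss://` endpoint: this is where the WebSocket-instead-of-HTTP detail mentioned above comes in.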
After running the script, the Docker image is pulled and run. I check the logs of the container.
Although there are some warnings, I manage to get the log line that I am supposed to get:
Besides that, it confuses me a little that some of the logs are flagged as warnings but look like errors:
I have followed the documentation step by step, changing what was needed, and have had zero problems. Tomorrow I will monitor the client logs and check whether those warnings and errors were a temporary problem or are there to stay.
The logs seem to show exactly the same as yesterday, and they seem to be repeating over and over.
It is almost 20:00 CET and there have been no more logs since approximately 17:00 CET.
The Docker container is still running, but it looks like the process has stopped, since there are no logs and the CPU usage has dropped to almost 0 from a constant average of around 75%:
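Besides the Droplet monitoring graphs, the container's own resource usage can be snapshotted directly (assuming a reasonably recent Docker version):

```shell
# One-shot CPU/memory snapshot of all running containers
sudo docker stats --no-stream
```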
I will contact Sasha in order to see if I should rerun the image.
I have been told to restart the service, so I proceed to stop the container (docker stop 5b in my case) and pull the latest Docker image, in case there is a new one. No new version is available, so I run the script and the service is running again.
After some time I check the last logs again. Since I don't want to scroll through the whole log but just get the last lines, I use an extended version of the command provided in the documentation:
sudo docker logs bd >& log && tail -n 30 log
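As a side note, docker logs can also tail directly, which avoids the intermediate file (same container id as above):

```shell
# Print only the last 30 log lines of the container
sudo docker logs --tail 30 bd
# Or follow new log output live
sudo docker logs --follow bd
```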
I can see that the warnings are still showing, and the number of connected peers is now 9 (yesterday before the crash there were 12).
I also see that there are some logs flagged as "info" that look like errors:
I also realize after restarting the service that there is almost no CPU usage (I restarted the service at 11:50 CET):
I check everything again and it looks like it may have been my mistake to think that the service had crashed. The combination of the CPU usage dropping to 0 and the logs being printed in Zulu time (-2 hours from the local time here) may have caused the confusion.
There are no updates. The logs continue to be the same, with almost no CPU usage. There are now 13 connected peers.
No updates. Same logs, almost no cpu usage.
I have been asked whether the server ports are open. I have checked the firewall in Digital Ocean again; there is no firewall active for my droplet:
I have also tested whether my ports are open to the outside world with a third-party tool. I can see that ports 3919 and 9601 are open:
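The same check can be done from any outside machine with netcat (replace DROPLET_IP with the droplet's public IP):

```shell
# Test TCP connectivity to the client's ports from outside
nc -vz DROPLET_IP 3919
nc -vz DROPLET_IP 9601
```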
There are now 15 peers. The warnings continue to appear.
I have checked again later and now it looks like only one peer is causing the warning (peer id: 16Uiu2HAm77eSvRq5ioD4J8VFPkq3bJHBEHkssCuiFkgAoABwjo2S).
Creating a node is really easy when you have some knowledge and you follow the documentation.
Regarding the requirements, what I have seen so far is that, apart from the first 16 hours or so with a CPU usage of around 75%, the peak CPU usage the rest of the time has been 5%, and the maximum RAM usage over the whole period has not even reached 30%. That would mean that, under these same conditions, a node with 1 vCPU and 1 GB RAM would also meet the needs.
I think adjusting the requirements is important, because sometimes people only have a small amount of T to stake (let's say the minimum of 40,000 T). At the current price of T and the price of the recommended AWS machine, that person would have around $1,020 in T and would spend almost $70/month, so they would need a monthly reward of almost 7% just to end the month with the same liquidity they started with. If, on the other hand, the requirements were 1 vCPU / 1 GB RAM, such a droplet can be run in Digital Ocean for $6/month, which would mean only a 0.6% monthly reward to break even. That could attract small investors to set up a node. Of course this will depend on the actual network load, so some load tests would be recommended in order to find the real minimum requirements.
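The break-even numbers above can be reproduced with a quick calculation; the $1,020 stake value and the monthly prices are the figures quoted earlier in this log:

```shell
# Monthly reward needed just to cover the hosting cost of the node,
# for a minimum stake of 40,000 T worth ~$1,020 at the quoted price.
awk 'BEGIN {
  stake = 1020.00                                    # USD value of the stake
  printf "c5.large:        %.1f%%\n", 100 * 69.84 / stake
  printf "1vCPU/1GB (DO):  %.1f%%\n", 100 * 6.00 / stake
}'
```

This prints 6.8% for the c5.large and 0.6% for the $6 droplet, matching the "almost 7%" and 0.6% figures above.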