
@SomajitDey
Last active January 12, 2023 04:02
Hosting IPFS node with ngrok

Hosting IPFS from behind NAT/Firewall using a free reverse proxy (ngrok)

  1. Expose localhost's port 4001 to the public internet using ngrok: ngrok tcp 4001. Tip: use the -region= flag for lower latency.
  2. Note the hostname and port returned by ngrok, shown in the form: tcp://hostname:port -> localhost:4001
  3. Open the ipfs config JSON file ~/.ipfs/config
  4. Edit it as follows: Addresses.Announce=["/dns4/put-the-hostname-here/tcp/put-the-port-here"]
  5. Save the config file
  6. ipfs daemon (a combined sketch of these steps follows)
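
For example, assuming ngrok reported a hypothetical endpoint tcp://0.tcp.ngrok.io:12345 (your hostname and port will differ), steps 3-5 can also be done with ipfs config --json instead of hand-editing the file, so the whole sequence looks roughly like this:

# Terminal 1: expose the libp2p swarm port through ngrok
ngrok tcp 4001    # note the forwarding line, e.g. tcp://0.tcp.ngrok.io:12345 -> localhost:4001

# Terminal 2: announce the ngrok address instead of the local one
# (hostname and port below are placeholders from the hypothetical ngrok output above)
ipfs config --json Addresses.Announce '["/dns4/0.tcp.ngrok.io/tcp/12345"]'

# Start (or restart) the daemon so the new announce address takes effect
ipfs daemon
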
@SomajitDey (Author) commented Jan 10, 2022

@avatar-lavventura

An easy way to make a node undiscoverable by other nodes is to not connect to the DHT at all. To do this, either launch your daemon with the --routing=none flag or change the ipfs config file. That is to say, either

  1. ipfs daemon --routing=none or,
  2. ipfs config Routing.Type none followed by ipfs daemon (see the sketch below)
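
For example, option 2 can be applied and then verified (reading a key back with ipfs config prints its current value):

# Persist the setting in ~/.ipfs/config
ipfs config Routing.Type none

# Read it back to confirm; this should print "none"
ipfs config Routing.Type

# Then start the daemon as usual
ipfs daemon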

If a node is disconnected from the DHT like this, it can still form outgoing connections to other nodes specified with ipfs swarm connect <fully-qualified-multiaddress>. It can also receive incoming connections from other nodes that know its multiaddress, i.e. /ip4/<ip>/tcp/<port>/p2p/<peerID>.
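
As a concrete sketch (substitute the actual IP, port and peerID), dialling such a node from another peer and checking that the connection was made looks like:

# Dial the routing-disabled node directly by its full multiaddress
ipfs swarm connect /ip4/<ip>/tcp/<port>/p2p/<peerID>

# Confirm the peer now appears in the swarm
ipfs swarm peers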

In case you need to connect to DHT but not publish your multiaddress to DHT, you may either

For further discussion, feel free to ask the friendly broader community at https://discuss.ipfs.io/ or the corresponding subreddit.

P.S.: Thanks a lot for starring ipfs-chat. You might also be interested in another project I started, IPNS-Link, for exposing dynamic sites with IPNS as opposed to static ones. Here are two blog posts about it: 1) Medium; 2) Dev.

@avatar-lavventura commented Jan 10, 2022

@SomajitDey I didn't know ipfs swarm connect /ip4/<ip>/tcp/<port>/p2p/<peerID> is able to connect to nodes that are started with ipfs daemon --routing=none; thanks for pointing it out. I did:

ipfs init --profile=server,badgerds
ipfs daemon --routing=none

Before ipfs swarm connect /ip4/<ip>/tcp/<port>/p2p/<peerID> I always do nc -v <ip> 4001 to check whether ipfs is running or not.

and I was able to share a file using ipfs swarm connect /ip4/<ip>/tcp/<port>/p2p/<peerID>, but I did not know I could do it with a fully-qualified multiaddress as well.

I hope ipfs daemon --routing=none will also help improve the slow transfer speed over LAN. I remember seeing it mentioned here: ipfs/kubo#5037 (comment)

I did not get the "In case you need to connect to DHT but not publish your multiaddress to DHT, you may either" part. I was directly sharing /ip4/<ip>/tcp/<port>/p2p/<peerID> with the node over a public domain, but I have concerns about having the IP and port information publicly known.

@SomajitDey (Author) commented

@avatar-lavventura

By fully-qualified multiaddress, I meant an address that contains the IP, port and peerID, e.g. /ip4/<ip>/tcp/<port>/p2p/<peerID>.

If your Google node is connected to the DHT, your laptop node can get the fully-qualified multiaddress of the Google node from the DHT by looking it up with its peerID. When the Google node is not connected to the DHT (--routing=none), the laptop node must be given the full address of the Google node by some other means - e.g. you hand-typed it into ipfs swarm connect ....
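
To illustrate the difference (placeholders as before): with routing enabled the address can be looked up on the DHT by peerID, whereas with --routing=none it has to be supplied by hand:

# DHT-connected node: resolve a peerID to its multiaddresses
ipfs dht findpeer <peerID-of-google-node>

# Routing disabled: the full multiaddress must be provided manually
ipfs swarm connect /ip4/<ip>/tcp/<port>/p2p/<peerID-of-google-node>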

Your multiaddress only becomes public when it is published to the DHT. As long as you remain disconnected from the DHT, your Google node is undiscoverable by other nodes. For added safety, however, you can always use the Private Networks feature. That way, only your laptop node and Google node can connect to each other, using a common secret key - forming a private network.
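
A minimal sketch of setting up such a private network (assuming bash and openssl are available; the key file below uses the standard swarm.key layout): generate a shared secret once, copy the very same file to ~/.ipfs/swarm.key on both nodes, and restart the daemons.

# Generate a pre-shared swarm key (run once, then copy this exact file to both nodes)
printf '/key/swarm/psk/1.0.0/\n/base16/\n%s\n' "$(openssl rand -hex 32)" > ~/.ipfs/swarm.key

# Optional: refuse to start the daemon at all if the swarm key is missing
export LIBP2P_FORCE_PNET=1

# Restart the daemon on both nodes; they will now only peer with nodes holding the same key
ipfs daemon --routing=none

With the key in place, ipfs swarm connect works exactly as before, but connection attempts from nodes that do not hold the key are rejected.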

The slow transfer speed beats me. AFAIK, it's not a problem with IPFS.

@avatar-lavventura commented

"it's not a problem with IPFS."

Like, is it not a problem for the IPFS developers? Or was the slow transfer speed problem always there for IPFS but accepted as it is?

@SomajitDey (Author) commented

@avatar-lavventura I meant that I suppose the slow speed is not due to ipfs per se, because I and some of my colleagues who used ipfs didn't experience very slow speeds once the peers were swarm-connected.

@avatar-lavventura commented

Probably there were some issues with the Google instance I am using, since 100 MB took ~10 minutes to download even though the peers were swarm-connected.
