@jbardin
Last active June 28, 2023 22:12
Go TCP Proxy pattern

package proxy

import (
	"io"
	"log"
	"net"
)

func Proxy(srvConn, cliConn *net.TCPConn) {
	// channels to wait on the close event for each connection
	serverClosed := make(chan struct{}, 1)
	clientClosed := make(chan struct{}, 1)

	go broker(srvConn, cliConn, clientClosed)
	go broker(cliConn, srvConn, serverClosed)

	// wait for one half of the proxy to exit, then trigger a shutdown of the
	// other half by calling CloseRead(). This will break the read loop in the
	// broker and allow us to fully close the connection cleanly without a
	// "use of closed network connection" error.
	var waitFor chan struct{}
	select {
	case <-clientClosed:
		// the client closed first and any more packets from the server aren't
		// useful, so we can optionally SetLinger(0) here to recycle the port
		// faster.
		srvConn.SetLinger(0)
		srvConn.CloseRead()
		waitFor = serverClosed
	case <-serverClosed:
		cliConn.CloseRead()
		waitFor = clientClosed
	}

	// Wait for the other connection to close.
	// This "waitFor" pattern isn't required, but gives us a way to track the
	// connection and ensure all copies terminate correctly; we can trigger
	// stats on entry and deferred exit of this function.
	<-waitFor
}

// This does the actual data transfer.
// The broker only closes the Read side.
func broker(dst, src net.Conn, srcClosed chan struct{}) {
	// We can handle errors in a finer-grained manner by inlining io.Copy (it's
	// simple, and we drop the ReaderFrom or WriterTo checks for
	// net.Conn->net.Conn transfers, which aren't needed). This would also let
	// us adjust buffer size.
	_, err := io.Copy(dst, src)
	if err != nil {
		log.Printf("Copy error: %s", err)
	}
	if err := src.Close(); err != nil {
		log.Printf("Close error: %s", err)
	}
	srcClosed <- struct{}{}
}
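
The comment in broker mentions that inlining io.Copy allows finer-grained error handling and a caller-controlled buffer size. Below is a minimal sketch of that variant; brokerInline and the 32 KiB buffer are illustrative choices, not part of the gist.

package proxy

import (
	"io"
	"log"
	"net"
)

// brokerInline is a hypothetical variant of broker with io.Copy inlined:
// a plain read/write loop over our own buffer, so read errors and write
// errors can be logged and handled separately.
func brokerInline(dst, src net.Conn, srcClosed chan struct{}) {
	buf := make([]byte, 32*1024) // arbitrary size; tune as needed
	for {
		nr, rerr := src.Read(buf)
		if nr > 0 {
			if _, werr := dst.Write(buf[:nr]); werr != nil {
				log.Printf("Write error: %s", werr)
				break
			}
		}
		if rerr != nil {
			// io.EOF is the normal CloseRead/FIN case, not a failure.
			if rerr != io.EOF {
				log.Printf("Read error: %s", rerr)
			}
			break
		}
	}
	if err := src.Close(); err != nil {
		log.Printf("Close error: %s", err)
	}
	srcClosed <- struct{}{}
}
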
@liudanking

  1. When the client connection is closed, I think srvConn should call CloseWrite instead, which means nothing more from the client will be written to srvConn (see the sketch below).
  2. Both srvConn.CloseRead() and cliConn.CloseRead() are unnecessary, because src.Close() in the first broker to finish will trigger an error in the other broker, which will then call dst.Close().
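
For reference, here is a minimal sketch of the half-close direction that point 1 describes, assuming the same srvConn/cliConn and closed-channel setup as Proxy above; closeWriteVariant is a hypothetical helper, not code from the gist or the comment.

package proxy

import (
	"log"
	"net"
)

// closeWriteVariant illustrates point 1: once the client half has finished,
// send a FIN toward the server with CloseWrite, so the server sees EOF on
// its read, instead of tearing down our read side with CloseRead.
func closeWriteVariant(srvConn, cliConn *net.TCPConn, serverClosed, clientClosed chan struct{}) {
	var waitFor chan struct{}
	select {
	case <-clientClosed:
		if err := srvConn.CloseWrite(); err != nil {
			log.Printf("CloseWrite error: %s", err)
		}
		waitFor = serverClosed
	case <-serverClosed:
		if err := cliConn.CloseWrite(); err != nil {
			log.Printf("CloseWrite error: %s", err)
		}
		waitFor = clientClosed
	}
	<-waitFor
}

Note that with this variant the remaining broker only exits once the peer actually closes its side in response to the EOF, so shutdown depends on the peer's behavior rather than on forcing the read loop to end.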

@nhooyr

nhooyr commented Oct 9, 2016

What do you guys think of what I've done here: https://github.com/nhooyr/tlswrapd/blob/965c34913d5c8635b07ec8fb4918174298fa4855/proxy.go#L87-L110 ?

I think it is better than this gist because it is simpler but accomplishes the same thing. It is also not specific to TCP: any net.Conn (such as tcp.Conn, which is what I'm using there) will work.
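
For comparison, here is a minimal sketch of that style of transport-agnostic proxy; this is not nhooyr's actual code, and the join name and channel handling are assumptions. It copies in both directions between two net.Conn values and closes both once either direction finishes.

package proxy

import (
	"io"
	"net"
)

// join shuttles bytes both ways between two net.Conn values and tears both
// down when either direction finishes. Closing the conns unblocks the
// remaining io.Copy.
func join(a, b net.Conn) {
	done := make(chan struct{}, 2)
	pipe := func(dst, src net.Conn) {
		io.Copy(dst, src) // returns on EOF or error
		done <- struct{}{}
	}
	go pipe(a, b)
	go pipe(b, a)
	<-done // one direction is finished
	a.Close()
	b.Close()
	<-done // the other io.Copy has now returned too
}

Unlike the gist, this accepts the "use of closed network connection" error on the second copy as the shutdown signal rather than avoiding it with CloseRead.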

@mikegleasonjr

mikegleasonjr commented Jun 3, 2017

@nhooyr To simplify the closing of the connections, we can use sync.Once:

close := sync.Once{}
cp := func(dst io.WriteCloser, src io.ReadCloser) {
	b := buffers.Get()
	defer buffers.Put(b)
	io.CopyBuffer(dst, src, b)
	close.Do(func() {
		dst.Close()
		src.Close()
	})
}
go cp(src, dst)
cp(dst, src)
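
To make the snippet above compile on its own, here is one possible self-contained version. The buffers pool is not shown in the comment, so the sync.Pool below is an assumption, and pipe is a hypothetical wrapper name; the sync.Once is also renamed once to avoid shadowing the close builtin.

package proxy

import (
	"io"
	"net"
	"sync"
)

// buffers is an assumed definition of the pool the snippet refers to.
var buffers = sync.Pool{
	New: func() interface{} { return make([]byte, 32*1024) },
}

// pipe runs the two copies exactly as in the snippet: whichever copy finishes
// first closes both ends via sync.Once, which makes the other copy return.
func pipe(dst, src net.Conn) {
	var once sync.Once
	cp := func(dst io.WriteCloser, src io.ReadCloser) {
		b := buffers.Get().([]byte)
		defer buffers.Put(b)
		io.CopyBuffer(dst, src, b)
		once.Do(func() {
			dst.Close()
			src.Close()
		})
	}
	go cp(src, dst)
	cp(dst, src)
}

Note that for conns that implement io.ReaderFrom or io.WriterTo (as *net.TCPConn does), io.CopyBuffer delegates to those methods and the pooled buffer is not actually used.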
