
@zrt
Created January 29, 2020 10:09
Reverse-proxying some websites with Caddy
###################
#                 #
#      SU.SG      #
#                 #
###################
# https://mp.weixin.qq.com/s/H3OOqhFRr0YZGorIAlsCjA
# todo
# replace or remove content-security-policy header
###################
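# A sketch of one way to do this in Caddy v1 (the same directive is
# already used further down in the github blocks): the header directive
# with a leading "-" deletes a response header, e.g.
#   header / -Content-Security-Policy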
# wikipedia-zh begin
wiki.su.sg:8443 {
gzip
cache
tls {
dns cloudflare
}
log / ./su.sg-access.log "{>CF-Connecting-IP} - {combined} - {host}"
root ./public_su_sg
redir 302 {
if {path} not "/robots.txt"
if_op and
if {path} not "/access/from/{>CF-Connecting-IP}"
if_op and
if {path} not "/ip"
if_op and
if {~su-sg-x-access} not "temp-yes"
/ https://su.sg/?ref={hostonly}
}
redir /ip https://wq.apnic.net/static/search.html?query={>CF-Connecting-IP} 302
rewrite {
if {path} is "/access/from/{>CF-Connecting-IP}"
to /getcookie
}
proxy / https://zh.wikipedia.org {
header_upstream X-Real-IP {>CF-Connecting-IP}
header_upstream X-Forwarded-For {>CF-Connecting-IP}
header_upstream X-Forwarded-Proto {scheme}
header_upstream -Referer
header_upstream User-Agent {>User-Agent}
header_upstream Accept-Language zh-CN,zh;q=0.9
header_upstream Accept-Encoding identity
except /robots.txt /getcookie /ip /access/from/
insecure_skip_verify
}
# cookie filter
filter rule {
content_type text/.*
search_pattern (D|d)omain=\.wikipedia\.org
replacement "Domain=.wiki.su.sg"
}
filter rule { # featured
content_type text/.*
search_pattern "<li id=\"n-indexpage\">"
replacement "<li><a href=\"//su.sg\">Proxied by su.sg</a></li><li id=\"n-indexpage\">"
}
filter rule { # featured
content_type text/.*
search_pattern document.head.appendChild(script);
replacement "script.src = script.src.replace(\"meta.wikimedia.org\", \"meta-wiki.su.sg:8443\");document.head.appendChild(script);"
}
# gist
filter rule {
path .*
search_pattern gist.github.com
replacement gist.su.sg:8443
}
# github
filter rule {
path .*
search_pattern github.com
replacement github.su.sg:8443
}
# google scholar
filter rule {
content_type text/.*
search_pattern http://scholar.google.com
replacement https://scholar.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern scholar.google.com
replacement scholar.su.sg:8443
}
# general
filter rule {
path .*
search_pattern zh\.wikipedia\.org
replacement wiki.su.sg:8443
}
filter rule {
path .*
search_pattern zh\.m\.wikipedia\.org
replacement m-wiki.su.sg:8443
}
filter rule {
path .*
search_pattern upload.wikimedia.org
replacement up-wiki.su.sg:8443
}
filter rule {
path .*
search_pattern meta\.wikimedia\.org
replacement meta-wiki.su.sg:8443
}
filter rule {
path .*
search_pattern en\.wikipedia\.org
replacement en-wiki.su.sg:8443
}
filter rule {
path .*
search_pattern en\.m\.wikipedia\.org
replacement m-en-wiki.su.sg:8443
}
}
m-wiki.su.sg:8443 {
gzip
tls {
dns cloudflare
}
root ./public_su_sg
log / ./su.sg-access.log "{>CF-Connecting-IP} - {combined} - {host}"
redir 302 {
if {path} not "/robots.txt"
if_op and
if {~su-sg-x-access} not "temp-yes"
/ https://su.sg/?ref={hostonly}
}
proxy / https://zh.m.wikipedia.org {
header_upstream X-Real-IP {>CF-Connecting-IP}
header_upstream X-Forwarded-For {>CF-Connecting-IP}
header_upstream X-Forwarded-Proto {scheme}
header_upstream -Referer
header_upstream User-Agent {>User-Agent}
header_upstream Accept-Language zh-CN,zh;q=0.9
header_upstream Accept-Encoding identity
except /robots.txt
insecure_skip_verify
}
# cookie filter
filter rule {
content_type text/.*
search_pattern (D|d)omain=\.wikipedia\.org
replacement "Domain=.m-wiki.su.sg"
}
# gist
filter rule {
path .*
search_pattern gist.github.com
replacement gist.su.sg:8443
}
# github
filter rule {
path .*
search_pattern github.com
replacement github.su.sg:8443
}
filter rule { # featured
path .*
search_pattern "<ul class=\"footer-places hlist hlist-separated\">"
replacement "<ul class=\"footer-places hlist hlist-separated\"><li><a href=\"//su.sg\">Proxied by su.sg</a></li>"
}
filter rule { # featured
path .*
search_pattern document.head.appendChild(script);
replacement "script.src = script.src.replace(\"meta.wikimedia.org\", \"meta-wiki.su.sg:8443\");document.head.appendChild(script);"
}
# google scholar
filter rule {
content_type text/.*
search_pattern http://scholar.google.com
replacement https://scholar.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern scholar.google.com
replacement scholar.su.sg:8443
}
# general
filter rule {
path .*
search_pattern zh\.wikipedia\.org
replacement wiki.su.sg:8443
}
filter rule {
path .*
search_pattern zh\.m\.wikipedia\.org
replacement m-wiki.su.sg:8443
}
filter rule {
path .*
search_pattern upload.wikimedia.org
replacement up-wiki.su.sg:8443
}
filter rule {
path .*
search_pattern meta\.wikimedia\.org
replacement meta-wiki.su.sg:8443
}
filter rule {
path .*
search_pattern en\.wikipedia\.org
replacement en-wiki.su.sg:8443
}
filter rule {
path .*
search_pattern en\.m\.wikipedia\.org
replacement m-en-wiki.su.sg:8443
}
}
up-wiki.su.sg:8443 {
gzip
cache
tls {
dns cloudflare
}
root ./public_su_sg
proxy / https://upload.wikimedia.org {
header_upstream X-Real-IP {>CF-Connecting-IP}
header_upstream X-Forwarded-For {>CF-Connecting-IP}
header_upstream X-Forwarded-Proto {scheme}
header_upstream User-Agent {>User-Agent}
header_upstream -Referer
header_upstream Referer "https://zh.wikipedia.org/"
except /robots.txt
insecure_skip_verify
}
header / Access-Control-Allow-Origin *
}
meta-wiki.su.sg:8443 {
gzip
cache
tls {
dns cloudflare
}
root ./public_su_sg
proxy / https://meta.wikimedia.org {
header_upstream X-Real-IP {>CF-Connecting-IP}
header_upstream X-Forwarded-For {>CF-Connecting-IP}
header_upstream User-Agent {>User-Agent}
header_upstream -Referer
header_upstream Accept-Encoding identity
header_upstream Referer "https://zh.wikipedia.org/"
except /robots.txt
insecure_skip_verify
}
filter rule { # featured
content_type text/.*
search_pattern document.head.appendChild(script);
replacement "script.src = script.src.replace(\"meta.wikimedia.org\", \"meta-wiki.su.sg:8443\");document.head.appendChild(script);"
}
# general
filter rule {
path .*
search_pattern zh\.wikipedia\.org
replacement wiki.su.sg:8443
}
filter rule {
path .*
search_pattern zh\.m\.wikipedia\.org
replacement m-wiki.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern upload.wikimedia.org
replacement up-wiki.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern meta\.wikimedia\.org
replacement meta-wiki.su.sg:8443
}
filter rule {
path .*
search_pattern en\.wikipedia\.org
replacement en-wiki.su.sg:8443
}
filter rule {
path .*
search_pattern en\.m\.wikipedia\.org
replacement m-en-wiki.su.sg:8443
}
}
# wikipedia-zh end
####################################
# wikipedia-en begin
en-wiki.su.sg:8443 {
gzip
cache
tls {
dns cloudflare
}
log / ./su.sg-access.log "{>CF-Connecting-IP} - {combined} - {host}"
root ./public_su_sg
redir 302 {
if {path} not "/robots.txt"
if_op and
if {~su-sg-x-access} not "temp-yes"
/ https://su.sg/?ref={hostonly}
}
proxy / https://en.wikipedia.org {
header_upstream X-Real-IP {>CF-Connecting-IP}
header_upstream X-Forwarded-For {>CF-Connecting-IP}
header_upstream X-Forwarded-Proto {scheme}
header_upstream User-Agent {>User-Agent}
header_upstream Accept-Encoding identity
header_upstream -Referer
except /robots.txt
insecure_skip_verify
}
# cookie filter
filter rule {
content_type text/.*
search_pattern (D|d)omain=\.wikipedia\.org
replacement "Domain=.en-wiki.su.sg"
}
# gist
filter rule {
path .*
search_pattern gist.github.com
replacement gist.su.sg:8443
}
# github
filter rule {
path .*
search_pattern github.com
replacement github.su.sg:8443
}
filter rule { # featured
content_type text/.*
search_pattern "<li id=\"n-contents\">"
replacement "<li><a href=\"//su.sg\">Proxied by su.sg</a></li><li id=\"n-contents\">"
}
filter rule { # featured
content_type text/.*
search_pattern document.head.appendChild(script);
replacement "script.src = script.src.replace(\"meta.wikimedia.org\", \"meta-wiki.su.sg:8443\");document.head.appendChild(script);"
}
# google scholar
filter rule {
content_type text/.*
search_pattern http://scholar.google.com
replacement https://scholar.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern scholar.google.com
replacement scholar.su.sg:8443
}
# general
filter rule {
path .*
search_pattern en\.wikipedia\.org
replacement en-wiki.su.sg:8443
}
filter rule {
path .*
search_pattern en\.m\.wikipedia\.org
replacement m-en-wiki.su.sg:8443
}
filter rule {
path .*
search_pattern zh\.wikipedia\.org
replacement wiki.su.sg:8443
}
filter rule {
path .*
search_pattern zh\.m\.wikipedia\.org
replacement m-wiki.su.sg:8443
}
filter rule {
path .*
search_pattern upload.wikimedia.org
replacement up-wiki.su.sg:8443
}
filter rule {
path .*
search_pattern meta\.wikimedia\.org
replacement meta-wiki.su.sg:8443
}
}
m-en-wiki.su.sg:8443 {
gzip
cache
tls {
dns cloudflare
}
log / ./su.sg-access.log "{>CF-Connecting-IP} - {combined} - {host}"
root ./public_su_sg
redir 302 {
if {path} not "/robots.txt"
if_op and
if {~su-sg-x-access} not "temp-yes"
/ https://su.sg/?ref={hostonly}
}
proxy / https://en.m.wikipedia.org {
header_upstream X-Real-IP {>CF-Connecting-IP}
header_upstream X-Forwarded-For {>CF-Connecting-IP}
header_upstream X-Forwarded-Proto {scheme}
header_upstream User-Agent {>User-Agent}
header_upstream Accept-Encoding identity
header_upstream -Referer
except /robots.txt
insecure_skip_verify
}
# cookie filter
filter rule {
content_type text/.*
search_pattern (D|d)omain=\.wikipedia\.org
replacement "Domain=.m-en-wiki.su.sg"
}
# gist
filter rule {
path .*
search_pattern gist.github.com
replacement gist.su.sg:8443
}
# github
filter rule {
path .*
search_pattern github.com
replacement github.su.sg:8443
}
filter rule { # featured
content_type text/.*
search_pattern "<li id=\"footer-places-terms-use\">"
replacement "<li><a href=\"//su.sg\">Proxied by su.sg</a></li><li id=\"footer-places-terms-use\">"
}
filter rule { # featured
content_type text/.*
search_pattern document.head.appendChild(script);
replacement "script.src = script.src.replace(\"meta.wikimedia.org\", \"meta-wiki.su.sg:8443\");document.head.appendChild(script);"
}
# google scholar
filter rule {
content_type text/.*
search_pattern http://scholar.google.com
replacement https://scholar.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern scholar.google.com
replacement scholar.su.sg:8443
}
# general
filter rule {
path .*
search_pattern en\.wikipedia\.org
replacement en-wiki.su.sg:8443
}
filter rule {
path .*
search_pattern en\.m\.wikipedia\.org
replacement m-en-wiki.su.sg:8443
}
filter rule {
path .*
search_pattern zh\.wikipedia\.org
replacement wiki.su.sg:8443
}
filter rule {
path .*
search_pattern zh\.m\.wikipedia\.org
replacement m-wiki.su.sg:8443
}
filter rule {
path .*
search_pattern upload.wikimedia.org
replacement up-wiki.su.sg:8443
}
filter rule {
path .*
search_pattern meta\.wikimedia\.org
replacement meta-wiki.su.sg:8443
}
}
# wikipedia-en end
####################################
# google begin
gg.su.sg:8443 {
gzip
cache
tls {
dns cloudflare
}
log / ./su.sg-access.log "{>CF-Connecting-IP} - {combined} - {host}"
root ./public_su_sg
redir 302 {
if {path} not "/robots.txt"
if_op and
if {~su-sg-x-access} not "temp-yes"
/ https://su.sg/?ref={hostonly}
}
proxy / https://www.google.com {
header_upstream X-Real-IP {>CF-Connecting-IP}
header_upstream X-Forwarded-For {>CF-Connecting-IP}
header_upstream User-Agent {>User-Agent}
except /robots.txt
header_upstream Accept-Language zh-CN,zh;q=0.9
header_upstream -Referer
header_upstream Accept-Encoding identity
insecure_skip_verify
}
# cookie filter
filter rule {
content_type text/.*
search_pattern "domain=\.google\.com"
replacement "domain=.gg.su.sg"
}
# gist
filter rule {
path .*
search_pattern gist.github.com
replacement gist.su.sg:8443
}
# github
filter rule {
path .*
search_pattern github.com
replacement github.su.sg:8443
}
filter rule { #featured
content_type text/.*
search_pattern "<span id=\"fsl\">"
replacement "<span id=\"fsl\"><a href=\"https://su.sg/\">Proxied by su.sg</a>"
}
filter rule {
content_type text/.*
search_pattern encrypted-tbn\w.gstatic.com
replacement encrypted-gstatic-gg.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern adservice.google.com
replacement localhost
}
filter rule {
content_type text/.*
search_pattern ogs.google.com
replacement localhost
}
filter rule {
content_type text/.*
search_pattern accounts.google.com
replacement localhost
}
# general
filter rule {
content_type text/.*
search_pattern id.google.com
replacement id-gg.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern www.google.com
replacement gg.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern (www|ssl).gstatic.com
replacement gstatic-gg.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern apis.google.com
replacement apis-gg.su.sg:8443
}
# wikipedia
filter rule {
content_type text/.*
search_pattern zh.wikipedia.org
replacement wiki.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern zh.m.wikipedia.org
replacement m-wiki.su.sg:8443
}
filter rule {
path .*
search_pattern en\.wikipedia\.org
replacement en-wiki.su.sg:8443
}
filter rule {
path .*
search_pattern en\.m\.wikipedia\.org
replacement m-en-wiki.su.sg:8443
}
# google scholar
filter rule {
content_type text/.*
search_pattern http://scholar.google.com
replacement https://scholar.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern scholar.google.com
replacement scholar.su.sg:8443
}
# last one: patch elements at appendChild time so dynamically injected scripts and links point at the proxy
filter rule {
content_type text/.*
search_pattern \.appendChild\((\w*?)\)
replacement ".appendChild(function(){if({1}.src){{1}.src={1}.src.replace('www.google.com', 'gg.su.sg:8443');}if({1}.href){{1}.href={1}.href.replace('ogs.google.com','localhost')}return {1}}())"
}
}
id-gg.su.sg:8443 {
gzip
cache
tls {
dns cloudflare
}
root ./public_su_sg
proxy / https://id.google.com {
header_upstream X-Real-IP {>CF-Connecting-IP}
header_upstream X-Forwarded-For {>CF-Connecting-IP}
header_upstream User-Agent {>User-Agent}
header_upstream -Referer
header_upstream Referer "https://www.google.com/"
except /robots.txt
insecure_skip_verify
}
filter rule {
content_type text/.*
search_pattern id.google.com
replacement id-gg.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern www.google.com
replacement gg.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern (www|ssl).gstatic.com
replacement gstatic-gg.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern apis.google.com
replacement apis-gg.su.sg:8443
}
}
gstatic-gg.su.sg:8443 {
gzip
cache
tls {
dns cloudflare
}
root ./public_su_sg
proxy / https://www.gstatic.com {
header_upstream X-Real-IP {>CF-Connecting-IP}
header_upstream X-Forwarded-For {>CF-Connecting-IP}
header_upstream User-Agent {>User-Agent}
header_upstream -Referer
header_upstream Referer "https://www.google.com/"
except /robots.txt
insecure_skip_verify
}
filter rule {
content_type text/.*
search_pattern adservice.google.com
replacement localhost
}
filter rule {
content_type text/.*
search_pattern ogs.google.com
replacement localhost
}
filter rule {
content_type text/.*
search_pattern accounts.google.com
replacement localhost
}
filter rule {
content_type text/.*
search_pattern www.google.com
replacement gg.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern (www|ssl).gstatic.com
replacement gstatic-gg.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern apis.google.com
replacement apis-gg.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern encrypted-tbn\w.gstatic.com
replacement encrypted-gstatic-gg.su.sg:8443
}
# last one: patch elements at appendChild time so dynamically injected scripts and links point at the proxy
filter rule {
content_type text/.*
search_pattern \.appendChild\((\w*?)\)
replacement ".appendChild(function(){if({1}.src){{1}.src={1}.src.replace('www.google.com', 'gg.su.sg:8443');}if({1}.href){{1}.href={1}.href.replace('ogs.google.com','localhost')}return {1}}())"
}
}
encrypted-gstatic-gg.su.sg:8443 {
gzip
cache
tls {
dns cloudflare
}
root ./public_su_sg
proxy / https://encrypted-tbn0.gstatic.com {
header_upstream X-Real-IP {>CF-Connecting-IP}
header_upstream X-Forwarded-For {>CF-Connecting-IP}
header_upstream User-Agent {>User-Agent}
header_upstream -Referer
header_upstream Referer "https://www.google.com/"
except /robots.txt
insecure_skip_verify
}
}
apis-gg.su.sg:8443 {
gzip
cache
tls {
dns cloudflare
}
root ./public_su_sg
proxy / https://apis.google.com {
header_upstream X-Real-IP {>CF-Connecting-IP}
header_upstream X-Forwarded-For {>CF-Connecting-IP}
header_upstream User-Agent {>User-Agent}
header_upstream -Referer
header_upstream Referer "https://www.google.com/"
except /robots.txt
insecure_skip_verify
}
}
# google end
#################################
# google scholar begin
scholar.su.sg:8443 {
gzip
cache
tls {
dns cloudflare
}
log / ./su.sg-access.log "{>CF-Connecting-IP} - {combined} - {host}"
root ./public_su_sg
redir 302 {
if {path} not "/robots.txt"
if_op and
if {~su-sg-x-access} not "temp-yes"
/ https://su.sg/?ref={hostonly}
}
proxy / https://scholar.google.com {
header_upstream X-Real-IP {>CF-Connecting-IP}
header_upstream X-Forwarded-For {>CF-Connecting-IP}
header_upstream User-Agent {>User-Agent}
header_upstream -Referer
except /robots.txt
header_upstream Accept-Language zh-CN,zh;q=0.9
header_upstream Accept-Encoding identity
insecure_skip_verify
}
filter rule {
content_type text/.*
search_pattern scholar\.googleusercontent\.com
replacement scholar-ggusercontent.su.sg:8443
}
# cookie filter
filter rule {
content_type text/.*
search_pattern domain=\.google\.com
replacement domain=.scholar.su.sg
}
filter rule {
content_type text/.*
search_pattern domain=scholar\.google\.com
replacement domain=.scholar.su.sg
}
# gist
filter rule {
path .*
search_pattern gist.github.com
replacement gist.su.sg:8443
}
# github
filter rule {
path .*
search_pattern github.com
replacement github.su.sg:8443
}
filter rule { # featured
content_type text/.*
search_pattern "<div id=\"gs_ftr_rt\">"
replacement "<div id=\"gs_ftr_rt\"><a href=\"https://su.sg/\">Proxied by su.sg</a>"
}
# general
filter rule {
content_type text/.*
search_pattern http://scholar.google.com
replacement https://scholar.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern scholar.google.com
replacement scholar.su.sg:8443
}
filter rule { # featured
content_type text/.*
search_pattern :scholar\.su\.sg:8443\/
replacement :scholar.google.com/
}
filter rule {
content_type text/.*
search_pattern www.google.com
replacement gg.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern id.google.com
replacement id-gg.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern (www|ssl).gstatic.com
replacement gstatic-gg.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern encrypted-tbn\w.gstatic.com
replacement encrypted-gstatic-gg.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern apis.google.com
replacement apis-gg.su.sg:8443
}
# wikipedia
filter rule {
content_type text/.*
search_pattern zh.wikipedia.org
replacement wiki.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern zh.m.wikipedia.org
replacement m-wiki.su.sg:8443
}
filter rule {
path .*
search_pattern en\.wikipedia\.org
replacement en-wiki.su.sg:8443
}
filter rule {
path .*
search_pattern en\.m\.wikipedia\.org
replacement m-en-wiki.su.sg:8443
}
}
scholar-ggusercontent.su.sg:8443 {
gzip
cache
tls {
dns cloudflare
}
log / ./su.sg-access.log "{>CF-Connecting-IP} - {combined} - {host}"
root ./public_su_sg
redir 302 {
if {path} not "/robots.txt"
if_op and
if {~su-sg-x-access} not "temp-yes"
/ https://su.sg/?ref={hostonly}
}
proxy / https://scholar.googleusercontent.com {
header_upstream X-Real-IP {>CF-Connecting-IP}
header_upstream X-Forwarded-For {>CF-Connecting-IP}
header_upstream User-Agent {>User-Agent}
header_upstream -Referer
except /robots.txt
header_upstream Accept-Language zh-CN,zh;q=0.9
header_upstream Accept-Encoding identity
insecure_skip_verify
}
# cookie filter
filter rule {
content_type text/.*
search_pattern domain=\.google\.com
replacement domain=.scholar-ggusercontent.su.sg
}
filter rule {
content_type text/.*
search_pattern domain=scholar\.google\.com
replacement domain=.scholar-ggusercontent.su.sg
}
filter rule {
content_type text/.*
search_pattern domain=scholar\.googleusercontent\.com
replacement domain=.scholar-ggusercontent.su.sg
}
filter rule {
content_type text/.*
search_pattern domain=\.googleusercontent\.com
replacement domain=.scholar-ggusercontent.su.sg
}
# gist
filter rule {
path .*
search_pattern gist.github.com
replacement gist.su.sg:8443
}
# github
filter rule {
path .*
search_pattern github.com
replacement github.su.sg:8443
}
# general
filter rule {
content_type text/.*
search_pattern http://scholar.google.com
replacement https://scholar.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern scholar.google.com
replacement scholar.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern www.google.com
replacement gg.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern (www|ssl).gstatic.com
replacement gstatic-gg.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern encrypted-tbn\w.gstatic.com
replacement encrypted-gstatic-gg.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern apis.google.com
replacement apis-gg.su.sg:8443
}
# wikipedia
filter rule {
content_type text/.*
search_pattern zh.wikipedia.org
replacement wiki.su.sg:8443
}
filter rule {
content_type text/.*
search_pattern zh.m.wikipedia.org
replacement m-wiki.su.sg:8443
}
filter rule {
path .*
search_pattern en\.wikipedia\.org
replacement en-wiki.su.sg:8443
}
filter rule {
path .*
search_pattern en\.m\.wikipedia\.org
replacement m-en-wiki.su.sg:8443
}
}
# google scholar end
#################################
# duckduckgo begin
duck.su.sg:8443 {
gzip
cache
tls {
dns cloudflare
}
log / ./su.sg-access.log "{>CF-Connecting-IP} - {combined} - {host}"
root ./public_su_sg
redir 302 {
if {path} not "/robots.txt"
if_op and
if {~su-sg-x-access} not "temp-yes"
/ https://su.sg/?ref={hostonly}
}
proxy / https://duckduckgo.com {
header_upstream X-Real-IP {>CF-Connecting-IP}
header_upstream X-Forwarded-For {>CF-Connecting-IP}
header_upstream User-Agent {>User-Agent}
header_upstream -Referer
except /robots.txt
header_upstream Accept-Language zh-CN,zh;q=0.9
header_upstream Accept-Encoding identity
insecure_skip_verify
}
# gist
filter rule {
path .*
search_pattern gist.github.com
replacement gist.su.sg:8443
}
# github
filter rule {
path .*
search_pattern github.com
replacement github.su.sg:8443
}
filter rule {
path .*
search_pattern duckduckgo.com
replacement duck.su.sg:8443
}
# rewrite the subdomain separator from "." to "-" so e.g. proxy.duckduckgo.com maps to proxy-duck
filter rule {
path .*
search_pattern \.sub\+\"\.\"
replacement ".sub+\"-\""
}
# wikipedia
filter rule {
path .*
search_pattern zh.wikipedia.org
replacement wiki.su.sg:8443
}
filter rule {
path .*
search_pattern zh.m.wikipedia.org
replacement m-wiki.su.sg:8443
}
filter rule {
path .*
search_pattern en\.wikipedia\.org
replacement en-wiki.su.sg:8443
}
filter rule {
path .*
search_pattern en\.m\.wikipedia\.org
replacement m-en-wiki.su.sg:8443
}
}
proxy-duck.su.sg:8443 {
gzip
cache
tls {
dns cloudflare
}
root ./public_su_sg
proxy / https://proxy.duckduckgo.com {
header_upstream X-Real-IP {>CF-Connecting-IP}
header_upstream X-Forwarded-For {>CF-Connecting-IP}
header_upstream User-Agent {>User-Agent}
header_upstream -Referer
header_upstream Referer "https://duckduckgo.com/"
except /robots.txt
insecure_skip_verify
}
filter rule {
content_type text/.*
search_pattern duckduckgo.com
replacement duck.su.sg:8443
}
}
improving-duck.su.sg:8443 {
gzip
cache
tls {
dns cloudflare
}
root ./public_su_sg
proxy / https://improving.duckduckgo.com {
header_upstream X-Real-IP {>CF-Connecting-IP}
header_upstream X-Forwarded-For {>CF-Connecting-IP}
header_upstream User-Agent {>User-Agent}
header_upstream -Referer
header_upstream Referer "https://duckduckgo.com/"
except /robots.txt
insecure_skip_verify
}
filter rule {
content_type text/.*
search_pattern duckduckgo.com
replacement duck.su.sg:8443
}
}
# duckduckgo end
########################
# github gist begin
gist.su.sg:8443 {
gzip
cache
tls {
dns cloudflare
}
log / ./su.sg-access.log "{>CF-Connecting-IP} - {combined} - {host}"
root ./public_su_sg
redir 302 {
if {path} not "/robots.txt"
if_op and
if {~su-sg-x-access} not "temp-yes"
/ https://su.sg/?ref={hostonly}
}
proxy / https://gist.github.com {
header_upstream X-Real-IP {>CF-Connecting-IP}
header_upstream X-Forwarded-For {>CF-Connecting-IP}
header_upstream User-Agent {>User-Agent}
header_upstream -Referer
except /robots.txt
header_upstream Accept-Language zh-CN,zh;q=0.9
header_upstream Accept-Encoding identity
insecure_skip_verify
}
header / -Content-Security-Policy
filter rule {
path .*
search_pattern gist.github.com
replacement gist.su.sg:8443
}
# github
filter rule {
path .*
search_pattern github.com
replacement github.su.sg:8443
}
}
# github gist end
########################
# github begin
github.su.sg:8443 {
gzip
cache
tls {
dns cloudflare
}
log / ./su.sg-access.log "{>CF-Connecting-IP} - {combined} - {host}"
root ./public_su_sg
redir 302 {
if {path} not "/robots.txt"
if_op and
if {~su-sg-x-access} not "temp-yes"
/ https://su.sg/?ref={hostonly}
}
proxy / https://github.com {
header_upstream X-Real-IP {>CF-Connecting-IP}
header_upstream X-Forwarded-For {>CF-Connecting-IP}
header_upstream User-Agent {>User-Agent}
header_upstream -Referer
except /robots.txt
header_upstream Accept-Language zh-CN,zh;q=0.9
header_upstream Accept-Encoding identity
insecure_skip_verify
}
header / -Content-Security-Policy
# mangle the integrity attribute so browsers skip SRI checks on rewritten assets
filter rule {
path .*
search_pattern integrity="sha512-
replacement integriti="sha512-
}
filter rule {
path .*
search_pattern github.githubassets.com
replacement github-assets.su.sg:8443
}
filter rule {
path .*
search_pattern raw.githubusercontent.com
replacement github-raw.su.sg:8443
}
# gist
filter rule {
path .*
search_pattern gist.github.com
replacement gist.su.sg:8443
}
# github
filter rule {
path .*
search_pattern github.com
replacement github.su.sg:8443
}
# inject js disable pjax
filter rule {
path .*
search_pattern "</head>"
replacement "<script src=\"//cdnjs.cloudflare.com/ajax/libs/jquery/3.4.0/jquery.min.js\"></script> <script src=\"https://lepuslab.com/js/inject_github.js\"></script></head>"
}
}
github-assets.su.sg:8443 {
gzip
cache
tls {
dns cloudflare
}
log / ./su.sg-access.log "{>CF-Connecting-IP} - {combined} - {host}"
root ./public_su_sg
proxy / https://github.githubassets.com {
header_upstream X-Real-IP {>CF-Connecting-IP}
header_upstream X-Forwarded-For {>CF-Connecting-IP}
header_upstream User-Agent {>User-Agent}
header_upstream -Referer
header_upstream Referer "https://github.com/"
except /robots.txt
header_upstream Accept-Language zh-CN,zh;q=0.9
header_upstream Accept-Encoding identity
insecure_skip_verify
}
header / Access-Control-Allow-Origin *
header / -Content-Security-Policy
filter rule {
path .*
search_pattern github.com
replacement github.su.sg:8443
}
}
github-raw.su.sg:8443 {
gzip
cache
tls {
dns cloudflare
}
log / ./su.sg-access.log "{>CF-Connecting-IP} - {combined} - {host}"
root ./public_su_sg
proxy / https://raw.githubusercontent.com/ {
header_upstream X-Real-IP {>CF-Connecting-IP}
header_upstream X-Forwarded-For {>CF-Connecting-IP}
header_upstream User-Agent {>User-Agent}
header_upstream -Referer
header_upstream Referer "https://github.com/"
except /robots.txt
header_upstream Accept-Language zh-CN,zh;q=0.9
header_upstream Accept-Encoding identity
insecure_skip_verify
}
header / Access-Control-Allow-Origin *
header / -Content-Security-Policy
filter rule {
path .*
search_pattern github.com
replacement github.su.sg:8443
}
}
# github end
#########################
# telegram begin
# telegram end
########################
# reddit begin
# reddit end
########################
# twitter begin
# twitter end
########################

phlinhng commented Mar 18, 2020

Hello, I've recently been trying to set up my own Google Scholar reverse-proxy, and there is still a lot I don't understand. First, thank you for your code and the WeChat article; they solved the problem of my proxy being redirected to a 404 because it wasn't passing data back to Google. A few questions, if I may:

  1. In the Google Scholar block you also added filter rules for gist/github. Is there a particular reason for that?
  2. In the scholar-ggusercontent.su.sg block, why can google.com, scholar\.googleusercontent\.com, and scholar\.google\.com all be rewritten to scholar-ggusercontent.su.sg? Don't they need to map to three different addresses?
  3. What is the purpose of using port 8443?


zrt commented Mar 18, 2020

> Hello, I've recently been trying to set up my own Google Scholar reverse-proxy, and there is still a lot I don't understand. First, thank you for your code and the WeChat article; they solved the problem of my proxy being redirected to a 404 because it wasn't passing data back to Google. A few questions, if I may:
>
>   1. In the Google Scholar block you also added filter rules for gist/github. Is there a particular reason for that?
>   2. In the scholar-ggusercontent.su.sg block, why can google.com, scholar\.googleusercontent\.com, and scholar\.google\.com all be rewritten to scholar-ggusercontent.su.sg? Don't they need to map to three different addresses?
>   3. What is the purpose of using port 8443?

@phlinhng

  1. No particular reason. I originally wanted to rewrite Google's search results as well, so I added them along the way.
  2. I don't remember exactly, but there are two possibilities: (1) they are only there to rewrite cookies, or (2) I slipped up during some bulk find-and-replace..
  3. Port 443 on that server is running other services, and since I wanted to put Cloudflare in front of it, 8443 was the only option.

Actually, using Cloudflare forces quite a few compromises, e.g. the certificate only covers second-level subdomains; dropping Cloudflare would simplify things a bit.. the deeper subdomains wouldn't need to be this convoluted.

@phlinhng

Thanks for your reply! How do you work out what settings each proxied site needs? For example, that github-raw needs header_upstream Referer "https://github.com/", plus the resource rewrites, cookie rewrites, and so on.


zrt commented Mar 18, 2020

> Thanks for your reply! How do you work out what settings each proxied site needs? For example, that github-raw needs header_upstream Referer "https://github.com/", plus the resource rewrites, cookie rewrites, and so on.

@phlinhng
Start with a few simple rules, then look at the failed requests in the Console and Network tabs of Chrome's DevTools and fix them one by one.. (The Referer actually makes no difference either way; I just added it in passing..)
The github proxy is honestly pretty rough, only good for emergencies.
Sites like wiki are especially "proxy-friendly": every resource loads from the same fixed handful of hosts.
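As a sketch of that workflow: once a failed request in the Network tab reveals a host the proxy doesn't cover yet, the usual fix is one more filter rule mapping it to a local subdomain (the hostnames below are hypothetical, not from this config):

    filter rule {
        path .*
        search_pattern assets\.example\.com
        replacement assets.su.sg:8443
    }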

@phlinhng

@zrt What is this part for? (Sorry for the barrage of questions)

redir 302 {
        if {path} not "/robots.txt"
        if_op and
        if {~su-sg-x-access} not "temp-yes"
        / https://su.sg/?ref={hostonly}
    }


zrt commented Mar 18, 2020

> @zrt What is this part for? (Sorry for the barrage of questions)

redir 302 {
        if {path} not "/robots.txt"
        if_op and
        if {~su-sg-x-access} not "temp-yes"
        / https://su.sg/?ref={hostonly}
    }

@phlinhng
This adds a cookie check.. any request that isn't for /robots.txt must carry a su-sg-x-access=temp-yes cookie, otherwise it gets redirected.
I don't want the proxies to be scraped by every crawler out there whenever I share a link..
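A minimal sketch of how a client could then pass the gate: the wiki block rewrites /access/from/{ip} to /getcookie, and a handler there just needs to set the cookie. The gist doesn't show that handler, so the directive below is only an assumption about one way to do it in Caddy v1:

    # hypothetical /getcookie handler: set the gate cookie on the response
    header /getcookie Set-Cookie "su-sg-x-access=temp-yes; Domain=.su.sg; Path=/"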

@phlinhng

phlinhng commented Mar 20, 2020

I found that the proxied github site can't log in. Since github is still reachable in China, I added two redirects: any attempt to log in or create an account jumps back to the real github, which makes the proxy behave a bit more naturally.

    proxy / https://github.com {
        except /robots.txt
        except /login /join
        header_upstream X-Real-IP {>CF-Connecting-IP}
        header_upstream X-Forwarded-For {>CF-Connecting-IP}
        header_upstream X-Forwarded-Proto {scheme}
        header_upstream -Referer
        header_upstream User-Agent {>User-Agent}
        header_upstream Accept-Encoding identity
    }

    header / -Content-Security-Policy

    redir 301 {
        if {path} is /login
        https://github.com/login
    }

    redir 301 {
        if {path} is /join
        https://github.com/join
    }
