/**
 * Google News started generating encoded, internal URLs for RSS items, e.g.
 * https://news.google.com/rss/search?q=New%20York%20when%3A30d&hl=en-US&gl=US&ceid=US:en
 *
 * This script decodes such URLs back into the original ones. For example, the URL
 * https://news.google.com/__i/rss/rd/articles/CBMiSGh0dHBzOi8vdGVjaGNydW5jaC5jb20vMjAyMi8xMC8yNy9uZXcteW9yay1wb3N0LWhhY2tlZC1vZmZlbnNpdmUtdHdlZXRzL9IBAA?oc=5
 *
 * contains
 * https://techcrunch.com/2022/10/27/new-york-post-hacked-offensive-tweets/
 *
 * The path segment after articles/ is Base64-encoded binary data.
 *
 * The format is the following:
 * <prefix> <len bytes> <URL bytes> <len bytes> <AMP URL bytes> [<suffix>]
 *
 * <prefix> - 0x08, 0x13, 0x22
 * <suffix> - 0xd2, 0x01, 0x00 (sometimes missing?)
 * <len bytes> - a single byte such as 0x40, or sometimes two bytes such as 0x81 0x01
 *
 * FIXME: What happens if the URL is longer than 255 bytes?
 *
 * Licensed under: MIT License
 *
 * Copyright (c) 2022 Ruslan Gainutdinov
 *
 * Permission is hereby granted, free of charge, to any person obtaining a copy
 * of this software and associated documentation files (the "Software"), to deal
 * in the Software without restriction, including without limitation the rights
 * to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 * copies of the Software, and to permit persons to whom the Software is
 * furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice shall be included
 * in all copies or substantial portions of the Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
 * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
 * OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */
const decodeGoogleNewsUrl = (sourceUrl: string) => {
  const url = new URL(sourceUrl);
  const path = url.pathname.split("/");
  if (
    url.hostname === "news.google.com" &&
    path.length > 1 &&
    path[path.length - 2] === "articles"
  ) {
    const base64 = path[path.length - 1];
    let str = atob(base64);
    const prefix = Buffer.from([0x08, 0x13, 0x22]).toString("binary");
    if (str.startsWith(prefix)) {
      str = str.substring(prefix.length);
    }
    const suffix = Buffer.from([0xd2, 0x01, 0x00]).toString("binary");
    if (str.endsWith(suffix)) {
      str = str.substring(0, str.length - suffix.length);
    }
    // The URL length is encoded in one or two bytes, which must be skipped
    const bytes = Uint8Array.from(str, (c) => c.charCodeAt(0));
    const len = bytes.at(0)!;
    if (len >= 0x80) {
      str = str.substring(2, len + 1);
    } else {
      str = str.substring(1, len + 1);
    }
    return str;
  } else {
    return sourceUrl;
  }
};
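The byte format described in the header comment can also be sketched in Python. This is an illustrative offline decoder under the same assumptions: the name `decode_gnews_payload` is mine, and the two-byte length is treated as a protobuf-style varint, which the gist only hints at.

```python
import base64

def decode_gnews_payload(encoded: str) -> str:
    """Decode the Base64 payload after /articles/ into the embedded URL."""
    # Restore the Base64 padding that Google strips from the path segment
    data = base64.urlsafe_b64decode(encoded + "=" * (-len(encoded) % 4))
    # Fixed 3-byte prefix and (sometimes missing) 3-byte suffix, per the notes above
    if data.startswith(b"\x08\x13\x22"):
        data = data[3:]
    if data.endswith(b"\xd2\x01\x00"):
        data = data[:-3]
    # URL length: one byte if < 0x80, otherwise assumed to be a two-byte varint
    if data[0] >= 0x80:
        length = (data[0] & 0x7F) | (data[1] << 7)
        data = data[2:]
    else:
        length = data[0]
        data = data[1:]
    return data[:length].decode("utf-8")
```

Running this on the example payload from the header comment yields the techcrunch.com URL shown there.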
Huksley for President!!
Thank you for providing the solution to decode Google News URLs using the batchexecute protocol. Your workaround is brilliant and has helped me a lot!
I have converted your Node.js code into Python. Here is the Python version of the script:
import requests
import base64


def fetch_decoded_batch_execute(id):
    s = (
        '[[["Fbv4je","[\\"garturlreq\\",[[\\"en-US\\",\\"US\\",[\\"FINANCE_TOP_INDICES\\",\\"WEB_TEST_1_0_0\\"],'
        'null,null,1,1,\\"US:en\\",null,180,null,null,null,null,null,0,null,null,[1608992183,723341000]],'
        '\\"en-US\\",\\"US\\",1,[2,3,4,8],1,0,\\"655000234\\",0,0,null,0],\\"'
        + id
        + '\\"]",null,"generic"]]]'
    )

    headers = {
        "Content-Type": "application/x-www-form-urlencoded;charset=utf-8",
        "Referer": "https://news.google.com/",
    }

    response = requests.post(
        "https://news.google.com/_/DotsSplashUi/data/batchexecute?rpcids=Fbv4je",
        headers=headers,
        data={"f.req": s},
    )

    if response.status_code != 200:
        raise Exception("Failed to fetch data from Google.")

    text = response.text
    header = '[\\"garturlres\\",\\"'
    footer = '\\",'
    if header not in text:
        raise Exception(f"Header not found in response: {text}")
    start = text.split(header, 1)[1]
    if footer not in start:
        raise Exception("Footer not found in response.")
    url = start.split(footer, 1)[0]
    return url


def decode_google_news_url(source_url):
    url = requests.utils.urlparse(source_url)
    path = url.path.split("/")
    if url.hostname == "news.google.com" and len(path) > 1 and path[-2] == "articles":
        base64_str = path[-1]
        decoded_bytes = base64.urlsafe_b64decode(base64_str + '==')
        decoded_str = decoded_bytes.decode('latin1')

        prefix = b'\x08\x13\x22'.decode('latin1')
        if decoded_str.startswith(prefix):
            decoded_str = decoded_str[len(prefix):]

        suffix = b'\xd2\x01\x00'.decode('latin1')
        if decoded_str.endswith(suffix):
            decoded_str = decoded_str[:-len(suffix)]

        # The URL length takes one or two bytes, which must be skipped
        bytes_array = bytearray(decoded_str, 'latin1')
        length = bytes_array[0]
        if length >= 0x80:
            decoded_str = decoded_str[2:length + 1]
        else:
            decoded_str = decoded_str[1:length + 1]

        # New-style payloads cannot be decoded offline; ask Google to decode them
        if decoded_str.startswith("AU_yqL"):
            return fetch_decoded_batch_execute(base64_str)

        return decoded_str
    else:
        return source_url


# Example usage
if __name__ == "__main__":
    source_url = 'https://news.google.com/rss/articles/CBMiLmh0dHBzOi8vd3d3LmJiYy5jb20vbmV3cy9hcnRpY2xlcy9jampqbnhkdjE4OG_SATJodHRwczovL3d3dy5iYmMuY29tL25ld3MvYXJ0aWNsZXMvY2pqam54ZHYxODhvLmFtcA?oc=5'
    decoded_url = decode_google_news_url(source_url)
    print(decoded_url)
I hope this can help others who are looking for a solution in Python.
Thanks again for your great work!
@huksley Hi, Thank you for the solution. Is there any QPS for this server ?
Thanks this has been useful.
Thanks! It has been very helpful, but I am noticing that some URLs still fail, e.g. when the source URL contains special characters, such as https://news.google.com/rss/articles/CBMi-gFBVV95cUxPbVl2TmVsekg2WDRYSlhmTUxIZEllcVg3Z09NNTNneEVuRkZhaXowN01iWExNaF9QTUhhdnlrTHhiTDVVTHhXRUc5M3VmTVFVREIxaWhfRnJjekxEcWZVbGp2MVc4UVl2SmQwenMyZnBLUU5vWFJGMUpmLXY0Ynlsei1RR2JpVEJkVlVPMFVlLTV2NmROMnBla3pkMEEwY055cEJkc0pCRTFudVNUdXVKT0cwMUxhZzgyeTMtSlZURmd5ZWJqamYybVBWdnVuSnh4SWdLZUc0b3lKTXQxWklSZWpMcG12eGxxb2ZTRHlmZ0NFOEVYQTFBcGRR0gH_AUFVX3lxTE9HU1o5ZzFfaHhRY3lheXlJQXdqMENVMGVRT3NGMHFLRlJYVFc4YnBwNTJTYmZma3lUYWJhWllxN1Nja2lYVEJKOUZaZlF0YUVFS3pQeU1KWjFTVVlzQWU3bjA5VXRrQXBNRGk5b2ROWndGQW05NmozQlRLY2FKeUlyMUpxUzduMHZtbjhsY2JsMEhfMVNDOEhraEp2NU5PQk9ZUzhGdlE4ZVFlcGR1eDYtVHJaRy13VmgxcUVBbTNYUEY2VFljd2x1MkdTeHRhbjRyNTNMazlORHJaMDRzN2NWMmpiNHk2MU9Fd1I0T1hqSXgxMFg3MVc1XzVwTDZIUQ?oc=5
My workaround is to still send these to Google for decoding regardless, which kinda works for now.
Thoughts?
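One way to spot these problematic URLs up front is to check for the new-style token before attempting an offline decode. A rough sketch (the function name and heuristic are mine, based on the `AU_yqL` marker used in the Python version above):

```python
import base64

def needs_server_decode(encoded: str) -> bool:
    """Return True when the /articles/ payload is a new-style token that can
    only be resolved via Google's batchexecute endpoint."""
    data = base64.urlsafe_b64decode(encoded + "=" * (-len(encoded) % 4))
    # Old-style payloads carry the fixed 0x08 0x13 0x22 prefix, then a length
    if data.startswith(b"\x08\x13\x22"):
        data = data[3:]
    # Skip the one or two length bytes before inspecting the content
    body = data[2:] if data[0] >= 0x80 else data[1:]
    return body.startswith(b"AU_yqL")
```

This lets you route only the new-style payloads to the (rate-limited) batchexecute call and keep decoding the rest offline.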
@huksley Hi, Thank you for the solution. Is there any QPS for this server ?
What is QPS? Which server? @ebinezerp
@huksley With more than 100 requests, we get banned.
Please find a way to solve this, bro...
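Until someone finds a better answer, a client-side throttle at least spaces out the requests so the limit is tripped less often. A generic sketch (the interval value is illustrative, not a known Google threshold):

```python
import time

def throttled(func, min_interval=1.0):
    """Wrap func so consecutive calls are at least min_interval seconds apart."""
    last_call = [float("-inf")]

    def wrapper(*args, **kwargs):
        # Sleep off whatever remains of the interval since the previous call
        wait = last_call[0] + min_interval - time.monotonic()
        if wait > 0:
            time.sleep(wait)
        last_call[0] = time.monotonic()
        return func(*args, **kwargs)

    return wrapper
```

For example, `decode = throttled(decode_google_news_url, min_interval=2.0)` would keep the Python version above under roughly 30 requests per minute.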
Hi @huksley ! Firstly I would like to thank you for the solution.
And @Glyphosate69 for the version in Python.
I get the following output with decode_google_news_url:
https://www.rondoniadinamica.com/noticias/2024/07/deputado-alan-queiroz-destina-emenda-parlamentar-para-microrevestimento-asfaltico-em-alto-alegre-dos-parecis,195210.shtm
TLDR: I was not able to decode these offline, so you need to fetch the result from Google's servers for every URL.
At the redirect page (i.e., https://news.google.com/rss/articles/...), Google uses the batchexecute protocol to decode the URL on the backend. It is something called "garturl", because there is a garturlreq => garturlres exchange going on.
Here, I hacked together a script which fetches this URL with minimal headers; it is barely stable, use at your own risk. Obviously Node.js only.
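The response-parsing half of that garturlreq => garturlres exchange can be sketched on its own (in Python here, matching the port above; `extract_garturlres` is an illustrative name, and the marker strings are the ones used in that port):

```python
def extract_garturlres(text: str) -> str:
    """Pull the decoded URL out of a raw batchexecute response body."""
    # The response embeds the result between these markers; the backslashes
    # are literal characters in the raw body, since the inner JSON is escaped
    header = '[\\"garturlres\\",\\"'
    footer = '\\",'
    if header not in text:
        raise ValueError("garturlres header not found in response")
    tail = text.split(header, 1)[1]
    if footer not in tail:
        raise ValueError("garturlres footer not found in response")
    return tail.split(footer, 1)[0]
```

Keeping the parsing separate from the HTTP call makes it easy to unit-test against a saved response body without hitting Google at all.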