@huksley
Last active October 31, 2024 14:02
This script decodes the encoded, internal URLs that Google News generates for RSS items
/**
* This magically uses batchexecute protocol. It's not documented, but it works.
*
* Licensed under: MIT License
*
* Copyright (c) 2024 Ruslan Gainutdinov
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included
* in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
const fetchDecodedBatchExecute = (id: string) => {
  const s =
    '[[["Fbv4je","[\\"garturlreq\\",[[\\"en-US\\",\\"US\\",[\\"FINANCE_TOP_INDICES\\",\\"WEB_TEST_1_0_0\\"],null,null,1,1,\\"US:en\\",null,180,null,null,null,null,null,0,null,null,[1608992183,723341000]],\\"en-US\\",\\"US\\",1,[2,3,4,8],1,0,\\"655000234\\",0,0,null,0],\\"' +
    id +
    '\\"]",null,"generic"]]]';

  return fetch("https://news.google.com/_/DotsSplashUi/data/batchexecute?" + "rpcids=Fbv4je", {
    headers: {
      "Content-Type": "application/x-www-form-urlencoded;charset=utf-8",
      Referrer: "https://news.google.com/"
    },
    body: "f.req=" + encodeURIComponent(s),
    method: "POST"
  })
    .then(e => e.text())
    .then(s => {
      const header = '[\\"garturlres\\",\\"';
      const footer = '\\",';
      if (!s.includes(header)) {
        throw new Error("header not found: " + s);
      }
      const start = s.substring(s.indexOf(header) + header.length);
      if (!start.includes(footer)) {
        throw new Error("footer not found");
      }
      const url = start.substring(0, start.indexOf(footer));
      return url;
    });
};
/**
* Google News started generating encoded, internal URLs for RSS items
* https://news.google.com/rss/search?q=New%20York%20when%3A30d&hl=en-US&gl=US&ceid=US:en
*
* This script decodes such URLs back into the original ones; for example, the URL
* https://news.google.com/__i/rss/rd/articles/CBMiSGh0dHBzOi8vdGVjaGNydW5jaC5jb20vMjAyMi8xMC8yNy9uZXcteW9yay1wb3N0LWhhY2tlZC1vZmZlbnNpdmUtdHdlZXRzL9IBAA?oc=5
*
* contains
* https://techcrunch.com/2022/10/27/new-york-post-hacked-offensive-tweets/
*
* The path segment after articles/ is Base64-encoded binary data.
*
* The format is the following:
* <prefix> <len bytes> <URL bytes> <len bytes> <amp URL bytes> [<suffix>]
*
* <prefix> - 0x08, 0x13, 0x22
* <suffix> - 0xd2, 0x01, 0x00 (sometimes missing??)
* <len bytes> - a single byte such as 0x40, or sometimes two bytes such as 0x81 0x01
*
*
* https://news.google.com/rss/articles/CBMiqwFBVV95cUxNMTRqdUZpNl9hQldXbGo2YVVLOGFQdkFLYldlMUxUVlNEaElsYjRRODVUMkF3R1RYdWxvT1NoVzdUYS0xSHg3eVdpTjdVODQ5cVJJLWt4dk9vZFBScVp2ZmpzQXZZRy1ncDM5c2tRbXBVVHVrQnpmMGVrQXNkQVItV3h4dVQ1V1BTbjhnM3k2ZUdPdnhVOFk1NmllNTZkdGJTbW9NX0k5U3E2Tkk?oc=5
* https://news.google.com/rss/articles/CBMidkFVX3lxTFB1QmFsSi1Zc3dLQkpNLThKTXExWXBGWlE0eERJQ2hLRENIOFJzRTlsRnM1NS1Hc2FlbjdIMlZ3eWNQa0JqeVYzZGs1Y0hKaUtTUko2dmJabUtVMWZob0lNSFNCa3NLQ05ROGh4cVZfVTYyUDVxc2c?oc=5
* https://news.google.com/rss/articles/CBMiqwFBVV95cUxNMTRqdUZpNl9hQldXbGo2YVVLOGFQdkFLYldlMUxUVlNEaElsYjRRODVUMkF3R1RYdWxvT1NoVzdUYS0xSHg3eVdpTjdVODQ5cVJJLWt4dk9vZFBScVp2ZmpzQXZZRy1ncDM5c2tRbXBVVHVrQnpmMGVrQXNkQVItV3h4dVQ1V1BTbjhnM3k2ZUdPdnhVOFk1NmllNTZkdGJTbW9NX0k5U3E2Tkk?oc=5
*
* FIXME: What happens if the URL is longer than 255 bytes??
*
* Licensed under: MIT License
*
* Copyright (c) 2022 Ruslan Gainutdinov
*
* Permission is hereby granted, free of charge, to any person obtaining a copy
* of this software and associated documentation files (the "Software"), to deal
* in the Software without restriction, including without limitation the rights
* to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
* copies of the Software, and to permit persons to whom the Software is
* furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included
* in all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
* AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
* OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
* SOFTWARE.
*/
export const decodeGoogleNewsUrl = async (sourceUrl: string) => {
  const url = new URL(sourceUrl);
  const path = url.pathname.split("/");
  if (
    url.hostname === "news.google.com" &&
    path.length > 1 &&
    path[path.length - 2] === "articles"
  ) {
    const base64 = path[path.length - 1];
    let str = atob(base64);
    const prefix = Buffer.from([0x08, 0x13, 0x22]).toString("binary");
    if (str.startsWith(prefix)) {
      str = str.substring(prefix.length);
    }
    const suffix = Buffer.from([0xd2, 0x01, 0x00]).toString("binary");
    if (str.endsWith(suffix)) {
      str = str.substring(0, str.length - suffix.length);
    }
    // One or two bytes to skip
    const bytes = Uint8Array.from(str, c => c.charCodeAt(0));
    const len = bytes.at(0)!;
    if (len >= 0x80) {
      str = str.substring(2, len + 2);
    } else {
      str = str.substring(1, len + 1);
    }
    if (str.startsWith("AU_yqL")) {
      // New-style encoding, introduced in July 2024. Not yet known how to decode offline.
      const url = await fetchDecodedBatchExecute(base64);
      return url;
    }
    return str;
  } else {
    return sourceUrl;
  }
};
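For reference, a minimal usage sketch (not part of the original gist, just an assumed example): it feeds the example URL from the comment above into decodeGoogleNewsUrl and should print the TechCrunch URL, assuming a Node.js runtime where fetch, atob and Buffer are available, as the functions above already require.

// Minimal usage sketch (assumption: Node.js 18+, so fetch/atob/Buffer exist)
const example =
  "https://news.google.com/__i/rss/rd/articles/CBMiSGh0dHBzOi8vdGVjaGNydW5jaC5jb20vMjAyMi8xMC8yNy9uZXcteW9yay1wb3N0LWhhY2tlZC1vZmZlbnNpdmUtdHdlZXRzL9IBAA?oc=5";

decodeGoogleNewsUrl(example).then(url => {
  // Expected (per the comment above): https://techcrunch.com/2022/10/27/new-york-post-hacked-offensive-tweets/
  console.log(url);
});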
@Glyphosate69

Glyphosate69 commented Aug 30, 2024

Unfortunately, the solution that @huksley proposed no longer works. It produces errors like this: [["wrb.fr","Fbv4je",null,null,null,[3],"generic"],["di",10],["af.httprm",10,"2111786207358723693",9]].

@nabihahmohdnor

@huksley surprisingly the request solution is not working. Can you help fix it?

@przdev

przdev commented Aug 30, 2024

Yes, they have added a few more parameters at the end of the payload; they look like encrypted values:
,1725004324,\"ATR1dL9qAQrN8uy3dkKVSj-G9RHc\"]",null,"generic"]]]&at=AEtveWhZ98E6YWfBFsXKcv6oDg_O:1725004324622&
Attached below is a comparison image: the right side shows the previous payload and the left side shows the current payload.

[image: payload comparison]

@hy-atharv

Any solutions?

@przdev

przdev commented Aug 30, 2024

Any solutions?

Tried to decode it, but no luck

@SSujitX

SSujitX commented Aug 30, 2024

payload = (
'[[["Fbv4je","[\"garturlreq\",[[\"en-US\",\"US\",[\"FINANCE_TOP_INDICES\",\"WEB_TEST_1_0_0\"],null,null,1,1,\"US:en\",null,360,null,null,null,null,null,0,null,null,[1677434405,738601000]],\"en-US\",\"US\",1,[2,3,4,8],1,0,\"668194412\",0,0,null,0],\"'
+ code
+ '\",1725016444,\"ATR1dL9R_yt7riBAiulU9qaZcXAJ\"]",null,"generic"]]]'
)

1725016444 is a Unix timestamp. Still looking for a way to obtain ATR1dL9R_yt7riBAiulU9qaZcXAJ.

@hy-atharv


Will you update your python package as well if you find the solution?

@SSujitX

SSujitX commented Aug 30, 2024


trying and waiting for @huksley

@jacoboisaza

jacoboisaza commented Aug 30, 2024

PYTHON SOLUTION

I don't know the exact algorithm used by Google to encode/decode the URLs, but I found a way to decode them using reverse engineering by inspecting the requests made by the browser in the redirection chain.

pip install requests beautifulsoup4 lxml

import json
from urllib.parse import quote, urlparse

import requests
from bs4 import BeautifulSoup


def get_decoding_params(gn_art_id):
    response = requests.get(f"https://news.google.com/articles/{gn_art_id}")
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "lxml")
    div = soup.select_one("c-wiz > div")
    return {
        "signature": div.get("data-n-a-sg"),
        "timestamp": div.get("data-n-a-ts"),
        "gn_art_id": gn_art_id,
    }


def decode_urls(articles):
    articles_reqs = [
        [
            "Fbv4je",
            f'["garturlreq",[["X","X",["X","X"],null,null,1,1,"US:en",null,1,null,null,null,null,null,0,1],"X","X",1,[1,1,1],1,1,null,0,0,null,0],"{art["gn_art_id"]}",{art["timestamp"]},"{art["signature"]}"]',
        ]
        for art in articles
    ]
    payload = f"f.req={quote(json.dumps([articles_reqs]))}"
    headers = {"content-type": "application/x-www-form-urlencoded;charset=UTF-8"}
    response = requests.post(
        url="https://news.google.com/_/DotsSplashUi/data/batchexecute",
        headers=headers,
        data=payload,
    )
    response.raise_for_status()
    return [json.loads(res[2])[1] for res in json.loads(response.text.split("\n\n")[1])[:-2]]

# Example usage
encoded_urls = [
    "https://news.google.com/rss/articles/CBMipgFBVV95cUxPWV9fTEI4cjh1RndwanpzNVliMUh6czg2X1RjeEN0YUctUmlZb0FyeV9oT3RWM1JrMGRodGtqTk1zV3pkNEpmdGNxc2lfd0c4LVpGVENvUDFMOEJqc0FCVVExSlRrQmI3TWZ2NUc4dy1EVXF4YnBLaGZ4cTFMQXFFM2JpanhDR3hoRmthUjVjdm1najZsaFh4a3lBbDladDZtVS1FMHFn?oc=5",
    "https://news.google.com/rss/articles/CBMi3AFBVV95cUxOX01TWDZZN2J5LWlmU3hudGZaRDh6a1dxUHMtalBEY1c0TlJSNlpieWxaUkxUU19MVTN3Y1BqaUZael83d1ctNXhaQUtPM0IyMFc4R3VydEtoMmFYMWpMU1Rtc3BjYmY4d3gxZHlMZG5NX0s1RmR2ZXI5YllvdzNSd2xkOFNCUTZTaEp3b0IxZEJZdVFLUDBNMC1wNGgwMGhjRG9HRFpRZU5BMFVIYjZCOWdWcHI1YzdoVHFWYnZSOEFwQ0NubGx3Rzd0SHN6OENKMXZUcHUxazA5WTIw?hl=en-US&gl=US&ceid=US%3Aen",
]
articles_params = [get_decoding_params(urlparse(url).path.split("/")[-1]) for url in encoded_urls]
decoded_urls = decode_urls(articles_params)
print(decoded_urls)

@SSujitX

SSujitX commented Aug 30, 2024

Man, you are a genius. I never thought of that approach. I was checking using httptoolkit, my bad I never noticed. Which tool do you use? httpdebugger? Can I implement it in my module? https://github.com/SSujitX/google-news-url-decoder

@mccabe-david

mccabe-david commented Aug 30, 2024

import json
from urllib.parse import quote, urlparse

import requests
from bs4 import BeautifulSoup


def get_decoding_params(gn_art_id):
    response = requests.get(f"https://news.google.com/articles/{gn_art_id}")
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "lxml")
    div = soup.select_one("c-wiz > div")
    return {
        "signature": div.get("data-n-a-sg"),
        "timestamp": div.get("data-n-a-ts"),
        "gn_art_id": gn_art_id,
    }

def decode_google_news_url(source_url):
  article = get_decoding_params(urlparse(source_url).path.split("/")[-1])

  articles_req = [
    "Fbv4je",
    f'["garturlreq",[["X","X",["X","X"],null,null,1,1,"US:en",null,1,null,null,null,null,null,0,1],"X","X",1,[1,1,1],1,1,null,0,0,null,0],"{article["gn_art_id"]}",{article["timestamp"]},"{article["signature"]}"]',
  ]

  response = requests.post(
      url="https://news.google.com/_/DotsSplashUi/data/batchexecute",
      headers={"content-type": "application/x-www-form-urlencoded;charset=UTF-8"},
      data=f"f.req={quote(json.dumps([[articles_req]]))}",
  )
  response.raise_for_status()

  return json.loads(json.loads(response.text.split("\n\n")[1])[:-2][0][2])[1]

# Example usage
encoded = "https://news.google.com/rss/articles/CBMiwwFBVV95cUxPdEpINnp6em8wMkZnSndsLTlmUkRlWjRyeDlGS1E1WHRmX0E2QXo0S0ZxZ2FCeUkzMnRYRm9wZEE4RGE5bzZnZGdFZUw2VWRSQ0pfcG9WQ1JyWDg3cGVZMFd2Vk4zUDhWSF8tMm45TTdsLVJLdGtLUjB6QlJSWlNfU0gwOEdzY3RtakJFTDB2bzdrUXdnZVRaWGZhVUZjWmdiNXdXX1FyODY5RnBXYTVLUFRHYUJvY25TQzhRQWNydHctR1E?oc=5&hl=en-US&gl=US&ceid=US:en"
decoded_url = decode_google_news_url(encoded)
# prints https://www.forbes.com/sites/digital-assets/2024/08/29/from-polymarket-predictions-to-press-on-nails-crypto-moves-mainstream/
print(decoded_url)

TY @jacoboisaza !! For anyone who wants a solution that takes in a single url instead of an array of URLs...

@techyderm

so it’s making multiple requests? seems like google really doesn’t want us to decode urls. what are the chances they block us?

@SSujitX

SSujitX commented Aug 30, 2024

Updated on Repo

pip install googlenewsdecoder --upgrade

from googlenewsdecoder import new_decoderv1

def main():

    interval_time = 5 # default interval is 1 sec, if not specified

    source_url = "https://news.google.com/read/CBMi2AFBVV95cUxPd1ZCc1loODVVNHpnbFFTVHFkTG94eWh1NWhTeE9yT1RyNTRXMVV2S1VIUFM3ZlVkVjl6UHh3RkJ0bXdaTVRlcHBjMWFWTkhvZWVuM3pBMEtEdlllRDBveGdIUm9GUnJ4ajd1YWR5cWs3VFA5V2dsZnY1RDZhVDdORHRSSE9EalF2TndWdlh4bkJOWU5UMTdIV2RCc285Q2p3MFA4WnpodUNqN1RNREMwa3d5T2ZHS0JlX0MySGZLc01kWDNtUEkzemtkbWhTZXdQTmdfU1JJaXY?hl=en-US&gl=US&ceid=US%3Aen"

    try:
        decoded_url = new_decoderv1(source_url, interval=interval_time)
        if decoded_url.get("status"):
            print("Decoded URL:", decoded_url["decoded_url"])
        else:
            print("Error:", decoded_url["message"])
    except Exception as e:
        print(f"Error occurred: {e}")

    # Output: decoded_urls - [{'status': True, 'decoded_url': 'https://healthdatamanagement.com/articles/empowering-the-quintuple-aim-embracing-an-essential-architecture/'}]

if __name__ == "__main__":
    main()

@jacoboisaza

Yes, the new challenge is to work around the request limits...

so it’s making multiple requests? seems like google really doesn’t want us to decode urls. what are the chances they block us?

@jacoboisaza

jacoboisaza commented Aug 31, 2024

it's open source so take it, mix it, and share it

I used Chrome DevTools to track the redirections and inspect the payloads and responses. After many attempts at cleaning and simplifying the process, I discovered this minimal approach that works well in Python.

@maks-outsource

maks-outsource commented Aug 31, 2024

Is there a PHP solution?
I converted all the requests using GPT and polished the code, but nothing works :( I tested the test.php file with those functions through a browser.

I'm already thinking - maybe I shouldn't run this code from the browser?

But I don't see any point in it.

@vrwallace

I converted this to PHP and got it working, but Google throttles it and then blocks you. So I signed up for the Bing News API; I pull the news articles once a day, insert them into a DB, and query the DB when the user pulls up the page.

@huksley
Author

huksley commented Aug 31, 2024

Can someone convert the new solution to javascript? I will update the gist afterwards, thank you!

@svenimeni

When I call _/DotsSplashUi/data/batchexecute?rpcids=Fbv4je the response does not include "garturlres". Anyone else having this problem? Was there already an update in July that changed this?

@iamatef

iamatef commented Sep 2, 2024 via email

@vincenzon

Has anyone figured out how to play nice with the rate limits?

@sviatoslav-lebediev

This JS version works for me (I've just converted mccabe-david's Python example). I also use the socks-proxy-agent package and https://webshare.io/ proxies to prevent 429 responses.

import axios from 'axios';
import * as cheerio from 'cheerio';
import { SocksProxyAgent } from 'socks-proxy-agent';

async function getGoogleNewsRssFinalUrl(url){
    const parsedUrl = new URL(url);
    const gnArticleId = parsedUrl.pathname.split('/').pop();
    const httpsAgent = new SocksProxyAgent(`socks5://user:pass@192.168.0.1:5987`);
    const { data: gnData } = await axios.get(`https://news.google.com/articles/${gnArticleId}`, {
      httpAgent: httpsAgent,
      httpsAgent: httpsAgent,
    });

    const $ = cheerio.load(gnData);
    const div = $('c-wiz > div').first();
    const article = {
      signature: div.attr('data-n-a-sg'),
      timestamp: div.attr('data-n-a-ts'),
      gn_art_id: gnArticleId,
    };

    const articlesReq = [
      'Fbv4je',
      `["garturlreq",[["X","X",["X","X"],null,null,1,1,"US:en",null,1,null,null,null,null,null,0,1],"X","X",1,[1,1,1],1,1,null,0,0,null,0],"${article.gn_art_id}",${article.timestamp},"${article.signature}"]`,
    ];

    const response = await axios.post(
      'https://news.google.com/_/DotsSplashUi/data/batchexecute',
      new URLSearchParams({ 'f.req': JSON.stringify([[articlesReq]]) }).toString(),
      {
        headers: { 'Content-Type': 'application/x-www-form-urlencoded;charset=UTF-8' },
        httpAgent: httpsAgent,
        httpsAgent: httpsAgent,
      },
    );

    return JSON.parse(JSON.parse(response.data.split('\n\n')[1].slice(0))[0][2])[1];
  }

@jacoboisaza


Hi guys.

Remember that although I found a way to decode the URL with two requests, I also discovered that the second request can be done in batch to optimize latency, rate limits, and the cost of a potential proxy.
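For anyone who wants that batching in JavaScript/TypeScript, here is a minimal sketch; it is my own untested translation of the Python decode_urls above, not something posted in this thread, and it assumes the {gn_art_id, timestamp, signature} values were already scraped from the article pages exactly as get_decoding_params does.

// Sketch of a batched second request (assumption: per-article params already fetched).
type ArticleParams = { gn_art_id: string; timestamp: string; signature: string };

const decodeUrlsBatch = async (articles: ArticleParams[]): Promise<string[]> => {
  // One "Fbv4je" entry per article, same garturlreq shape as the Python version above.
  const reqs = articles.map(a => [
    "Fbv4je",
    `["garturlreq",[["X","X",["X","X"],null,null,1,1,"US:en",null,1,null,null,null,null,null,0,1],"X","X",1,[1,1,1],1,1,null,0,0,null,0],"${a.gn_art_id}",${a.timestamp},"${a.signature}"]`
  ]);
  const res = await fetch("https://news.google.com/_/DotsSplashUi/data/batchexecute", {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded;charset=UTF-8" },
    body: "f.req=" + encodeURIComponent(JSON.stringify([reqs]))
  });
  const text = await res.text();
  // The payload sits after the first blank line; the last two envelope entries are metadata.
  const envelope = JSON.parse(text.split("\n\n")[1]);
  // Each article entry carries a JSON string in position 2 whose second element is the decoded URL.
  return envelope.slice(0, -2).map((entry: any[]) => JSON.parse(entry[2])[1]);
};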

@kurokawamomo

Downloading hundreds of pages that are around 6 MB each is a bit too much, so I'm hoping to find a request that fetches the signature starting with ATR in the future.

@jacoboisaza

Downloading hundreds of pages that are around 6MB each is a bit too much, so I'm hoping to find a request that fetches the signature starting with ATR in future

I was referring to the possibility of making batch requests to https://news.google.com/_/DotsSplashUi/data/batchexecute

@techyderm

yea. this is starting to feel iffy to me. I'm just gonna use the news.google.com/rss/articles url without all the hackery. maybe there will be a way to decode the id like before again

@kurokawamomo

@jacoboisaza I'm sorry, I didn't understand the code properly before. This is an awesome hack, thank you!

@nabihahmohdnor

Hi, I have tried @jacoboisaza's and @SSujitX's solutions, but I faced 429 Client Error: Too Many Requests for url. Is anyone facing the same error? How can it be fixed?

@hotszhin223

hotszhin223 commented Sep 3, 2024


Good to hear the alternative works for both of you; the updated function works well for me. But if I extract a large number of articles per run, like 100 newspaper articles, I face the 429 Client Error: Too Many Requests for url.

@jacoboisaza

This JS version works for me (I've just converted mccabe-david's Python example). I also use the socks-proxy-agent package and https://webshare.io/ proxies to prevent 429 responses.

Could you let me know which webshare.io plan you chose and what the success rate for the requests was?
