/**
 * Retrieves all the rows in the active spreadsheet that contain data and logs the
 * values for each row.
 * For more information on using the Spreadsheet API, see
 * https://developers.google.com/apps-script/service_spreadsheet
 */
function readRows() {
  var sheet = SpreadsheetApp.getActiveSheet();
  var rows = sheet.getDataRange();
  var numRows = rows.getNumRows();
  var values = rows.getValues();

  for (var i = 0; i < numRows; i++) {
    var row = values[i];
    Logger.log(row);
  }
}
/**
 * Adds a custom menu to the active spreadsheet, containing a single menu item
 * for invoking the readRows() function specified above.
 * The onOpen() function, when defined, is automatically invoked whenever the
 * spreadsheet is opened.
 * For more information on using the Spreadsheet API, see
 * https://developers.google.com/apps-script/service_spreadsheet
 */
function onOpen() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet();
  var entries = [{
    name : "Read Data",
    functionName : "readRows"
  }];
  sheet.addMenu("Script Center Menu", entries);
}
/*====================================================================================================================================*
  ImportJSON by Trevor Lohrbeer (@FastFedora)
  ====================================================================================================================================
  Version:      1.1
  Project Page: http://blog.fastfedora.com/projects/import-json
  Copyright:    (c) 2012 by Trevor Lohrbeer
  License:      GNU General Public License, version 3 (GPL-3.0)
                http://www.opensource.org/licenses/gpl-3.0.html
  ------------------------------------------------------------------------------------------------------------------------------------
  A library for importing JSON feeds into Google spreadsheets. Functions include:

     ImportJSON          For use by end users to import a JSON feed from a URL
     ImportJSONAdvanced  For use by script developers to easily extend the functionality of this library

  Future enhancements may include:

   - Support for a real XPath-like syntax similar to ImportXML for the query parameter
   - Support for OAuth-authenticated APIs

  Or feel free to write these and add on to the library yourself!
  ------------------------------------------------------------------------------------------------------------------------------------
  Changelog:

  1.1    Added support for the noHeaders option
  1.0    Initial release
 *====================================================================================================================================*/
/**
 * Imports a JSON feed and returns the results to be inserted into a Google Spreadsheet. The JSON feed is flattened to create
 * a two-dimensional array. The first row contains the headers, with each column header indicating the path to that data in
 * the JSON feed. The remaining rows contain the data.
 *
 * By default, data gets transformed so it looks more like a normal data import. Specifically:
 *
 *   - Data from parent JSON elements gets inherited to their child elements, so rows representing child elements contain the values
 *     of the rows representing their parent elements.
 *   - Values longer than 256 characters get truncated.
 *   - Headers have slashes converted to spaces, common prefixes removed and the resulting text converted to title case.
 *
 * To change this behavior, pass in one of these values in the options parameter:
 *
 *    noInherit:     Don't inherit values from parent elements
 *    noTruncate:    Don't truncate values
 *    rawHeaders:    Don't prettify headers
 *    noHeaders:     Don't include headers, only the data
 *    debugLocation: Prepend each value with the row & column it belongs in
 *
 * For example:
 *
 *   =ImportJSON("http://gdata.youtube.com/feeds/api/standardfeeds/most_popular?v=2&alt=json", "/feed/entry/title,/feed/entry/content",
 *               "noInherit,noTruncate,rawHeaders")
 *
 * @param {url}     the URL to a public JSON feed
 * @param {query}   a comma-separated list of paths to import. Any path starting with one of these paths gets imported.
 * @param {options} a comma-separated list of options that alter processing of the data
 *
 * @return a two-dimensional array containing the data, with the first row containing headers
 * @customfunction
 **/
function ImportJSON(url, query, options) {
  return ImportJSONAdvanced(url, query, options, includeXPath_, defaultTransform_);
}
/**
 * An advanced version of ImportJSON designed to be easily extended by a script. This version cannot be called from within a
 * spreadsheet.
 *
 * Imports a JSON feed and returns the results to be inserted into a Google Spreadsheet. The JSON feed is flattened to create
 * a two-dimensional array. The first row contains the headers, with each column header indicating the path to that data in
 * the JSON feed. The remaining rows contain the data.
 *
 * Use the include and transformation functions to determine what to include in the import and how to transform the data after it is
 * imported.
 *
 * For example:
 *
 *   =ImportJSON("http://gdata.youtube.com/feeds/api/standardfeeds/most_popular?v=2&alt=json",
 *               "/feed/entry",
 *               function (query, path) { return path.indexOf(query) == 0; },
 *               function (data, row, column) { data[row][column] = data[row][column].toString().substr(0, 100); } )
 *
 * In this example, the import function checks to see if the path to the data being imported starts with the query. The transform
 * function takes the data and truncates it. For more robust versions of these functions, see the internal code of this library.
 *
 * @param {url}           the URL to a public JSON feed
 * @param {query}         the query passed to the include function
 * @param {options}       a comma-separated list of options that may alter processing of the data
 * @param {includeFunc}   a function with the signature func(query, path, options) that returns true if the data element at the given path
 *                        should be included or false otherwise.
 * @param {transformFunc} a function with the signature func(data, row, column, options) where data is a 2-dimensional array of the data
 *                        and row & column are the current row and column being processed. Any return value is ignored. Note that row 0
 *                        contains the headers for the data, so test for row==0 to process headers only.
 *
 * @return a two-dimensional array containing the data, with the first row containing headers
 **/
function ImportJSONAdvanced(url, query, options, includeFunc, transformFunc) {
  var jsondata = UrlFetchApp.fetch(url);
  var object = JSON.parse(jsondata.getContentText());

  return parseJSONObject_(object, query, options, includeFunc, transformFunc);
}
/**
 * Encodes the given value to use within a URL.
 *
 * @param {value} the value to be encoded
 *
 * @return the value encoded using URL percent-encoding
 */
function URLEncode(value) {
  return encodeURIComponent(value.toString());
}
/**
 * Parses a JSON object and returns a two-dimensional array containing the data of that object.
 */
function parseJSONObject_(object, query, options, includeFunc, transformFunc) {
  var headers = new Array();
  var data = new Array();

  if (query && !Array.isArray(query) && query.toString().indexOf(",") != -1) {
    query = query.toString().split(",");
  }

  if (options) {
    options = options.toString().split(",");
  }

  parseData_(headers, data, "", 1, object, query, options, includeFunc);
  parseHeaders_(headers, data);
  transformData_(data, options, transformFunc);

  return hasOption_(options, "noHeaders") ? (data.length > 1 ? data.slice(1) : new Array()) : data;
}
/**
 * Parses the data contained within the given value and inserts it into the data two-dimensional array starting at the rowIndex.
 * If the data is to be inserted into a new column, a new header is added to the headers array. The value can be an object,
 * array or scalar value.
 *
 * If the value is an object, its properties are iterated through and passed back into this function with the name of each
 * property extending the path. For instance, if the object contains the property "entry" and the path passed in was "/feed",
 * this function is called with the value of the entry property and the path "/feed/entry".
 *
 * If the value is an array containing other arrays or objects, each element in the array is passed into this function with
 * the rowIndex incremented for each element.
 *
 * If the value is an array containing only scalar values, those values are joined together and inserted into the data array as
 * a single value.
 *
 * If the value is a scalar, the value is inserted directly into the data array.
 */
function parseData_(headers, data, path, rowIndex, value, query, options, includeFunc) {
  var dataInserted = false;

  if (isObject_(value)) {
    for (var key in value) {
      if (parseData_(headers, data, path + "/" + key, rowIndex, value[key], query, options, includeFunc)) {
        dataInserted = true;
      }
    }
  } else if (Array.isArray(value) && isObjectArray_(value)) {
    for (var i = 0; i < value.length; i++) {
      if (parseData_(headers, data, path, rowIndex, value[i], query, options, includeFunc)) {
        dataInserted = true;
        rowIndex++;
      }
    }
  } else if (!includeFunc || includeFunc(query, path, options)) {
    // Handle arrays containing only scalar values
    if (Array.isArray(value)) {
      value = value.join();
    }

    // Insert new row if one doesn't already exist
    if (!data[rowIndex]) {
      data[rowIndex] = new Array();
    }

    // Add a new header if one doesn't exist
    if (!headers[path] && headers[path] != 0) {
      headers[path] = Object.keys(headers).length;
    }

    // Insert the data
    data[rowIndex][headers[path]] = value;
    dataInserted = true;
  }

  return dataInserted;
}
/**
 * Parses the headers array and inserts it into the first row of the data array.
 */
function parseHeaders_(headers, data) {
  data[0] = new Array();

  for (var key in headers) {
    data[0][headers[key]] = key;
  }
}
/**
 * Applies the transform function for each element in the data array, going through each column of each row.
 */
function transformData_(data, options, transformFunc) {
  for (var i = 0; i < data.length; i++) {
    for (var j = 0; j < data[i].length; j++) {
      transformFunc(data, i, j, options);
    }
  }
}

/**
 * Returns true if the given test value is an object; false otherwise.
 */
function isObject_(test) {
  return Object.prototype.toString.call(test) === '[object Object]';
}

/**
 * Returns true if the given test value is an array containing at least one object; false otherwise.
 */
function isObjectArray_(test) {
  for (var i = 0; i < test.length; i++) {
    if (isObject_(test[i])) {
      return true;
    }
  }

  return false;
}
/**
 * Returns true if the given query applies to the given path.
 */
function includeXPath_(query, path, options) {
  if (!query) {
    return true;
  } else if (Array.isArray(query)) {
    for (var i = 0; i < query.length; i++) {
      if (applyXPathRule_(query[i], path, options)) {
        return true;
      }
    }
  } else {
    return applyXPathRule_(query, path, options);
  }

  return false;
}

/**
 * Returns true if the rule applies to the given path.
 */
function applyXPathRule_(rule, path, options) {
  return path.indexOf(rule) == 0;
}
/**
 * By default, this function transforms the value at the given row & column so it looks more like a normal data import. Specifically:
 *
 *   - Data from parent JSON elements gets inherited to their child elements, so rows representing child elements contain the values
 *     of the rows representing their parent elements.
 *   - Values longer than 256 characters get truncated.
 *   - Values in row 0 (headers) have slashes converted to spaces, common prefixes removed and the resulting text converted to title
 *     case.
 *
 * To change this behavior, pass in one of these values in the options parameter:
 *
 *    noInherit:     Don't inherit values from parent elements
 *    noTruncate:    Don't truncate values
 *    rawHeaders:    Don't prettify headers
 *    debugLocation: Prepend each value with the row & column it belongs in
 */
function defaultTransform_(data, row, column, options) {
  if (!data[row][column]) {
    if (row < 2 || hasOption_(options, "noInherit")) {
      data[row][column] = "";
    } else {
      data[row][column] = data[row-1][column];
    }
  }

  if (!hasOption_(options, "rawHeaders") && row == 0) {
    if (column == 0 && data[row].length > 1) {
      removeCommonPrefixes_(data, row);
    }

    data[row][column] = toTitleCase_(data[row][column].toString().replace(/[\/\_]/g, " "));
  }

  if (!hasOption_(options, "noTruncate") && data[row][column]) {
    data[row][column] = data[row][column].toString().substr(0, 256);
  }

  if (hasOption_(options, "debugLocation")) {
    data[row][column] = "[" + row + "," + column + "]" + data[row][column];
  }
}
/**
 * If all the values in the given row share the same prefix, remove that prefix.
 */
function removeCommonPrefixes_(data, row) {
  var matchIndex = data[row][0].length;

  for (var i = 1; i < data[row].length; i++) {
    matchIndex = findEqualityEndpoint_(data[row][i-1], data[row][i], matchIndex);

    if (matchIndex == 0) {
      return;
    }
  }

  for (var i = 0; i < data[row].length; i++) {
    data[row][i] = data[row][i].substring(matchIndex, data[row][i].length);
  }
}

/**
 * Locates the index where the two string values stop being equal, stopping automatically at the stopAt index.
 */
function findEqualityEndpoint_(string1, string2, stopAt) {
  if (!string1 || !string2) {
    return -1;
  }

  var maxEndpoint = Math.min(stopAt, string1.length, string2.length);

  for (var i = 0; i < maxEndpoint; i++) {
    if (string1.charAt(i) != string2.charAt(i)) {
      return i;
    }
  }

  return maxEndpoint;
}
/**
 * Converts the text to title case.
 */
function toTitleCase_(text) {
  if (text == null) {
    return null;
  }

  return text.replace(/\w\S*/g, function(word) { return word.charAt(0).toUpperCase() + word.substr(1).toLowerCase(); });
}

/**
 * Returns true if the given set of options contains the given option.
 */
function hasOption_(options, option) {
  return options && options.indexOf(option) >= 0;
}
Hello.
Pre-warning: I don't really know what I'm doing; I'm just copy-pasting snippets and using the result with my Excel/Sheets wisdom.
I set up a test sheet with this script, and it worked wonderfully. I could get API data and organize it according to my needs.
I then created a duplicate and began the proper work. Everything worked as wished.
But this morning when opening the project file, the cells with the JSON commands remain in a "Loading..." state ("Error Loading Data" when hovering above them).
I just double-checked the old test sheet, and it still works there. I can even copy and paste pages from the no-longer-loading sheet into the old test one, and they load successfully there.
I didn't really change anything in the Apps Script; when I open it via Tools I can still see the pasted code. I only worked with sheet commands to shift the API-generated data around to my needs. So why might it have stopped working? Any hints?
I keep getting the following error 75% of the time (the rest of the time it works great):
"Error SyntaxError: Unexpected token: < (line 132)".
I believe this is because the url I'm fetching goes to an html page for half a second before continuing to the JSON. How can I account for this?
Thanks for the script!
Now the script parses the JSON, and the result is placed in the cells below the formula ↧.
What do I need to do so that the result is placed to the right of the formula's cell ↦?
Try like this =ImportJSON("http://date.jsontest.com", "/date", "noHeaders")
Regarding "Error SyntaxError: Unexpected token: < ...":
This error is generated when the string returned by the site is HTML instead of JSON. I have encountered this error under the following conditions:
- An error page was returned by the site instead of a JSON string. For example, "Forbidden" when authentication was required and none was given.
- The path was not found and a 404 was generated.
- The JSON is wrapped in HTML for some reason.
- A redirect was used, such that an HTML "Redirecting..." message was displayed before being forwarded to the JSON page.
To troubleshoot your specific issue, I suggest using a tool like Postman to first verify that what the ImportJSON function is getting as input from the site is actually JSON.
Hope that helps,
Susanne
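Building on the troubleshooting advice above, a quick in-script check can also catch HTML responses before JSON.parse throws. The helper below is a sketch of my own, not part of ImportJSON: JSON documents start with "{" or "[", while HTML error and redirect pages start with "<" (e.g. <!DOCTYPE html>).

```javascript
// Debugging sketch for the "Unexpected token: <" error (my own helper,
// not part of ImportJSON): check whether a response body looks like JSON
// before handing it to JSON.parse.
function looksLikeJson(text) {
  var first = (text || "").toString().trim().charAt(0);
  return first === "{" || first === "[";
}
```

Inside ImportJSONAdvanced you could then log the first couple hundred characters of `jsondata.getContentText()` when this check fails, which usually reveals the error page or redirect that the site actually returned.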
Hi, here's my modified version https://gist.github.com/allenyllee/c764c86ed722417948fc256b7a5077c4
Changelog:
- Add the ability to query array elements by using xpath like "/array[n]/member" where "n" is array index
- Fixed issue: when the path and the rule partially matched but were not under the same xpath, it would still print that value under that xpath.
For example: when path = /data/asset_longname and rule = /data/asset, it would print both /data/asset_longname and /data/asset rather than just /data/asset.
Solution: check the first remaining character; if it's empty "" or a slash "/", the function should return true. The slash "/" means that there are members under the same xpath, so do not remove it.
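The boundary check described in that fix can be sketched as follows. This is my own illustration of the idea (the name `applyXPathRuleFixed_` is hypothetical, not from the fork): after the prefix matches, the next character in the path must be "" (exact match) or "/" (a child element).

```javascript
// Boundary-aware version of applyXPathRule_ (illustrative sketch of the
// fix described above; name and exact code are my own assumptions).
function applyXPathRuleFixed_(rule, path) {
  if (path.indexOf(rule) !== 0) {
    return false;
  }
  // The character right after the matched prefix must end the path or
  // start a child segment; otherwise "/data/asset" would wrongly match
  // "/data/asset_longname".
  var next = path.charAt(rule.length);
  return next === "" || next === "/";
}
```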
THIS SHOULD BE THE ANSWER!!! WORKS WITH ARRAYS!
Hi there! This is super helpful and works very well in Google Sheets. I had a noob question that I am not able to figure out; I tried googling it but couldn't find an answer. I am using this with Google Sheets to pull in data from Keepa. By default this truncates to 256 chars, and I am wondering if there is a way to increase the limit to a custom number, e.g. 1000 or 1200 chars? If I use noTruncate, it brings in more than 50K chars for a particular column that I need, so I cannot use noTruncate. I need to get data for particular columns up to 1000 chars and am not able to figure out how to set the limit to 1000.
I don't know coding so piecing things together slowly and appreciate any help.
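Since ImportJSONAdvanced (defined above) accepts a custom transform function, one way to get a custom limit is a wrapper that truncates at 1000 instead of 256. This is a hedged sketch: `MAX_LEN` and the name `ImportJSONTruncated` are my own inventions, and note that this simple transform replaces defaultTransform_ entirely, so header prettifying and parent-value inheritance are lost (alternatively, copy defaultTransform_ into a new function and change the 256 there).

```javascript
// Sketch: custom truncation length via ImportJSONAdvanced's transformFunc.
// MAX_LEN and ImportJSONTruncated are illustrative names, not part of the gist.
var MAX_LEN = 1000;

function truncateTransform_(data, row, column, options) {
  if (data[row][column]) {
    data[row][column] = data[row][column].toString().substr(0, MAX_LEN);
  }
}

// Custom function callable from a cell: =ImportJSONTruncated("http://...", "", "")
// Relies on ImportJSONAdvanced and includeXPath_ from the gist above.
function ImportJSONTruncated(url, query, options) {
  return ImportJSONAdvanced(url, query, options, includeXPath_, truncateTransform_);
}
```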
Very useful! Thanks!
Greetings!
Guys, help me figure this out, I could not find information...
When the script is run, it returns the data in one line; it looks like this (I applied the TRANSPOSE formula to make it easier to view, and marked where the columns should end):
With a similar query, the data is displayed normally, as it should be, in columns and rows:
The data comes in this form:
I roughly understand what the problem might be, but unfortunately I cannot solve it; I do not have enough knowledge. Maybe someone has already encountered a similar problem and knows its solution. I'm asking for help...
Thanks for sharing this, @paulgambill! I found the following issue with the script:
The JSON
[
{ "id": 1, "items": [ { "itemId": 11 }, { "itemId": 12 } ] },
{ "id": 2, "items": [ { "itemId": 21 }, { "itemId": 22 } ] }
]
is imported as
| /id | /items/itemId |
| --- | --- |
| 1 | 11 |
| 2 | 21 |
| 2 | 22 |
with itemId 12 missing.
I'm posting this here in case anyone has time to investigate this, especially those requiring the script to work reliably for their use case. Or maybe there's already a fork where this has been fixed, which could be pulled in to this gist.
The same issue is also present in @allenyllee's fork.
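For anyone digging into this report: the loss can be reproduced by re-running parseData_'s row logic with the query/options/include plumbing stripped out (the code below is my own simplified copy, not a patch). The second top-level object is handed a rowIndex already occupied by the first object's second child row, so its /id and first /items/itemId land on that row and overwrite itemId 12.

```javascript
// Simplified copy of parseData_'s row bookkeeping (plumbing removed)
// to show where itemId 12 goes.
function isObjectDemo(test) {
  return Object.prototype.toString.call(test) === '[object Object]';
}

function isObjectArrayDemo(arr) {
  for (var i = 0; i < arr.length; i++) {
    if (isObjectDemo(arr[i])) { return true; }
  }
  return false;
}

function parseDataDemo(headers, data, path, rowIndex, value) {
  var dataInserted = false;
  if (isObjectDemo(value)) {
    for (var key in value) {
      if (parseDataDemo(headers, data, path + "/" + key, rowIndex, value[key])) {
        dataInserted = true;
      }
    }
  } else if (Array.isArray(value) && isObjectArrayDemo(value)) {
    for (var i = 0; i < value.length; i++) {
      if (parseDataDemo(headers, data, path, rowIndex, value[i])) {
        dataInserted = true;
        rowIndex++;
      }
    }
  } else {
    if (Array.isArray(value)) { value = value.join(); }
    if (!data[rowIndex]) { data[rowIndex] = []; }
    if (!(path in headers)) { headers[path] = Object.keys(headers).length; }
    data[rowIndex][headers[path]] = value; // <-- overwrites on rowIndex collision
    dataInserted = true;
  }
  return dataInserted;
}

var sample = [
  { "id": 1, "items": [ { "itemId": 11 }, { "itemId": 12 } ] },
  { "id": 2, "items": [ { "itemId": 21 }, { "itemId": 22 } ] }
];
var headers = {}, data = [];
parseDataDemo(headers, data, "", 1, sample);
// Row 2 briefly holds itemId 12, then {id: 2} reuses rowIndex 2 and its
// first child writes 21 over it: rows end up [1,11], [2,21], [<empty>,22].
```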
Hello. I'm trying to build Binance wallet stats, but I need to refresh many API requests. Is there any possibility to do that for the whole sheet?
I love this script, thank you so much for posting it! I have run into an issue though where only part of the data is being populated. There is a field in the data that can contain multiple options, but those options are not being separated out into separate rows.
"outcomes":[{"name":"Buffalo Bills","price":2.12},{"name":"Kansas City Chiefs","price":1.79}]
Any ideas how to get both of these outcomes to populate in separate rows?
Thanks!!!
Same for me since yesterday, and it was solved by opening the Google Sheet from Google Drive instead of clicking a saved bookmark. It may sound a bit simple, but I found everything working fine when opening my sheet from Apps Script > Executions > Open container (three vertical dots), realized the URL was different, tried opening it from Google Drive, and it works perfectly now.
BTW, in Executions I have every execution with status Completed.
I had the same issue last evening, but when I reopened my sheet this morning it worked again.
I have no technical answer, but it seems to me there were some connection issues yesterday.
If you look above, I had the same trouble for a day or two last year, and it resolved without my doing anything.
Same issue here, script has been working for months and now it just stopped working. I have tried what @frettor suggested without success.
Guys, it started working again by itself without any adjustments to the code!
Wait a bit and yours should work again too.
Same problem here!
Following up on my earlier comment: on Friday 11/3 the issue resolved itself, and it has now been working fine for several days. I did nothing.
Hi there, is there a way to insert "token" header? The API I am trying to consume is not public, therefore I need to insert a token key.
Awesome piece of code, works perfectly. But, when I'm trying to set a trigger I get this a message saying "This app is blocked. This app tried to access sensitive info in your Google Account. To keep your account safe, Google blocked this process". What's going on here? Any suggestions gratefully accepted. thanks.
Hi there, testing this great piece of code. Just experiencing one issue, my JSON format has a dynamic path, for example {product-1}, {product-2}... I can't find a way of implementing some kind of "contains" or "starts with" or whatever so it correctly recognizes each JSON entry, for example {product-*}
Any suggestions?
Thanks!!
enhancement: read list of URLs from cell range
/**
 * Read list of URLs from range
 * Ex1: ImportJSON_RASG("sheet9!C2:C7")
 * Ex2: ImportJSON_RASG("myNamedRange")
 */
function ImportJSON_RASG(cellRange, query, parseOptions) {
  var arr = [];
  SpreadsheetApp.getActiveSheet().getRange(1, 1).activate();
  const arrURL = SpreadsheetApp.getActiveSheet().getRange(cellRange).getValues().join().split(',');
  for (var url of arrURL) {
    try { arr.push(ImportJSON(url, query, parseOptions)); }
    catch { SpreadsheetApp.getActiveSpreadsheet().toast(url, 'Error fetching data', 3); }
  }
  return arr.flat();
}
Hi, how do I extract just one detail from the JSON?
Like, I have this link: https://api-mainnet.magiceden.dev/v2/collections/cowboys/stats/
And when using the import_json_appscripts.js, it shows like this:
But I just want to have this information:
How do I do that?
Sorry if this is a stupid question, but I'm new to this.
Thanks!
Hi there, sorry if this is a total noob question but the script is working great on the whole but I am having issues passing the noHeaders option. I am running the function against a cell ref (which contains the full URL) and when I pass no options it returns the data fine but when adding the option I get a reference error. Below is the cell content. Any help gratefully received!!
=ImportJSON(G4,"noHeaders")
function ImportJSONAdvanced(url, query, options, includeFunc, transformFunc) {
  var jsondata = UrlFetchApp.fetch(url);
  var object = JSON.parse(jsondata.getContentText());
(https://k.top4top.io/p_2423x6i9b1.png)
How can I fix that?
Is there any way to prevent the script from automatically updating the data? That is the disadvantage of the whole =IMPORTXXX functions in Google Sheets, they update themselves every hour.
How to adjust the script so that it starts only manually?
Can we use this to access an API with an API key?
Is there a way to use a google drive file? I tried to create a share url and use the url but it errors out.
Error
SyntaxError: Unexpected token '<', " <!DOCTYPE "... is not valid JSON (line 132).
I have the same question. Can we import from a Google drive file?
Is there any way to prevent the script from automatically updating the data? That is the disadvantage of the whole =IMPORTXXX functions in Google Sheets, they update themselves every hour. How to adjust the script so that it starts only manually?
Hello, I want exactly the opposite: for the data to update automatically, without having to do it manually.
I'm trying to get data from a chaturbate.com stats API that shows me the number of tokens (tips) each profile has, so I need it to update automatically. I have been reviewing the code but I don't understand much of it; I looked at https://developers.google.com/apps-script/reference/spreadsheet as Mr. Paul Gambill indicated, but I can't make it update the information automatically.
Yesterday I used the following formula in cell A1:
=ImportJSON("https://chaturbate.com/statsapi/?username=usernamemodel&token=********token*********")
But since yesterday the amount of tokens in the Google sheet has not updated.
At this moment the account has 13153 tokens, but the Google sheet only shows 13039.
I have used this in Google Sheets, yet the imported value does not refresh. I changed the JSON file on the server, but I have literally no idea how to refresh the value in the cell (other than deleting it and rewriting it); refreshing the page doesn't do it.
I have the same question, how do you do to automatically update the data?
Did you ever figure this out? I have the same issue, and nothing I've tried seems to trigger it.
To refresh the link programmatically, you have to clear the contents of the cell and reset the value of the cell as the link. You can set this to kick off on open, or create a menu item and run it manually.
Example:
function updateLink() {
  var ss = SpreadsheetApp.getActive();
  var sheetData = ss.getSheetByName('SHEET_NAME');
  var sourceValue = '=ImportJSON("URL_TO_JSON")';
  var targetRange = sheetData.getRange('A1');
  targetRange.clearContent();
  targetRange.setValue(sourceValue);
}
Hi @IronKnight36, how do I do the "create a menu item and run it manually" part? Can you please show me?
I would also be interested in learning how to run this in a loop, so that the file doesn't need to be opened but is "reloaded and updated" from time to time. Is that possible?
If it isn't, then I'd be interested to know how to create a menu and run it manually.
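A menu item that runs `updateLink` (from the comment above) manually can be sketched as below. This is a hedged example, not the gist author's code: the menu and item names are my own, and since the gist's sample already defines an onOpen() at the top, you would need to merge this with it or replace it rather than define both.

```javascript
// Sketch: custom menu that reruns the refresh manually. Assumes the
// updateLink() function from the comment above exists in the project.
// Merge with (or replace) the sample onOpen() at the top of the gist.
function onOpen() {
  SpreadsheetApp.getUi()
      .createMenu('ImportJSON')            // menu name is arbitrary
      .addItem('Refresh data', 'updateLink')
      .addToUi();
}
```

For the "loop" variant, a time-driven trigger (Apps Script editor > Triggers) pointed at `updateLink` runs it on a schedule without opening the file.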
Hi, any idea on how to update the cell contents automatically? Only way I can do it is to delete the contents of the cell and enter the formula again. Any way to set a trigger or something similar?
Hey @ShyamBhagat2004 check this answer: https://gist.github.com/paulgambill/cacd19da95a1421d3164?permalink_comment_id=3449521#gistcomment-3449521
@renehamburger I'm having the same issue, have you found any solution?
If anyone knows a solution, please advise.
Thank you very much.
The issue is fixed with this one, https://github.com/qeet/IMPORTJSONAPI
Thanks
Hi there, is there a way to insert "token" header? The API I am trying to consume is not public, therefore I need to insert a token key.
Yes. The quick and dirty way is to add it yourself around line 131:
const headers = {
  'Authorization': 'Bearer PUTYOURTOKENHERE'
};
var jsondata = UrlFetchApp.fetch(url, { headers });
The long way is to add another param to the function and commit it here.
Hi,
I'm trying to load JSON data. However, the "num" array imported by the ImportJSON() function is shorter than expected, at 12 items.
The json response should be (web browser):
{"startTime":1676505600000,"endTime":1679356800000,"period":"1d","data":[{"etype":"svcusg","name":"fond_carto_web","num":[["1676505600000","4452"],["1676592000000","2418"],["1676678400000","1086"],["1676764800000","2116"],["1676851200000","3650"],["1676937600000","3389"],["1677024000000","3590"],["1677110400000","3158"],["1677196800000","2261"],["1677283200000","1045"],["1677369600000","2278"],["1677456000000","4094"],["1677542400000","4502"],["1677628800000","4367"],["1677715200000","5402"],["1677801600000","2430"],["1677888000000","2396"],["1677974400000","1567"],["1678060800000","3208"],["1678147200000","4047"],["1678233600000","6532"],["1678320000000","7101"],["1678406400000","3167"],["1678492800000","1530"],["1678579200000","1891"],["1678665600000","4254"],["1678752000000","6002"],["1678838400000","9666"],["1678924800000","6765"],["1679011200000","3799"],["1679097600000","1068"],["1679184000000","2576"],["1679270400000","7471"]]}]}
But I get this inline value in the "Data num" field:
1676505600000,4452,1676592000000,2418,1676678400000,1086,1676764800000,2116,1676851200000,3650,1676937600000,3389,1677024000000,3590,1677110400000,3158,1677196800000,2261,1677283200000,1045,1677369600000,2278,1677456000000,4094,1677542400000,4502,167762880
Could you help me please?
I added cache to it:
https://gist.github.com/GoulartNogueira/92adc400e4d33b04f4d001022d7179e6
I changed the script to the version with cache management, but I have an issue:
Error
Exception: Argument too large: key (line 140)
For example, the URL used:
https://dtsi-sgt.maps.arcgis.com/sharing/rest/portals/TZcrgU6CIbqWt9Qv/usage?f=json&startTime=1672617600000&endTime=1679616000000&period=1d&vars=num&groupby=name&etype=svcusg&name=fond_imagerie&stype=tiles&token=ak7uYXmez2EG5UuSZElL0f-yq3L0pEfJ0R_KPB-Sq7mEDjUfNWWY9M0fpmbEgiiRpR9U97046TRADCVIlQuNAw_PIS298B9eBwyFjBcoiChkPYyFNfX9ePt5vv2_TBM-7WbZph-qrvCUnrimlSqLFspbB-CJqSOcz2ov4PEZhkucHc10DslwHmVkwP7s66w33VOstfbMHFMZIbSjxR3tjgVA..
The expected json response:
{"startTime":1672617600000,"endTime":1679616000000,"period":"1d","data":[{"etype":"svcusg","name":"fond_imagerie","stype":"tiles","num":[["1672617600000","396023"],["1672704000000","428838"],["1672790400000","460974"],["1672876800000","430197"],["1672963200000","306874"],["1673049600000","135516"],["1673136000000","261488"],["1673222400000","614081"],["1673308800000","584964"],["1673395200000","610408"],["1673481600000","561258"],["1673568000000","354553"],["1673654400000","103260"],["1673740800000","336553"],["1673827200000","613818"],["1673913600000","583007"],["1674000000000","538678"],["1674086400000","554631"],["1674172800000","348267"],["1674259200000","153781"],["1674345600000","363059"],["1674432000000","605328"],["1674518400000","621936"],["1674604800000","576575"],["1674691200000","573488"],["1674777600000","513266"],["1674864000000","151126"],["1674950400000","362445"],["1675036800000","569111"],["1675123200000","548183"],["1675209600000","530607"],["1675296000000","569066"],["1675382400000","426739"],["1675468800000","134728"],["1675555200000","330876"],["1675641600000","702489"],["1675728000000","639878"],["1675814400000","653730"],["1675900800000","565299"],["1675987200000","403625"],["1676073600000","139082"],["1676160000000","307145"],["1676246400000","632137"],["1676332800000","670219"],["1676419200000","587963"],["1676505600000","596131"],["1676592000000","401583"],["1676678400000","202737"],["1676764800000","388278"],["1676851200000","660619"],["1676937600000","698606"],["1677024000000","592860"],["1677110400000","625674"],["1677196800000","371479"],["1677283200000","157656"],["1677369600000","341120"],["1677456000000","588309"],["1677542400000","635246"],["1677628800000","553207"],["1677715200000","549798"],["1677801600000","328714"],["1677888000000","130668"],["1677974400000","309346"],["1678060800000","664441"],["1678147200000","600165"],["1678233600000","587164"],["1678320000000","544190"],["1678406400000","365945"],["1678492800000","151963"],
["1678579200000","362521"],["1678665600000","617546"],["1678752000000","635451"],["1678838400000","618452"],["1678924800000","686794"],["1679011200000","480834"],["1679097600000","151097"],["1679184000000","354555"],["1679270400000","641218"],["1679356800000","663374"],["1679443200000","601240"],["1679529600000","508374"]]}]}
Hello all,
@paulgambill : Many thanks for posting the script.
What do I do if an error is thrown saying "Result is too large"? Is there a way to truncate the number of rows of data?
Any assistance would be appreciated.
Thanks and regards,
Niranjan
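One possible approach to the "Result is too large" error is to cap the number of rows before handing the parsed 2D array back to the sheet. This is only a sketch: `truncateRows` is a hypothetical helper, and in practice you would apply it to whatever `parseJSONObject_` returns inside your own wrapper around ImportJSON.

```javascript
// Hypothetical helper: cap the 2D array an ImportJSON-style parser returns.
// Keeps the header row plus at most (maxRows - 1) data rows.
function truncateRows(values, maxRows) {
  if (!Array.isArray(values) || values.length <= maxRows) {
    return values; // nothing to trim
  }
  return values.slice(0, maxRows);
}
```

Inside a wrapper you could then do `return truncateRows(parseJSONObject_(object, query, options, includeFunc, transformFunc), 5000);` to stay under the size limit.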
Hi,
Changed the function to accommodate headers:
function ImportJSON(url, headerAccept, headerContentType, headerAPIKey, query, options) {
return ImportJSONAdvanced(url, headerAccept, headerContentType, headerAPIKey, query, options, includeXPath_, defaultTransform_);
}
function ImportJSONAdvanced(url, headerAccept, headerContentType, headerAPIKey, query, options, includeFunc, transformFunc) {
var jsondata = UrlFetchApp.fetch(url, {
method : 'get',
headers : {
'accept' : headerAccept,
'content-type' : headerContentType,
'x-smarttoken' : headerAPIKey
}
});
var object = JSON.parse(jsondata.getContentText());
return parseJSONObject_(object, query, options, includeFunc, transformFunc);
}
However, I am getting an error when calling it from Google Sheets:
#REF!: Reference does not exist
Here is the call I am making: =ImportJSON("https://api.smartrecruiters.com/user-api/v201804/users?limit=100","application/json","application/json", "xxxxxxxxxxxxxxxxxxxx","users?limit=100","noTruncate")
Any idea?
https://gist.github.com/endersonmenezes/5db33503bf2499b33d914966f562b0fa
If anybody wants to grab a specific key from the JSON without other keys that share the same prefix being returned as well, you can change the function on Line 288:
/**
* Returns true if the rule applies to the given path.
*/
function applyXPathRule_(rule, path, options) {
return path === rule;
}
e.g. in my case, the JSON contains two keys:
- lastNav
- lastNavDate
If I set the query to /lastNav, the default rule returns the data for both.
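The difference between the gist's default behaviour and the one-line change above can be sketched like this (both function names here are illustrative; only the exact-match body matches the changed `applyXPathRule_`):

```javascript
// Default behaviour: a rule matches any path that starts with it,
// so /lastNav also matches /lastNavDate.
function applyXPathRulePrefix(rule, path) {
  return path.indexOf(rule) === 0;
}

// The commenter's change: only the exact path matches.
function applyXPathRuleExact(rule, path) {
  return path === rule;
}
```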
I'm trying to import Runescape API data, for example:
https://services.runescape.com/m=itemdb_rs/api/catalogue/detail.json?item=44844
I can throw it in a browser and see the json without a problem, but the script reports:
SyntaxError: Unexpected end of JSON input (line 128).
All my numbers are being converted to text when importing JSON. Is there something I'm doing wrong?
For anyone using a Google Drive link and getting the following error:
SyntaxError: Unexpected token '<', "<!DOCTYPE "... is not valid JSON (line 143).
What I experienced is that I had copied the share link to my JSON file, but that page is actually HTML.
The workaround is to pass ImportJSON the direct download link instead.
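A sketch of that workaround, assuming the usual `https://drive.google.com/file/d/<ID>/view` share-link shape (`driveDirectLink` is a hypothetical helper, not part of the gist):

```javascript
// Hypothetical helper: turn a Drive "share" URL into a direct-download URL
// so UrlFetchApp receives the raw JSON instead of the HTML viewer page.
function driveDirectLink(shareUrl) {
  var m = shareUrl.match(/\/file\/d\/([^/]+)/);
  if (!m) return shareUrl; // not a recognised share link; leave untouched
  return 'https://drive.google.com/uc?export=download&id=' + m[1];
}
```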
My solution:
function ImportJSONPOST(url, query, options, headers, data, contentType) {
if (headers == undefined) {
headers = "";
}
if (data == undefined) {
data = "";
}
if (contentType == undefined) {
contentType = 'application/json';
}
Logger.log('The header is: ' + headers)
return ImportJSONPOSTAdvanced(url, query, options, headers, data, contentType, includeXPath_, defaultTransform_);
}
function ImportJSONPOSTAdvanced(url, query, options, headers, data, contentType, includeFunc, transformFunc) {
//Logger.log('Received header: ' + headers)
//Logger.log('Received data body: ' + data)
var requestOptions = {
'method': 'post'
};
// Only parse the headers when they are a non-empty JSON string; JSON.parse('') throws.
if (headers !== undefined && headers !== "") {
requestOptions.headers = JSON.parse(headers);
};
Logger.log('The contentType is: ' + contentType)
var payload;
if (contentType !== 'application/x-www-form-urlencoded') {
if (data !== undefined && data !== "") {
payload = JSON.parse(data);
requestOptions.payload = JSON.stringify(payload);
}
}
else {
if (data !== undefined && data !== "") {
payload = data;
Logger.log('Entering the urlencoded branch: ' + payload);
requestOptions.payload = payload;
}
};
//Logger.log('requestOptions: ' + JSON.stringify(requestOptions, null, 2));
var jsondata = UrlFetchApp.fetch(url, requestOptions);
//Logger.log('jsondata: ' + jsondata);
var object = JSON.parse(jsondata.getContentText());
return parseJSONObject_(object, query, options, includeFunc, transformFunc);
}
In this case, my API requires a POST with a header and also a payload:
Header (D2): {"Content-Type": "application/json", "Cookie": "JSESSIONID=99999999999999999"}
Body (E2): {"T2":false,"T1":false,"T0":true,"Content-Type":"application/json"}
Content-Type (F2): "Content-Type": "application/json"
=ImportJSONPOST("https://sarasa.com.ar","","",D2,E2,F2)
Hello, I really like this code, and it's really useful. However, I can't find the place in the API docs where it tells you what kind of info you can get. All I've found is this https://www.thebluealliance.com/apidocs/v3 link, and the information there isn't much help. Where can I find the codes for information such as average points, autonomous points, etc., which I'd heard you can get from the TBA API?
Thanks for sharing this, @paulgambill! I found the following issue with the script:
The JSON [ { "id": 1, "items": [ { "itemId": 11 }, { "itemId": 12 } ] }, { "id": 2, "items": [ { "itemId": 21 }, { "itemId": 22 } ] } ] is imported as
/id /items/itemId
1 11
2 21
2 22
with itemId 12 missing.
I'm posting this here in case anyone has time to investigate this, especially those requiring the script to work reliably for their use case. Or maybe there's already a fork where this has been fixed, which could be pulled in to this gist.
The same issue is also present in @allenyllee's fork.
@renehamburger I'm having the same issue, have you found any solution? If anyone knows a solution, please advise. Thank you very much.
I declare a global variable initialized to 1, and in the method parseData_ I replace the uses of the parameter rowIndex with myRowIndex (except in the signature):
var myRowIndex = 1;
function parseData_(headers, data, path, rowIndex, value, query, options, includeFunc) {
var dataInserted = false;
if (isObject_(value)) {
for (key in value) {
if (parseData_(headers, data, path + "/" + key, myRowIndex, value[key], query, options, includeFunc)) {
dataInserted = true;
}
}
} else if (Array.isArray(value) && isObjectArray_(value)) {
for (var i = 0; i < value.length; i++) {
if (parseData_(headers, data, path, myRowIndex, value[i], query, options, includeFunc)) {
dataInserted = true;
myRowIndex++;
}
}
--myRowIndex;
} else if (!includeFunc || includeFunc(query, path, options)) {
// Handle arrays containing only scalar values
if (Array.isArray(value)) {
value = value.join();
}
// Insert new row if one doesn't already exist
if (!data[myRowIndex]) {
data[myRowIndex] = new Array();
}
// Add a new header if one doesn't exist
if (!headers[path] && headers[path] != 0) {
headers[path] = Object.keys(headers).length;
}
// Insert the data
data[myRowIndex][headers[path]] = value;
dataInserted = true;
}
return dataInserted;
}
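A quick way to sanity-check what the fix is aiming for: with the JSON from the earlier report, every element of a nested object array should get its own row, with the parent id repeated. This minimal, self-contained sketch (`flattenItems` is a hypothetical illustration, not the gist's parser, and assumes the exact two-level `id`/`items` shape) produces that layout:

```javascript
// Flatten records of the shape { id, items: [{ itemId }, ...] } into rows,
// repeating the parent id for each nested item.
function flattenItems(records) {
  var rows = [];
  records.forEach(function (rec) {
    rec.items.forEach(function (item) {
      rows.push([rec.id, item.itemId]);
    });
  });
  return rows;
}
```

For the reported JSON this yields [1, 11], [1, 12], [2, 21], [2, 22], i.e. itemId 12 is no longer dropped.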
Hi,
Thanks for the great script.
Just an issue with columns containing numbers:
the JSON output in Google Sheets is typed as strings instead of numbers.
Therefore we can't perform a QUERY filter by max value, for instance:
Col15 = (string type)
Stringvalues
137.06
137.76
138.59
141.91
143.49
143.42
=QUERY(IMPORTJSON("myURL"),
"SELECT MAX(Col15) label MAX(Col15) ''",0)
How would you make the Col15 values as numbers?
Thanks again!
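One way to address the question above is to coerce numeric-looking strings in the parsed 2D array to real numbers before the values reach QUERY. This is a sketch only: `coerceNumbers` is a hypothetical post-processing helper you would apply to ImportJSON's result, not part of the gist.

```javascript
// Convert numeric-looking string cells in a 2D array to real numbers,
// leaving headers and genuinely non-numeric cells untouched.
function coerceNumbers(values) {
  return values.map(function (row) {
    return row.map(function (cell) {
      return (typeof cell === 'string' && cell !== '' && !isNaN(cell))
        ? Number(cell)
        : cell;
    });
  });
}
```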
Hi @paulgambill
I'm getting this error when trying to use a Unix timestamp conversion in the JSON URL.
For example with this
=((NOW()-DATE(1970,1,1)+time(0,0,0))*86400)-(60*60)
Error
This function is not allowed to reference a cell with NOW(), RAND(), RANDARRAY(), or RANDBETWEEN()
Can you enable the use with NOW(), RAND(), RANDARRAY(), or RANDBETWEEN() ?
Thanks again!
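That restriction comes from Google Sheets itself: custom functions are not allowed to reference cells containing volatile functions like NOW(), so the script author cannot simply enable it. A common workaround is to compute the timestamp inside the script instead of in a cell. A sketch, assuming the formula above (current Unix time minus one hour); `unixTimestampOffset` is a hypothetical helper:

```javascript
// Compute a Unix timestamp (seconds) inside the script rather than via a
// NOW()-based cell, optionally shifted back by offsetSeconds.
function unixTimestampOffset(offsetSeconds) {
  return Math.floor(Date.now() / 1000) - (offsetSeconds || 0);
}
```

A wrapper could then build the URL itself, e.g. `url + '&ts=' + unixTimestampOffset(3600)`, and the sheet formula no longer needs NOW() at all.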
Found this also that might be of use
//Mike Steelson
let result = [];
function getAllDataJSON(url) {
  result = []; // reset between calls so repeated invocations don't accumulate
  var data;
  if (url.match(/^http/)) {
    data = JSON.parse(UrlFetchApp.fetch(url).getContentText());
  } else {
    data = JSON.parse(url); // the argument may also be a raw JSON string
  }
  getAllData(1, data, 'data'); // eval() in the original is unnecessary: data is already parsed
  return result;
}
function getAllData(niv, obj, id) {
  const regex = new RegExp('[^0-9]+'); // purely numeric keys are array indices
  for (let p in obj) {
    var newid = (regex.test(p)) ? id + '.' + p : id + '[' + p + ']';
    if (obj[p] != null) {
      if (typeof obj[p] != 'object' && typeof obj[p] != 'function') {
        result.push([niv, newid, p, obj[p]]); // scalar leaf: depth, path, key, value
      }
      if (typeof obj[p] == 'object') {
        if (obj[p].length) {
          result.push([niv, newid, p + '[0-' + (obj[p].length - 1) + ']', 'array']);
        } else {
          //result.push([niv, newid, p, 'parent']);
        }
        niv += 1;
        getAllData(niv, obj[p], newid);
        niv -= 1;
      }
    }
  }
}
From this post:
[Google App Script; how to push data from nested JSON into array then to sheets - Mike Steelson's Answer](https://stackoverflow.com/a/68826533/10789707)
Thx