/**
* Retrieves all the rows in the active spreadsheet that contain data and logs the
* values for each row.
* For more information on using the Spreadsheet API, see
* https://developers.google.com/apps-script/service_spreadsheet
*/
function readRows() {
  var sheet = SpreadsheetApp.getActiveSheet();
  var rows = sheet.getDataRange();
  var numRows = rows.getNumRows();
  var values = rows.getValues();
  for (var i = 0; i <= numRows - 1; i++) {
    var row = values[i];
    Logger.log(row);
  }
}
/**
* Adds a custom menu to the active spreadsheet, containing a single menu item
* for invoking the readRows() function specified above.
* The onOpen() function, when defined, is automatically invoked whenever the
* spreadsheet is opened.
* For more information on using the Spreadsheet API, see
* https://developers.google.com/apps-script/service_spreadsheet
*/
function onOpen() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet();
  var entries = [{
    name : "Read Data",
    functionName : "readRows"
  }];
  sheet.addMenu("Script Center Menu", entries);
}
/*====================================================================================================================================*
ImportJSON by Trevor Lohrbeer (@FastFedora)
====================================================================================================================================
Version: 1.1
Project Page: http://blog.fastfedora.com/projects/import-json
Copyright: (c) 2012 by Trevor Lohrbeer
License: GNU General Public License, version 3 (GPL-3.0)
http://www.opensource.org/licenses/gpl-3.0.html
------------------------------------------------------------------------------------------------------------------------------------
A library for importing JSON feeds into Google spreadsheets. Functions include:
     ImportJSON            For use by end users to import a JSON feed from a URL
     ImportJSONAdvanced    For use by script developers to easily extend the functionality of this library

Future enhancements may include:

   - Support for a real XPath like syntax similar to ImportXML for the query parameter
   - Support for OAuth authenticated APIs
Or feel free to write these and add on to the library yourself!
------------------------------------------------------------------------------------------------------------------------------------
Changelog:
1.1 Added support for the noHeaders option
1.0 Initial release
*====================================================================================================================================*/
/**
* Imports a JSON feed and returns the results to be inserted into a Google Spreadsheet. The JSON feed is flattened to create
* a two-dimensional array. The first row contains the headers, with each column header indicating the path to that data in
* the JSON feed. The remaining rows contain the data.
*
* By default, data gets transformed so it looks more like a normal data import. Specifically:
*
* - Data from parent JSON elements gets inherited to their child elements, so rows representing child elements contain the values
* of the rows representing their parent elements.
* - Values longer than 256 characters get truncated.
* - Headers have slashes converted to spaces, common prefixes removed and the resulting text converted to title case.
*
* To change this behavior, pass in one of these values in the options parameter:
*
* noInherit: Don't inherit values from parent elements
* noTruncate: Don't truncate values
* rawHeaders: Don't prettify headers
* noHeaders: Don't include headers, only the data
* debugLocation: Prepend each value with the row & column it belongs in
*
* For example:
*
* =ImportJSON("http://gdata.youtube.com/feeds/api/standardfeeds/most_popular?v=2&alt=json", "/feed/entry/title,/feed/entry/content",
* "noInherit,noTruncate,rawHeaders")
*
* @param {url} the URL to a public JSON feed
* @param {query} a comma-separated list of paths to import. Any path starting with one of these paths gets imported.
* @param {options} a comma-separated list of options that alter processing of the data
*
* @return a two-dimensional array containing the data, with the first row containing headers
* @customfunction
**/
function ImportJSON(url, query, options) {
  return ImportJSONAdvanced(url, query, options, includeXPath_, defaultTransform_);
}
/**
* An advanced version of ImportJSON designed to be easily extended by a script. This version cannot be called from within a
* spreadsheet.
*
* Imports a JSON feed and returns the results to be inserted into a Google Spreadsheet. The JSON feed is flattened to create
* a two-dimensional array. The first row contains the headers, with each column header indicating the path to that data in
* the JSON feed. The remaining rows contain the data.
*
* Use the include and transformation functions to determine what to include in the import and how to transform the data after it is
* imported.
*
* For example:
*
* =ImportJSON("http://gdata.youtube.com/feeds/api/standardfeeds/most_popular?v=2&alt=json",
* "/feed/entry",
* function (query, path) { return path.indexOf(query) == 0; },
* function (data, row, column) { data[row][column] = data[row][column].toString().substr(0, 100); } )
*
* In this example, the import function checks to see if the path to the data being imported starts with the query. The transform
* function takes the data and truncates it. For more robust versions of these functions, see the internal code of this library.
*
* @param {url} the URL to a public JSON feed
* @param {query} the query passed to the include function
* @param {options} a comma-separated list of options that may alter processing of the data
* @param {includeFunc} a function with the signature func(query, path, options) that returns true if the data element at the given path
* should be included or false otherwise.
* @param {transformFunc} a function with the signature func(data, row, column, options) where data is a 2-dimensional array of the data
* and row & column are the current row and column being processed. Any return value is ignored. Note that row 0
* contains the headers for the data, so test for row==0 to process headers only.
*
* @return a two-dimensional array containing the data, with the first row containing headers
**/
function ImportJSONAdvanced(url, query, options, includeFunc, transformFunc) {
  var jsondata = UrlFetchApp.fetch(url);
  var object = JSON.parse(jsondata.getContentText());
  return parseJSONObject_(object, query, options, includeFunc, transformFunc);
}
/**
* Encodes the given value to use within a URL.
*
* @param {value} the value to be encoded
*
* @return the value encoded using URL percent-encoding
*/
function URLEncode(value) {
  return encodeURIComponent(value.toString());
}
/**
* Parses a JSON object and returns a two-dimensional array containing the data of that object.
*/
function parseJSONObject_(object, query, options, includeFunc, transformFunc) {
  var headers = new Array();
  var data = new Array();
  if (query && !Array.isArray(query) && query.toString().indexOf(",") != -1) {
    query = query.toString().split(",");
  }
  if (options) {
    options = options.toString().split(",");
  }
  parseData_(headers, data, "", 1, object, query, options, includeFunc);
  parseHeaders_(headers, data);
  transformData_(data, options, transformFunc);
  return hasOption_(options, "noHeaders") ? (data.length > 1 ? data.slice(1) : new Array()) : data;
}
/**
* Parses the data contained within the given value and inserts it into the data two-dimensional array starting at the rowIndex.
* If the data is to be inserted into a new column, a new header is added to the headers array. The value can be an object,
* array or scalar value.
*
* If the value is an object, its properties are iterated through and passed back into this function with the name of each
* property extending the path. For instance, if the object contains the property "entry" and the path passed in was "/feed",
* this function is called with the value of the entry property and the path "/feed/entry".
*
* If the value is an array containing other arrays or objects, each element in the array is passed into this function with
* the rowIndex incremented for each element.
*
* If the value is an array containing only scalar values, those values are joined together and inserted into the data array as
* a single value.
*
* If the value is a scalar, the value is inserted directly into the data array.
*/
function parseData_(headers, data, path, rowIndex, value, query, options, includeFunc) {
  var dataInserted = false;
  if (isObject_(value)) {
    for (var key in value) {
      if (parseData_(headers, data, path + "/" + key, rowIndex, value[key], query, options, includeFunc)) {
        dataInserted = true;
      }
    }
  } else if (Array.isArray(value) && isObjectArray_(value)) {
    for (var i = 0; i < value.length; i++) {
      if (parseData_(headers, data, path, rowIndex, value[i], query, options, includeFunc)) {
        dataInserted = true;
        rowIndex++;
      }
    }
  } else if (!includeFunc || includeFunc(query, path, options)) {
    // Handle arrays containing only scalar values
    if (Array.isArray(value)) {
      value = value.join();
    }
    // Insert new row if one doesn't already exist
    if (!data[rowIndex]) {
      data[rowIndex] = new Array();
    }
    // Add a new header if one doesn't exist
    if (!headers[path] && headers[path] != 0) {
      headers[path] = Object.keys(headers).length;
    }
    // Insert the data
    data[rowIndex][headers[path]] = value;
    dataInserted = true;
  }
  return dataInserted;
}
/**
* Parses the headers array and inserts it into the first row of the data array.
*/
function parseHeaders_(headers, data) {
  data[0] = new Array();
  for (var key in headers) {
    data[0][headers[key]] = key;
  }
}
/**
* Applies the transform function for each element in the data array, going through each column of each row.
*/
function transformData_(data, options, transformFunc) {
  for (var i = 0; i < data.length; i++) {
    for (var j = 0; j < data[i].length; j++) {
      transformFunc(data, i, j, options);
    }
  }
}
/**
* Returns true if the given test value is an object; false otherwise.
*/
function isObject_(test) {
  return Object.prototype.toString.call(test) === '[object Object]';
}
/**
* Returns true if the given test value is an array containing at least one object; false otherwise.
*/
function isObjectArray_(test) {
  for (var i = 0; i < test.length; i++) {
    if (isObject_(test[i])) {
      return true;
    }
  }
  return false;
}
/**
* Returns true if the given query applies to the given path.
*/
function includeXPath_(query, path, options) {
  if (!query) {
    return true;
  } else if (Array.isArray(query)) {
    for (var i = 0; i < query.length; i++) {
      if (applyXPathRule_(query[i], path, options)) {
        return true;
      }
    }
  } else {
    return applyXPathRule_(query, path, options);
  }
  return false;
}
/**
* Returns true if the rule applies to the given path.
*/
function applyXPathRule_(rule, path, options) {
  return path.indexOf(rule) == 0;
}
/**
* By default, this function transforms the value at the given row & column so it looks more like a normal data import. Specifically:
*
* - Data from parent JSON elements gets inherited to their child elements, so rows representing child elements contain the values
* of the rows representing their parent elements.
* - Values longer than 256 characters get truncated.
* - Values in row 0 (headers) have slashes converted to spaces, common prefixes removed and the resulting text converted to title
* case.
*
* To change this behavior, pass in one of these values in the options parameter:
*
* noInherit: Don't inherit values from parent elements
* noTruncate: Don't truncate values
* rawHeaders: Don't prettify headers
* debugLocation: Prepend each value with the row & column it belongs in
*/
function defaultTransform_(data, row, column, options) {
  if (!data[row][column]) {
    if (row < 2 || hasOption_(options, "noInherit")) {
      data[row][column] = "";
    } else {
      data[row][column] = data[row-1][column];
    }
  }
  if (!hasOption_(options, "rawHeaders") && row == 0) {
    if (column == 0 && data[row].length > 1) {
      removeCommonPrefixes_(data, row);
    }
    data[row][column] = toTitleCase_(data[row][column].toString().replace(/[\/\_]/g, " "));
  }
  if (!hasOption_(options, "noTruncate") && data[row][column]) {
    data[row][column] = data[row][column].toString().substr(0, 256);
  }
  if (hasOption_(options, "debugLocation")) {
    data[row][column] = "[" + row + "," + column + "]" + data[row][column];
  }
}
/**
* If all the values in the given row share the same prefix, remove that prefix.
*/
function removeCommonPrefixes_(data, row) {
  var matchIndex = data[row][0].length;
  for (var i = 1; i < data[row].length; i++) {
    matchIndex = findEqualityEndpoint_(data[row][i-1], data[row][i], matchIndex);
    if (matchIndex == 0) {
      return;
    }
  }
  for (var i = 0; i < data[row].length; i++) {
    data[row][i] = data[row][i].substring(matchIndex, data[row][i].length);
  }
}
/**
* Locates the index where the two string values stop being equal, stopping automatically at the stopAt index.
*/
function findEqualityEndpoint_(string1, string2, stopAt) {
  if (!string1 || !string2) {
    return -1;
  }
  var maxEndpoint = Math.min(stopAt, string1.length, string2.length);
  for (var i = 0; i < maxEndpoint; i++) {
    if (string1.charAt(i) != string2.charAt(i)) {
      return i;
    }
  }
  return maxEndpoint;
}
/**
* Converts the text to title case.
*/
function toTitleCase_(text) {
  if (text == null) {
    return null;
  }
  return text.replace(/\w\S*/g, function(word) { return word.charAt(0).toUpperCase() + word.substr(1).toLowerCase(); });
}
/**
* Returns true if the given set of options contains the given option.
*/
function hasOption_(options, option) {
  return options && options.indexOf(option) >= 0;
}

Hi Paul,

thx for your awesome script,

Just wondering if I understood your script right: noTruncate keeps old records and doesn't delete them? So basically I'm getting a list of records, with the newest at top? Are changed rows updated?

So basically I'm trying to build a small dashboard for Facebook post performance: pulling in records, categorizing them, and then aggregating stats across my own categories.

I'm using it with: =ImportJSON("https://graph.facebook.com/pagename/posts?access_token=xxxx";""; "noInherit, noTruncate").

Is this clever or would you rather run a cron script to check for new posts and another for pulling in performance data for each post?

thx, I really appreciate your work!
Manu

cesckat commented Dec 7, 2014

Hi @paulgambill, thanks for sharing the ImportJSON.gs tutorial with us in your site.

I just want to say something in case anyone has problems with it. If you, for example, write:
=ImportJSON("http://date.jsontest.com", "/date", "noInherit, noTruncate")

you may have problems with Google Spreadsheet parsing the formula you've entered, and that's because of the commas. Try separating the arguments with ";" instead of ",", like this:
=ImportJSON("http://date.jsontest.com"; "/date"; "noInherit,noTruncate")

Hope to help anyone with problems :)

Thanks Paul. This is a cool script and quick tutorial!

kenorb commented Jan 29, 2015

Check out: https://github.com/fastfedora/google-docs for the latest version.

Hi Paul,
is there a specific reason why I am getting an "Error - Formula parse error"? I followed your tutorial, but can't seem to get past this issue.
Thanks for the help.

same here Randall94

@Randall94 / @KatVonKlone,
I had the same issue.

Check out @cesckat's answer. It corrects it!

qm3ster commented Apr 11, 2016

Does anyone know how to access array elements inside the JSON? ("example.json", "/data/list/0") does not work.

g5codyswartz commented May 16, 2016

Is there a way to prevent it from crawling into multi-dimensional arrays?
For instance, the object has a property which contains an array. It seems that the array only gets crawled out on the last object; I would prefer it not crawl into there at all unless I designated it to do so. I know the todo/future section above mentions better querying, but it'd be awesome if it had an option to not crawl into child collections.

I'm thinking I will not be able to accomplish what I would like to with this though : (
I have an object with a property of "locations" so my query is "/locations"
but then each location has 3 collections which only expand on the last location; I would rather they not expand at all.
Then, once I have the locations, I would want a separate query that queries "locations[i]/pages".

Overall I think the massive object I'm looking at is too complex for this.
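
One possible approach, sketched here under the assumption that a single query path (such as "/locations") is used: ImportJSONAdvanced in the script above accepts an includeFunc, so a small wrapper custom function could refuse any path that sits more than one level below the queried path. The wrapper name ImportJSONShallow and the depth rule are made up for illustration only.

function ImportJSONShallow(url, query, options) {
  return ImportJSONAdvanced(url, query, options,
    function (q, path, opts) {
      // Reuse the library's own path matching first.
      if (!includeXPath_(q, path, opts)) {
        return false;
      }
      // Assumes q is a single path like "/locations" (not a comma-separated list).
      // Keep "/locations" and "/locations/name", but skip "/locations/pages/url".
      var remainder = path.substring(q.toString().length);
      return remainder.split("/").length <= 2;
    },
    defaultTransform_);
}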

Hi, I was wondering if I could get the score and the pass values.
Since I am not a coder I just gave it a go, but the script only gives me "kind", "id" or "responseCode".
I believe it's because the script doesn't reach the second, nested level?

can anybody please help?

Thanks a lot!

{
  "kind": "pagespeedonline#result",
  "id": "http://www.123.de/",
  "responseCode": 200,
  "ruleGroups": {
    "USABILITY": {
      "score": 64,
      "pass": false
    }
  }
}
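
Based on how the query parameter is documented above (any path starting with one of the listed paths gets imported), a formula along these lines might pull just those two values; the URL below is only a placeholder for whatever endpoint returns the JSON shown:

=ImportJSON("https://example.com/pagespeed-result.json", "/ruleGroups/USABILITY/score,/ruleGroups/USABILITY/pass", "noInherit")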

Hi, somehow the first result of my JSON feed is ignored. The header shows up perfectly, the first result does not appear, and the second through last results show up fine. Any idea? Thanks, Flo

How best to import if you don't have a list of the paths?

Leooo commented Oct 24, 2016

For compatibility with deeply nested arrays, use:

function parseData_(headers, data, path, rowIndex, value, query, options, includeFunc) {
  ...
  // line 200
  rowIndex = rowIndex + Math.max(0, (data.length - rowIndex));
  ...
}

How can I post data to a URL with this script?

Wow, you sir saved me, and I'm sure many others, a lot of time. Thank you very much for sharing this!

JHRDM commented Jan 12, 2017

(Using updated fastFedora version) getting this:

"Error
SyntaxError: Unexpected token: < (line 165)."

There is no "<" on line 165 of any of the associated files, unless there is one on Google's side.

Really looking forward to making this work.

Thanks, worked well with the edit you suggested!

First of all, thanks to @paulgambill for the work.
I did exactly as the tutorial said, but I'm getting a "#ERROR!" message on the Google sheet. I also tried using semicolons instead of commas, as suggested by @cesckat.
Appreciate any assistance on this, please. I'm not a developer; I'm just trying to put some self-updating stock prices in a spreadsheet for my grandpa. The prices come from an API's JSON feed.

openjck commented Jun 14, 2017

These lines can be added at the very bottom of defaultTransform_ (between lines 334 and 335) to make the script return the values as true numbers when appropriate, so that the number formats (Scientific, Accounting, Currency, etc.) work as expected.

  if (parseFloat(data[row][column])) {
    data[row][column] = parseFloat(data[row][column]);
  }

There shouldn't be any harm in adding these lines, so I'd suggest they be considered for the official script.
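
One hedged caveat on the snippet above: parseFloat("0") is falsy, so zeros would be skipped, and strings that merely start with digits (for example "2014-12-07") would be converted to their leading number. A stricter variant might look like this (illustrative only, not part of the official script):

  // Only convert values that are entirely numeric, and don't skip zeros.
  var numeric = parseFloat(data[row][column]);
  if (data[row][column] !== "" && !isNaN(numeric) && isFinite(data[row][column])) {
    data[row][column] = numeric;
  }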

Rjrajiv commented Jun 20, 2017

I am using =ImportJSON("https://api.example.com/feed/json", "/keyword", "noInherit, noTruncate") and I am importing all the data successfully.
But I want only certain headers to be included. How do I add a filter to get only the required headers?
Or is there any way to import only selected rows with a fixed header and the corresponding columns?
Any help would be appreciated.

Hi,
Sorry, a complete newb here. My API requires authentication before it returns a response. Can someone let me know how to pass a username and password through this script, or maybe directly in the URL?
Thanks in advance for the help.
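
The library fetches the URL with UrlFetchApp.fetch(url) and no credentials. As a rough sketch only (assuming the API accepts HTTP Basic authentication, and keeping in mind that hard-coding credentials in a shared sheet or script is insecure), a variant could pass an Authorization header; the function name here is made up:

function ImportJSONBasicAuth(url, username, password, query, options) {
  // Same flow as ImportJSONAdvanced, but with a Basic auth header on the fetch.
  var response = UrlFetchApp.fetch(url, {
    headers: {
      "Authorization": "Basic " + Utilities.base64Encode(username + ":" + password)
    }
  });
  var object = JSON.parse(response.getContentText());
  return parseJSONObject_(object, query, options, includeXPath_, defaultTransform_);
}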

radres commented Nov 30, 2017

I have used this in Google Sheets, yet the imported value does not refresh. I changed the JSON file on the server, but I have literally no idea how to refresh the value in the cell (other than deleting the formula and re-entering it). Refreshing the page doesn't do it.
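
A hedged note on refreshing: Google Sheets recalculates a custom function only when its arguments change, so a common workaround is to splice a throwaway value into the URL and change it (for example by pointing it at a cell you edit) whenever you want a refetch. The parameter name below is arbitrary and assumes the endpoint ignores unknown query parameters:

=ImportJSON("https://example.com/data.json?nocache=" & A1, "/feed/entry", "noInherit")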

Hi, here's my modified version https://gist.github.com/allenyllee/c764c86ed722417948fc256b7a5077c4

Changelog:

  1. Added the ability to query array elements by using an XPath-like syntax such as "/array[n]/member", where "n" is the array index.
  2. Fixed an issue where a rule that partially matched a path, but not on an XPath boundary, would still include that value.
    For example: when path = /data/asset_longname and rule = /data/asset, both /data/asset_longname and /data/asset were printed rather than just /data/asset.
    Solution: check the first character remaining after the rule; if it's empty ("") or a slash ("/"), the function should return true. A slash ("/") means there are members under the same XPath, so those should not be dropped.
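
For reference, the boundary check described in item 2 might look roughly like this adjusted applyXPathRule_ (a sketch of the idea, not necessarily the exact code in the linked gist):

function applyXPathRule_(rule, path, options) {
  if (path.indexOf(rule) != 0) {
    return false;
  }
  // Only match on a path-segment boundary: the character right after the rule must be
  // nothing (an exact match) or "/" (a child element), so "/data/asset" no longer
  // matches "/data/asset_longname".
  var nextChar = path.charAt(rule.length);
  return nextChar === "" || nextChar === "/";
}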