Janeth Graziani (JanethL)

@JanethL
JanethL / __main__.js
Last active September 21, 2018 20:01
Airtable + Typeform + Clearbit
const Person = require('clearbit')(process.env.CLEARBIT_API_KEY).Person;
const Airtable = require('airtable');
const base = new Airtable({ apiKey: process.env.AIRTABLE_API_KEY }).base(process.env.AIRTABLE_BASE_KEY);
// Maps Typeform answers to columns in Airtable
const typeformFieldsToAirtableFields = {
  short_text: 'name',
  email: 'email',
  multiple_choice: 'message_type',
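The mapping object is cut off by the gist preview. Below is a minimal sketch of how the rest of such a handler might look, assuming a Typeform webhook payload, Clearbit enrichment keyed on the email answer, and a hypothetical Airtable table named 'Leads' (none of these details are confirmed by the gist):

}; // the full gist may map more Typeform field types

module.exports = async (formResponse) => {
  // Translate Typeform answers into Airtable column values
  const fields = {};
  for (const answer of (formResponse.answers || [])) {
    const column = typeformFieldsToAirtableFields[answer.type];
    if (column) {
      fields[column] = answer.text || answer.email || (answer.choice && answer.choice.label);
    }
  }
  // Enrich the record via Clearbit when an email is present; ignore lookup failures
  if (fields.email) {
    const person = await Person.find({email: fields.email}).catch(() => null);
    if (person && person.employment) {
      fields.company = person.employment.name;
    }
  }
  return base('Leads').create(fields); // 'Leads' is an assumed table name
};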
@JanethL
JanethL / received.js
Created December 8, 2019 21:21
Fitness Planner + Tracker with Airtable + Twilio
if (isNaN(event.Body)) {
  return;
}
// Prepare workflow object to store API responses
let result = {};
// [Workflow Step 1]
console.log(`Running airtable.query[@0.4.2].select()...`);
result.step1 = {};
result.step1.selectQueryResult = await lib.airtable.query['@0.4.2'].select({
  table: `Workout`,
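The select call is truncated by the preview. A plausible completion is sketched below; the KeyQL-style where clause and the `Name` column are assumptions, not taken from the gist:

  where: [
    {
      // Match the workout row whose Name equals the incoming SMS body
      'Name__is': event.Body.trim()
    }
  ]
});
return result;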
@JanethL
JanethL / received.js
Last active December 9, 2019 03:30
Fitness Planner + Tracker with Airtable + Twilio
if (isNaN(event.Body)) {
  return;
}
// Prepare workflow object to store API responses
let result = {};
// [Workflow Step 1]
@JanethL
JanethL / member_joined_channel.js
Created March 12, 2020 05:35
Post a welcome message when a new member joins a channel
const lib = require('lib')({token: process.env.STDLIB_SECRET_TOKEN});
/**
* An HTTP endpoint that acts as a webhook for Slack member_joined_channel event
* @param {object} event
* @returns {object} result Your return value
*/
module.exports = async (event) => {
  // Store API Responses
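The handler body is cut off in the preview. One plausible continuation posts a welcome message back to the channel that was joined; the slack.channels API version pin and the greeting text below are assumptions:

  const result = {slack: {}};
  // event.event carries the member_joined_channel payload (channel, user)
  result.slack.welcome = await lib.slack.channels['@0.6.6'].messages.create({
    channel: event.event.channel, // channel ID from the Slack event
    text: `Welcome, <@${event.event.user}>! :wave:`
  });
  return result;
};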
@JanethL
JanethL / webscraper.js
Created March 18, 2020 16:49
Web Scraper
const lib = require('lib')({token: process.env.STDLIB_SECRET_TOKEN});
const r = require('request');
const request = require('request-promise-native');
const cheerio = require('cheerio');
const parseAll = require('html-metadata').parseAll;
/**
* A simple and powerful scraper
* @param {string} url Url to fetch
* @param {string} userAgent Request's User Agent
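The preview stops inside the JSDoc. A hedged sketch of how such a handler might continue: fetch the page with request-promise-native, load it into cheerio, and pull structured metadata with html-metadata's parseAll (the return shape is an assumption):

 */
module.exports = async (url, userAgent) => {
  // Fetch the raw HTML using the supplied User-Agent
  const html = await request({
    uri: url,
    headers: {'User-Agent': userAgent}
  });
  // Load the HTML into cheerio for selector-based scraping
  const $ = cheerio.load(html);
  // Extract Open Graph / Schema.org / general metadata
  const metadata = await parseAll($);
  return {
    title: $('title').text(),
    metadata: metadata
  };
};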
@JanethL
JanethL / textResolver.js
Created March 26, 2020 17:45
Sample using the text resolver with crawler.api on Autocode
// Store API Responses
const result = {crawler: {}};
console.log(`Running [Crawler → Query (scrape) a provided URL based on CSS selectors]...`);
result.crawler.pageData = await lib.crawler.query['@0.0.1'].selectors({
  url: `https://news.ycombinator.com/`,
  userAgent: `stdlib/crawler/query`,
  includeMetadata: false,
  selectorQueries: [
    {
      'selector': `a.storylink`,
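The selector query is cut off just before its resolver. Given the gist's title, it presumably closes with the `text` resolver, roughly as follows (the return handling is an assumption):

      'resolver': `text` // return the text content of each matched a.storylink
    }
  ]
});
// result.crawler.pageData now holds the scraped Hacker News headlines
return result;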
@JanethL
JanethL / textResolver.js
Last active September 30, 2020 08:27
Sample using the text resolver with crawler.api on Autocode
// Store API Responses
const result = {crawler: {}};
console.log(`Running [Crawler → Query (scrape) a provided URL based on CSS selectors]...`);
result.crawler.pageData = await lib.crawler.query['@0.0.1'].selectors({
  url: `https://www.economist.com/`,
  userAgent: `stdlib/crawler/query`,
  includeMetadata: false,
  selectorQueries: [
    {
      'selector': `a.headline-link`,
@JanethL
JanethL / attrResolver.js
Last active March 26, 2020 20:55
Sample using the "attr" value in crawler.api on Autocode
result.crawler.pageData = await lib.crawler.query['@0.0.1'].selectors({
  url: `https://www.economist.com/`,
  userAgent: `stdlib/crawler/query`,
  includeMetadata: false,
  selectorQueries: [
    {
      'selector': `a.headline-link`,
      'resolver': `attr`,
      'attr': `href`
    }
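The call is left unclosed by the preview; completing and returning it would look roughly like this, yielding the href of every matched headline link:

  ]
});
// result.crawler.pageData now holds the article URLs from the href attributes
return result;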
@JanethL
JanethL / mapQueries.js
Last active March 27, 2020 15:57
We can use map to make subqueries (called mapQueries) against a selector to parse data in parallel. For example, if we want to combine the two queries above (getting both the headline text and its URL simultaneously), we can run both as mapQueries under a single selector, as sketched below.
// Store API Responses
const result = {crawler: {}};
console.log(`Running [Crawler → Query (scrape) a provided URL based on CSS selectors]...`);
result.crawler.pageData = await lib.crawler.query['@0.0.1'].selectors({
  url: `https://www.economist.com/`,
  userAgent: `stdlib/crawler/query`,
  includeMetadata: false,
  selectorQueries: [
    {
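A hedged sketch of how the truncated mapQueries call might continue. The container selector (`.teaser`) and the exact sub-query layout are assumptions inferred from the description, not taken from the gist:

      // Map over each teaser block and run two sub-queries against it
      'selector': `.teaser`, // hypothetical container around each headline link
      'resolver': `map`,
      'mapQueries': [
        {
          'selector': `a.headline-link`,
          'resolver': `text` // the headline title
        },
        {
          'selector': `a.headline-link`,
          'resolver': `attr`,
          'attr': `href` // the headline URL
        }
      ]
    }
  ]
});
// Each entry in result.crawler.pageData should now pair a title with its URL
return result;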