- You can pin frequently used services to top bar in AWS Console
- By default there's a limit of 1000 concurrent Lambda executions; this can be raised with a support ticket. Some companies have had this limit raised to tens of thousands of concurrent executions.
- By default you get 75GB of code storage (so up to 10 React apps, lol), which can also be raised
- Looking at the Throttles graph is useful; we don't want our functions to be throttled. The ConcurrentExecutions graph is useful as well, to see whether we're approaching the limit
- You can search for Lambda functions by function name (adding prefixes helps!) or by tags, which are really useful
- It's possible to use custom runtimes for Lambda (apart from Node, .NET, Python, etc.), so if you really want to use Haskell you can do that
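The throttling and concurrency points above are connected by a simple rule of thumb: concurrency is roughly the request rate multiplied by the average invocation duration (Little's law). A quick sanity check in Python, with made-up workload numbers:

```python
def estimated_concurrency(requests_per_second: float, avg_duration_seconds: float) -> float:
    """Rough estimate of concurrent Lambda executions: arrival rate times average duration."""
    return requests_per_second * avg_duration_seconds

# Hypothetical workload: 500 requests/s, 3 s average duration.
# 1500 concurrent executions would blow past the default limit of 1000.
print(estimated_concurrency(500, 3))
```

If the estimate sits anywhere near your account limit, that's the cue to watch the ConcurrentExecutions graph closely or open that support ticket early.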
/* B2B. Getting daily or monthly retention for free users
Input tables:
  signups  - user_id, signup_date
  activity - user_id, activity_date */
CREATE VIEW cohort_user_retention AS
SELECT s.signup_date,
       a.activity_date - s.signup_date AS days_since_signup,
       COUNT(DISTINCT a.user_id) AS retained_users
FROM signups s
JOIN activity a ON a.user_id = s.user_id
GROUP BY 1, 2;
You will need the requests and authlib packages. Just run:
$ pip install requests authlib
Then you need to generate an API Key from the App Store Connect portal (https://developer.apple.com/documentation/appstoreconnectapi/creating_api_keys_for_app_store_connect_api).
Resources that go beyond the official documentation. To learn Svelte or to find things to use in your own projects. If you have nice resources to add, please comment below.
- Main tutorial https://svelte.dev/tutorial/basics
- API https://svelte.dev/docs
- Main repo: https://github.com/sveltejs/svelte
Assuming you don't want to statically export a Sapper app, most of the parts needed to build a simple SSG for Svelte already exist. The only thing that is missing is the tooling ('only').
However, you don't need a lot to get things going: just a couple of rollup builds and a config file will get you most of the way there. Just some glue.
What follows is a bunch of rambling, half thought out thoughts on how I would probably go about this. Most of the stuff discussed here is stuff I've actually done or half done or am in the process of doing with varying degrees of success. It is something I'll be spending more time on in the future. There are other things I have done, want to do, or think would be a good idea that are not listed here as they don't fall into the scope of a simple SSG.
*Disclaimer: This is how I would build an SSG. It isn't the only way, but I like this approach because there are a bunch of compile-time optimisations you can perform.*
CREATE EXTENSION IF NOT EXISTS "unaccent";

CREATE OR REPLACE FUNCTION slugify("value" TEXT)
RETURNS TEXT AS $$
  -- removes accents (diacritic signs) from a given string --
  WITH "unaccented" AS (
    SELECT unaccent("value") AS "value"
  ),
  -- lowercases the string
  "lowercase" AS (
    SELECT lower("value") AS "value" FROM "unaccented"
  )
  -- replaces runs of non-alphanumerics with hyphens, trims leading/trailing hyphens
  SELECT trim(BOTH '-' FROM regexp_replace("value", '[^a-z0-9]+', '-', 'g'))
  FROM "lowercase";
$$ LANGUAGE SQL STRICT IMMUTABLE;
-- Usage: SELECT slugify('Hello, World!');  --> hello-world
provider "aws" {
  region = "${var.region}"
}

### VPC

# Fetch AZs in the current region
data "aws_availability_zones" "available" {}

resource "aws_vpc" "datastore" {
  cidr_block = "172.17.0.0/16"
}
The MIT License (MIT)

Copyright (c) Plotly, Inc

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
These scripts import the entire Bureau of Labor Statistics Quarterly Census of Employment and Wages (from 1990 to the latest release) into one giant PostgreSQL database.
The database created by this process will use about 100GB of disk space. Make sure you have enough space available before you start!
The database name, table name, and more can be configured via config.sh.
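The 100GB warning is worth automating before kicking off a multi-hour import. A small pre-flight check (the path to your PostgreSQL data directory is an assumption; adjust it for your setup):

```python
import shutil

REQUIRED_GB = 100  # the finished QCEW database uses about 100 GB


def enough_space(path: str, required_gb: float = REQUIRED_GB) -> bool:
    """True if the filesystem holding `path` has at least `required_gb` free."""
    return shutil.disk_usage(path).free >= required_gb * 1024**3


# Hypothetical location; point this at wherever your PostgreSQL cluster stores data.
if enough_space("/"):
    print("OK to import")
else:
    print("Need ~100 GB free before importing")
```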