- You will first have to download the gist to a file and then upload it to S3, in a bucket of your choice.
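  A minimal way to do the upload with the AWS CLI is sketched below; the local file name and bucket path are placeholders that match the example script location used later in these steps.

```sh
# Sketch: copy the downloaded bootstrap script to S3.
# "install-rstudio-server.sh" and the bucket path are placeholders; use your own bucket.
aws s3 cp install-rstudio-server.sh s3://my-bucket/emr/bootstrap/install-rstudio-server.sh
```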
- Using the AWS EMR Console, create a cluster and choose advanced options.
- In Step 1, make sure you check the Spark x.x.x checkbox if you want to use the sparklyr library in RStudio. You can customize the Spark version by choosing a different EMR release.
- In Step 3 you can configure your bootstrap actions. Choose Configure and add a Custom action:
  - For the Name you can fill in something like "Install RStudio Server".
  - For the Script location, point to where you uploaded the gist (e.g. `s3://my-bucket/emr/bootstrap/install-rstudio-server.sh`).
  - As Optional arguments you can add the following (a CLI sketch using these arguments follows the table below):
    - `--sd-version`: optional, default is 1.0.110. The script downloads the artefact from the RStudio daily builds bucket. You can use a CLI command like `aws s3 ls s3://rstudio-dailybuilds/rstudio-` to check which versions are available.
    - `--sd-user`: optional, defaults to drwho. RStudio Server needs a real system user; the script creates one as part of the bootstrap process.
    - `--sd-pass`: optional, defaults to tardis. The password for the user specified above. If you are going to use the default credentials, make sure the EMR cluster is not Internet accessible, as this could be a serious security vulnerability.
    - `--spark-version`: optional, defaults to 2.0.0. sparklyr, which is installed as part of the bootstrap process, needs a locally downloaded version of Spark. Make sure this version matches the Spark version installed on the cluster. This is only relevant if you are actually going to use the sparklyr capabilities.

      | EMR release | --spark-version |
      | ----------- | --------------- |
      | 4.0.0       | 1.4.1           |
      | 4.1.0       | 1.5.0           |
      | 4.2.0       | 1.5.2           |
      | 4.3.0       | 1.6.0           |
      | 4.4.0       | 1.6.0           |
      | 4.5.0       | 1.6.1           |
      | 4.6.0       | 1.6.1           |
      | 4.7.0       | 1.6.1           |
      | 4.7.1       | 1.6.1           |
      | 4.7.2       | 1.6.2           |
      | 4.8.0       | 1.6.2           |
      | 4.8.2       | 1.6.2           |
      | 5.0.0       | 2.0.0 (default) |
      | 5.0.3       | 2.0.1           |
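If you prefer launching the cluster from the command line instead of the console, the same bootstrap action and arguments can be passed to `aws emr create-cluster`. This is only a sketch: the cluster name, key pair, instance settings, and S3 path are placeholders, and the release label should be paired with the matching `--spark-version` from the table above.

```sh
# Sketch: create an EMR cluster with Spark and the RStudio Server bootstrap action.
# The key pair, instance settings, and S3 path are placeholders.
aws emr create-cluster \
  --name "RStudio on EMR" \
  --release-label emr-5.0.0 \
  --applications Name=Spark \
  --use-default-roles \
  --ec2-attributes KeyName=my-key-pair \
  --instance-type m4.xlarge \
  --instance-count 3 \
  --bootstrap-actions 'Path=s3://my-bucket/emr/bootstrap/install-rstudio-server.sh,Name=Install RStudio Server,Args=[--sd-user,drwho,--sd-pass,tardis,--spark-version,2.0.0]'
```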
- After the cluster has started, access your cluster's master address on port 8787. RStudio Server is only available on the master instance. Depending on where your cluster is launched, you might need to establish a tunnel or proxy connection (see the sketch below).
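If the master node is not directly reachable on port 8787 (for example when the cluster runs in a private subnet), one common approach is an SSH tunnel through the master node; the key file and master DNS name below are placeholders.

```sh
# Forward local port 8787 to RStudio Server on the EMR master node.
# Replace the key file and the master node's public DNS with your own values.
ssh -i ~/my-key-pair.pem -N -L 8787:localhost:8787 hadoop@ec2-xx-xx-xx-xx.compute-1.amazonaws.com
# Then browse to http://localhost:8787 and log in with the RStudio credentials.
```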
- After logging in with the default or custom credentials you provided, you can connect to the Spark cluster with the following script:
library(sparklyr)
library(dplyr)

# Connect to Spark on the cluster through YARN
sc <- spark_connect(master = "yarn-client")