- Go to DynamoDB
  - Create DynamoDB table
    - Table name: posts
    - Primary key: id
    - Click Create
- Go to S3
  - Create S3 bucket
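The console steps above can also be scripted with the AWS CLI. This is a sketch: the bucket name is a placeholder (bucket names must be globally unique), and on-demand billing for the table is an assumption.

```
# Create the "posts" table with "id" as the partition (primary) key
aws dynamodb create-table \
  --table-name posts \
  --attribute-definitions AttributeName=id,AttributeType=S \
  --key-schema AttributeName=id,KeyType=HASH \
  --billing-mode PAY_PER_REQUEST

# Create an S3 bucket (placeholder name below)
aws s3 mb s3://my-posts-bucket-example
```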
import org.apache.spark.ml.feature.StringIndexer

// Optionally index the categorical columns; both branches return null when disabled
val categoricalVariables = if (useCategorical) {
  Array("Origin", "Dest")
} else {
  null
}
val categoricalIndexers = if (useCategorical) {
  categoricalVariables.map(i =>
    new StringIndexer().setInputCol(i).setOutputCol(i + "Index").setHandleInvalid("skip"))
} else {
  null
}
import org.apache.spark.ml.regression.LinearRegression
import org.apache.spark.ml.tuning.ParamGridBuilder

val lr = new LinearRegression()
  .setLabelCol("DelayOutputVar")
  .setFeaturesCol("features")

val paramGrid = new ParamGridBuilder()
  .addGrid(lr.regParam, Array(0.1, 0.01))
  .addGrid(lr.fitIntercept)
  .addGrid(lr.elasticNetParam, Array(0.0, 1.0))
  .build()

val steps: Array[org.apache.spark.ml.PipelineStage] = if (useCategorical) {
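One way to use this grid (a sketch; the evaluator metric defaults to RMSE and the fold count of 3 is an assumption) is to feed it to a CrossValidator, which fits one model per parameter combination and keeps the best:

```scala
import org.apache.spark.ml.evaluation.RegressionEvaluator
import org.apache.spark.ml.tuning.CrossValidator

// Try every combination in paramGrid with k-fold cross-validation
val cv = new CrossValidator()
  .setEstimator(lr)
  .setEvaluator(new RegressionEvaluator().setLabelCol("DelayOutputVar"))
  .setEstimatorParamMaps(paramGrid)
  .setNumFolds(3)
```

Calling `cv.fit(trainingData)` then returns the model with the lowest cross-validated error.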
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.functions.col

val conf = new SparkConf().setAppName("predictor")
val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)
val rawData = sqlContext.read.format("com.databricks.spark.csv")
  .option("header", "true")
  .option("inferSchema", "true")
  .load(dataPath)
  .withColumn("DelayOutputVar", col("ArrDelay").cast("double"))
  .withColumn("DepDelayDouble", col("DepDelay").cast("double"))
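Before fitting a model, the loaded data would typically be split into training and test sets. A minimal sketch; the 80/20 split, the seed, and the `trainingData`/`testData` names are illustrative assumptions:

```scala
// Randomly split rawData: ~80% for training, ~20% held out for evaluation
val Array(trainingData, testData) = rawData.randomSplit(Array(0.8, 0.2), seed = 42L)
```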
# Sprint1
#### Start-date: 30/03/18
#### End-date: 06/04/18
# Development Team
- Duarte Brandão
- Pedro Costa
- ...
- Implement CI/CD pipelines
  - GitLab CI
    - Use Runners
  - GitHub
    - Use Travis CI or others
- Necessary stages
  - Build code
  - Run tests
  - Code quality
    - GitLab CI
  - Unit testing
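The stages listed above can be sketched as a `.gitlab-ci.yml`. This is only an illustration: the job names, the use of npm, and the lint command are assumptions, not part of the notes above.

```yaml
stages:
  - build
  - test
  - quality

build:
  stage: build
  script:
    - npm ci

unit_tests:
  stage: test
  script:
    - npm test

code_quality:
  stage: quality
  script:
    - npm run lint
```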
In this tutorial we're going to build a Node.js website with the following features:
- User authentication
- Back office
- Blog (optional)

The first two parts of the tutorial are meant to get you up and running quickly. The third part is a more in-depth explanation of some points I find important, so feel free to read it later as you need.
2.
   Tasks 1-5: TA, TB, TC, TD, TE
   a.
      1 task -> 5 min
      i)   10 01 11 10 11
      ii)  11 11 11 10 11
      iii) 01 01 01 10 10
- Use encryption for data identifying users and for sensitive data like access tokens, email addresses, or billing details.
- If your database supports low-cost encryption at rest (like AWS Aurora), enable it to secure data on disk. Make sure all backups are stored encrypted as well.
- Use a minimal-privilege account for database access. Don't use the database root account.
- Store and distribute secrets using a key store designed for the purpose. Don't hard-code them in your applications.
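As a minimal illustration of the last point, a secret can be injected through the environment rather than hard-coded. The variable name `DB_PASSWORD` is an assumption; in production you would fetch it from a managed key store (e.g. AWS Secrets Manager) at startup.

```scala
// Read the database password from the environment; fail fast if it is missing
// rather than falling back to a hard-coded default.
val dbPassword: String = sys.env.getOrElse(
  "DB_PASSWORD",
  throw new IllegalStateException("DB_PASSWORD is not set"))
```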