Himanshu Gupta gupta-himanshu

<!-- Akka discovery (DNS) -->
<dependency>
  <groupId>com.lightbend.akka.discovery</groupId>
  <artifactId>akka-discovery-dns_2.12</artifactId>
  <version>0.18.0</version>
</dependency>
<!-- Akka management (cluster formation) -->
<dependency>
  <groupId>com.lightbend.akka.management</groupId>
  <artifactId>akka-management_2.12</artifactId>
  <version>0.18.0</version>
</dependency>
<dependency>
  <groupId>com.lightbend.akka.management</groupId>
  <artifactId>akka-management-cluster-bootstrap_2.12</artifactId>
  <version>0.18.0</version>
</dependency>
  1. Prefer static factory methods over constructors

    • Advantage - They are not required to create a new object each time they’re invoked. This allows immutable classes to use preconstructed instances, or to cache instances as they’re constructed, and dispense them repeatedly to avoid creating unnecessary duplicate objects.

    • Disadvantage - Classes without public or protected constructors cannot be subclassed. For example, it is impossible to subclass any of the convenience implementation classes in the Collections Framework. Arguably this can be a blessing in disguise, because it encourages programmers to use composition instead of inheritance, and is required for immutable types.

  2. Prefer a builder when there are many constructor parameters

    • Advantage - It combines the safety of the telescoping constructor pattern with the readability of the JavaBeans pattern. It is a form of the Builder pattern. Instead of making the desired object directly, the client calls a constructor (or static factory) with all of the required parameters and gets a builder object. The client then calls setter-like methods on the builder to set each optional parameter of interest, and finally calls a parameterless build method to generate the object.
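Both items above can be sketched in one small class. This is a minimal illustration modeled on the classic `NutritionFacts` example; the class, field names, and defaults are illustrative, not taken from the notes above.

```java
public class NutritionFacts {
    private final int servingSize;   // required
    private final int calories;      // optional

    private NutritionFacts(Builder builder) {
        this.servingSize = builder.servingSize;
        this.calories = builder.calories;
    }

    // Item 1: a static factory method instead of a public constructor.
    // Its name documents intent, and it could later cache and reuse
    // instances without changing any call sites.
    public static NutritionFacts of(int servingSize) {
        return new Builder(servingSize).build();
    }

    // Item 2: a builder for required plus many optional parameters.
    public static class Builder {
        private final int servingSize; // required
        private int calories = 0;      // optional, defaults to 0

        public Builder(int servingSize) {
            this.servingSize = servingSize;
        }

        public Builder calories(int calories) {
            this.calories = calories;
            return this; // enables fluent chaining
        }

        public NutritionFacts build() {
            return new NutritionFacts(this);
        }
    }

    public int getServingSize() { return servingSize; }
    public int getCalories() { return calories; }
}
```

A client writes `new NutritionFacts.Builder(240).calories(100).build()` for the builder path, or `NutritionFacts.of(240)` when only the required parameter is needed.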
import org.apache.spark.sql.SparkSession

object StructuredNetworkWordCount extends App {
  val spark = SparkSession
    .builder
    .appName("StructuredNetworkWordCount")
    .master("local")
    .config("spark.sql.shuffle.partitions", 8) // lower the default (200) shuffle partitions for a local run
    .getOrCreate()
}
Table table = tableEnv.sqlQuery("SELECT * FROM employee");
DataSet<WC> result = tableEnv.toDataSet(table, WC.class);
result.print();
spark.sql("SELECT * FROM employee").show()
// +----+-------+
// | age|   name|
// +----+-------+
// |null|Michael|
// |  30|   Andy|
// |  19| Justin|
// +----+-------+
// ParameterTool parses --key value pairs from the command-line arguments
final ParameterTool parameterTool = ParameterTool.fromArgs(args);

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(5000); // create a checkpoint every 5 seconds
env.getConfig().setGlobalJobParameters(parameterTool); // make parameters available in the web interface
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
val sparkConf = new SparkConf().setAppName("RecoverableNetworkWordCount")
// Create the context with a 1 second batch size
val ssc = new StreamingContext(sparkConf, Seconds(1))
ssc.checkpoint("/path/to/checkpoint")
#!/bin/bash
set -e
echo "Going to App directory"
APP_DIR=/path/to/lagom-service
cd "$APP_DIR"
echo "Building Lagom dist"
sbt "project lagom-impl" "clean" "dist"
akka.cluster.seed-nodes = [
  "akka.tcp://MyService@host1:2552",
  "akka.tcp://MyService@host2:2552"
]