Mostafa Mahmoud (MoustafaAMahmoud)

@MoustafaAMahmoud
MoustafaAMahmoud / progfun04
Created August 19, 2017 14:27 — forked from nicokosi/progfun04
My notes from Coursera course "Functional Programming Principles in Scala" (https://class.coursera.org/progfun-004).
Notes from Coursera course "Functional Programming Principles in Scala":
https://class.coursera.org/progfun-004
✔ Week 1: Functions & Evaluations @done (14-05-01 17:20)
✔ Lecture 1.1 - Programming Paradigms (14:32) @done (14-04-27 17:54)
3 paradigms: imperative, functional, logic
OO: orthogonal (it combines with any of the three)
imperative:

This cheat sheet originated from the forum; credits to Laurent Poulain. We copied it and changed or added a few things.

Evaluation Rules

  • Call by value: evaluates the function arguments before calling the function
  • Call by name: passes the arguments unevaluated; an argument is evaluated each time it is used inside the function body (and not at all if it is never used)
    def example = 2      // evaluated when called
    val example = 2      // evaluated immediately
    lazy val example = 2 // evaluated once, when first needed
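To see the difference (a minimal sketch, not part of the original cheat sheet): a by-name parameter is declared with => and re-evaluated at every use.

    def callByValue(x: Int): Int = x + x   // x evaluated once, before the call
    def callByName(x: => Int): Int = x + x // x re-evaluated at each use

    def loudTwo(): Int = { println("evaluating"); 2 }
    callByValue(loudTwo()) // prints "evaluating" once; returns 4
    callByName(loudTwo())  // prints "evaluating" twice; returns 4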
package test

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession
import scala.collection.mutable

// Disjoint-set (union-find) data structure; Serializable so instances can be
// shipped to Spark executors.
class DisjointSet() extends Serializable {
  private val parentMap = mutable.Map[Int, Int]() // element -> parent link
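  // The gist preview cuts off here. The following is a hedged sketch of a
  // find/union continuation (method names and the path-compression choice
  // are assumptions, not necessarily what the full gist does):

  // Find the representative of x's set, compressing paths along the way.
  def find(x: Int): Int = {
    val parent = parentMap.getOrElse(x, x)
    if (parent == x) x
    else {
      val root = find(parent)
      parentMap(x) = root // path compression
      root
    }
  }

  // Merge the sets containing a and b by re-parenting one root.
  def union(a: Int, b: Int): Unit = {
    val (rootA, rootB) = (find(a), find(b))
    if (rootA != rootB) parentMap(rootB) = rootA
  }
}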
@MoustafaAMahmoud
MoustafaAMahmoud / notes-on-funprog2.md
Created September 21, 2017 23:06 — forked from nicokosi/notes-on-funprog2.md
Notes on Coursera course "Functional Program Design in Scala" (https://www.coursera.org/learn/progfun2)

Week 1

lecture "Recap: Functions and Pattern Matching"

  • recursive functions + case classes + pattern matching: example, a JSON printer (Scala -> JSON)
  • pattern matching in the Scala SDK: PartialFunction is a trait with def isDefinedAt(x: A): Boolean
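For instance (a minimal sketch, not from the original notes), isDefinedAt lets a caller test a PartialFunction before applying it:

    // Defined only for non-zero inputs.
    val safeInverse: PartialFunction[Int, Double] = {
      case n if n != 0 => 1.0 / n
    }

    safeInverse.isDefinedAt(0) // false
    safeInverse.isDefinedAt(4) // true
    safeInverse(4)             // 0.25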

lecture "Recap: Collections"

  • Iterable (base class)
    • Seq
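A tiny illustration of that hierarchy (a sketch using the standard library, not taken from the notes): every Seq is an Iterable, so common transformations like map and filter are shared.

    val xs: Seq[Int] = Seq(1, 2, 3)
    val it: Iterable[Int] = xs // every Seq is an Iterable
    it.map(_ * 2)              // List(2, 4, 6), typed as Iterable[Int]
    xs.filter(_ % 2 == 1)      // List(1, 3), typed as Seq[Int]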
@MoustafaAMahmoud
MoustafaAMahmoud / SparkCopyPostgres.scala
Created January 22, 2018 14:57 — forked from longcao/SparkCopyPostgres.scala
COPY Spark DataFrame rows to PostgreSQL (via JDBC)
import java.io.InputStream
import org.apache.spark.sql.execution.datasources.jdbc.JdbcUtils
import org.apache.spark.sql.{ DataFrame, Row }
import org.postgresql.copy.CopyManager
import org.postgresql.core.BaseConnection
val jdbcUrl = s"jdbc:postgresql://..." // db credentials elided
val connectionProperties = {
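// The preview ends mid-definition. What follows is a hedged sketch of a
// plausible continuation, not the gist's actual code: the Properties block,
// the copyToPostgres helper, its table parameter, and the naive CSV
// serialization (no quoting/escaping) are all assumptions. The gist imports
// JdbcUtils for connection handling; this sketch uses plain DriverManager
// for brevity.
  val props = new java.util.Properties()
  props.setProperty("driver", "org.postgresql.Driver")
  props
}

def copyToPostgres(df: DataFrame, table: String): Unit = {
  // Stream each partition's rows to PostgreSQL via the COPY protocol,
  // which is much faster than row-by-row INSERTs over JDBC.
  df.foreachPartition { rows: Iterator[Row] =>
    val conn = java.sql.DriverManager.getConnection(jdbcUrl)
    try {
      val copyManager = new CopyManager(conn.unwrap(classOf[BaseConnection]))
      val csv = rows.map(_.mkString(",")).mkString("\n") // naive CSV: assumes no commas/quotes in fields
      val in: InputStream = new java.io.ByteArrayInputStream(csv.getBytes("UTF-8"))
      copyManager.copyIn(s"COPY $table FROM STDIN WITH (FORMAT csv)", in)
    } finally {
      conn.close()
    }
  }
}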