object Day3RucksaackReorganisation extends App {

  // Item priorities: a–z → 1–26, A–Z → 27–52
  val itemCodes = (('a' to 'z').toList zip (1 to 26).toList) ::: (('A' to 'Z').toList zip (27 to 52).toList)
  val itemCodesMap = itemCodes.toMap

  val rucksackContent = scala.io.Source.fromFile("./Sources/Day3RucksaackReorganisation.txt").getLines().toList

  // Split a line into its two equally sized compartments
  def midSplit(s: String): (String, String) = (s.substring(0, s.length / 2), s.substring(s.length / 2))

  val rucksackContentSplits = rucksackContent.map(midSplit)
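The snippet above stops after splitting each rucksack into halves. A minimal sketch of how the puzzle's first part could be finished from here (finding the one item type shared by both halves and summing its priority); the sample lines stand in for the original input file and the remainder is an assumption, not the author's original code:

```scala
// Priority map as in the snippet above: a–z → 1–26, A–Z → 27–52
val itemCodesMap: Map[Char, Int] =
  (('a' to 'z') zip (1 to 26)).toMap ++ (('A' to 'Z') zip (27 to 52)).toMap

def midSplit(s: String): (String, String) =
  (s.substring(0, s.length / 2), s.substring(s.length / 2))

// Priority of the single item type present in both halves of a line
def sharedPriority(line: String): Int = {
  val (left, right) = midSplit(line)
  itemCodesMap((left.toSet intersect right.toSet).head)
}

// Hypothetical sample lines in place of the original input file
val sample = List("vJrwpWtwJgWrhcsFMMfFFhFp", "jqHRNqRjqzjGDLGLrsFMfFZSrLrFZsSL")
val prioritySum = sample.map(sharedPriority).sum
println(prioritySum) // 16 for 'p' + 38 for 'L' = 54
```

Using `Set` intersection keeps the lookup linear per line; `.head` assumes exactly one shared item, which the puzzle guarantees.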
import io.circe.Decoder
import io.circe.parser.decode
import org.joda.time.DateTime

object WeatherParser extends Serializable {

  val emptyNumber = -1 // should have used None here
  val emptyString = "" // should have used None here

  case class WeatherSchema (
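The author's own comments flag the sentinel constants above as a smell. A minimal pure-Scala sketch of the `Option`-based alternative they point to; `WeatherReading` and its fields are hypothetical stand-ins, not the original `WeatherSchema`:

```scala
// Option-typed fields instead of -1 / "" sentinels (names are hypothetical)
case class WeatherReading(temperature: Option[Double], station: Option[String])

// A missing value stays None instead of a magic number that can leak into arithmetic
val missing = WeatherReading(None, None)
val present = WeatherReading(Some(12.5), Some("Riga"))

// A sentinel -1 would silently skew this mean; None is simply filtered out
def meanTemp(rs: List[WeatherReading]): Option[Double] = {
  val ts = rs.flatMap(_.temperature)
  if (ts.isEmpty) None else Some(ts.sum / ts.size)
}

println(meanTemp(List(missing, present))) // Some(12.5)
```

With circe, the same effect falls out of declaring the schema fields as `Option[...]`: absent or null JSON fields decode to `None` rather than a sentinel.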
/**
 * Created by janis on 09/01/2017.
 */
object TheCoinChangeProblem2 extends App {

  // First input line: the target sum; second line: the available coin denominations
  val requiredCoinSum = io.StdIn.readLine().split(" ")(0).toInt
  val coins = io.StdIn.readLine().split(" ").toList map (_.toInt)
  //val requiredCoinSum = 4
  //val coins = List(1, 2, 3)
  // four ways to make 4 from {1,2,3}: 1+1+1+1, 1+1+2, 2+2, 1+3
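The commented test case (four ways to make 4 from {1, 2, 3}) is the classic coin-change counting problem. The snippet is truncated before the solution, so here is a sketch of the standard bottom-up DP that produces that count; this is an assumption about what the original computed, not the author's code:

```scala
// Count the ways to make `target` from unlimited coins.
// Iterating coins in the outer loop counts combinations, not permutations.
def countWays(target: Int, coins: List[Int]): Long = {
  val ways = Array.fill(target + 1)(0L)
  ways(0) = 1 // one way to make 0: take nothing
  for (coin <- coins; sum <- coin to target)
    ways(sum) += ways(sum - coin)
  ways(target)
}

// The commented test case above: 1+1+1+1, 1+1+2, 2+2, 1+3
println(countWays(4, List(1, 2, 3))) // 4
```

Using `Long` guards against overflow for larger targets, where the count grows quickly.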
package rpd.load

import rpd.extract._

/**
 * Created by janis on 04/10/2016.
 */
case class DataStore (
  // Business Layer
  businessModels: List[BusinessModel],
package rpd.load

/**
 * Created by janis on 01/10/2016.
 */
// todo: create type for id references. perhaps separate for primary and foreign keys
trait AddDataStore {
  var dataStore: DataStore = null // mutable, null-initialised; Option[DataStore] = None would be the safer idiom
package rpd.extract

import rpd.load._
import scala.xml.{Elem, Node}

/**
 * Created by janis on 01/10/2016.
 */
package rpd.common

import rpd.extract._
import rpd.html.BusinessModelListHtml
import rpd.load._

/**
 * Created by janis on 01/10/2016.
 */
object NewTest extends App {
import org.apache.log4j.{Level, Logger}
import org.apache.spark.ml.classification.RandomForestClassifier
import org.apache.spark.ml.feature.{Bucketizer, StringIndexer, VectorAssembler, VectorIndexer}
import org.apache.spark.ml.{Pipeline, PipelineStage}
import org.apache.spark.sql.functions._
import org.apache.spark.sql.{DataFrame, SparkSession}

/**
 * Created by Janis Rumnieks on 19/08/2016.
 */
import org.apache.log4j.{Level, Logger}
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.RandomForestClassifier
import org.apache.spark.ml.feature.{StringIndexer, VectorAssembler, VectorIndexer}
import org.apache.spark.sql.SparkSession

/**
 * Created by Janis Rumnieks on 19/08/2016.
 */
object Titanic {
import org.apache.log4j.{Level, Logger}
import org.apache.spark.ml.classification.RandomForestClassifier
import org.apache.spark.ml.linalg.Vectors
import org.apache.spark.sql.{DataFrame, SparkSession}

/**
 * Created by Janis Rumnieks on 15/08/2016.
 */
object DigitRecognizer4 {