@aappddeevv
aappddeevv / composing service layers in scala.md
Last active March 15, 2024 02:20
scala, cake pattern, service, DAO, Reader, scalaz, spring, dependency injection (DI)

#The Problem

We just described the standard design issues that arise when you start creating layers of services, DAOs, and other components to implement an application. That blog/gist is here.

The goal is to think through some designs in order to develop something useful for an application.

#Working through Layers

If you compose services and DAOs the usual way, you typically end up with imperative-style objects. For example, imagine the following:

  object DomainObjects {
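The gist preview is truncated above; a minimal sketch of the imperative wiring style being described, with hypothetical names, might look like:

```scala
// hypothetical sketch: a service wired directly to a DAO, imperative style
case class Customer(id: Int, name: String)

trait CustomerDao {
  def findById(id: Int): Option[Customer]
}

class InMemoryCustomerDao extends CustomerDao {
  private val data = Map(1 -> Customer(1, "Acme"))
  def findById(id: Int): Option[Customer] = data.get(id)
}

// the service holds a concrete dependency passed at construction time
class CustomerService(dao: CustomerDao) {
  def customerName(id: Int): Option[String] = dao.findById(id).map(_.name)
}

val service = new CustomerService(new InMemoryCustomerDao)
service.customerName(1) // Some(Acme)
```

The wiring is explicit and works, but every caller that constructs a `CustomerService` must know how to construct (or locate) a concrete DAO, which is exactly the coupling the rest of this note tries to design away.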
aappddeevv / multiple cake-patterns.md
Last active January 27, 2024 16:01
scala, cake patterns, path-dependent types and composition (and a little bit of slick)

Scala and Cake Patterns and the Problem

Standard design patterns in Scala recommend the cake pattern to help compose larger programs from smaller ones. Generally, for simple cake layers, this works okay. Bonér's article suggests using it to compose repository and service layers, and his focus is on DI-style composition. As you abstract more of your IO layers, however, you realize that the cake pattern as described does not abstract easily and usage becomes challenging. As the dependencies mount, you create mixin traits that express those dependencies, and perhaps they use self-types to ensure they are mixed in correctly.

Then, at the end of the world, you have to mix in many different traits to get all the components. In addition, perhaps you have used existential types, and now you must have a val/object somewhere (i.e. a well-defined path) in order to import the types within the service so you can write your program.
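A minimal cake-pattern sketch of the shape being described, with hypothetical component names: self-types express the dependencies, and everything is mixed together at the end of the world:

```scala
// each layer is a component trait; nested traits define the actual interface
trait UserRepositoryComponent {
  def userRepository: UserRepository
  trait UserRepository {
    def find(id: Int): Option[String]
  }
}

// the self-type declares what must eventually be mixed in alongside this trait
trait UserServiceComponent { self: UserRepositoryComponent =>
  def userService: UserService
  trait UserService {
    def name(id: Int): Option[String] = userRepository.find(id)
  }
}

// the "end of the world": mix every component together in one place
object App extends UserServiceComponent with UserRepositoryComponent {
  object userRepository extends UserRepository {
    def find(id: Int): Option[String] = if (id == 1) Some("alice") else None
  }
  object userService extends UserService
}
```

With two components this is tolerable; the complaint above is that as components multiply, the trait list on `App` (and the self-type lists) grow with them.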

aappddeevv / idrivebackend.py
Created January 4, 2016 02:50
duplicity, idrive, backend
# -*- Mode:Python; indent-tabs-mode:nil; tab-width:4 -*-
#
# Copyright 2015 aappddeevv <aappddeevv@gmail.com>
#
# This file is part of duplicity.
#
# Duplicity is free software; you can redistribute it and/or modify it
# under the terms of the GNU General Public License as published by the
# Free Software Foundation; either version 2 of the License, or (at your
# option) any later version.
aappddeevv / convert scala map to tuple or case class.md
Last active March 2, 2023 05:07
You often need to flatten a map into a tuple or case-class object. This describes how to do it generically. When using a NoSQL database, for example, you often need to convert maps to tuples/case classes because NoSQL databases have implied schemas, and we have to help the statically typed code by un-implying the schema.

#The Problem: The Need to Flatten a Map

When dealing with NoSQL databases, or even a traditional RDBMS, you often need to flatten a map to a tuple or create a case class. The question is: how do you do this?

While a map can be flattened to a list of (key, value) pairs fairly easily, it does not create a tuple of values.

scala> m
res39: scala.collection.immutable.Map[String,Any] = Map(blah -> 10, hah -> nah)

scala> m.flatMap { case (k,v)=>k::v::Nil}
res42: scala.collection.immutable.Iterable[Any] = List(blah, 10, hah, nah)
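Going from a `Map[String, Any]` to a case class means recovering the value types; one dependency-free way (the case class and keys here are hypothetical) is to pattern-match each value back to its expected type:

```scala
case class Person(name: String, age: Int)

// un-imply the schema: collect each value at its expected type, or fail with None
def toPerson(m: Map[String, Any]): Option[Person] =
  for {
    name <- m.get("name").collect { case s: String => s }
    age  <- m.get("age").collect { case i: Int => i }
  } yield Person(name, age)

toPerson(Map("name" -> "blah", "age" -> 10)) // Some(Person(blah,10))
toPerson(Map("name" -> "blah"))              // None
```

Missing keys and wrong-typed values both collapse to `None`; a more general, generic derivation is what the rest of this gist works toward.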
package test

import zio._
import zio.console._

type FakeExchange = Has[FakeExchange.Service]

object FakeExchange:
  trait Service:
    def blah(): UIO[Unit]

Hi!

My background:

  • things: ai/machine learning, data management (big, streaming, complex), blockchain, front-end
  • management consulting: multiple industries, health, fs, telco, m&e
  • business areas: sales, marketing and service, fintech/regtech
  • program management (i.e. $10m+ usd)
  • startup management (partner, busdev, CTO, etc.): product and proserv
  • hands-on or delegation-based depending on need
aappddeevv / date dimension.md
Last active July 1, 2019 04:42
scala program to create a date time dimension CSV file for a data mart or data warehouse

You may need, from time to time, to grab a day-level-granularity date dimension. You can find wizards in some database packages, or PL/SQL commands, to generate one, but if you just need something simple, try the below:

package datedim

import collection.JavaConverters._
import au.com.bytecode.opencsv.CSVReader
import java.io.FileReader
import scala.collection.mutable.ListBuffer
import au.com.bytecode.opencsv.CSVWriter
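The imports above pull in opencsv for the actual file writing; the day-by-day generation itself can be sketched with just java.time (the column set here is a hypothetical, simplified one, not the gist's full dimension):

```scala
import java.time.LocalDate
import java.time.format.DateTimeFormatter

// one CSV row per day between start and end, inclusive
def dateDimensionRows(start: LocalDate, end: LocalDate): Iterator[String] =
  Iterator.iterate(start)(_.plusDays(1))
    .takeWhile(!_.isAfter(end))
    .map { d =>
      val key = d.format(DateTimeFormatter.BASIC_ISO_DATE) // surrogate key, e.g. 20240101
      s"$key,${d.getYear},${d.getMonthValue},${d.getDayOfMonth},${d.getDayOfWeek}"
    }

dateDimensionRows(LocalDate.of(2024, 1, 1), LocalDate.of(2024, 1, 2)).foreach(println)
```

A real date dimension usually adds fiscal periods, week-of-year, holiday flags, and so on, but they are all derivable the same way from the `LocalDate`.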

As I was working through some content on the Reader monad, I realized I wanted to lift a function to use it. If you read the Scala in Depth book, which is a really great book, you know that lifting a function to be monadic could be useful.

The classic example is to lift the function so that the arguments as well as the return value are all Option types. Then, if any of the parameters to the new lifted function are None, the function returns None. Otherwise it would return Some.

The key thought is that you do not have to write your own; scalaz already supports lift.

It turns out that scalaz has support for lifting functions into your monad of choice. Below is a transcript from the Scala REPL with -cp set to the scalaz core library.

The easiest approach is to use optionInstance.lift(func2) where func2 takes 2 arguments.
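The effect of lifting a two-argument function can be sketched without scalaz at all (`lift2` below is a hand-rolled stand-in to show the behavior, not the scalaz API):

```scala
// dependency-free stand-in for applicative lift2 over Option:
// the lifted function returns None if any argument is None
def lift2[A, B, C](f: (A, B) => C): (Option[A], Option[B]) => Option[C] =
  (oa, ob) => for { a <- oa; b <- ob } yield f(a, b)

val add = (x: Int, y: Int) => x + y
val liftedAdd = lift2(add)

liftedAdd(Some(1), Some(2)) // Some(3)
liftedAdd(Some(1), None)    // None
```

scalaz generalizes this over any applicative functor, which is why you get it "for free" from the instance rather than writing it per type.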

aappddeevv / slick sub-select with max value.md
Last active October 12, 2018 06:57
scala, slick, sub-select select, max column

I am learning Slick. A bit of a tough road initially. The blogs and other posts out there really help. This tutorial was especially helpful.

My domain was storing note objects in a database and tracking revisions. Here's the table structure:

 class Notes(tag: Tag) extends Table[(Int, Int, java.sql.Timestamp, Boolean, Option[String])](tag, "Notes") {
    // some dbs cannot return a compound primary key, so use a standard int
    def id = column[Int]("id", O.AutoInc, O.PrimaryKey)
    def docId = column[Int]("docId")
    def createdOn = column[java.sql.Timestamp]("createdOn")
    // Boolean column from the row type; its name is assumed since the snippet was truncated
    def latest = column[Boolean]("latest")
    def content = column[Option[String]]("content")
    def * = (id, docId, createdOn, latest, content)
  }
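The sub-select itself, against a table like the above, might be written along these lines (an untested sketch; Slick's query API details vary by version):

```scala
// hypothetical sketch: pick, per docId, the row whose createdOn is the max,
// i.e. the latest revision of each note
val notes = TableQuery[Notes]

val latestPerDoc = notes.filter { n =>
  n.createdOn === notes.filter(_.docId === n.docId).map(_.createdOn).max
}
```

This corresponds to the classic SQL correlated sub-select: `WHERE createdOn = (SELECT MAX(createdOn) FROM Notes n2 WHERE n2.docId = n.docId)`.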

#scala, Map[String, Any] and scalaz Validation

#The Problem

I seem to encounter a lot of Map[String, Any] in my programming, probably because I use graph databases a lot, and they store key-value pairs in the nodes and relationships (think neo4j).

Because of this, I do a lot of map-like processing. Being able to fluently handle these map structures during data import or just general processing is very important.

The classic problem I ran into a lot was how to use the Map object more fluently and easily in my data import or query-like processing. I usually have a UI with my application, and the UI needs to be able to handle almost any data structure, so it is usually set up to be fairly robust to not knowing the exact types of values in the map, or it derives those types from the data itself. However, for data import processing as well as querying, I typically do need to know and count on a few well-known types for values that are guaranteed to be in my objects, like a name (a String) or some other proper
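scalaz Validation is one way to accumulate such failures; the typed extraction step itself can be sketched without the library (the helper name and Either-based error type here are my own, not scalaz's):

```scala
import scala.reflect.ClassTag

// pull a value out of a Map[String, Any] at a known type, or explain the failure;
// the ClassTag lets the pattern match check the runtime type of the erased T
def as[T](m: Map[String, Any], key: String)(implicit ct: ClassTag[T]): Either[String, T] =
  m.get(key) match {
    case Some(ct(v)) => Right(v)
    case Some(v)     => Left(s"$key has unexpected type ${v.getClass.getSimpleName}")
    case None        => Left(s"$key is missing")
  }

as[String](Map("name" -> "neo"), "name") // Right(neo)
as[String](Map("name" -> 42), "name")    // Left(name has unexpected type Integer)
```

Swapping `Either` for scalaz's `ValidationNel` then lets multiple such extractions accumulate all of their errors instead of stopping at the first one.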