@aappddeevv
aappddeevv / workflow-mess-for-web-apps.md
Last active January 12, 2016 11:02
how to create a web app manually using popular web app workflow and build tools

Web App Development Issues

Web app development and workflow is pretty much a messy exercise with few standards. I am trying to document the messiness and suggest one way around it that scales smoothly from a small project to a large one without adding yet another workflow layer (YAWL).

I am assuming that you are using tools such as node, grunt and bower. While it is possible to use other tools such as makefiles, or generators such as lineman or brunch, the issues that make the workflow so complex are built into the very nature of web protocols and the application deployment model. Unlike java, which has standardized packaging and deployment models, web apps can take on a wide variety of packaging and deployment models based on several factors, such as organizational process ("this is the way we do it"), bandwidth/latency constraints, dev debugging needs, as well as the application model (client heavy, SPA or server heavy).

There is a lively ecosystem to help manage this complexity as ment

@aappddeevv
aappddeevv / web-app-workflow.md
Last active August 29, 2015 14:01
how to create a web app manually using popular web app workflow and build tools

Web Application Workflow and Toolchains

Creating a front-end client application based on html5 technology requires a few toolchains working together to develop, assemble and deploy the application. Each part of the toolchain performs a fairly small function. The layers are decoupled from each other so you can swap out your own toolchain or augment it as needed. Standards are still evolving around html5 client application toolchains and workflow.

Since scalajs is a new technology, take a web-app-first style approach to the build environment, using toolchains that evolved with the html5 development technologies versus a java- or jvm-centric approach. The following tools are fairly standard, although others exist:

  • bower: Manages js dependencies
  • grunt: A very simple task runner. Most of what grunt does could be performed by sbt if sbt had a set of plugins that covered the same functionality. In some cases, you can decide whether to use a grunt task or an sbt task. *For example, you can use a grunt ta
@aappddeevv
aappddeevv / scalaz_stream_beginnings.md
Last active August 29, 2015 13:58
scala, scalaz, scalaz stream, scalaz machine

Working on data streaming (many problems can be cast as data streams) is hard. Controlling synchronous and asynchronous behaviors easily and simply requires frameworks and code that are often unfamiliar to most programmers, and hence it is hard to write the code while still retaining simplicity.

Scalaz Streams (labelled sstreams in this article) helps you manage complexity by providing a few fundamental abstractions. But I found the abstractions hard to use at first because I was not used to thinking in the model that sstreams uses.

sstreams casts the problem as a state machine. There are three states and a "driver" that iterates through the states. Each state carries enough information to move to the next state. Each state is a "one step process", so all states derive from the Process trait.

The level of abstraction is pretty high, which means that the framework should be applicable to a highly diverse set of issues. I have used Spring Integration. I found that framework hard to use as well because
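The three states roughly correspond to the Process constructors Emit, Await, and Halt. A self-contained toy sketch of that state-machine shape and a "driver" (simplified; the real scalaz-stream Process also abstracts over an effect type, so this is only an illustration of the iteration model):

```scala
// A toy version of the three states and a driver; not the real scalaz-stream API.
sealed trait Proc[+O]
case class Emit[O](head: Seq[O], tail: Proc[O]) extends Proc[O] // emit values, then continue
case class Await[O](resume: () => Proc[O]) extends Proc[O]      // request the next step
case object Halt extends Proc[Nothing]                          // terminal state

// The driver iterates through the states, collecting emitted values.
def run[O](p: Proc[O], acc: Vector[O] = Vector.empty[O]): Vector[O] = p match {
  case Emit(h, t) => run(t, acc ++ h)
  case Await(r)   => run(r(), acc)
  case Halt       => acc
}

val sample: Proc[Int] = Emit(Seq(1, 2), Await(() => Emit(Seq(3), Halt)))
// run(sample) == Vector(1, 2, 3)
```

Each constructor carries exactly what the driver needs to take one more step, which is the sense in which every state is a "one step process".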

@aappddeevv
aappddeevv / date dimension.md
Last active July 1, 2019 04:42
scala program to create a date time dimension CSV file for a data mart or data warehouse

From time to time you may need a day-level granularity date dimension. You can find wizards in some database packages, or PL/SQL commands to generate one, but if you just need something simple, try the below:

package datedim

import collection.JavaConverters._
import au.com.bytecode.opencsv.CSVReader
import java.io.FileReader
import scala.collection.mutable.ListBuffer
import au.com.bytecode.opencsv.CSVWriter
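The excerpt above uses opencsv; as a dependency-free sketch of the same idea, java.time alone can generate the day-level rows (the column layout here is illustrative, not the gist's actual schema):

```scala
import java.time.LocalDate
import java.time.format.DateTimeFormatter

// One CSV row per day: surrogate key, ISO date, year, month, day, day-of-week.
def dateDimRows(start: LocalDate, endExclusive: LocalDate): Seq[String] = {
  val fmt = DateTimeFormatter.ISO_LOCAL_DATE
  Iterator.iterate(start)(_.plusDays(1))
    .takeWhile(_.isBefore(endExclusive))
    .zipWithIndex
    .map { case (d, i) =>
      Seq(i + 1, d.format(fmt), d.getYear, d.getMonthValue,
          d.getDayOfMonth, d.getDayOfWeek).mkString(",")
    }
    .toSeq
}

val rows = dateDimRows(LocalDate.of(2016, 1, 1), LocalDate.of(2016, 1, 4))
// rows.head == "1,2016-01-01,2016,1,1,FRIDAY"
```

Writing the strings out with CSVWriter (as in the gist) or plain java.io is then a one-liner per row.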
@aappddeevv
aappddeevv / convert scala map to tuple or case class.md
Last active March 2, 2023 05:07
You often need to flatten a map into a tuple or case class object. This describes how to do it generically. When using a nosql database, for example, you often need to convert maps to tuple/case classes. That's because nosql databases have implied schemas and we have to help the statically typed code by un-implying the schema.

#The Problem: The Need to Flatten a Map

When dealing with nosql databases or even traditional RDBMS, you often need to flatten a map to a tuple or create a case class. The question is, how do you do this?

While a map can be flattened to a list of (key, value) pairs fairly easily, it does not create a tuple of values.

scala> m
res39: scala.collection.immutable.Map[String,Any] = Map(blah -> 10, hah -> nah)

scala> m.flatMap { case (k,v)=>k::v::Nil}
res42: scala.collection.immutable.Iterable[Any] = List(blah, 10, hah, nah)
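Getting a typed tuple (or case class) back out of the map means recovering the value types, not just the values. A minimal hand-rolled sketch for two known keys (a fully generic version would need something like shapeless; the key names here match the REPL session above):

```scala
val m: Map[String, Any] = Map("blah" -> 10, "hah" -> "nah")

// Pull known keys out with their expected types; None if a key is
// missing or its value has the wrong runtime type.
def asTuple(m: Map[String, Any]): Option[(Int, String)] =
  for {
    i <- m.get("blah").collect { case n: Int => n }
    s <- m.get("hah").collect { case v: String => v }
  } yield (i, s)

case class Blah(blah: Int, hah: String)
val maybeBlah = asTuple(m).map(Blah.tupled)
// maybeBlah == Some(Blah(10, "nah"))
```

The Option result is the price of un-implying the schema: the map cannot guarantee the types, so the conversion has to be allowed to fail.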

I forgot to add a full version of the NotesService that was discussed in a previous blog.

Let's revisit the NotesService interface:

/**
 * Primarily for larger notes/docs storage.
 * The API allows the note content to be retrieved
 * separately. Sometimes, we want to separate
 * out the larger, less changing content from
@aappddeevv
aappddeevv / slick sub-select with max value.md
Last active October 12, 2018 06:57
scala, slick, sub-select select, max column

I am learning slick. A bit of a tough road initially. The blogs and other posts out there really help. This tutorial was especially helpful.

My domain was storing note objects in a database and tracking revisions. Here's the table structure:

class Notes(tag: Tag) extends Table[(Int, Int, java.sql.Timestamp, Boolean, Option[String])](tag, "Notes") {
    // some dbs cannot return a compound primary key, so use a standard int
    def id = column[Int]("id", O.AutoInc, O.PrimaryKey)
    def docId = column[Int]("docId")
    def createdOn = column[java.sql.Timestamp]("createdOn")
    def content = column[Option[String]]("content")
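Before writing the Slick query, it can help to state the target result on plain collections: for each docId, keep the row with the maximum createdOn, which is exactly what the SQL sub-select with MAX produces. A sketch with a stand-in row class (field names assumed from the table above; a Long stands in for the timestamp):

```scala
// Stand-in for one row of the Notes table.
case class NoteRow(id: Int, docId: Int, createdOn: Long,
                   latest: Boolean, content: Option[String])

// For each docId, keep only the row with the maximum createdOn --
// the collection-level equivalent of the sub-select with MAX(createdOn).
def latestPerDoc(rows: Seq[NoteRow]): Map[Int, NoteRow] =
  rows.groupBy(_.docId).map { case (docId, rs) => docId -> rs.maxBy(_.createdOn) }

val rows = Seq(
  NoteRow(1, 100, 10L, latest = false, Some("v1")),
  NoteRow(2, 100, 20L, latest = true,  Some("v2")),
  NoteRow(3, 200, 15L, latest = true,  Some("a"))
)
// latestPerDoc(rows)(100).id == 2
```

The Slick version is the same shape: group by docId, aggregate createdOn with max, then join back to the table to recover the full row.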
@aappddeevv
aappddeevv / composing service layers in scala.md
Last active March 15, 2024 02:20
scala, cake pattern, service, DAO, Reader, scalaz, spring, dependency injection (DI)

#The Problem

We just described standard design issues you have when you start creating layers of services, DAOs and other components to implement an application. That blog/gist is here.

The goal is to think through some designs in order to develop something useful for an application.

#Working through Layers

If you compose services and DAOs the normal way, you typically get imperative style objects. For example, imagine the following:

  object DomainObjects {
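The imperative style being referred to looks roughly like this (all names hypothetical): a service holding a concrete DAO reference handed in at construction.

```scala
case class User(id: Int, name: String)

trait UserDao { def find(id: Int): Option[User] }

// A concrete DAO; in a real app this would wrap a database.
class InMemoryUserDao(data: Map[Int, User]) extends UserDao {
  def find(id: Int): Option[User] = data.get(id)
}

// The service depends on a DAO instance passed at construction --
// simple, but the wiring is imperative and fixed at object creation.
class UserService(dao: UserDao) {
  def displayName(id: Int): String =
    dao.find(id).map(_.name).getOrElse("unknown")
}

val service = new UserService(new InMemoryUserDao(Map(1 -> User(1, "ada"))))
// service.displayName(1) == "ada"
```

The designs discussed in the rest of this gist (Reader, cake, etc.) are alternatives to threading such dependencies through constructors by hand.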
@aappddeevv
aappddeevv / multiple cake-patterns.md
Last active January 27, 2024 16:01
scala, cake patterns, path-dependent types and composition (and a little bit of slick)

Scala and Cake Patterns and the Problem

Standard design patterns in scala recommend the cake pattern to help compose larger programs from smaller ones. Generally, for simple cake layers, this works okay. Boner's article suggests using it to compose repository and service layers, and his focus is on DI-type composition. As you abstract more of your IO layers, however, you realize that the cake pattern as described does not abstract easily and usage becomes challenging. As the dependencies mount, you create mixin traits that express those dependencies, and perhaps they use self-types to ensure they are mixed in correctly.

Then at the end of the world, you have to mix in many different traits to get all the components. In addition, perhaps you have used existential types, and now you must have a val/object somewhere (i.e. a well defined path) in order to import the types within the service so you can write your program. Existential
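A minimal sketch of the pattern as described, with hypothetical component names: self-types express the dependencies, and one object at the end of the world mixes everything in.

```scala
// A component trait bundles an abstract instance with its interface.
trait RepositoryComponent {
  def repository: Repository
  trait Repository { def load(id: Int): String }
}

// Self-type: this layer can only be mixed in alongside a RepositoryComponent.
trait ServiceComponent { self: RepositoryComponent =>
  def service: Service
  trait Service {
    def describe(id: Int): String = "item: " + repository.load(id)
  }
}

// "End of the world": one object mixes in every layer and wires the instances.
object App extends ServiceComponent with RepositoryComponent {
  object repository extends Repository { def load(id: Int) = s"#$id" }
  object service extends Service
}
// App.service.describe(7) == "item: #7"
```

With two layers this is tidy; the complaint in the text is what happens when App must mix in a dozen such traits and the inner types must be imported via a stable path like App.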

#scala, Map[String, Any] and scalaz Validation

#The Problem

I seem to encounter a lot of Map[String, Any] in my programming, probably because I use graph databases a lot, and they store key-value pairs in the nodes and relationships (think neo4j).

Because of this, I encounter a lot of map-like processing. Being able to fluently handle these map structures during data import processing, or just general processing, is very important.
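One small helper that makes Map[String, Any] more fluent is a typed getter. A sketch (the name getAs is my own; this version only handles AnyRef values, and primitives would need boxing-aware checks):

```scala
import scala.reflect.ClassTag

// Typed access into a Map[String, Any]: None if the key is missing
// or the value is not of the requested runtime class.
implicit class TypedMapOps(m: Map[String, Any]) {
  def getAs[T <: AnyRef](key: String)(implicit ct: ClassTag[T]): Option[T] =
    m.get(key) match {
      case Some(v: AnyRef) if ct.runtimeClass.isInstance(v) => Some(v.asInstanceOf[T])
      case _ => None
    }
}

val node: Map[String, Any] = Map("name" -> "neo", "age" -> 42)
// node.getAs[String]("name") == Some("neo")
// node.getAs[String]("age")  == None  (wrong type)
```

Returning an Option rather than throwing keeps the import/query pipeline robust when a node is missing a value or carries an unexpected type.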

The classic problem I ran into a lot was how to use the Map object more fluently and easily in my data import or query-like processing. I usually have a UI with my application, and the UI needs to be able to handle almost any data structure, so it is usually set up to be fairly robust to not knowing the exact types of values in the map, or to derive those from the data itself. However, for data import processing as well as querying, I typically do need to know and count on a few well known types for values that are guaranteed to be in my objects, like a name (a String) or some other proper