Tagless Final interpreters are an alternative to the traditional Algebraic Data Type (and generalized ADT) based implementation of the interpreter pattern. This document presents the Tagless Final approach in Scala, and shows how Dotty, with its recently added implicit function types, makes the approach even more appealing. All examples are direct translations of their Haskell versions presented in the Typed Tagless Final Interpreters: Lecture Notes (section 2).
The interpreter pattern has recently received a lot of attention in the Scala community. A lot of effort has been invested in trying to address the biggest shortcoming of ADT/GADT based solutions: extensibility. One can first look at cats' Inject
typeclass for an implementation of [Data Type à la Carte](http://www.cs.ru.nl/~W.Swierstra/Publications/DataTypesA
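As a taste of the encoding, here is a minimal sketch of the tagless final style from section 2 of the lecture notes, translated to Scala. The names `ExpSym` and `tf1` follow the notes; the two interpreter instances below are illustrative:

```scala
// The "syntax" of the language is a trait parameterized by the
// representation type, instead of an ADT.
trait ExpSym[Repr] {
  def lit(n: Int): Repr
  def neg(e: Repr): Repr
  def add(l: Repr, r: Repr): Repr
}

// One interpreter evaluates expressions to Int...
implicit val evalSym: ExpSym[Int] = new ExpSym[Int] {
  def lit(n: Int): Int = n
  def neg(e: Int): Int = -e
  def add(l: Int, r: Int): Int = l + r
}

// ...another pretty-prints them, with no ADT and no pattern matching.
implicit val viewSym: ExpSym[String] = new ExpSym[String] {
  def lit(n: Int): String = n.toString
  def neg(e: String): String = s"(-$e)"
  def add(l: String, r: String): String = s"($l + $r)"
}

// A term is a polymorphic value: pick the interpreter by picking Repr.
def tf1[Repr](implicit s: ExpSym[Repr]): Repr = {
  import s._
  add(lit(8), neg(add(lit(1), lit(2))))
}
```

Running `tf1[Int]` yields `5` and `tf1[String]` yields `"(8 + (-(1 + 2)))"`. Adding a new interpreter is just a new instance, and adding a new operation is a new trait mixed in where needed — the extensibility that the ADT encoding lacks.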
Step by step guide to getting FsYacc and FsLex for development on OS X.
- 2017-01-09: Updated guide to reflect latest version of FsLexYacc (7.0.3)
- 2016: Created
By doing one of these:
- Recommended: via mono-project.com
CREATE OR REPLACE FUNCTION citus_shard_name(table_name regclass, shard_id bigint)
RETURNS text
LANGUAGE sql
AS $function$
SELECT table_name||'_'||shard_id;
$function$;

CREATE OR REPLACE FUNCTION citus_shard_name(shard_id bigint)
RETURNS text
LANGUAGE sql
# How to build and install the lp_solve Java extension on Mac OS X:
# Download and expand lp_solve_5.5_source.tar.gz into a directory named 'lp_solve_5.5'.
# Download and expand lp_solve_5.5_java.zip into a directory named 'lp_solve_5.5_java'.

# 1) Build the lp_solve library.
$ cd lp_solve_5.5/lpsolve55
$ sh ccc.osx
use std::rc::Rc;

trait HKT<U> {
    type C; // Current type
    type T; // Type with C swapped with U
}

macro_rules! derive_hkt {
    ($t:ident) => {
        impl<T, U> HKT<U> for $t<T> {
            type C = T;
            type T = $t<U>;
        }
    };
}
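Assuming the macro body fills in the two associated types in the obvious way suggested by the comments (`type C = T; type T = $t<U>;`), a self-contained sketch of how the trait is meant to be used — the container names applied below (`Vec`, `Rc`) are illustrative:

```rust
use std::rc::Rc;

trait HKT<U> {
    type C; // current element type
    type T; // the same container with C swapped for U
}

macro_rules! derive_hkt {
    ($t:ident) => {
        impl<T, U> HKT<U> for $t<T> {
            type C = T;
            type T = $t<U>;
        }
    };
}

derive_hkt!(Vec);
derive_hkt!(Rc);

fn main() {
    // Vec<i32> with i32 swapped for String is Vec<String>.
    let v: <Vec<i32> as HKT<String>>::T = vec!["one".to_string()];
    // Rc<i32> with i32 swapped for f64 is Rc<f64>.
    let r: <Rc<i32> as HKT<f64>>::T = Rc::new(1.5);
    assert_eq!(v.len(), 1);
    assert_eq!(*r, 1.5);
}
```

The `HKT<U>` trait simulates a type-level function: given a container applied to one type, it names the same container applied to another, which Rust's type system cannot express directly.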
//==================================================================
// SPARK INSTRUMENTATION
//==================================================================
import com.codahale.metrics.{MetricRegistry, Meter, Gauge}
import org.apache.spark.{SparkEnv, Accumulator}
import org.apache.spark.metrics.source.Source
import org.joda.time.DateTime
import scala.collection.mutable
Producer

Setup

bin/kafka-topics.sh --zookeeper esv4-hcl197.grid.linkedin.com:2181 --create --topic test-rep-one --partitions 6 --replication-factor 1
bin/kafka-topics.sh --zookeeper esv4-hcl197.grid.linkedin.com:2181 --create --topic test --partitions 6 --replication-factor 3

Single thread, no replication

bin/kafka-run-class.sh org.apache.kafka.clients.tools.ProducerPerformance test7 50000000 100 -1 acks=1 bootstrap.servers=esv4-hcl198.grid.linkedin.com:9092 buffer.memory=67108864 batch.size=8196
This manual describes a complete procedure for installing and running Cabot on OS X for development. It was tested on OS X 10.9.1 with the Cabot version from 2014-01-23.
We’re using Homebrew to install the required dependencies on OS X. If you don’t have Homebrew yet (how is that possible? ;), see http://brew.sh/ for an installation script. Note: MacPorts can probably be used too, but we didn’t test it.
Although you can use the Python and Ruby that come with OS X, it’s better to use Python installed via Homebrew and use rbenv to manage Rubies. Then you don’t need to use sudo
and mess up your system. If you’re already using rvm instead of rbenv, stay with it and skip the rbenv installation steps.