- xml and parser-combinator were split out into separate jars http://d.hatena.ne.jp/xuwei/20130726/1374809559
- the arity-22 limit on case classes and methods was removed scala/scala#2305
- the restriction that a value class's val must be public was removed scala/scala#2965 scala/scala#3113
- Java 8-style Single Abstract Method syntax sugar is supported (though still experimental) scala/scala#3037
- the return type of unapply used in pattern matching was relaxed: instead of Option, any type that has isEmpty: Boolean and get: A is now accepted - http://d.hatena.ne.jp/xuwei/20131005/1380887673 scala/scala#2848
- flatMap was added to scala.util.control.TailCalls.TailRec scala/scala#2865
- view bounds were deprecated scala/scala#2909
- reflection was made thread-safe (though a bug has reportedly been found since) scala/scala#3029
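The relaxed unapply contract can be sketched like this (the `NameResult`/`NonEmptyName` names are mine, not from the linked post): in 2.11 an extractor result only needs `isEmpty: Boolean` and `get: A`, not `Option`.

```scala
// Sketch of the relaxed (name-based) extractor contract in Scala 2.11.
// NameResult and NonEmptyName are illustrative names, not from the post.
class NameResult(val get: String) {
  def isEmpty: Boolean = get == null || get.isEmpty
}

object NonEmptyName {
  // Returns a plain class, not an Option -- isEmpty/get are enough.
  def unapply(s: String): NameResult = new NameResult(s)
}

object ExtractorDemo {
  def describe(s: String): String = s match {
    case NonEmptyName(n) => s"name: $n"
    case _               => "empty"
  }
}
```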
case class Hoge(x: Int) {
  lazy val i = x + 1
  lazy val foo = (1 to 10).par.map(_*i)
}

Hoge(1).foo
/**
 * In a static language like Scala, how could we repeatedly flatten a data structure without reflection?
 * This is an interesting example of using implicit parameters to do the work for you.
 */
object DeepFlatten {
  // what should this really be called? ;)
  trait Flattenable[F[_]] {
    def flatten[A](f: F[F[A]]): F[A]
  }
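To sketch where the type class is headed (the `List` instance and the driver below are my guess at the shape, not the gist's actual continuation): provide a `Flattenable` instance per container and let the compiler pick it via an implicit parameter.

```scala
// Sketch of how Flattenable might be used; the List instance and the
// flattenOnce driver are assumptions, not the gist's own continuation.
object FlattenSketch {
  trait Flattenable[F[_]] {
    def flatten[A](f: F[F[A]]): F[A]
  }

  implicit val listFlattenable: Flattenable[List] = new Flattenable[List] {
    def flatten[A](f: List[List[A]]): List[A] = f.flatten
  }

  // Flattens one level whenever the compiler can supply an instance.
  def flattenOnce[F[_], A](f: F[F[A]])(implicit ev: Flattenable[F]): F[A] =
    ev.flatten(f)
}
```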
/**
 * A Java implementation of b-bit Minwise hashing.
 * <p>
 * Reference: <a href="http://research.microsoft.com/pubs/120078/wfc0398-lips.pdf">b-Bit Minwise Hashing</a>
 * </p>
 *
 * @author KOMIYA Atsushi
 */
public class MinHash {
    private final int numBits;
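The idea behind the class can be sketched in a few lines of Scala (separate from the Java implementation; the toy hash function and names are mine): for each seed, keep the minimum hash over a set's elements, and the fraction of positions where two signatures agree estimates the sets' Jaccard similarity.

```scala
// Toy Scala sketch of the minwise-hashing idea (not the Java class above).
object MinHashIdea {
  // One signature entry per seed: the minimum hash of the set's elements.
  def signature(set: Set[Int], seeds: Seq[Int]): Seq[Int] =
    seeds.map { seed =>
      set.map(x => ((x + seed) * 0x9e3779b1) ^ ((x + seed) >>> 15)).min // toy hash
    }

  // Fraction of agreeing positions estimates the Jaccard similarity.
  def estimateJaccard(a: Set[Int], b: Set[Int], seeds: Seq[Int]): Double = {
    val (sa, sb) = (signature(a, seeds), signature(b, seeds))
    sa.zip(sb).count { case (x, y) => x == y }.toDouble / seeds.size
  }
}
```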
As compiled by Kevin Wright a.k.a @thecoda
(executive producer of the movie, and I didn't even know it... clever huh?)
please, please, please - If you know of any slides/code/whatever not on here, then ping me on Twitter or comment on this Gist!
This gist will be updated as and when I find new information. So it's probably best not to fork it, or you'll miss the updates!
Monday June 16th
package thunder.streaming

import org.apache.spark.{SparkConf, Logging}
import org.apache.spark.rdd.RDD
import org.apache.spark.SparkContext._
import org.apache.spark.streaming._
import org.apache.spark.streaming.dstream.DStream
import org.apache.spark.mllib.clustering.KMeansModel
import scala.util.Random.nextDouble
/**
 * Part Zero : 10:15 Saturday Night
 *
 * (In which we will see how to let the type system help you handle failure)...
 *
 * First let's define a domain. (All the following requires Scala 2.9.x and scalaz 6.0)
 */
import scalaz._
import Scalaz._
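Before reaching for scalaz, the core idea can be sketched with only the standard library (this `Either` sketch is mine, not part of the gist): a failure becomes an ordinary value in the return type, so callers are forced by the compiler to handle it.

```scala
// Stdlib-only sketch of type-driven failure handling (not the gist's
// scalaz code): errors are ordinary values carried in the return type.
object FailureSketch {
  def parseAge(s: String): Either[String, Int] =
    try {
      val n = s.trim.toInt
      if (n >= 0) Right(n) else Left(s"negative age: $n")
    } catch {
      case _: NumberFormatException => Left(s"not a number: $s")
    }
}
```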
object jto {
  // Natural numbers (extracted from shapeless)
  type _1 = Succ[_0]
  type _2 = Succ[_1]
  type _3 = Succ[_2]
  type _4 = Succ[_3]
  type _5 = Succ[_4]
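A self-contained sketch of the encoding (here `Nat`, `_0`, `Succ`, and the `ToInt` type class are written out following shapeless' style; the exact names are assumptions): each number is a distinct type, and an implicit chain recovers the runtime value.

```scala
// Self-contained sketch of type-level naturals in shapeless' style;
// Nat, _0 and Succ are spelled out here so the example compiles alone.
object NatSketch {
  trait Nat
  class _0 extends Nat
  class Succ[N <: Nat] extends Nat

  type _1 = Succ[_0]
  type _2 = Succ[_1]
  type _3 = Succ[_2]

  // Recover the runtime Int from a type-level natural via implicits.
  trait ToInt[N <: Nat] { def value: Int }
  implicit val zeroToInt: ToInt[_0] = new ToInt[_0] { def value = 0 }
  implicit def succToInt[N <: Nat](implicit prev: ToInt[N]): ToInt[Succ[N]] =
    new ToInt[Succ[N]] { def value = prev.value + 1 }

  def toInt[N <: Nat](implicit t: ToInt[N]): Int = t.value
}
```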
I've had many people ask me questions about OpenTracing, often in relation to OpenZipkin. I've seen assertions about how it is vendor neutral and is the lock-in cure. This post is not a sanctioned, polished or otherwise muted view, rather what I personally think about what it is and is not, and what it helps and does not help with. Scroll to the very end if this is too long. Feel free to add a comment if I made any factual mistakes or you just want to add a comment.
OpenTracing is documentation and library interfaces for distributed tracing instrumentation. To be "OpenTracing" requires bundling its interfaces in your work, so that others can use it to time distributed operations with the same library.
OpenTracing interfaces are targeted at authors of instrumentation libraries, and those who want to collaborate with traces created by them. For example, something started a trace somewhere and I add a notable event to that trace. Structured logging was recently added to O
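The shape of such an instrumentation interface can be sketched like this (a toy, NOT the actual OpenTracing API): libraries code against small Tracer/Span traits, and any backend supplies the implementation.

```scala
// Toy sketch of a tracer-style instrumentation interface; this is NOT
// the actual OpenTracing API, just the shape of the idea: callers only
// see the traits, vendors plug in the implementation.
object TracingShape {
  trait Span {
    def log(event: String): Unit
    def finish(): Unit
  }
  trait Tracer {
    def startSpan(operation: String): Span
  }

  // A trivial in-memory backend standing in for a vendor implementation.
  class RecordingTracer extends Tracer {
    val finished = scala.collection.mutable.Buffer.empty[(String, List[String])]
    def startSpan(operation: String): Span = new Span {
      private var events = List.empty[String]
      def log(event: String): Unit = events = event :: events
      def finish(): Unit = finished += ((operation, events.reverse))
    }
  }
}
```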