This is a follow-up on a discussion I had with Rob Norris on Gitter. Gitter is great, but it is easier to make a coherent argument in a gist.
The monad laws allow us to assert that in the following the two results will describe computations that eventually yield the same value:
val fut1 = Future(<comp1>)
val fut2 = Future(<comp2>)
val result1 = for {
f1 <- fut1
f2 <- fut2
f3 = f1 + f2
} yield f3
val result2 = for {
f1 <- Future(<comp1>)
f2 <- Future(<comp2>)
f3 = f1 + f2
} yield f3
The discussion was triggered by Rob’s statement that “assuming your computations are pure the answer will be the same.” This statement in turn makes some assumptions that are contentious, as will be shown in the rest of this article.
There is a domain in which equational reasoning holds for the expressions given above. A simple proof can be obtained by supplying a sameThreadExecutionContext as the scheduler for running the shown future computations and combinators:
implicit val sameThreadExecutionContext: ExecutionContext = new ExecutionContext {
  def execute(r: Runnable): Unit = r.run()
  def reportFailure(t: Throwable): Unit = t.printStackTrace()
}
But we must also define a way to obtain the computed value. A possibility is
println(result1.value)
println(result2.value)
This will in both cases yield the same result, something like Some(Success(<theResult>)). The careful observer will also conduct experiments that show that the time to execute the computation is the same in either case.
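To make the experiment reproducible, here is a minimal self-contained sketch of the same-thread setup, using the literal computations 1 and 2 as stand-ins for the placeholders:

```scala
import scala.concurrent.{ExecutionContext, Future}

// Everything runs on the calling thread, so each Future is already
// complete by the time we inspect its value.
implicit val sameThreadExecutionContext: ExecutionContext = new ExecutionContext {
  def execute(r: Runnable): Unit = r.run()
  def reportFailure(t: Throwable): Unit = t.printStackTrace()
}

val fut1 = Future(1)
val fut2 = Future(2)

val result1 = for { f1 <- fut1; f2 <- fut2; f3 = f1 + f2 } yield f3
val result2 = for { f1 <- Future(1); f2 <- Future(2); f3 = f1 + f2 } yield f3

println(result1.value) // Some(Success(3))
println(result2.value) // Some(Success(3))
```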
Running computations on the current thread is expressly not what futures are intended for; their purpose is to run computations elsewhere, be that algorithms that we supply or remote services that we call upon. If we use the standard global ExecutionContext and the same method for obtaining the computed value, we will be disappointed:
scala> val result1 = for { f1 <- fut1; f2 <- fut2; f3 = f1 + f2 } yield f3; println(result1.value)
None
result1: scala.concurrent.Future[Int] = scala.concurrent.impl.Promise$DefaultPromise@7f284218
The reason is that the passing of time is integral to the semantics of the Future type. It matters very much when exactly we examine the result of a computation because in the lifecycle of a Future there is one precise point at which it changes from holding no value to holding its final value.
We can inspect the state of a future after a given time by using the Await.result() library function:
println(Await.result(result1, 1.second))
If the computations run fast enough then we will see the computed value being printed, otherwise we get a TimeoutException. This API reflects the Future type’s semantics: once the final value is known, no further waiting is needed because the result cannot possibly change, i.e. for quick calculations the expression above will take less than 1 second to compute.
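As a small illustration of that property, the following sketch measures how long the Await call itself takes for a quick computation (the 1000 ms bound is just the timeout budget from the example above):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global

// A computation that completes almost immediately.
val quick = Future(21 + 21)

val start = System.nanoTime()
val value = Await.result(quick, 1.second) // returns as soon as the value is known
val elapsedMillis = (System.nanoTime() - start) / 1000000

println(value)                // 42
println(elapsedMillis < 1000) // Await did not use up the whole 1-second budget
```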
Equational reasoning means that if two expressions compute the same value then we can substitute one for the other without affecting the outcome of the program. This is true if either time does not matter, or if time is properly accounted for when saying “the same value”, which basically becomes “if two expressions compute the same value in the same amount of time”.
Going back to the original example and using { Thread.sleep(1400); 1 } as the computation for both placeholders, it is obvious that equational reasoning does not hold for the following program:
println(Await.result(resultX, 2.seconds))
If we use result1 then we get the result 2, because fut1 and fut2 were started eagerly and their sleeps overlap (roughly 1.4 seconds in total). If we use result2 then we get a TimeoutException, because the second Future is not even created until the first one has completed, so the whole computation needs roughly 2.8 seconds.
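For completeness, the experiment can be written out as a runnable sketch (assuming the standard global ExecutionContext; timings are approximate):

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.Try

def comp(): Int = { Thread.sleep(1400); 1 }

// fut1 and fut2 start running as soon as they are defined, so their
// sleeps overlap and result1 completes after roughly 1.4 seconds.
val fut1 = Future(comp())
val fut2 = Future(comp())
val result1 = for { f1 <- fut1; f2 <- fut2; f3 = f1 + f2 } yield f3

// Here the second Future is not created until the first has completed,
// so result2 needs roughly 2.8 seconds in total.
val result2 = for { f1 <- Future(comp()); f2 <- Future(comp()); f3 = f1 + f2 } yield f3

println(Try(Await.result(result2, 2.seconds))) // fails with a TimeoutException
println(Try(Await.result(result1, 2.seconds))) // Success(2)
```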
Futures are used predominantly to model computations that run elsewhere. Because that implies the possibility of partial failure, most uses of Futures therefore include SLAs and timeouts at some level—those cases where that is not true are most likely due to oversight or bugs. The question of how long a computation takes is therefore more relevant for this monad than it is for others.
While the above is the shortest demonstration of the point under discussion, I would like to point out that SLAs and timeouts are usually not expressed using Await.result() but by combining the Future with a timeout using firstCompletedOf or by using a CircuitBreaker implementation; these are much more efficient and allow timeouts to be expressed while staying within the Future monad and without blocking the current thread.
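The non-blocking style can be sketched as follows. Only Future.firstCompletedOf is standard library API; afterTimeout and withTimeout are hypothetical helpers introduced for this illustration, here backed by a single ScheduledExecutorService thread:

```scala
import java.util.concurrent.{Executors, TimeUnit, TimeoutException}
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.Try

// One scheduler thread drives all timeouts; no caller thread is blocked.
val scheduler = Executors.newSingleThreadScheduledExecutor()

// Hypothetical helper: a Future that fails after the given duration.
def afterTimeout[T](d: FiniteDuration): Future[T] = {
  val p = Promise[T]()
  scheduler.schedule(new Runnable {
    def run(): Unit = p.tryFailure(new TimeoutException(s"timed out after $d"))
  }, d.toMillis, TimeUnit.MILLISECONDS)
  p.future
}

// The SLA stays within the Future monad: whichever future completes first wins.
def withTimeout[T](f: Future[T], d: FiniteDuration): Future[T] =
  Future.firstCompletedOf(Seq(f, afterTimeout[T](d)))

val slow = Future { Thread.sleep(1400); 1 }
val guarded = withTimeout(slow, 500.millis)

// Await is used here only to observe the outcome at the end of this script.
println(Try(Await.result(guarded, 2.seconds))) // fails with the TimeoutException
scheduler.shutdown()
```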
Dude, no, this is way off. All of this can be debunked. I won't rob you of the opportunity to do so again, by explaining why (again).
Look up extensional equality if not. I'm sure we've been here before. Start here. But don't stop there. There is more to it. It's just the starting point to learn why all of this is bunk.
I'm assuming that Rob has explained all or part of this to you?