There has been some intermingling of the Scala and Haskell communities, and I have noticed, now and then, people commenting on things that are supposed to be easy in Haskell and hard in Scala. Less often (maybe because I read Scala questions, not Haskell ones), I see someone mention that something in Scala is easier than in Haskell.

So. I’d like to know from people who are knowledgeable in both what kind of things are easy in Haskell and hard in Scala, and, conversely, what kind of things are easy in Scala and hard in Haskell.

*Tony Morris responds 9 hours later:*

Daniel, as you know, my day job is primarily writing Haskell and secondarily Scala. I have also used both languages for teaching, though not in universities (I use other languages there), mostly in the voluntary teaching that I still do today. Very rarely, I use Java or JavaScript. I work for a product company.

I am pleased to see that my prediction is false – your question has not provoked a slew of misinformation as I thought it would. As a result, I am compelled not to ignore it :) So here goes.

At a somewhat superficial level, Haskell has significantly superior tool support over Scala and Java. As a non-exhaustive example, Haskell has hoogle, djinn and pl: three tools that alone are extremely useful and for which there is no Scala equivalent. These tools exist, and are as useful as they are, because of certain fundamental properties of Haskell itself. Here I mean that hoogle is only as useful as it is because Haskell tags IO effects in the type, delineating values of type IO t from those of type t, so hoogling for, say, [a] -> Int eliminates many candidate functions that would have this type in other environments. In Scala, without the delineation between an Int that has been computed with its arguments and an Int that has been computed with the entire universe, a hoogle-equivalent would not be as useful – nevertheless, it would be somewhat useful were it to exist.

Haskell’s hlint is also superior to say, Java’s CheckStyle. Included in GHC is a warning system, which when coupled with hlint is far more comprehensive. I’ve not seen anything like this for Scala.

Haskell has cleaner syntax and superior type inference. Very rarely is it the case that we must type-annotate our Haskell. This is not to say that we do not type-annotate our Haskell, just that we have the choice. As you know, this is not so with Scala. However, on an extremely superficial level, I prefer Scala’s lambda syntax to Haskell’s, which requires a pseudo-lambda symbol (\). In any case, I aim for point-free style where appropriate, making this already-weak point moot. I use Haskell’s clean syntax to appeal to children in the challenges of teaching. Children take very easily to Haskell’s syntax, since there is far less redundant, “off to the side”, “let’s take a little excursion to nowhere” ceremony (so to speak). As you might guess, children are easily distracted – Haskell helps me as a teacher to keep this in check.

On to the fundamentals. As you know, Haskell is call-by-need by default, where Scala is the other way around. This is, of course, a very contentious issue. Nevertheless, I admit to falling very strongly to one side: call-by-need by default is the most practical, and Scala’s evaluation model is a very far-away second place. I have never seen a legitimate argument that comes close to swaying my position on this, though I do invite it (not here please). To summarise, I think Haskell has a superior evaluation model. There are also nuances in Scala’s laziness. For example, while Scala unifies lazy values, it does not do so for those in contravariant position, i.e. an (Int => Int) is not a ((=> Int) => Int).
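The contravariant-position point can be seen directly. Here is a minimal sketch (my own example, not from the original) contrasting a strict parameter with a by-name one:

```scala
object LazinessDemo {
  // Strict: the argument is evaluated before the body runs, even if unused.
  def strictIgnore(x: Int): Int = 0

  // By-name: the argument is evaluated only if the body demands it; here, never.
  def byNameIgnore(x: => Int): Int = 0

  // An effectful argument that records how many times it was evaluated.
  def forced(counter: Array[Int]): Int = { counter(0) += 1; 42 }

  // Note: a value of type (=> Int) => Int cannot be used where Int => Int
  // is expected (or vice versa); the two function types are not unified.
}
```

Calling `byNameIgnore(forced(c))` leaves the counter untouched; calling `strictIgnore(forced(c))` increments it, because the strict call forces its argument on entry.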

This contentious issue aside, Haskell’s effect-tracking (which, by the way, is a consequence of its evaluation model) is overwhelmingly practical for every-day software tasks. The absence of the same or similar is quite devastating, as many users of Scala have become aware (by wishing for a solution). I cannot emphasise how important this form of static type-check is to typical business programming in terse form, so I won’t try here.

Haskell has far superior library support to Scala and Java. You and I have discussed this before. Scala lacks some of the most fundamental functions in its standard libraries, and the higher-level ones are even scarcer. Granted, there are fundamental properties of Scala that make good library support more difficult (strict evaluation, Java interop); however, neither of these incurs such a penalty as to produce what can only be described as the unfortunate catastrophe that is the Scala standard libraries. That is to say, the Scala standard libraries could easily be miles ahead of where they are today, but they are not, and you (I) are left pondering why – something about third lumbar vertebrae and expert levels or something, I suppose.

To help emphasise this point: there are times in my day job when I come off a project using Haskell and onto one using Scala. This comes with consequences that I feel immediately, with a very sharp contrast. I then use IntelliJ IDEA, which is slow, cumbersome and full of bugs; the Scala type inferencer is less robust and difficult to prototype with (I may make a type error and it all just carries on as if nothing happened); but there is nothing more disappointing than having to spend (waste) a few hours implementing a library that really should already be there – what a waste of my time, and probably of the next person’s, who has to implement the same fundamental library abilities. Talk about shaving yaks. In my opinion, this is the most disappointing part of Scala, especially since there is nothing stopping it from having a useful library besides skill and the absence of the ability to recognise what a useful library even is. This happens with Haskell too, but to a far lesser extent. I digress in frustration.

The GHC REPL (GHCi) has better support for every-day programming. Its more useful tab-completion and its :type, :info and :browse commands are heavily missed when in Scala. It is also much faster, though Scala can be forgiven, given the JVM.

Why use Scala? I can call the Java APIs, even the most ridiculous, yet popular, APIs ever written. I can call WebSphere methods and I can write servlets. I can write a Wicket application, or use the Play framework, or I can do something outrageous with a database and Hibernate. I can do all those things that our industry seems to think are useful, though I secretly contend are pathetic, and I hope our children do too. I can completely avoid the arguments and discussions associated with the merits of these issues, and get on with the job, while still getting the benefits of a language that is vastly superior to Java. This might seem like a cheap stab, though I am sincere when I say that this is a true benefit that I value a lot.

Scala’s module system is superior to Haskell’s, almost. That is, it has some things that are very useful that Haskell does not, but also vice versa – for example, Haskell allows a module to export other modules. Scala requires you to stick your finger in your left ear to do this; oh, and IntelliJ IDEA stops working – I exaggerate, but you get the point. Scala has package objects and first-class module niceties. Haskell lacks here.

Scala also has the ability to namespace a function by giving special status to one of its arguments (some people call this OO, then don’t, in the very next breath – I never get it). What I mean is, you may have two functions with the same name, which are disambiguated at the call site by the type of the argument to the left of the dot. I am deliberately not calling this by any special name, but rather focussing on its utility – Haskell can do this with qualified imports – not quite so nice. I am usually, though not always, particularly unimaginative when it comes to inventing function names – allowing me to reuse one without a penalty is very handy indeed. Note here I do not mean overloading – I think the Scala community has worked out that overloading is just not worth it – do not do it, ever.
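A small sketch of the point (my own example): two methods sharing the name `area`, disambiguated at the call site by the type of the value to the left of the dot.

```scala
// Two methods named `area`; the receiver's type selects which one runs.
case class Circle(r: Double) {
  def area: Double = math.Pi * r * r
}

case class Square(side: Double) {
  def area: Double = side * side
}
```

`Circle(1.0).area` and `Square(2.0).area` resolve without any qualification, whereas Haskell would need qualified imports to reuse the same name in one scope.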

In my experience, Scala appeals to existing programmers, who can usually get up and running quicker than with Haskell. In contrast, non-programmers get up and running with Haskell far quicker than with Scala. As a teacher, I used to think this was a great attribute of Scala; then I tried it, and I thought it was merely good. Today, I think it is just not worth it – I have seen too many catastrophes when programmers who are familiar with degenerate problem-solving techniques (à la Java) are handed such things as Scala. Call me pessimistic or some-such, but I wish to remind you that a few years ago, I was handed an already-failed project written in Scala by the Java guys, which I was supposed to “save” because I was the “Scala expert.” I’m sure you can guess how that turned out. I have many (many) anecdotes like this, and most of them are confirmed each time I try to use Scala for teaching existing programmers, rather than in industry (perhaps this is my selection bias, given my extreme apprehension). Nevertheless, my experiences aside, you may call this a benefit over Haskell – there is no doubting that existing programmers get up and running much quicker with Scala.

I can think of a few other tid-bits, but hopefully this satisfies your curiosity. I don’t know how many people are in my position of using both languages extensively in a commercial environment, but I’d truly love to hear from someone who does – especially if they take strong objection to my opinions. That is to say, I invite (and truly yearn for) well-informed peer review of these opinions.

Hope this helps.

*Recovered from StackPrinter after deletion and much subsequent searching.*

`def reverse: List[Banana] => List[Banana]`

Without viewing the body of the code, one might infer that this code “reverses the list.” What does it mean to reverse a list? Let us try to rigorously define reversing a list:

- Reversing the empty list produces the empty list.
- Reversing a single-element list produces that same list.
- Taking two lists, appending them, then reversing produces the same list as reversing each, then appending the first list’s reverse to the second’s.

We infer all these things, even if informally, when we conclude that this function reverses the list. We might have some degree of confidence in concluding that all these properties hold purely from the function name; however, as part of this conclusion, we also conclude that the function does not peel the first `Banana`. We have no evidence of these facts, except for the function name, alleged to be useful for inferring some confidence about these facts.

In order for this method of comprehension to be efficacious, it must produce a result better than guessing. That is to say, the degree of confidence invoked by inferring that “this function reverses the list” from the premise “because the function is named `reverse`” must be higher than inferring that the function does not reverse the list from the same premise. In my experience, with which some will certainly disagree, this is not the case, rendering this comprehension method useless. That is to say, the identifier name raises confidence in the inference by nil, not even a bit.

Thankfully, this is unimportant. It is unimportant because there exist methods of code comprehension that are *significantly more effective*, so you can abandon the question of whether there is any efficacy in using identifier names to infer properties of code.

Suppose the following code:

`def reverse[A]: List[A] => List[A]`

At first glance, it might appear that we must take a leap of confidence in inferring that the function reverses the list. However, we can infer the following fact — *this function definitely does not peel the first Banana in the list*. We can infer this because, had the function attempted to do so, the code would not have compiled. By the same reasoning:

- The `reverse` function did not add `10` to every second list element.
- All elements in the list returned by `reverse` are contained in the input list.

We are able to infer these things simply by making the `reverse` function *parametric*. We are no longer reversing a list of bananas — although that might be our use-case — we are reversing a list of `A`, for all values of `A`. One virtue of this *parametricity* is that we can infer a significant number of things that *do not occur*. This theme of learning what does not occur is ubiquitous when deploying these programming techniques and is described in more detail by Wadler^{2}.

Here is another example, using a programming language called SafeHaskell (very similar to Haskell):

`add10 :: Int -> Int`

By the name of the function, we might unreliably infer (OK, let’s be honest, we are making a bold guess) that the function adds `10` to its argument. However, looking at the type, we know for sure that the function *does not print its argument to the standard output stream*. We know this because, had the library provider attempted it, the code would not have compiled. To be clear, it would not be a valid SafeHaskell program, so our assumption that we are looking at SafeHaskell fails, forcing us to unify by selecting one of the following:

- The function does not print its argument to the standard output stream.
- We are not looking at SafeHaskell source code.
- We are using an escape hatch, implied by the halting problem.

There are simply no other options. What other things can we reliably conclude this function does not do?
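Scala has no such language-level check, but the same delineation can be approximated by discipline. Here is a minimal sketch (my own IO type, not scalaz’s) in which printing must appear in the return type:

```scala
// A value of type IO[A] merely describes an effect; nothing runs until unsafeRun.
final case class IO[A](unsafeRun: () => A) {
  def map[B](f: A => B): IO[B] = IO(() => f(unsafeRun()))
}

object Effects {
  def putStrLn(s: String): IO[Unit] = IO(() => println(s))

  // No IO in the type, so this function demonstrably cannot print (by discipline).
  def add10(n: Int): Int = n + 10

  // A printing variant is forced to say so in its type.
  def add10Loudly(n: Int): IO[Int] =
    putStrLn(s"adding 10 to " + n).map(_ => n + 10)
}
```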

Here is yet another example:

`def constant[A, B]: A => B => A`

In this case, we can reliably infer that this function does one single thing: it ignores its second argument and returns its first. This might be protested:

- It might perform a side-effect first!
  - This is true, but assuming a pure subset of the language is useful for reasons other than this one.
- It might type-cast or type-case (`asInstanceOf` or `isInstanceOf`).
  - This is another unfortunate escape hatch in the Scala type system that conveniently permits unsafe code.
- It might recurse forever, return `null` or throw an exception.
  - This is yet another escape hatch.

So why dismiss these protests? They are inescapable implications of the halting problem. The more practical question is, “how conveniently does Scala make these escape hatches available?”, and the answer is an unfortunate one — it can often appear easier to exploit these escape hatches, but it won’t be long before the penalty is paid. Although in practice it is easier, both short and long term, to avoid these escape hatches, the illusion of convenience persists in some cases.
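Under the pure subset just described, the type admits exactly one total implementation — a sketch:

```scala
object ConstantDemo {
  // The only pure, total inhabitant of A => B => A:
  // return the first argument, ignore the second.
  def constant[A, B]: A => B => A = a => _ => a
}
```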

If we are to concede these abilities, it is simply up to our own discipline to enforce that we have not attempted to take the illusory easy way out. This includes using a pure-functional subset of Scala, which is a lot easier than is often made out. For example, the `scalaz.effects.STRef` data type permits a pure-functional `var` that has all the capabilities of `var` while also maintaining all the aforementioned abilities (unlike `var` itself). This is a win-win. Rúnar Bjarnason goes into detail on this at QCon 2013.

By these methods of exploiting the type system, we are able to very reliably infer things that did not occur and, occasionally, infer and conclude the only thing that did occur. However, what about narrowing it down further? We know that the (parametric) `reverse` function doesn’t manipulate the list elements, but how do we know it reverses the list? Do we fall back to relying on the function name and simply hope so?

No.

We continue to use more reliable methods of code comprehension. Let us restate the definition of `reverse`; however, this time we will include Scala source code. All expressions must return `true` regardless of the value of any of their arguments:

- Reversing the empty list produces the empty list.
  `reverse(Nil) == Nil`
- Reversing a single-element list produces that same list.
  `element => reverse(List(element)) == List(element)`
- Taking two lists, `l1` and `l2`, appending them then reversing produces the same list as reversing each, then appending the reverse of `l1` to the reverse of `l2`.
  `(l1, l2) => reverse(l1 ::: l2) == (reverse(l2) ::: reverse(l1))`

If we can be confident that these properties hold, then we can also be confident that our `reverse` function does, in fact, reverse the list. In fact, there is no other possible function that satisfies the type and these properties besides the one that reverses a list. Again, we have not resorted to the function name for code comprehension — we have inspected *algebraic properties of the code*.

So how do we increase confidence that these properties hold?

Unfortunately, an implication of the halting problem is that we cannot prove these program properties in the general case. This is not the end of the world, though — we can still attempt to *disprove* these program properties. That is to say, we can go to some effort to determine whether the function is *not* one which reverses the list. We can express our algebraic properties, which give away the full specification of the function, then automate the search for values for which a property does not hold. This automation is precisely what ScalaCheck does; however, the expression itself is enough to rigorously specify the function’s behaviour without degenerating to faith in function names.
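Here is a hand-rolled check over fixed sample inputs (my own sketch, in the spirit of ScalaCheck, which would instead generate random inputs and shrink counterexamples):

```scala
object ReverseProps {
  def reverse[A]: List[A] => List[A] = _.reverse

  // Fixed sample inputs; a property-based tester would generate these.
  val samples: List[(List[Int], List[Int])] =
    List((Nil, Nil), (List(1), List(2, 3)), (List(1, 2), List(3, 4, 5)))

  // The three algebraic properties from above, evaluated on the samples.
  def holds: Boolean =
    reverse[Int](Nil) == Nil &&
      List(1, 2, 3).forall(e => reverse[Int](List(e)) == List(e)) &&
      samples.forall { case (l1, l2) =>
        reverse[Int](l1 ::: l2) == (reverse[Int](l2) ::: reverse[Int](l1))
      }
}
```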

The next time you see a function named `<-:` and you think to yourself, “OMG, how unreadable, what am I going to do now!?”, ask yourself if there are other tools — perhaps more robust than the familiar ones — with which to comprehend the code. What is its type? What are its algebraic properties? Are there parametric properties to exploit?

What if it has this type?

`def <-:[A, B](f: A => B): List[A] => List[B]`

Does it map the function across the list elements? Maybe. However, we definitely know that the elements in the resulting list came from running the given function on an element from the input list. You see? Parametricity, just like that, was 73 fuck-loads more reliable than looking at the function name to comprehend how this code works, and this is only the start of answering the question. We have many more tools at our disposal. Importantly, they are *reliable*. I like reliable, because I also like things that are true. Hopefully you do too!
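To see why the answer is only “maybe”, consider two inhabitants of the type (my own sketch): parametricity rules out inventing or modifying elements, but not dropping them.

```scala
object MapTypeDemo {
  // The expected implementation: map f across the list.
  def mapIt[A, B](f: A => B): List[A] => List[B] = _.map(f)

  // Also inhabits the type: every output element still comes from applying
  // f to an input element; there just happen to be none.
  def dropAll[A, B](f: A => B): List[A] => List[B] = _ => Nil
}
```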

So what about these allegations of utility of identifier names? Do they have any merit at all?

No. Insistence on the value of identifier names for code comprehension has some glaring holes in it. Let us put aside that there are far more robust methods of comprehension. Let us also put aside that the claims are probably motivated by a lack of familiarity with superior tools.

Here is why this allegation is not just bullshit, but very obviously bullshit. Anyone who names a data type `AbstractAdvisorAutoProxyCreator` is just as committed to not using identifier names for meaning as anyone else. However, there is another level again — the staunch belief that this identifier name conveys meaning exposes just how confused that belief is. Any query such as “What exactly does `AbstractAdvisorAutoProxyCreator` mean?” is met with handwaving. This is because **nobody knows what AbstractAdvisorAutoProxyCreator means**, and the only practical implication here, in the world in which we all find ourselves, is one or more scatterbrains holding a belief otherwise.

From a naïve perspective, this situation appears to be a ripe learning opportunity. There appears to be a lot to be gained simply by sharing knowledge with a beginner — a trivial investment of effort. So why not take it? That question is fraught with complexity, but often it is more constructive to have a giggle, a little lament, then dismiss the confused allegations.

This proposal is in the spirit of the typeclassopedia, with the following differences:

- Use the Scala programming language for demonstration, for those who prefer it.
- Discussion need not concern itself with any kind of backward compatibility with existing libraries.

You will find a similar type-class hierarchy in the Scalaz library. That implementation is far more comprehensive and is aimed primarily at production use. A secondary goal here is to help document the Scalaz hierarchy in terse form; note, however, that you will find some minor differences (improvements?) in the arrangement.

Discussion about addition or rearrangement of the proposed hierarchy is welcome.

```
trait ~>[F[_], G[_]] {
  def apply[A]: F[A] => G[A]
}

case class Id[A](x: A)

trait Semigroup[M] {
  def op: M => M => M
}

trait Monoid[M] extends Semigroup[M] {
  val id: M
}

trait Functor[F[_]] {
  def fmap[A, B](f: A => B): F[A] => F[B]
}

trait Apply[F[_]] extends Functor[F] {
  def ap[A, B](f: F[A => B]): F[A] => F[B]
}

trait Bind[F[_]] extends Apply[F] {
  def bind[A, B](f: A => F[B]): F[A] => F[B]
}

trait Applicative[F[_]] extends Apply[F] {
  def insert[A]: A => F[A]
}

trait Monad[F[_]] extends Applicative[F] with Bind[F]

trait Extend[F[_]] extends Functor[F] {
  def extend[A, B](f: F[A] => B): F[A] => F[B]
}

trait Comonad[F[_]] extends Extend[F] {
  def extract[A]: F[A] => A
}

trait Contravariant[F[_]] {
  def contramap[A, B](f: B => A): F[A] => F[B]
}

trait Distributive[T[_]] extends Functor[T] {
  def distribute[F[_]: Functor, A, B](f: A => T[B]): F[A] => T[F[B]]
}

trait Foldable[T[_]] {
  def foldMap[A, M: Monoid](f: A => M): T[A] => M
}

trait Foldable1[T[_]] extends Foldable[T] {
  def foldMap1[A, M: Semigroup](f: A => M): T[A] => M
}

trait Traversable[T[_]] extends Functor[T] with Foldable[T] {
  def traverse[F[_]: Applicative, A, B](f: A => F[B]): T[A] => F[T[B]]
}

trait Traversable1[T[_]] extends Traversable[T] with Foldable1[T] {
  def traverse1[F[_]: Apply, A, B](f: A => F[B]): T[A] => F[T[B]]
}

trait MonadTransformer[T[_[_], _]] {
  def lift[M[_]: Monad, A]: M[A] => T[M, A]
}

trait BindTransformer[T[_[_], _]] extends MonadTransformer[T] {
  def liftB[M[_]: Bind, A]: M[A] => T[M, A]
}

trait MonadTransform[T[_[_], _]] {
  def transform[F[_]: Monad, G[_]: Monad, A](f: F ~> G): T[F, A] => T[G, A]
}

trait BindTransform[T[_[_], _]] extends MonadTransform[T] {
  def transformB[F[_]: Bind, G[_]: Monad, A](f: F ~> G): T[F, A] => T[G, A]
}

trait ComonadTransformer[T[_[_], _]] {
  def lower[M[_]: Comonad, A]: T[M, A] => M[A]
}

trait ExtendTransformer[T[_[_], _]] extends ComonadTransformer[T] {
  def lowerE[M[_]: Extend, A]: T[M, A] => M[A]
}

trait ComonadHoist[T[_[_], _]] {
  def cohoist[M[_]: Comonad, A]: T[M, A] => T[Id, A]
}

trait ExtendHoist[T[_[_], _]] extends ComonadHoist[T] {
  def cohoistE[M[_]: Extend, A]: T[M, A] => T[Id, A]
}

trait Semigroupoid[~>[_, _]] {
  def compose[A, B, C]: (B ~> C) => (A ~> B) => (A ~> C)
}

trait Category[~>[_, _]] extends Semigroupoid[~>] {
  def id[A]: A ~> A
}

trait First[~>[_, _]] extends Semigroupoid[~>] {
  def first[A, B, C]: (A ~> B) => ((A, C) ~> (B, C))
}

trait Arrow[~>[_, _]] extends Category[~>] with First[~>] {
  def idA[A, B]: (A => B) => (A ~> B)
}
```
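As a usage sketch (my own, with the relevant traits repeated so the snippet stands alone), here is an Option instance of the Monad part of the hierarchy:

```scala
// The relevant traits, repeated from the hierarchy above for self-containment.
trait Functor[F[_]] { def fmap[A, B](f: A => B): F[A] => F[B] }
trait Apply[F[_]] extends Functor[F] { def ap[A, B](f: F[A => B]): F[A] => F[B] }
trait Bind[F[_]] extends Apply[F] { def bind[A, B](f: A => F[B]): F[A] => F[B] }
trait Applicative[F[_]] extends Apply[F] { def insert[A]: A => F[A] }
trait Monad[F[_]] extends Applicative[F] with Bind[F]

object OptionMonad extends Monad[Option] {
  def fmap[A, B](f: A => B): Option[A] => Option[B] = _ map f
  def ap[A, B](f: Option[A => B]): Option[A] => Option[B] =
    oa => f flatMap (ff => oa map ff)
  def bind[A, B](f: A => Option[B]): Option[A] => Option[B] = _ flatMap f
  def insert[A]: A => Option[A] = Some(_)
}
```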

The two functions `runOptions` and `runIntRdrs` each implement a specific function, with a small difference between them. The duplication in their code bodies is noted in the comments and is denoted by asterisks.

**How might the problem of code duplication be solved for this case?**

The puzzle is designed to compile as-is, which means that, for some languages, supporting data structures need to be provided. For example, the C# programming language does not provide an immutable (cons) list data structure, so the bare minimum is supplied here. This makes the puzzle appear quite noisy for that specific programming language; however, be assured it is the same code.

```
object RefactorPuzzle {
  case class IntRdr[+A](read: Int => A) {
    def map[B](f: A => B): IntRdr[B] =
      IntRdr(f compose read)

    def flatMap[B](f: A => IntRdr[B]): IntRdr[B] =
      IntRdr(n => f(read(n)).read(n))
  }

  object IntRdr {
    def apply[A](a: A): IntRdr[A] =
      IntRdr(_ => a)
  }

  // Return all the Some values, or None if not all are Some.
  def runOptions[A](x: List[Option[A]]): Option[List[A]] =
    x.foldRight[Option[List[A]]](Option(Nil))((a, b) => a.flatMap(aa => b.map(aa :: _)))

  // Apply an Int to a list of int readers and return the list of return values.
  def runIntRdrs[A](x: List[IntRdr[A]]): IntRdr[List[A]] =
    x.foldRight[IntRdr[List[A]]](IntRdr(Nil))((a, b) => a.flatMap(aa => b.map(aa :: _)))

  // Code Duplication
  // ******* ************* ******* ***********
  // def runOptions[A](x: List[Option[A]]): Option[List[A]] =
  // def runIntRdrs[A](x: List[IntRdr[A]]): IntRdr[List[A]] =
  // ************ *********** *************************************************
  // x.foldRight[Option[List[A]]](Option(Nil))((a, b) => a.flatMap(aa => b.map(aa :: _)))
  // x.foldRight[IntRdr[List[A]]](IntRdr(Nil))((a, b) => a.flatMap(aa => b.map(aa :: _)))
}
```

```
module RefactoringPuzzle where

newtype IntRdr a =
  IntRdr {
    readIntRdr :: Int -> a
  }

mapIntRdr ::
  IntRdr a
  -> (a -> b)
  -> IntRdr b
mapIntRdr (IntRdr g) f =
  IntRdr (f . g)

bindIntRdr ::
  IntRdr a
  -> (a -> IntRdr b)
  -> IntRdr b
bindIntRdr (IntRdr g) f =
  IntRdr (\n -> readIntRdr (f (g n)) n)

applyIntRdr ::
  a
  -> IntRdr a
applyIntRdr =
  IntRdr . const

type Option = Maybe

mapOption ::
  Option a
  -> (a -> b)
  -> Option b
mapOption Nothing _ =
  Nothing
mapOption (Just a) f =
  Just (f a)

bindOption ::
  Option a
  -> (a -> Option b)
  -> Option b
bindOption Nothing _ =
  Nothing
bindOption (Just a) f =
  f a

applyOption ::
  a
  -> Option a
applyOption a =
  Just a

-- Return all the Some values, or None if not all are Some.
runOptions :: [Option a] -> Option [a]
runOptions = foldr (\a b -> bindOption a (\aa -> mapOption b (aa:))) (applyOption [])

-- Apply an Int to a list of int readers and return the list of return values.
runIntRdrs :: [IntRdr a] -> IntRdr [a]
runIntRdrs = foldr (\a b -> bindIntRdr a (\aa -> mapIntRdr b (aa:))) (applyIntRdr [])
-- Code Duplication
-- *** ****** ******* ****
-- runOptions :: [Option a] -> Option [a]
-- runIntRdrs :: [IntRdr a] -> IntRdr [a]
-- *** *********************** ************** ************ ****
-- runOptions = foldr (\a b -> bindOption a (\aa -> mapOption b (aa:))) (applyOption [])
-- runIntRdrs = foldr (\a b -> bindIntRdr a (\aa -> mapIntRdr b (aa:))) (applyIntRdr [])
```

```
using System;

namespace RefactoringPuzzle {
    class IntRdr<A> {
        public readonly Func<int, A> read;

        IntRdr(Func<int, A> read) {
            this.read = read;
        }

        public IntRdr<B> Select<B>(Func<A, B> f) {
            return new IntRdr<B>(n => f(read(n)));
        }

        public IntRdr<B> SelectMany<B>(Func<A, IntRdr<B>> f) {
            return new IntRdr<B>(n => f(read(n)).read(n));
        }

        public static IntRdr<A> apply(A a) {
            return new IntRdr<A>(_ => a);
        }
    }

    abstract class Option<A> {
        public abstract X Fold<X>(Func<A, X> some, X none);

        public Option<B> Select<B>(Func<A, B> f) {
            return Fold<Option<B>>(a => new Option<B>.Some(f(a)), new Option<B>.None());
        }

        public Option<B> SelectMany<B>(Func<A, Option<B>> f) {
            return Fold(f, new Option<B>.None());
        }

        public static Option<A> apply(A a) {
            return new Some(a);
        }

        public class Some : Option<A> {
            readonly A a;

            public Some(A a) {
                this.a = a;
            }

            public override X Fold<X>(Func<A, X> some, X none) {
                return some(a);
            }
        }

        public class None : Option<A> {
            public override X Fold<X>(Func<A, X> some, X none) {
                return none;
            }
        }
    }

    abstract class List<A> {
        public abstract X FoldRight<X>(Func<A, X, X> f, X x);

        // Return all the Some values, or None if not all are Some.
        Option<List<A>> runOptions(List<Option<A>> x) {
            return x.FoldRight((a, b) => a.SelectMany(aa =>
                b.Select(bb => bb.Prepend(aa))), Option<List<A>>.apply(new Nil()));
        }

        // Apply an Int to a list of int readers and return the list of return values.
        IntRdr<List<A>> runIntRdrs(List<IntRdr<A>> x) {
            return x.FoldRight((a, b) => a.SelectMany(aa =>
                b.Select(bb => bb.Prepend(aa))), IntRdr<List<A>>.apply(new Nil()));
        }

        public List<A> Prepend(A a) {
            return new Cons(a, this);
        }

        public class Nil : List<A> {
            public override X FoldRight<X>(Func<A, X, X> f, X x) {
                return x;
            }
        }

        public class Cons : List<A> {
            readonly A head;
            readonly List<A> tail;

            public Cons(A head, List<A> tail) {
                this.head = head;
                this.tail = tail;
            }

            public override X FoldRight<X>(Func<A, X, X> f, X x) {
                return f(head, tail.FoldRight(f, x));
            }
        }
    }
// Code Duplication
// ************* ******* *********
// Option<List<A>> runOptions(List<Option<A>> x) {
// IntRdr<List<A>> runIntRdrs(List<IntRdr<A>> x) {
// ***********************************************
// return x.FoldRight((a, b) => a.SelectMany(aa =>
// return x.FoldRight((a, b) => a.SelectMany(aa =>
// ********************************* ****************************
// b.Select(bb => bb.Prepend(aa))), Option<List<A>>.apply(new Nil()));
// b.Select(bb => bb.Prepend(aa))), IntRdr<List<A>>.apply(new Nil()));
}
```

```
package RefactoringPuzzle;
abstract class Func<T, U> {
    abstract U apply(T t);
}

abstract class IntRdr<A> {
    abstract A read(int i);

    <B> IntRdr<B> map(final Func<A, B> f) {
        return new IntRdr<B>() {
            B read(int i) {
                return f.apply(IntRdr.this.read(i));
            }
        };
    }

    <B> IntRdr<B> bind(final Func<A, IntRdr<B>> f) {
        return new IntRdr<B>() {
            B read(int i) {
                return f.apply(IntRdr.this.read(i)).read(i);
            }
        };
    }

    static <A> IntRdr<A> apply(final A a) {
        return new IntRdr<A>() {
            A read(int i) {
                return a;
            }
        };
    }
}

abstract class Option<A> {
    abstract <X> X fold(Func<A, X> some, X none);

    <B> Option<B> map(final Func<A, B> f) {
        return new Option<B>() {
            <X> X fold(final Func<B, X> some, X none) {
                return Option.this.fold(new Func<A, X>() {
                    X apply(A a) {
                        return some.apply(f.apply(a));
                    }
                }, none);
            }
        };
    }

    <B> Option<B> bind(final Func<A, Option<B>> f) {
        return new Option<B>() {
            <X> X fold(final Func<B, X> some, final X none) {
                return Option.this.fold(new Func<A, X>() {
                    X apply(A a) {
                        return f.apply(a).fold(some, none);
                    }
                }, none);
            }
        };
    }

    static <A> Option<A> apply(final A a) {
        return new Option<A>() {
            <X> X fold(Func<A, X> some, X none) {
                return some.apply(a);
            }
        };
    }
}

abstract class List<A> {
    abstract <X> X foldRight(Func<A, Func<X, X>> f, X x);

    // Return all the Some values, or None if not all are Some.
    static <A> Option<List<A>> runOptions(List<Option<A>> x) {
        return x.foldRight(new Func<Option<A>, Func<Option<List<A>>, Option<List<A>>>>() {
            Func<Option<List<A>>, Option<List<A>>> apply(final Option<A> a) {
                return new Func<Option<List<A>>, Option<List<A>>>() {
                    Option<List<A>> apply(final Option<List<A>> b) {
                        return a.bind(new Func<A, Option<List<A>>>() {
                            Option<List<A>> apply(final A aa) {
                                return b.map(new Func<List<A>, List<A>>() {
                                    List<A> apply(List<A> bb) {
                                        return bb.prepend(aa);
                                    }
                                });
                            }
                        });
                    }
                };
            }
        }, Option.apply(List.<A>nil()));
    }

    // Apply an Int to a list of int readers and return the list of return values.
    static <A> IntRdr<List<A>> runIntRdrs(List<IntRdr<A>> x) {
        return x.foldRight(new Func<IntRdr<A>, Func<IntRdr<List<A>>, IntRdr<List<A>>>>() {
            Func<IntRdr<List<A>>, IntRdr<List<A>>> apply(final IntRdr<A> a) {
                return new Func<IntRdr<List<A>>, IntRdr<List<A>>>() {
                    IntRdr<List<A>> apply(final IntRdr<List<A>> b) {
                        return a.bind(new Func<A, IntRdr<List<A>>>() {
                            IntRdr<List<A>> apply(final A aa) {
                                return b.map(new Func<List<A>, List<A>>() {
                                    List<A> apply(List<A> bb) {
                                        return bb.prepend(aa);
                                    }
                                });
                            }
                        });
                    }
                };
            }
        }, IntRdr.apply(List.<A>nil()));
    }

    List<A> prepend(final A a) {
        return new List<A>() {
            <X> X foldRight(Func<A, Func<X, X>> f, X x) {
                // List.this: fold over the original list (the tail), not this wrapper.
                return f.apply(a).apply(List.this.foldRight(f, x));
            }
        };
    }

    static <A> List<A> nil() {
        return new List<A>() {
            <X> X foldRight(Func<A, Func<X, X>> f, X x) {
                return x;
            }
        };
    }
}
// Code Duplication
// *********** ************* ******* *********
// static <A> Option<List<A>> runOptions(List<Option<A>> x) {
// static <A> IntRdr<List<A>> runIntRdrs(List<IntRdr<A>> x) {
// **************************** ********** *********** **************
// return x.foldRight(new Func<Option<A>, Func<Option<List<A>>, Option<List<A>>>>(){
// return x.foldRight(new Func<IntRdr<A>, Func<IntRdr<List<A>>, IntRdr<List<A>>>>(){
// ***** *********** *********************** ********
// Func<Option<List<A>>, Option<List<A>>> apply(final Option<A> a) {
// Func<IntRdr<List<A>>, IntRdr<List<A>>> apply(final IntRdr<A> a) {
// **************** *********** **************
// return new Func<Option<List<A>>, Option<List<A>>>() {
// return new Func<IntRdr<List<A>>, IntRdr<List<A>>>() {
// ********************** **************
// Option<List<A>> apply(final Option<List<A>> b) {
// IntRdr<List<A>> apply(final IntRdr<List<A>> b) {
// ************************** *************
// return a.bind(new Func<A, Option<List<A>>>(){
// return a.bind(new Func<A, IntRdr<List<A>>>(){
// *****************************
// Option<List<A>> apply(final A aa) {
// IntRdr<List<A>> apply(final A aa) {
// ******************************************
// return b.map(new Func<List<A>, List<A>>(){
// return b.map(new Func<List<A>, List<A>>(){
// ***************************
// List<A> apply(List<A> bb) {
// List<A> apply(List<A> bb) {
// **********************
// return bb.prepend(aa);
// return bb.prepend(aa);
…
// *** ***********************
// }, Option.apply(List.<A>nil()));
// }, IntRdr.apply(List.<A>nil()));
…
```

But what do they think I am going to do? Do they really think I have a library of examples in my head that I am holding hostage? Do they think that I am being spiteful toward them by withholding this library of knowledge that I possess and they do not? No, what I would do here is exactly what they are perfectly capable of doing themselves: find any program that type-checks. In fact, this is the tool support that I aspire to, so that I do not have to maintain a “library of examples” in my own head.

Were I to appease their demand for “example usages of …”, I would be doing them a disservice. I would be taking away their opportunity to develop the skills to answer this question for themselves. I would not even be giving a good answer to the specifics of the question. It is not “a good example” of anything at all. It is the *process* by which the example is derived that is useful, and nothing else.

So go on, try it. Create yourself a type-checking program. You may be a bit uncomfortable with the much higher degree of tool support in this environment. All I can say is, embrace it, cherish it even. Go.

There are many types of functors. They can be expressed using the Scala programming language.

- covariant functors — defines the operation commonly known as `map` or `fmap`.

```
// covariant functor
trait Functor[F[_]] {
  def fmap[A, B](f: A => B): F[A] => F[B]
}
```

- contravariant functors — defines the operation commonly known as `contramap`.

```
// contravariant functor
trait Contravariant[F[_]] {
  def contramap[A, B](f: B => A): F[A] => F[B]
}
```

- exponential functors — defines the operation commonly known as `xmap`. Also known as *invariant functors*.

```
// exponential functor
trait Exponential[F[_]] {
  def xmap[A, B](f: (A => B, B => A)): F[A] => F[B]
}
```

- applicative functors^{1} — defines the operation commonly known as `apply` or `<*>`.

```
// applicative functor (abbreviated)
trait Applicative[F[_]] {
  def apply[A, B](f: F[A => B]): F[A] => F[B]
}
```

- monads^{2} — defines the operation commonly known as `bind`, `flatMap` or `=<<`.

```
// monad (abbreviated)
trait Monad[F[_]] {
  def flatMap[A, B](f: A => F[B]): F[A] => F[B]
}
```

- comonads^{3} — defines the operation commonly known as `extend`, `coflatMap` or `<<=`.

```
// comonad (abbreviated)
trait Comonad[F[_]] {
  def coflatMap[A, B](f: F[A] => B): F[A] => F[B]
}
```

Sometimes I am asked how to remember all of these and/or determine which is appropriate. There are many answers to this question, but there is a common feature of all of these different types of functor:

They all take an argument that is some arrangement of three type variables and then return a function with the type F[A] => F[B].

I memorise the table of the different argument arrangements to help me determine which abstraction might be appropriate. Of course, I use other methods too, but this particular technique is elegant and short. Here is that table:

functor | argument arrangement |
---|---|
covariant | `A => B` |
contravariant | `B => A` |
exponential | `(A => B, B => A)` |
applicative | `F[A => B]` |
monad | `A => F[B]` |
comonad | `F[A] => B` |

We can see from this table that there is not much reason to emphasise one over the other. For example, monads get *lots* of attention and associated stigma, but it’s undeserved. It’s rather boring when put in the context of a bigger picture. It’s just a different arrangement of its argument (`A => F[B]`).
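As a hedged sketch (the instances and the `Pred` type are my own illustrations, not from the original text), two of these argument arrangements can be put to work directly: `Option` admits the covariant arrangement, while a predicate, which *consumes* its type parameter, admits the contravariant one.

```
// covariant and contravariant functor traits as defined above
trait Functor[F[_]] {
  def fmap[A, B](f: A => B): F[A] => F[B]
}

trait Contravariant[F[_]] {
  def contramap[A, B](f: B => A): F[A] => F[B]
}

// A predicate consumes an A, so it sits in the contravariant arrangement.
case class Pred[A](run: A => Boolean)

object Instances {
  val OptionFunctor: Functor[Option] = new Functor[Option] {
    def fmap[A, B](f: A => B): Option[A] => Option[B] = _ map f
  }

  val PredContravariant: Contravariant[Pred] = new Contravariant[Pred] {
    def contramap[A, B](f: B => A): Pred[A] => Pred[B] =
      p => Pred(b => p.run(f(b)))
  }
}
```

For example, `Instances.OptionFunctor.fmap((n: Int) => n + 1)(Some(1))` yields `Some(2)`, and contramapping `(s: String) => s.length` over a `Pred[Int]` turns an Int-test into a String-test.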

Anyway, this table is a good way to keep a check on the different types of abstraction and how they might apply. There are also ways of deriving some from others, but that’s for another rainy day. That’s all, hope it helps!

```
object FibNaïve {
  def fibnaïve(n: BigInt): BigInt =
    if(n <= 1)
      n
    else {
      val r = fibnaïve(n - 1)
      val s = fibnaïve(n - 2)
      r + s
    }
}
```

While this implementation is elegant, it is exponential in time with respect to `n`. For example, computing the result of `fibnaïve(4)` will result in the unnecessary re-computation of values less than `4`. If we unravel the recursion, computation occurs as follows:

```
fibnaïve(4)
= fibnaïve(3) + fibnaïve(2)
= (fibnaïve(2) + fibnaïve(1)) + (fibnaïve(1) + fibnaïve(0))
= ((fibnaïve(1) + fibnaïve(0)) + fibnaïve(1)) + (fibnaïve(1) + fibnaïve(0))
```

This algorithm calculates `fibnaïve(2)` twice, which ultimately results in a lot of repeated calculations, especially as `n` grows. What we would like to do is trade some space to store previously computed values for a given `n`. We can achieve this by looking up the argument value in a table: if it has already been computed, we return it and carry on; if it hasn’t, we compute the result by calling `fibnaïve`, store it in the table, then return it. This technique is called *memoisation*.

As a first cut, let’s solve fibonacci with a helper function that passes a `Map[BigInt, BigInt]` around in the recursion. This map will serve as the memoisation table.

```
object FibMemo1 {
  type Memo = Map[BigInt, BigInt]

  def fibmemo1(n: BigInt): BigInt = {
    def fibmemoR(z: BigInt, memo: Memo): (BigInt, Memo) =
      if(z <= 1)
        (z, memo)
      else memo get z match {
        case None => {
          val (r, memo0) = fibmemoR(z - 1, memo)
          val (s, memo1) = fibmemoR(z - 2, memo0)
          val t = r + s
          // record the result for z in the memo table
          (t, memo1 + ((z, t)))
        }
        case Some(v) => (v, memo)
      }
    fibmemoR(n, Map())._1
  }
}
```

We have traded space (the memoisation table) for speed; the algorithm is more efficient by not recomputing values. However, we have sacrificed the elegance of the code. How can we achieve both elegance and efficiency?

The previous code (`fibmemo1`) has *passed state through the computation*. In other words, where we once returned a value of the type `A`, we are accepting an argument of the type `Memo` and returning the pair `(A, Memo)`. The state in this case is a value of the type `Memo`. We can represent this as a data structure:

`case class State[S, A](run: S => (A, S))`

Our `fibmemoR` function, which once had this type:

`def fibmemoR(z: BigInt, memo: Memo): (BigInt, Memo)`

…can be transformed to this type:

`def fibmemoR(z: BigInt): State[Memo, BigInt]`

Let’s write our new fibonacci function:

```
object FibMemo2 {
  type Memo = Map[BigInt, BigInt]

  def fibmemo2(n: BigInt): BigInt = {
    def fibmemoR(z: BigInt): State[Memo, BigInt] =
      State(memo =>
        if(z <= 1)
          (z, memo)
        else memo get z match {
          case None => {
            val (r, memo0) = fibmemoR(z - 1) run memo
            val (s, memo1) = fibmemoR(z - 2) run memo0
            val t = r + s
            // record the result for z in the memo table
            (t, memo1 + ((z, t)))
          }
          case Some(v) => (v, memo)
        })
    fibmemoR(n).run(Map())._1
  }
}
```

Ew! This code is still rather clumsy as it manually passes the memo table around. What can we do about it? This is where the state monad is going to help us out. The state monad is going to take care of passing the state value around for us. The monad itself is implemented by three functions:

1. The `map` method on `State[S, A]`.
2. The `flatMap` method on `State[S, A]`.
3. The `insert` function on the `object State` that *inserts a value while leaving the state unchanged*.

I will also add three convenience functions:

- The `eval` method, for running the `State` value and dropping the resulting state value.
- The `get` function, for taking the current state to a value: `(S => A) => State[S, A]`.
- The `mod` function, for modifying the current state: `(S => S) => State[S, Unit]`.

Here goes:

```
case class State[S, A](run: S => (A, S)) {
// 1. the map method
def map[B](f: A => B): State[S, B] =
State(s => {
val (a, t) = run(s)
(f(a), t)
})
// 2. the flatMap method
def flatMap[B](f: A => State[S, B]): State[S, B] =
State(s => {
val (a, t) = run(s)
f(a) run t
})
// Convenience function to drop the resulting state value
def eval(s: S): A =
run(s)._1
}
object State {
// 3. The insert function
def insert[S, A](a: A): State[S, A] =
State(s => (a, s))
// Convenience function for taking the current state to a value
def get[S, A](f: S => A): State[S, A] =
State(s => (f(s), s))
// Convenience function for modifying the current state
def mod[S](f: S => S): State[S, Unit] =
State(s => ((), f(s)))
}
```

We can see that the `flatMap` method takes care of passing the state value through a given function. This is the ultimate purpose of the state monad: it allows the programmer to pass a state value (`S`) through a computation (producing an `A`) without having to handle it manually. The `map` and `insert` methods complete the state monad.
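As a small illustrative sketch of these methods at work (my own example, not from the original text), consider numbering the elements of a list by threading an `Int` counter; the `State` definitions above are repeated so the sketch stands alone.

```
// State as defined in the post, repeated so this sketch is self-contained.
case class State[S, A](run: S => (A, S)) {
  def map[B](f: A => B): State[S, B] =
    State(s => { val (a, t) = run(s); (f(a), t) })
  def flatMap[B](f: A => State[S, B]): State[S, B] =
    State(s => { val (a, t) = run(s); f(a) run t })
  def eval(s: S): A = run(s)._1
}
object State {
  def insert[S, A](a: A): State[S, A] = State(s => (a, s))
  def get[S, A](f: S => A): State[S, A] = State(s => (f(s), s))
  def mod[S](f: S => S): State[S, Unit] = State(s => ((), f(s)))
}

// Pair each element with its index. The counter is threaded by flatMap,
// never handled manually.
def number[A](as: List[A]): State[Int, List[(Int, A)]] =
  as match {
    case Nil => State.insert(Nil)
    case h :: t =>
      State.get((n: Int) => n) flatMap (n =>
        State.mod((m: Int) => m + 1) flatMap (_ =>
          number(t) map ((r: List[(Int, A)]) => (n, h) :: r)))
  }
```

Running `number(List("a", "b", "c")) eval 0` yields `List((0, "a"), (1, "b"), (2, "c"))`.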

How does our fibonacci implementation look now?

```
object FibMemo3 {
  type Memo = Map[BigInt, BigInt]

  def fibmemo3(n: BigInt): BigInt = {
    def fibmemoR(z: BigInt): State[Memo, BigInt] =
      if(z <= 1)
        State.insert(z)
      else
        State.get((m: Memo) => m get z) flatMap (u =>
          u map State.insert[Memo, BigInt] getOrElse (
            fibmemoR(z - 1) flatMap (r =>
              fibmemoR(z - 2) flatMap (s => {
                val t = r + s
                State.mod((m: Memo) => m + ((z, t))) map (_ => t)
              }))))
    fibmemoR(n) eval Map()
  }
}
```

Now we have used the three state monad methods to pass the memo table through the computation for us: no more manually threading the memo table through successive recursive calls.

Scala provides syntax for the type of computation that chains calls to `flatMap` and `map`. It is implemented using the `for` and `yield` keywords in what is called a *for-comprehension*. The for-comprehension syntax makes the calls to `flatMap` and `map` for us, while allowing a more imperative-looking style. For example, where we once wrote code such as `x flatMap (r =>`, we will now write `r <- x` inside the for-comprehension.
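To make the correspondence concrete, here is a small sketch (my own, using `Option`, which also provides `flatMap` and `map`) of a for-comprehension beside its hand-desugared form:

```
object Desugar {
  // written with for/yield
  val sugared: Option[Int] =
    for {
      a <- Some(3)
      b <- Some(4)
    } yield a + b

  // the same computation with the flatMap and map calls written out
  val desugared: Option[Int] =
    Some(3) flatMap (a =>
      Some(4) map (b =>
        a + b))
}
```

Both expressions evaluate to `Some(7)`; the compiler rewrites the first into the second.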

How does this look?

```
object FibMemo4 {
  type Memo = Map[BigInt, BigInt]

  def fibmemo4(n: BigInt): BigInt = {
    def fibmemoR(z: BigInt): State[Memo, BigInt] =
      if(z <= 1)
        State.insert(z)
      else
        for {
          u <- State.get((m: Memo) => m get z)
          v <- u map State.insert[Memo, BigInt] getOrElse (for {
                 r <- fibmemoR(z - 1)
                 s <- fibmemoR(z - 2)
                 t = r + s
                 _ <- State.mod((m: Memo) => m + ((z, t)))
               } yield t)
        } yield v
    fibmemoR(n) eval Map()
  }
}
```

This is a lot neater, as the memoisation table is handled by the state monad. In fact, it is starting to look like the original naïve solution. We are no longer manually handling the state transitions, which allows us to express the essence of the problem without the calculation speed blow-out.

Where you once may have used `var`, consider whether the state monad is instead more appropriate.
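To illustrate that suggestion, here is a hedged sketch (my own, not from the original text) of a running total written once with `var` and once against the `State` type from this post, repeated so the sketch stands alone:

```
// State as defined in the post, repeated so this sketch is self-contained.
case class State[S, A](run: S => (A, S)) {
  def map[B](f: A => B): State[S, B] =
    State(s => { val (a, t) = run(s); (f(a), t) })
  def flatMap[B](f: A => State[S, B]): State[S, B] =
    State(s => { val (a, t) = run(s); f(a) run t })
  def eval(s: S): A = run(s)._1
}
object State {
  def insert[S, A](a: A): State[S, A] = State(s => (a, s))
  def get[S, A](f: S => A): State[S, A] = State(s => (f(s), s))
  def mod[S](f: S => S): State[S, Unit] = State(s => ((), f(s)))
}

// mutable version: the accumulator is a var
def totalVar(xs: List[Int]): Int = {
  var acc = 0
  xs foreach (x => acc += x)
  acc
}

// state-monad version: the accumulator is threaded, never mutated
def totalState(xs: List[Int]): State[Int, Int] =
  xs match {
    case Nil => State.get((s: Int) => s)
    case h :: t => State.mod((s: Int) => s + h) flatMap (_ => totalState(t))
  }
```

For `List(2, 5, 7)`, `totalVar` returns `14` and `totalState(...) eval 0` returns the same `14`, with the accumulator carried by `flatMap` instead of mutation.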

**Write a minimum function that works on Array[String] and List[Int].**

(Fill in the `error("todo")` in the code below.)

```
trait Foldable[-F[_]] {
  def foldl[A, B](f: (A, B) => A, a: A, as: F[B]): A

  def reducel[A](f: (A, A) => A, as: F[A]): Option[A] =
    foldl[Option[A], A]((a1, a2) =>
      Some(a1 match {
        case None => a2
        case Some(x) => f(a2, x)
      }), None, as)
}

object Foldable {
  val ListFoldable = new Foldable[List] {
    def foldl[A, B](f: (A, B) => A, a: A, as: List[B]) =
      as.foldLeft(a)(f)
  }

  val ArrayFoldable = new Foldable[Array] {
    def foldl[A, B](f: (A, B) => A, a: A, as: Array[B]) =
      as.foldLeft(a)(f)
  }
}

sealed trait Ordering
case object LT extends Ordering
case object EQ extends Ordering
case object GT extends Ordering

// contra
trait Order[A] {
  def compare(a1: A, a2: A): Ordering

  def min(a1: A, a2: A) =
    if(compare(a1, a2) == LT) a1 else a2
}

object Order {
  def order[A](f: (A, A) => Ordering): Order[A] = new Order[A] {
    def compare(a1: A, a2: A) = f(a1, a2)
  }

  val IntOrder = order[Int]((a1, a2) =>
    if(a1 > a2) GT
    else if(a1 < a2) LT
    else EQ)

  val StringOrder = order[String]((a1, a2) =>
    if(a1 > a2) GT
    else if(a1 < a2) LT
    else EQ)
}

object Main {
  import Foldable._
  import Order._

  def minimum[F[_], A](as: F[A], order: Order[A], fold: Foldable[F]) =
    // Zm9sZC5yZWR1Y2VsW0FdKChhLCBiKSA9PiBvcmRlci5taW4oYSwgYiksIGFzKQ==
    error("todo")

  def main(args: Array[String]) {
    val i = minimum(args, StringOrder, ArrayFoldable)
    println(i)
    val j = minimum(List(5, 8, 2, 9), IntOrder, ListFoldable)
    println(j)
  }
}
```