There has been some intermingling of the Scala and Haskell communities, and I have noticed now and then people commenting on stuff that’s supposed to be easy in Haskell and hard in Scala. Less often (maybe because I read Scala questions, not Haskell ones), I see someone mentioning that something in Scala is easier than in Haskell.

So. I’d like to know from people who are knowledgeable in both what kind of things are easy in Haskell and hard in Scala, and, conversely, what kind of things are easy in Scala and hard in Haskell.

*Tony Morris responds 9 hours later:*

Daniel, as you know, my day job is primarily writing Haskell and secondarily Scala. I have also used both languages for teaching, though not in universities (I use other languages there), but mostly for the voluntary teaching that I still do today. Very rarely, I use Java or JavaScript. I work for a product company.

I am pleased to see that my prediction is false – your question has not provoked a slew of misinformation as I thought it would. As a result, I am compelled not to ignore it :) So here goes.

At a somewhat superficial level, Haskell has significantly superior tool support over Scala and Java. As a non-exhaustive example, Haskell has hoogle, djinn and pl; three tools that alone are extremely useful and for which there is no Scala equivalent. These tools exist, and are as useful as they are, because of certain fundamental properties of Haskell itself. Here I mean that hoogle is only as useful as it is because Haskell tags IO effects in the type, delineating values of type IO t from values of type t, so hoogling for, say, [a] -> Int eliminates a lot of candidate functions that would have this type in other environments. In Scala, without the delineation between an Int that has been computed from its arguments and an Int that has been computed from the entire universe, a hoogle-equivalent would not be as useful – nevertheless, it would be somewhat useful were it to exist.

Haskell’s hlint is also superior to, say, Java’s CheckStyle. Included in GHC is a warning system which, when coupled with hlint, is far more comprehensive. I’ve not seen anything like this for Scala.

Haskell has cleaner syntax and superior type inference. Very rarely is it the case that we must type-annotate our Haskell. This is not to say that we do not type-annotate our Haskell, just that we have the choice. As you know, this is not so with Scala. However, on an extremely superficial level, I prefer Scala’s lambda syntax to Haskell’s, which requires a pseudo-lambda symbol (`\`). In any case, I aim for point-free style where appropriate, making this already-weak point moot. I use Haskell’s clean syntax to appeal to children in the challenges of teaching. Children take very easily to Haskell’s syntax, since there is far less redundant, “off to the side”, “let’s take a little excursion to nowhere” ceremony (so to speak). As you might guess, children are easily distracted – Haskell helps me as a teacher to keep this in check.
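As a small illustration (my own example, not from the original answer), here is the Haskell pseudo-lambda next to a point-free equivalent:

```haskell
-- the backslash is the pseudo-lambda symbol
inc :: Int -> Int
inc = \x -> x + 1

-- the point-free equivalent, with no lambda at all
inc' :: Int -> Int
inc' = (+ 1)

main :: IO ()
main = print (inc 2, inc' 2)
```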

On to the fundamentals. As you know, Haskell is call-by-need by default, where Scala is the other way around. This is, of course, a very contentious issue. Nevertheless, I admit to falling very strongly to one side: call-by-need by default is the most practical, and Scala’s evaluation model is a very far-away second place. I have never seen a legitimate argument that comes close to swaying my position on this, though I do invite it (not here please). To summarise, I think Haskell has a superior evaluation model. There are also nuances in Scala’s laziness. For example, while Scala unifies lazy values, it does not do so for those in contravariant position, i.e. an (Int => Int) is not a ((=> Int) => Int).
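A minimal sketch (my own example) of what call-by-need buys by default: an infinite structure is fine, because only the demanded prefix is ever computed.

```haskell
-- powers of two as an infinite list; under call-by-need,
-- only the five demanded elements are ever evaluated
powers :: [Integer]
powers = iterate (* 2) 1

main :: IO ()
main = print (take 5 powers)
```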

This contentious issue aside, Haskell’s effect-tracking (which, by the way, is a consequence of its evaluation model) is overwhelmingly practical for every-day software tasks. The absence of the same, or similar, is quite devastating, as many users of Scala have become aware (by wishing for a solution). I cannot emphasise in terse form how important this kind of static type-check is to typical business programming, so I won’t try here.
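To sketch what that delineation looks like (my own example, not from the post): a pure function and its effectful counterpart are distinguished in the type itself, which is exactly what makes a type-directed search for `[a] -> Int` so precise.

```haskell
-- a pure count: the type promises no effects
count :: [a] -> Int
count = foldr (\_ n -> n + 1) 0

-- an effectful count must say so: it is [a] -> IO Int, not [a] -> Int
countLoudly :: Show a => [a] -> IO Int
countLoudly xs = do
  putStrLn ("counting " ++ show xs)
  return (length xs)

main :: IO ()
main = do
  print (count "abc")
  n <- countLoudly [1, 2, 3 :: Int]
  print n
```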

Haskell has far superior library support to Scala and Java. You and I have discussed this before. Scala lacks some of the most fundamental functions in its standard libraries, and the higher-level ones are even more scarce. Granted, there are fundamental properties of Scala making good library support more difficult (strict evaluation, Java interop); however, neither of these incurs such a penalty as to produce what can only be described as the unfortunate catastrophe that is the Scala standard libraries. That is to say, the Scala standard libraries could easily be miles ahead of where they are today, but they are not, and you (I) are left with ponderances as to why – something about third lumbar vertebrae and expert levels or something, I suppose.

To help emphasise this point, there are times in my day job when I come off a project using Haskell onto one using Scala. This comes with some consequences that I feel immediately, with a very sharp contrast. I then use IntelliJ IDEA, which is slow, cumbersome and full of bugs; the Scala type inferencer is less robust and difficult to prototype with (I may make a type-error and it all just carries on as if nothing happened); but there is nothing more disappointing than having to spend (waste) a few hours implementing a library that really should already be there – what a waste of my time, and probably of the next guy’s who has to implement such fundamental library abilities. Talk about shaving yaks. In my opinion, this is the most disappointing part of Scala, especially since there is nothing stopping it from having a useful library besides skill and the absence of the ability to recognise what a useful library even is. This happens with Haskell too, but to a far lesser extent. I digress in frustration.

The GHC REPL (GHCi) has better support for every-day programming. More useful tab-completion and the :type, :info and :browse commands are heavily missed when in Scala. It’s also much faster, though Scala can be forgiven, given the JVM.

Why use Scala? I can call the Java APIs, even the most ridiculous, yet popular, APIs ever written. I can call WebSphere methods and I can write servlets. I can write a Wicket application, or use the Play framework, or I can do something outrageous with a database and Hibernate. I can do all those things that our industry seems to think are useful, though I secretly contend are pathetic, and I hope our children do too. I can completely avoid the arguments and discussions associated with the merits of these issues, and get on with the job, while still getting the benefits of a language that is vastly superior to Java. This might seem like a cheap stab, though I am sincere when I say that this is a true benefit that I value a lot.

Scala’s module system is superior to Haskell’s – almost. That is, it has some things that are very useful that Haskell does not, but also vice versa – for example, Haskell allows a module to export other modules. Scala requires you to stick your finger in your left ear to do this; oh, and IntelliJ IDEA stops working – I exaggerate, but you get the point. Scala has package objects and first-class module niceties. Haskell lacks here.

Scala also has the ability to namespace a function by giving special status to one of its arguments (some people call this OO, then don’t, in the very next breath – I never get it). What I mean is, you may have two functions with the same name, which are disambiguated at the call site by the type of the argument to the left of the dot. I am deliberately not calling this by any special name, but rather focussing on its utility – Haskell can do this with qualified imports – not quite so nice. I am usually, though not always, particularly unimaginative when it comes to inventing function names – allowing me to reuse one without a penalty is very handy indeed. Note that here I do not mean overloading – I think the Scala community has worked out that overloading is just not worth it – do not do it, ever.
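The Haskell near-equivalent he refers to looks like this (my own example): both `Data.List` and `Data.Map` export an `insert`, and the qualifier plays the role that the receiver type plays in Scala.

```haskell
import qualified Data.List as List
import qualified Data.Map as Map

-- both modules export `insert`; the qualifier disambiguates at the call site
main :: IO ()
main = do
  print (List.insert 3 [1, 2, 4 :: Int])
  print (Map.insert 3 "c" (Map.fromList [(1 :: Int, "a")]))
```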

In my experience, Scala appeals to existing programmers, who can usually get up and running quicker than with Haskell. In contrast, non-programmers get up and running with Haskell far quicker than with Scala. As a teacher, I used to think this was a great attribute of Scala; then I tried it, then I thought it was just good. Today, I think it is just not worth it – I have seen too many catastrophes when programmers who are familiar with degenerate problem-solving techniques (à la Java) are handed such things as Scala. Call me pessimistic or some-such, but I wish to remind you that a few years ago, I was handed an already-failed project written in Scala by the Java guys, which I was supposed to “save” because I was the “Scala expert.” I’m sure you can guess how that turned out. I have many (many) anecdotes like this, though most of those are confirmed each time I try to use Scala for teaching existing programmers, rather than in industry (perhaps this is my selection bias, given my extreme apprehension). Nevertheless, my experiences aside, you may call this a benefit over Haskell – there is no doubting that existing programmers get up and running much quicker with Scala.

I can think of a few other tid-bits, but hopefully this satisfies your curiosity. I don’t know how many people are in my position of using both languages extensively in a commercial environment, but I’d truly love to hear from someone who does – especially if they take strong objection to my opinions. That is to say, I invite (and truly yearn for) well-informed peer review of these opinions.

Hope this helps.

*Recovered from StackPrinter after deletion and much subsequent searching.*

I use the Haskell programming language for teaching functional programming. I like to emphasise the learning and construction of *concepts* over learning the details of any specific programming language. I am not into teaching programming languages; I really find that boring, uneventful and unhelpful for all involved. However, learning some of the intricacies of Haskell itself is inevitable. It doesn’t take long though and is very much worth the investment of effort if you aspire to learning concepts.

That is to say, using (almost all) other programming languages for this objective is a false economy, and it is not even a close call. One could spend many weeks or even years attempting to articulate concepts in a specific programming language, only to struggle precisely because that language resists an accurate expression and formation of that concept. I have seen this an overwhelming number of times. It is often supported by the fallacy of false compromise: “but all languages are Turing-complete, so I am going to use JavaScript, which even has first-class functions, in order to come to understand what monad means.”

No, you won’t, I absolutely insist and promise.

This is all somewhat beside the point of this post though. The point is that there is a fact, which often comes up in teaching, that can be expressed briefly and concisely. It requires no exceptions, apologies or approximations. I would like to state this fact and explain some of the nomenclature that surrounds this fact.

All functions in the Haskell programming language take exactly one argument.

This fact is certain to come up early on in teaching. If a student comes to trust, and then follow, this fact, then progress will be unhindered. That is because it is an absolute fact. However, even though a student may initially convince themselves of this fact, it has been my experience that they will renege on it at some time in the future.

The use of casual terminology such as the following surely helps to set this trap:

Examining the signature to the `map` function, we see that it takes two arguments:

`map :: (a -> b) -> List a -> List b`

We will then talk about the *first argument* and the *second argument* as if there even is such a thing.

However, the truth of the original fact has not changed. Look at it, just sitting there, saying nothing, being all shiny and true. So how could all functions take one argument, while we simultaneously and casually talk about a “second argument”? Are we just telling great big lies? Have we made a mistake?

The problem is our vocabulary. In our spoken words, we are using an **approximation** of fact and superficially, it looks like a blatant contradiction. Let us expand our approximation to more obviously coincide with our statement of fact. I have added some annotation in [brackets].

Examining the signature to the `map` function, we see that it is a function [therefore, it definitely takes one argument]. That argument is of the type `(a -> b)` [which is also a function]. The return type of the `map` function is `(List a -> List b)`, which is a function and [since it is a function] takes one argument. That argument is of the type `(List a)` and it returns a value [not a function]. That value is of the type `List b`.

When we say out loud “the `map` function takes two arguments”, we are approximating for the above expansion. It is important to understand what we really mean here.
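Standard Haskell (with ordinary lists and functions) makes the expansion concrete: applying a “two-argument” function to one argument is already a complete application, yielding another function. This is my own example, not from the original post.

```haskell
-- `add` takes one argument and returns a function
add :: Int -> Int -> Int
add x y = x + y

-- applying one argument is a complete application in itself
addThree :: Int -> Int
addThree = add 3

main :: IO ()
main = print (addThree 4) -- the same as (add 3) 4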

During my teaching, I will often make a deal with students; I will use the terser vocabulary with you and I will even let you use it, however, if at any moment you violate our understanding of its proper meaning, I will rip it out from under you and demand that you use the full expansion. Almost always, the student will agree to this deal.

Some time after having made this deal, I will hear the following question. Given, for example, this solution to an exercise:

```
flatten ::
  List (List a)
  -> List a
flatten =
  foldRight (++) Nil
```

I will hear this question:

Wait a minute, you only passed two arguments to `foldRight`; however, we have seen that it takes three. How could this possibly work!?

Here is another example of the question. Given this solution:

```
filter ::
  (a -> Bool)
  -> List a
  -> List a
filter f =
  foldRight (\a -> if f a then (a:.) else id) Nil
```

I will hear this question:

The argument to

`foldRight`

(which is itself a function) takes two arguments, however, the function you passed to`foldRight`

has been specified to take only one (which you have called`a`

). How could this even work?

It is at this moment that I hand out an infringement notice under our agreed penal code for the offence of:

```
Section 1.1
Failure to Accept that All Haskell Functions Take One Argument
Penalty
Temporary Incarceration at Square One with release at the discretion of an
appointed Probation Officer
```

I understand that in a learning environment, it may be easy to demonstrate and subsequently accept this fact, then later fall afoul when previously learned facts interfere with this most recent one. The purpose of going back to square one and starting again is to properly internalise this fact. It is an important one, not just for the Haskell programming language, but for Functional Programming in general.

Joking aside, the purpose of this post is to help reconcile these observations. There is a recipe to disentanglement. If you find yourself in this situation, follow these steps:

- Revert back to the fact of the matter: all Haskell functions always take one argument. There is never an exception to this rule (so you cannot possibly be observing one!).
- From this starting position, reason about your observations with this fact in mind, even if it is a little clumsy at first. After some repetitions, this clumsiness will disappear. Persevere with it for now.
- Introspect on the thought process that led you into that trap to begin with. This will help you in the future as you begin to trust that principled thinking will resolve these kinds of conflicts. It can be initially clumsy, even to the point of resisting on that basis, but that is a one-time penalty which quickly speeds up.
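Using standard Haskell lists (rather than the course’s `List` type), the `filter` example above reads cleanly in the one-argument view: the function given to `foldr` takes one argument `a` and returns a function on the accumulated list.

```haskell
-- the lambda takes one argument `a` and returns a function [a] -> [a];
-- foldr then applies that returned function to the accumulated result
filter' :: (a -> Bool) -> [a] -> [a]
filter' f =
  foldr (\a -> if f a then (a :) else id) []

main :: IO ()
main = print (filter' even [1 .. 10 :: Int])
```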

Hope this helps.

The Haskell programming language has a `Monad` type-class as well as a `Functor` type-class. It is possible to derive the `Functor` primitive (`fmap`) from the `Monad` primitives:

```
-- derive fmap from the Monad primitives, (>>=) and return
fmap f x =
  x >>= return . f
```

Therefore, it is reasonably argued that `Monad` should extend `Functor` so as to provide this default definition of `fmap`. Due to history, this is not the case, which leads to some awkward situations.

For example, since not all `Functor` instances are `Monad` instances, a given operation may wish to restrict itself (if possible) to `Functor` so that it can be used against those data types. In short, use `fmap` instead of `liftM` to prevent an unnecessary constraint on the type of the operation.

```
import Control.Monad (liftM)

fFlip ::
  Functor f =>
  f (a -> b)
  -> a
  -> f b
fFlip f a =
  fmap ($a) f

mFlip ::
  Monad f =>
  f (a -> b)
  -> a
  -> f b
mFlip f a =
  liftM ($a) f
```

The `fFlip` function is available to a strict superset of the data types to which `mFlip` is available, yet they are both equivalent in power. It is desirable to implement `fFlip`. However, when we combine a usage of `fFlip` with a monad operation, our type constraint becomes `(Monad f, Functor f) =>`, which is undesirable boilerplate, because `Monad` implies `Functor`!

A proposal to amend this introduces the `Applicative` type-class, which sits between `Monad` and `Functor`. In other words, `Monad` extends `Applicative` and `Applicative` extends `Functor`. This is, again, because the primitives of each superclass can be derived:

```
-- derive fmap from the Applicative primitives, (<*>) and pure
fmap ::
  Applicative f =>
  (a -> b)
  -> f a
  -> f b
fmap =
  (<*>) . pure

-- derive (<*>) and pure from the Monad primitives, (>>=) and return
(<*>) ::
  Monad f =>
  f (a -> b)
  -> f a
  -> f b
f <*> a =
  do f' <- f
     a' <- a
     return (f' a')

pure ::
  Monad f =>
  a
  -> f a
pure =
  return
```

With this proposal, there is another proposal to extend do-notation to take advantage of this improved flexibility. Currently, do-notation translates to code that uses the `Monad` primitives, `(>>=)` and `return`^{1}.

There are some arguments against this proposal, because this extension is not always desirable. In particular, the degree to which values are shared may be affected. Consider:

```
result =
  do a <- expr
     b <- spaceyExpr
     return (f a b)

-- monad desugaring (current)
result =
  expr >>= \a ->
  spaceyExpr >>= \b ->
  return (f a b)

-- applicative desugaring (proposed)
result =
  fmap f expr <*> spaceyExpr
```

Since `spaceyExpr` appears inside a lambda in the current desugaring, it will not be retained, and so is computed on each application of that lambda to `a`. However, in the proposed desugaring, the value is retained and shared when the expression is evaluated. This could, of course, lead to surprises in space usage.

It might be argued that do-notation should maintain its current desugaring using `Monad` and introduce another means by which to perform `Applicative` desugaring.

Whatever the outcome, all of this distracts from the otherwise glaring oversight.

The Functor, Monad, Applicative proposal opens with the following paragraph:

```
Haskell calls a couple of historical accidents its own. While some of them,
such as the "number classes" hierarchy, can be justified by pragmatism or
lack of a strictly better suggestion, there is one thing that stands out as,
well, not that: Applicative not being a superclass of Monad.
```

It is my opinion that this proposal is about to commit *exactly the same historical mistake* that it is attempting to eschew. Furthermore, by properly eliminating this mistake, the syntax proposal would be improved as a consequence.

As a strong proponent of progress, and given that Haskell is often pushing the front of progress, this makes me a bit sad :(

Fact: not all semigroups are monoids.

No desugaring, current or proposed, utilises the identity value. In the `Monad` case, this is `return`, and in the `Applicative` case, this is `pure`. However, users are required to implement these functions. There exist structures that can utilise the full power of this desugaring, but cannot provide the identity value. Therefore, we can eliminate the identity value and still exploit the full advantage of the desugaring. Not only this, but it then makes these operations available to a strict superset of data types.

Consider the following amendment to the proposal:

```
class Functor f => Apply f where
  (<*>) ::
    f (a -> b)
    -> f a
    -> f b

class Apply f => Applicative f where
  pure ::
    a
    -> f a
```

We may still derive many of the ubiquitous functions, without the full power of `Applicative`:

```
liftA2 ::
  Apply f =>
  (a -> b -> c)
  -> f a
  -> f b
  -> f c
liftA2 f a b =
  fmap f a <*> b
```
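The same derivation type-checks today against the standard `Applicative` class (a stronger constraint than the proposed `Apply`, but the body is identical):

```haskell
-- liftA2 derived from fmap and (<*>), exactly as in the post,
-- but against the standard Applicative class
liftA2' :: Applicative f => (a -> b -> c) -> f a -> f b -> f c
liftA2' f a b = fmap f a <*> b

main :: IO ()
main = do
  print (liftA2' (+) (Just 1) (Just (2 :: Int)))
  print (liftA2' (+) Nothing (Just (2 :: Int)))
```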

We may still exploit our do-notation:

```
result =
  do a <- expr1
     b <- expr2
     return (f a b)

-- apply desugaring
result =
  fmap f expr1 <*> expr2
```

However, more to the point, there are now data structures for which these operations (e.g. `liftA2`) and do-notation become available that otherwise would not have been.

Here are some examples of those:

```
data NonEmptyList a = NEL a [a]

data Also a x = Also (NonEmptyList a) x

instance Functor (Also a) where

instance Apply (Also a) where
  Also (NEL h t) f <*> Also (NEL h' t') x =
    Also (NEL h (t ++ h' : t')) (f x)
```

The `Also` data type has no possible `Applicative` instance, yet it has a very usable `Apply`. This means we can use (amended) `liftA2` and do-notation on `Also` values, without losing any power.

This data type in fact generalises, while still maintaining an `Apply` instance:

`data Also s x = Also s x`

There is an `Apply` instance for `(Also s)` for as long as there is a `Semigroup` instance for `s`; however, if your semigroup is not a monoid, then there is no `Monoid` instance, and hence no `Applicative` for `(Also s)`. I have used `(NonEmptyList a)` as an example of a data type with a semigroup, but not a monoid.

```
class Semigroup a where
  (<>) :: -- associative
    a
    -> a
    -> a

instance Semigroup s => Apply (Also s) where
  Also s1 f <*> Also s2 x =
    Also (s1 <> s2) (f x)
```
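Because `Apply` is only proposed, the instance above cannot be compiled against today’s base library. Here is a self-contained sketch of the same combination written as a plain function (names of my own choosing), using the `Semigroup` that base now provides:

```haskell
-- the Apply combination for Also, written as an ordinary function;
-- the list semigroup plays the role of `s`
data Also s x = Also s x deriving (Eq, Show)

applyAlso :: Semigroup s => Also s (a -> b) -> Also s a -> Also s b
applyAlso (Also s1 f) (Also s2 x) = Also (s1 <> s2) (f x)

main :: IO ()
main = print (applyAlso (Also [1] (+ 10)) (Also [2] 5) :: Also [Int] Int)
```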

```
data OrNot a = -- Maybe (NonEmptyList a)
  Not
  | Or (NonEmptyList a)

instance Functor OrNot where

instance Apply OrNot where
  Not <*> _ =
    Not
  Or _ <*> Not =
    Not
  Or (NEL h t) <*> Or (NEL h' t') =
    Or (NEL (h h') (t <*> t'))
```

The `OrNot` data type is isomorphic to `Maybe (NonEmptyList a)` and has an `Apply` instance that is similar to the `Applicative` for `Maybe`. However, since this data type holds a non-empty list, there is no possibility for an `Applicative` instance.

Again, with an amended do-notation and library functions, we could use `OrNot` values.

Your regular old `Data.Map#Map` can provide an `Apply` instance, but not an `Applicative`:

```
instance Ord k => Apply (Map k) where
  (<*>) =
    Map.intersectionWith ($)
```

There is no corresponding `Applicative` instance for this `Apply` instance. This is the same story for `Data.IntMap#IntMap`.

I want to use `liftA2` and many other generalised functions on `(Map k)` values and no, I am not sorry!
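Even without the class amendment, the combination itself is available today. A small sketch (my own example) of what `(<*>) = Map.intersectionWith ($)` does on ordinary maps:

```haskell
import qualified Data.Map as Map

-- a map of functions combined with a map of arguments:
-- only keys present in both survive, exactly intersectionWith ($)
funcs :: Map.Map Int (Int -> Int)
funcs = Map.fromList [(1, (+ 10)), (2, (* 2))]

args :: Map.Map Int Int
args = Map.fromList [(2, 5), (3, 7)]

main :: IO ()
main = print (Map.intersectionWith ($) funcs args)
```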

I could go on and on with useful data types that have `Apply` instances, but no corresponding `Applicative`. However, I hope this is enough to illustrate the point.

If we are going to amend the type-class hierarchy, taking on all the compatibility issues of doing so, then let us provide a kick-arse solution. It is especially compelling in that this amendment to the proposal subsumes the existing error. Let us move on from yet another historical mistake that has already been acknowledged.

This story is not just about `Apply` and `Applicative`. All of the same reasoning applies to semi-monads, or the `Bind` type-class. The `return` operation is not essential to do-notation, or even to many monad functions, so it is an unnecessary, imposed requirement on implementers of the `Monad` type-class.

Similarly, there are structures for which there is a `Bind` instance, but not a `Monad` instance.

In order to take full advantage of a type-class amendment, I submit the following proposed type-class hierarchy. I contend that it subsumes the existing proposal by providing additional flexibility at zero additional cost.

Library functions, such as `liftA2`, could slowly adopt an amendment to their type signatures so as to open up to more data types.

```
class Functor f where
  fmap ::
    (a -> b)
    -> f a
    -> f b

class Functor f => Apply f where
  (<*>) ::
    f (a -> b)
    -> f a
    -> f b

class Apply f => Applicative f where
  pure ::
    a
    -> f a

class Apply f => Bind f where
  (>>=) ::
    (a -> f b)
    -> f a
    -> f b

class (Applicative f, Bind f) => Monad f where
```

and while we’re at it…

```
class Semigroup a where
  (<>) :: -- mappend
    a
    -> a
    -> a

class Semigroup a => Monoid a where
  mempty ::
    a
```

but maybe I am biting off a bit too much there :)

I have not mentioned the `Pointed` experiment, because it is not worth mentioning anymore. It was an experiment, executed in both Scala and Haskell, and the result is conclusive.

However, here is the type-class:

```
class Functor f => Pointed f where
  pure ::
    a
    -> f a
```

It was once proposed to slot in between `Applicative` and `Functor`. The `Pointed` type-class is not at all useful, and there is no value in continuing the discussion in this context, but rather about the result of the failed experiment. That is for another day.

^{1} There are other functions on `Monad`, but these are either derivable (e.g. `(>>)`) or a mistake and a hindrance to discussion (e.g. `fail`).↩

The two functions `runOptions` and `runIntRdrs` implement a specific function with a small difference between each. The duplication in their code bodies is noted in the comments and is denoted by asterisks.

**How might the problem of code duplication be solved for this case?**

The puzzle is designed to compile as-is, which means that, for some languages, supporting data structures need to be provided. For example, the C# programming language does not provide an immutable (cons) list data structure, so the bare minimum is supplied here. This makes the puzzle appear quite noisy for that specific programming language; however, be assured it is the same code.

```
object RefactorPuzzle {
  case class IntRdr[+A](read: Int => A) {
    def map[B](f: A => B): IntRdr[B] =
      IntRdr(f compose read)

    def flatMap[B](f: A => IntRdr[B]): IntRdr[B] =
      IntRdr(n => f(read(n)).read(n))
  }

  object IntRdr {
    def apply[A](a: A): IntRdr[A] =
      IntRdr(_ => a)
  }

  // Return all the Some values, or None if not all are Some.
  def runOptions[A](x: List[Option[A]]): Option[List[A]] =
    x.foldRight[Option[List[A]]](Option(Nil))((a, b) => a.flatMap(aa => b.map(aa :: _)))

  // Apply an Int to a list of int readers and return the list of return values.
  def runIntRdrs[A](x: List[IntRdr[A]]): IntRdr[List[A]] =
    x.foldRight[IntRdr[List[A]]](IntRdr(Nil))((a, b) => a.flatMap(aa => b.map(aa :: _)))

  // Code Duplication
  // ******* ************* ******* ***********
  // def runOptions[A](x: List[Option[A]]): Option[List[A]] =
  // def runIntRdrs[A](x: List[IntRdr[A]]): IntRdr[List[A]] =
  // ************ *********** *************************************************
  // x.foldRight[Option[List[A]]](Option(Nil))((a, b) => a.flatMap(aa => b.map(aa :: _)))
  // x.foldRight[IntRdr[List[A]]](IntRdr(Nil))((a, b) => a.flatMap(aa => b.map(aa :: _)))
}
```

```
module RefactoringPuzzle where

newtype IntRdr a =
  IntRdr {
    readIntRdr :: Int -> a
  }

mapIntRdr ::
  IntRdr a
  -> (a -> b)
  -> IntRdr b
mapIntRdr (IntRdr g) f =
  IntRdr (f . g)

bindIntRdr ::
  IntRdr a
  -> (a -> IntRdr b)
  -> IntRdr b
bindIntRdr (IntRdr g) f =
  IntRdr (\n -> readIntRdr (f (g n)) n)

applyIntRdr ::
  a
  -> IntRdr a
applyIntRdr =
  IntRdr . const

type Option = Maybe

mapOption ::
  Option a
  -> (a -> b)
  -> Option b
mapOption Nothing _ =
  Nothing
mapOption (Just a) f =
  Just (f a)

bindOption ::
  Option a
  -> (a -> Option b)
  -> Option b
bindOption Nothing _ =
  Nothing
bindOption (Just a) f =
  f a

applyOption ::
  a
  -> Option a
applyOption a =
  Just a

-- Return all the Some values, or None if not all are Some.
runOptions :: [Option a] -> Option [a]
runOptions = foldr (\a b -> bindOption a (\aa -> mapOption b (aa:))) (applyOption [])

-- Apply an Int to a list of int readers and return the list of return values.
runIntRdrs :: [IntRdr a] -> IntRdr [a]
runIntRdrs = foldr (\a b -> bindIntRdr a (\aa -> mapIntRdr b (aa:))) (applyIntRdr [])

-- Code Duplication
-- *** ****** ******* ****
-- runOptions :: [Option a] -> Option [a]
-- runIntRdrs :: [IntRdr a] -> IntRdr [a]
-- *** *********************** ************** ************ ****
-- runOptions = foldr (\a b -> bindOption a (\aa -> mapOption b (aa:))) (applyOption [])
-- runIntRdrs = foldr (\a b -> bindIntRdr a (\aa -> mapIntRdr b (aa:))) (applyIntRdr [])
```

```
using System;

namespace RefactoringPuzzle {
  class IntRdr<A> {
    public readonly Func<int, A> read;

    IntRdr(Func<int, A> read) {
      this.read = read;
    }

    public IntRdr<B> Select<B>(Func<A, B> f) {
      return new IntRdr<B>(n => f(read(n)));
    }

    public IntRdr<B> SelectMany<B>(Func<A, IntRdr<B>> f) {
      return new IntRdr<B>(n => f(read(n)).read(n));
    }

    public static IntRdr<A> apply(A a) {
      return new IntRdr<A>(_ => a);
    }
  }

  abstract class Option<A> {
    public abstract X Fold<X>(Func<A, X> some, X none);

    public Option<B> Select<B>(Func<A, B> f) {
      return Fold<Option<B>>(a => new Option<B>.Some(f(a)), new Option<B>.None());
    }

    public Option<B> SelectMany<B>(Func<A, Option<B>> f) {
      return Fold(f, new Option<B>.None());
    }

    public static Option<A> apply(A a) {
      return new Some(a);
    }

    public class Some : Option<A> {
      readonly A a;

      public Some(A a) {
        this.a = a;
      }

      public override X Fold<X>(Func<A, X> some, X none) {
        return some(a);
      }
    }

    public class None : Option<A> {
      public override X Fold<X>(Func<A, X> some, X none) {
        return none;
      }
    }
  }

  abstract class List<A> {
    public abstract X FoldRight<X>(Func<A, X, X> f, X x);

    // Return all the Some values, or None if not all are Some.
    Option<List<A>> runOptions(List<Option<A>> x) {
      return x.FoldRight((a, b) => a.SelectMany(aa =>
        b.Select(bb => bb.Prepend(aa))), Option<List<A>>.apply(new Nil()));
    }

    // Apply an Int to a list of int readers and return the list of return values.
    IntRdr<List<A>> runIntRdrs(List<IntRdr<A>> x) {
      return x.FoldRight((a, b) => a.SelectMany(aa =>
        b.Select(bb => bb.Prepend(aa))), IntRdr<List<A>>.apply(new Nil()));
    }

    public List<A> Prepend(A a) {
      return new Cons(a, this);
    }

    public class Nil : List<A> {
      public override X FoldRight<X>(Func<A, X, X> f, X x) {
        return x;
      }
    }

    public class Cons : List<A> {
      readonly A head;
      readonly List<A> tail;

      public Cons(A head, List<A> tail) {
        this.head = head;
        this.tail = tail;
      }

      public override X FoldRight<X>(Func<A, X, X> f, X x) {
        return f(head, tail.FoldRight(f, x));
      }
    }
  }

  // Code Duplication
  // ************* ******* *********
  // Option<List<A>> runOptions(List<Option<A>> x) {
  // IntRdr<List<A>> runIntRdrs(List<IntRdr<A>> x) {
  // ***********************************************
  // return x.FoldRight((a, b) => a.SelectMany(aa =>
  // return x.FoldRight((a, b) => a.SelectMany(aa =>
  // ********************************* ****************************
  // b.Select(bb => bb.Prepend(aa))), Option<List<A>>.apply(new Nil()));
  // b.Select(bb => bb.Prepend(aa))), IntRdr<List<A>>.apply(new Nil()));
}
```

```
package RefactoringPuzzle;

abstract class Func<T, U> {
  abstract U apply(T t);
}

abstract class IntRdr<A> {
  abstract A read(int i);

  <B> IntRdr<B> map(final Func<A, B> f) {
    return new IntRdr<B>() {
      B read(int i) {
        return f.apply(IntRdr.this.read(i));
      }
    };
  }

  <B> IntRdr<B> bind(final Func<A, IntRdr<B>> f) {
    return new IntRdr<B>() {
      B read(int i) {
        return f.apply(IntRdr.this.read(i)).read(i);
      }
    };
  }

  static <A> IntRdr<A> apply(final A a) {
    return new IntRdr<A>() {
      A read(int i) {
        return a;
      }
    };
  }
}

abstract class Option<A> {
  abstract <X> X fold(Func<A, X> some, X none);

  <B> Option<B> map(final Func<A, B> f) {
    return new Option<B>() {
      <X> X fold(final Func<B, X> some, X none) {
        return Option.this.fold(new Func<A, X>() {
          X apply(A a) {
            return some.apply(f.apply(a));
          }
        }, none);
      }
    };
  }

  <B> Option<B> bind(final Func<A, Option<B>> f) {
    return new Option<B>() {
      <X> X fold(final Func<B, X> some, final X none) {
        return Option.this.fold(new Func<A, X>() {
          X apply(A a) {
            return f.apply(a).fold(some, none);
          }
        }, none);
      }
    };
  }

  static <A> Option<A> apply(final A a) {
    return new Option<A>() {
      <X> X fold(Func<A, X> some, X none) {
        return some.apply(a);
      }
    };
  }
}

abstract class List<A> {
  abstract <X> X foldRight(Func<A, Func<X, X>> f, X x);

  // Return all the Some values, or None if not all are Some.
  static <A> Option<List<A>> runOptions(List<Option<A>> x) {
    return x.foldRight(new Func<Option<A>, Func<Option<List<A>>, Option<List<A>>>>() {
      Func<Option<List<A>>, Option<List<A>>> apply(final Option<A> a) {
        return new Func<Option<List<A>>, Option<List<A>>>() {
          Option<List<A>> apply(final Option<List<A>> b) {
            return a.bind(new Func<A, Option<List<A>>>() {
              Option<List<A>> apply(final A aa) {
                return b.map(new Func<List<A>, List<A>>() {
                  List<A> apply(List<A> bb) {
                    return bb.prepend(aa);
                  }
                });
              }
            });
          }
        };
      }
    }, Option.apply(List.<A>nil()));
  }

  // Apply an Int to a list of int readers and return the list of return values.
  static <A> IntRdr<List<A>> runIntRdrs(List<IntRdr<A>> x) {
    return x.foldRight(new Func<IntRdr<A>, Func<IntRdr<List<A>>, IntRdr<List<A>>>>() {
      Func<IntRdr<List<A>>, IntRdr<List<A>>> apply(final IntRdr<A> a) {
        return new Func<IntRdr<List<A>>, IntRdr<List<A>>>() {
          IntRdr<List<A>> apply(final IntRdr<List<A>> b) {
            return a.bind(new Func<A, IntRdr<List<A>>>() {
              IntRdr<List<A>> apply(final A aa) {
                return b.map(new Func<List<A>, List<A>>() {
                  List<A> apply(List<A> bb) {
                    return bb.prepend(aa);
                  }
                });
              }
            });
          }
        };
      }
    }, IntRdr.apply(List.<A>nil()));
  }

  List<A> prepend(final A a) {
    return new List<A>() {
      <X> X foldRight(Func<A, Func<X, X>> f, X x) {
        // fold the outer (enclosing) list, not this anonymous instance,
        // otherwise the call recurses forever
        return f.apply(a).apply(List.this.foldRight(f, x));
      }
    };
  }

  static <A> List<A> nil() {
    return new List<A>() {
      <X> X foldRight(Func<A, Func<X, X>> f, X x) {
        return x;
      }
    };
  }
}

// Code Duplication
// *********** ************* ******* *********
// static <A> Option<List<A>> runOptions(List<Option<A>> x) {
// static <A> IntRdr<List<A>> runIntRdrs(List<IntRdr<A>> x) {
// **************************** ********** *********** **************
// return x.foldRight(new Func<Option<A>, Func<Option<List<A>>, Option<List<A>>>>(){
// return x.foldRight(new Func<IntRdr<A>, Func<IntRdr<List<A>>, IntRdr<List<A>>>>(){
// ***** *********** *********************** ********
// Func<Option<List<A>>, Option<List<A>>> apply(final Option<A> a) {
// Func<IntRdr<List<A>>, IntRdr<List<A>>> apply(final IntRdr<A> a) {
// **************** *********** **************
// return new Func<Option<List<A>>, Option<List<A>>>() {
// return new Func<IntRdr<List<A>>, IntRdr<List<A>>>() {
// ********************** **************
// Option<List<A>> apply(final Option<List<A>> b) {
// IntRdr<List<A>> apply(final IntRdr<List<A>> b) {
// ************************** *************
// return a.bind(new Func<A, Option<List<A>>>(){
// return a.bind(new Func<A, IntRdr<List<A>>>(){
// *****************************
// Option<List<A>> apply(final A aa) {
// IntRdr<List<A>> apply(final A aa) {
// ******************************************
// return b.map(new Func<List<A>, List<A>>(){
// return b.map(new Func<List<A>, List<A>>(){
// ***************************
// List<A> apply(List<A> bb) {
// List<A> apply(List<A> bb) {
// **********************
// return bb.prepend(aa);
// return bb.prepend(aa);
…
// *** ***********************
// }, Option.apply(List.<A>nil()));
// }, IntRdr.apply(List.<A>nil()));
…
```