There has been some intermingling of the Scala and Haskell communities, and I have noticed now and then people commenting on stuff that’s supposed to be easy in Haskell and hard in Scala. Less often (maybe because I read Scala questions, not Haskell ones), I see someone mentioning that something in Scala is easier than in Haskell.

So. I’d like to know from people who are knowledgeable in both what kind of things are easy in Haskell and hard in Scala, and, conversely, what kind of things are easy in Scala and hard in Haskell.

*Tony Morris responds 9 hours later:*

Daniel, As you know my day job is primarily writing Haskell and secondarily Scala. I have also used both languages for teaching, though not in universities (I use other languages there), but mostly for voluntary teaching that I still do today. Very rarely, I use Java or JavaScript. I work for a product company.

I am pleased to see that my prediction is false – your question has not provoked a slew of misinformation as I thought it would. As a result, I am compelled not to ignore it :) So here goes.

At a somewhat superficial level, Haskell has significantly superior tool support over Scala and Java. For a non-exhaustive example, Haskell has hoogle, djinn and pl; three tools that alone are extremely useful, for which there is no equivalent for Scala. These tools exist, and are as useful as they are, because of certain fundamental properties of Haskell itself. Here I mean that hoogle is only as useful as it is because Haskell tags IO effects in the type, delineating values of type IO t from those of type t, so hoogling for, say, [a] -> Int eliminates a lot of candidate functions that would have this type in other environments. In Scala, without the delineation between an Int that has been computed with its arguments, and an Int that has been computed with the entire universe, a hoogle-equivalent would not be as useful – nevertheless, it would be somewhat useful were it to exist.
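To make the delineation concrete, here is a minimal sketch (the names `size` and `sizeFromFile` are mine, chosen for illustration) of how the type distinguishes a pure count from an effectful one:

```haskell
-- A pure function with this type cannot perform I/O;
-- hoogling for [a] -> Int will only ever find functions like this one.
size :: [a] -> Int
size = length

-- A count that has been "computed with the entire universe"
-- must say so in its type.
sizeFromFile :: FilePath -> IO Int
sizeFromFile p = (length . lines) <$> readFile p
```

A hoogle search by type can therefore exclude every effectful candidate up front.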

Haskell’s hlint is also superior to say, Java’s CheckStyle. Included in GHC is a warning system, which when coupled with hlint is far more comprehensive. I’ve not seen anything like this for Scala.

Haskell has cleaner syntax and superior type inference. Very rarely is it the case that we must type-annotate our Haskell. This is not to say that we do not type-annotate our Haskell, just that we have the choice. As you know, this is not so with Scala. However, on an extremely superficial level, I prefer Scala’s lambda syntax to Haskell’s, which requires a pseudo-lambda symbol (`\`). In any case, I aim for point-free style where appropriate, making this already-weak point moot. I use Haskell’s clean syntax to appeal to children in the challenges of teaching. Children take very easily to Haskell’s syntax, since there is far less redundant, “off to the side”, “let’s take a little excursion to nowhere”, ceremony (so to speak). As you might guess, children are easily distracted – Haskell helps me as a teacher to keep this in check.

On to the fundamentals. As you know, Haskell is call-by-need by default, where Scala is the other way around. This is, of course, a very contentious issue. Nevertheless, I admit to falling very strongly to one side: call-by-need by default is the most practical, and Scala’s evaluation model is a very far-away second place. I have never seen a legitimate argument that comes close to swaying my position on this, though I do invite it (not here please). To summarise, I think Haskell has a superior evaluation model. There are also nuances in Scala’s laziness. For example, while Scala unifies lazy values, it does not do so for those in contravariant position. i.e. an (Int => Int) is not a ((=> Int) => Int).
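A minimal sketch of why call-by-need matters in practice (the names `constFn` and `ok` are mine): an argument that is never used is never evaluated, even if evaluating it would raise an error or diverge.

```haskell
-- Under call-by-need, the second argument is never demanded,
-- so the error below is never raised.
constFn :: a -> b -> a
constFn x _ = x

ok :: Int
ok = constFn 42 (error "never evaluated")
```

The same expression under strict evaluation would fail before `constFn` ever ran.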

This contentious issue aside, Haskell’s effect-tracking (by the way, which is a consequence of its evaluation model), is overwhelmingly practical to every-day software tasks. The absence of same or similar is quite devastating as many users of Scala have become aware (by wishing for a solution). I cannot emphasise how important this form of static type-check is to typical business programming in terse form, so I won’t try here.

Haskell has far superior library support to Scala and Java. You and I have discussed this before. Scala lacks some of the most fundamental functions in its standard libraries, and the higher-level ones are even more scarce. Granted, there are fundamental properties of Scala making good library support more difficult (strict evaluation, Java interop); however, neither of these incurs such a penalty as to produce what can only be described as an unfortunate catastrophe, as the Scala standard libraries are. That is to say, the Scala standard libraries could easily be miles ahead of where they are today, but they are not, and you (I) are left with ponderings as to why – something about third lumbar vertebrae and expert levels or something, I suppose.

To help emphasise this point, there are times in my day job when I come off a project using Haskell and on to one using Scala. This comes with some consequences that I feel immediately, with a very sharp contrast. I then use IntelliJ IDEA, which is slow, cumbersome and full of bugs; the Scala type inferencer is less robust and difficult to prototype with (I may make a type-error and it all just carries on as if nothing happened). But there is nothing more disappointing than having to spend (waste) a few hours implementing a library that really should already be there – what a waste of my time, and probably of the next guy who has to implement such fundamental library abilities. Talk about shaving yaks. In my opinion, this is the most disappointing part of Scala, especially since there is nothing stopping it from having a useful library besides skill and the absence of ability to recognise what a useful library even is. This happens for Haskell too, but to a far lesser extent. I digress in frustration.

The GHC REPL (GHCi) has better support for every-day programming. More useful tab-completion, :type, :info and :browse commands are heavily missed when in Scala. It’s also much faster, but Scala can be forgiven given the JVM.

Why use Scala? I can call the Java APIs, even the most ridiculous, yet popular, APIs ever written. I can call WebSphere methods and I can write servlets. I can write a Wicket application, or use the Play framework, or I can do something outrageous with a database and Hibernate. I can do all those things that our industry seems to think are useful, though I secretly contend are pathetic, and I hope our children do too. I can completely avoid the arguments and discussions associated with the merits of these issues, and get on with the job, while still getting the benefits of a language that is vastly superior to Java. This might seem like a cheap stab, though I am sincere when I say that is a true benefit that I value a lot.

Scala’s module system is superior to Haskell’s, almost. That is, it has some things that are very useful that Haskell does not, but also vice versa – for example, Haskell allows a module to export other modules. Scala requires you to stick your finger in your left ear to do this; oh, and IntelliJ IDEA stops working – I exaggerate, but you get the point. Scala has package objects and first-class module niceties. Haskell lacks here.

Scala also has the ability to namespace a function by giving special status to one of its arguments (some people call this OO, then don’t, in the very next breath – I never get it). What I mean is, you may have two functions with the same name, which are disambiguated at the call site by the type of the argument to the left of the dot. I am deliberately not calling this by any special name, but rather focussing on its utility – Haskell can do this with qualified imports – not quite so nice. I am usually, though not always, particularly unimaginative when it comes to inventing function names – allowing me to reuse one without a penalty is very handy indeed. Note here I do not mean overloading – I think the Scala community has worked out that overloading is just not worth it – do not do it, ever.
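For contrast, here is what the qualified-import approach looks like in Haskell; workable, though the disambiguation happens at the import and the call site rather than via a privileged argument (the bindings `sortedInsert` and `oneEntry` are illustrative names of mine):

```haskell
import qualified Data.List as List
import qualified Data.Map as Map

-- Two functions named `insert`, disambiguated by qualifier
-- rather than by the type to the left of a dot.
sortedInsert :: [Int]
sortedInsert = List.insert 3 [1, 2, 4]

oneEntry :: Map.Map Char Int
oneEntry = Map.insert 'k' 1 Map.empty
```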

In my experience, Scala appeals to existing programmers, who can usually get up and running quicker than with Haskell. In contrast, non-programmers get up and running with Haskell far quicker than with Scala. As a teacher, I used to think this was a great attribute of Scala; then I tried it, then I thought it was just good. Today, I think it is just not worth it – I have seen too many catastrophes when programmers who are familiar with degenerate problem-solving techniques (à la Java) are handed such things as Scala. Call me pessimistic or some-such, but I wish to remind you that a few years ago, I was handed an already-failed project written in Scala by the Java guys, which I was supposed to “save” because I was the “Scala expert.” I’m sure you can guess how that turned out. I have many (many) anecdotes like this, though most of those are confirmed each time I try to use Scala for teaching existing programmers, rather than in industry (perhaps this is my selection bias, given my extreme apprehension). Nevertheless, my experiences aside, you may call this a benefit over Haskell – there is no doubting that existing programmers get up and running much quicker with Scala.

I can think of a few other tid-bits, but hopefully this satisfies your curiosity. I don’t know how many people are in my position of using both languages extensively in a commercial environment, but I’d truly love to hear from someone who does – especially if they take strong objection to my opinions. That is to say, I invite (and truly yearn for) well-informed peer review of these opinions.

Hope this helps.

*Recovered from StackPrinter after deletion and much subsequent searching.*

I use the Haskell programming language for teaching functional programming. I like to emphasise the learning and construction of *concepts* over learning the details of any specific programming language. I am not into teaching programming languages; I really find that boring, uneventful and unhelpful for all involved. However, learning some of the intricacies of Haskell itself is inevitable. It doesn’t take long though and is very much worth the investment of effort if you aspire to learning concepts.

That is to say, using (almost all) other programming languages for this objective is a false economy that is not even a close call. One could spend many weeks or even years attempting to articulate concepts in a specific programming language, only to struggle precisely because that language resists an accurate expression and formation of that concept. I have seen this an overwhelming number of times. It is often supported by the fallacy of false compromise, “but all languages are Turing-complete, so I am going to use JavaScript, which even has first-class functions, in order to come to understand what monad means.”

No, you won’t, I absolutely insist and promise.

This is all somewhat beside the point of this post though. The point is that there is a fact, which often comes up in teaching, that can be expressed briefly and concisely. It requires no exceptions, apologies or approximations. I would like to state this fact and explain some of the nomenclature that surrounds this fact.

All functions in the Haskell programming language take exactly one argument.

This fact is certain to come up early on in teaching. If a student comes to trust then follow this fact, then progress will be unhindered. That is because it is an absolute fact. However, even though a student may initially convince themselves of this fact, it has been my experience that they will renege on it at some time in the future.

The use of casual terminology such as the following surely helps to set this trap:

Examining the signature of the `map` function, we see that it takes two arguments:

`map :: (a -> b) -> List a -> List b`

We will then talk about the *first argument* and the *second argument* as if there even is such a thing.

However, the truth of the original fact has not changed. Look at it, just sitting there, saying nothing, being all shiny and true. So how could all functions take one argument, while we simultaneously and casually talk about a “second argument”? Are we just telling great big lies? Have we made a mistake?

The problem is our vocabulary. In our spoken words, we are using an **approximation** of fact and superficially, it looks like a blatant contradiction. Let us expand our approximation to more obviously coincide with our statement of fact. I have added some annotation in [brackets].

Examining the signature of the `map` function, we see that it is a function [therefore, it definitely takes one argument]. That argument is of the type `(a -> b)` [which is also a function]. The return type of the `map` function is `(List a -> List b)`, which is a function and [since it is a function] takes one argument. That argument is of the type `(List a)` and it returns a value [not a function]. That value is of the type `List b`.

When we say out loud “the `map` function takes two arguments”, we are approximating for the above expansion. It is important to understand what we really mean here.

During my teaching, I will often make a deal with students; I will use the terser vocabulary with you and I will even let you use it, however, if at any moment you violate our understanding of its proper meaning, I will rip it out from under you and demand that you use the full expansion. Almost always, the student will agree to this deal.

Some time after having made this deal, I will hear the following question. Given, for example, this solution to an exercise:

```
flatten ::
  List (List a)
  -> List a
flatten =
  foldRight (++) Nil
```

I will hear this question:

Wait a minute, you only passed two arguments to `foldRight`; however, we have seen that it takes three. How could this possibly work!?

Here is another example of the question. Given this solution:

```
filter ::
  (a -> Bool)
  -> List a
  -> List a
filter f =
  foldRight (\a -> if f a then (a:.) else id) Nil
```

I will hear this question:

The argument to `foldRight` (which is itself a function) takes two arguments; however, the function you passed to `foldRight` has been specified to take only one (which you have called `a`). How could this even work?

It is at this moment that I hand out an infringement notice under our agreed penal code for the offence of:

```
Section 1.1
Failure to Accept that All Haskell Functions Take One Argument
Penalty
Temporary Incarceration at Square One with release at the discretion of an
appointed Probation Officer
```

I understand that in a learning environment, it may be easy to demonstrate and subsequently accept this fact, then later fall afoul when previously learned facts interfere with this most recent one. The purpose of going back to square one and starting again is to properly internalise this fact. It is an important one, not just for the Haskell programming language, but for Functional Programming in general.

Joking aside, the purpose of this post is to help reconcile these observations. There is a recipe to disentanglement. If you find yourself in this situation, follow these steps:

- Revert to the fact of the matter: all Haskell functions always take one argument. There is never an exception to this rule (so you cannot possibly be observing one!).
- From this starting position, reason about your observations with this fact in mind, even if it is a little clumsy at first. After some repetitions, this clumsiness will disappear. Persevere with it for now.
- Introspect on the thought process that led you into that trap to begin with. This will help you in the future as you begin to trust that principled thinking will resolve these kinds of conflicts. It can be initially clumsy, even to the point of resisting on that basis, but that is a one-time penalty which quickly speeds up.
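The disentanglement can be restated in standard Haskell (using the standard `foldr` in place of the course library's `foldRight`; the names `flattenStd`, `addInt` and `addTen` are mine): applying a function to fewer than "all" of its casually-counted arguments is ordinary application, and the result is a function.

```haskell
-- foldr :: (a -> b -> b) -> b -> [a] -> b
-- really  (a -> b -> b) -> (b -> ([a] -> b))
flattenStd :: [[a]] -> [a]
flattenStd = foldr (++) []   -- two applications yield a function

addInt :: Int -> Int -> Int  -- really Int -> (Int -> Int)
addInt x y = x + y

addTen :: Int -> Int
addTen = addInt 10           -- one application yields a function
```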

Hope this helps.

The Haskell programming language has a `Monad` type-class as well as a `Functor` type-class. It is possible to derive the `Functor` primitive (`fmap`) from the `Monad` primitives:

```
-- derive fmap from the Monad primitives, (>>=) and return
fmap f x =
  x >>= return . f
```

Therefore, it is reasonably argued that `Monad` should extend `Functor` so as to provide this default definition of `fmap`. Due to history, this is not the case, which leads to some awkward situations.

For example, since not all `Functor` instances are `Monad` instances, a given operation may wish to restrict itself (if possible) to `Functor` so that it can be used against those data types. In short, use `fmap` instead of `liftM` to prevent an unnecessary constraint on the type of the operation.

```
fFlip ::
  Functor f =>
  f (a -> b)
  -> a
  -> f b
fFlip f a =
  fmap ($a) f

mFlip ::
  Monad f =>
  f (a -> b)
  -> a
  -> f b
mFlip f a =
  liftM ($a) f
```

The `fFlip` function is available to a strict superset of the data types that `mFlip` is available to, yet the two are equivalent in power. It is therefore desirable to implement `fFlip`. However, when we combine a usage of `fFlip` with a monad operation, our type constraint becomes `(Monad f, Functor f) =>`, which is undesirable boilerplate because `Monad` implies `Functor`!
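To see why the `Functor` version is preferable, consider a data type that is a `Functor` but not (without further constraints) a `Monad`, such as a pair: `fFlip` works there, while `mFlip` does not. A sketch, restating the definition above:

```haskell
fFlip :: Functor f => f (a -> b) -> a -> f b
fFlip f a = fmap ($ a) f

-- ((,) String) is a Functor, but it is not a Monad without a Monoid
-- constraint on String's position, so only fFlip applies here.
example :: (String, Int)
example = fFlip ("log", (+ 1)) 41
```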

A proposal to amend this introduces the `Applicative` type-class, which sits between `Monad` and `Functor`. In other words, `Monad` extends `Applicative` and `Applicative` extends `Functor`. This is, again, because the primitives of each superclass can be derived:

```
-- derive fmap from the Applicative primitives, (<*>) and pure
fmap ::
  Applicative f =>
  (a -> b)
  -> f a
  -> f b
fmap =
  (<*>) . pure

-- derive (<*>) and pure from the Monad primitives, (>>=) and return
(<*>) ::
  Monad f =>
  f (a -> b)
  -> f a
  -> f b
f <*> a =
  do f' <- f
     a' <- a
     return (f' a')

pure ::
  Monad f =>
  a
  -> f a
pure =
  return
```

With this proposal, there is another proposal to extend do-notation to take advantage of this improved flexibility. Currently, do-notation translates to code that uses the `Monad` primitives, `(>>=)` and `return`^{1}.

There are some arguments against this proposal, because this extension is not always desirable. In particular, the degree to which values are shared may be affected. Consider:

```
result =
  do a <- expr
     b <- spaceyExpr
     return (f a b)

-- monad desugaring (current)
result =
  expr >>= \a ->
  spaceyExpr >>= \b ->
  return (f a b)

-- applicative desugaring (proposed)
result =
  fmap f expr <*> spaceyExpr
```

Since `spaceyExpr` appears inside a lambda in the current desugaring, it will not be retained, and so is computed on each invocation of that lambda. However, in the proposed desugaring, the value is retained and shared when the expression is evaluated. This could, of course, lead to surprises in space usage.

It might be argued that do-notation should maintain its current desugaring using `Monad` and introduce another means by which to perform `Applicative` desugaring.

Whatever the outcome, all of this distracts from the otherwise glaring oversight.

The Functor, Monad, Applicative proposal opens with the following paragraph:

```
Haskell calls a couple of historical accidents its own. While some of them,
such as the "number classes" hierarchy, can be justified by pragmatism or
lack of a strictly better suggestion, there is one thing that stands out as,
well, not that: Applicative not being a superclass of Monad.
```

It is my opinion that this proposal is about to commit *exactly the same historical mistake* that it is attempting to eschew. Furthermore, by properly eliminating this mistake, the syntax proposal would be improved as a consequence.

Being a strong proponent of progress, and believing that Haskell is often pushing the front of progress, I find this a bit sad :(

Fact: not all semigroups are monoids.

No desugaring, current or proposed, utilises the identity value. In the `Monad` case, this is `return`, and in the `Applicative` case, this is `pure`. However, it is a requirement on users to implement these functions. There exist structures that can utilise the full power of this desugaring, but cannot provide the identity value. Therefore, we can eliminate the identity value and still exploit the full advantage of desugaring. Not only this, but it then makes operations available to a strict superset of data types.

Consider the following amendment to the proposal:

```
class Functor f => Apply f where
  (<*>) ::
    f (a -> b)
    -> f a
    -> f b

class Apply f => Applicative f where
  pure ::
    a
    -> f a
```

We may still derive many of the ubiquitous functions, without the full power of `Applicative`.

```
liftA2 ::
  Apply f =>
  (a -> b -> c)
  -> f a
  -> f b
  -> f c
liftA2 f a b =
  fmap f a <*> b
```

We may still exploit our do-notation:

```
result =
  do a <- expr1
     b <- expr2
     return (f a b)

-- apply desugaring
result =
  fmap f expr1 <*> expr2
```

However, more to the point, there are now data structures for which these operations (e.g. `liftA2`) and do-notation become available, that otherwise would not have been.

Here are some examples of those:

```
data NonEmptyList a = NEL a [a]

data Also a x = Also (NonEmptyList a) x

instance Functor (Also a) where

instance Apply (Also a) where
  Also (NEL h t) f <*> Also (NEL h' t') x =
    Also (NEL h (t ++ h' : t')) (f x)
```

The `Also` data type has no possible `Applicative` instance, yet it has a very usable `Apply`. This means we can use (amended) `liftA2` and do-notation on `Also` values, without losing any power.

This data type generalises, in fact, while still maintaining an `Apply` instance.

`data Also s x = Also s x`

There is an `Apply` instance for `(Also s)` for as long as there is a `Semigroup` instance for `s`; however, if your semigroup is not a monoid, then there is no `Monoid` instance for `s`, and so no `Applicative` instance for `(Also s)`. I have used `(NonEmptyList a)` as an example of a data type with a semigroup, but not a monoid.

```
class Semigroup a where
  (<>) :: -- associative
    a
    -> a
    -> a

instance Semigroup s => Apply (Also s) where
  Also s1 f <*> Also s2 x =
    Also (s1 <> s2) (f x)
```

```
data OrNot a = -- Maybe (NonEmptyList a)
  Not
  | Or (NonEmptyList a)

instance Functor OrNot where

instance Apply OrNot where
  Not <*> _ =
    Not
  Or _ <*> Not =
    Not
  Or (NEL h t) <*> Or (NEL h' t') =
    Or (NEL (h h') (t <*> t'))
```

The `OrNot` data type is isomorphic to `Maybe (NonEmptyList a)` and has an `Apply` instance that is similar to the `Applicative` for `Maybe`. However, since this data type holds a non-empty list, there is no possibility for an `Applicative` instance.

Again, with an amended do-notation and library functions, we could use `OrNot` values.

Your regular old `Data.Map#Map` can provide an `Apply` instance, but not an `Applicative`.

```
instance Ord k => Apply (Map k) where
  (<*>) =
    Map.intersectionWith ($)
```

There is no corresponding `Applicative` instance for this `Apply` instance. This is the same story for `Data.IntMap#IntMap`.

I want to use `liftA2` and many other generalised functions on `(Map k)` values and no, I am not sorry!

I could go on and on with useful data types that have `Apply` instances, but no corresponding `Applicative`. However, I hope this is enough to illustrate the point.

If we are going to amend the type-class hierarchy, taking on all the compatibility issues of doing so, then let us provide a kick-arse solution. It is especially compelling in that this amendment to the proposal subsumes the existing error. Let us move on from yet another historical mistake that has already been acknowledged.

This story is not just about `Apply` and `Applicative`. All of the same reasoning applies to semi-monads, or the `Bind` type-class. The `return` operation is not essential to do-notation or even many monad functions, so it is an unnecessary, imposed requirement for implementers of the `Monad` type-class.

Similarly, there are structures for which there is a `Bind` instance, but not a `Monad` instance.

In order to take full advantage of a type-class amendment, I submit the following proposed type-class hierarchy. I contend that it subsumes the existing proposal by providing additional flexibility for zero additional loss.

Library functions, such as `liftA2`, could slowly adopt an amendment to their type signature so as to open up to more data types.

```
class Functor f where
  fmap ::
    (a -> b)
    -> f a
    -> f b

class Functor f => Apply f where
  (<*>) ::
    f (a -> b)
    -> f a
    -> f b

class Apply f => Applicative f where
  pure ::
    a
    -> f a

class Apply f => Bind f where
  (>>=) ::
    (a -> f b)
    -> f a
    -> f b

class (Applicative f, Bind f) => Monad f where
```

and while we’re at it…

```
class Semigroup a where
  (<>) :: -- mappend
    a
    -> a
    -> a

class Semigroup a => Monoid a where
  mempty ::
    a
```

but maybe I am biting off a bit too much there :)

I have not mentioned the `Pointed` experiment, because it is not worth mentioning anymore. It was an experiment, executed in both Scala and Haskell, and the result is conclusive.

However, here is the type-class:

```
class Functor f => Pointed f where
  pure ::
    a
    -> f a
```

It was once proposed to slot in between `Applicative` and `Functor`. The `Pointed` type-class is not at all useful, and there is no value in continuing discussion of it in this context, but instead about the result of the failed experiment. That is for another day.

^{1} There are other functions on `Monad`, but these are either derivable (e.g. `(>>)`) or a mistake and a hindrance to discussion (e.g. `fail`).↩

`def reverse: List[Banana] => List[Banana]`

Without viewing the body of the code, one might infer that this code “reverses the list.” What does it mean to reverse a list? Let us try to rigorously define reversing a list:

- Reversing the empty list produces the empty list.
- Reversing a single-element list produces that same list.
- Taking two lists, appending them, then reversing, produces the same list as reversing each, then appending the reverse of the former to the reverse of the latter.

We infer all these things, even if informally, when we conclude that this function reverses the list. We might have some degree of confidence in concluding that all these properties hold purely from the function name; however, as part of this conclusion, we also conclude that the function does not peel the first `Banana`. We have no evidence of these facts, except for the function name, alleged to be useful to infer some confidence about these facts.

In order for this method of comprehension to be efficacious, it must produce a result better than guessing. That is to say, the degree of confidence invoked by inferring that “this function reverses the list” from the premise, “because the function is named `reverse`”, must be higher than inferring that the function does not reverse the list, from the same premise. In my experience, with which some will certainly disagree, this is not the case, rendering this comprehension method useless. That is to say, the identifier name persuades confidence of the inference by nil, not even a bit.

Thankfully, this is unimportant. It is unimportant because there exist methods of code comprehension that are *significantly more effective*, so you can abandon the question of whether or not there is efficacy of using identifier names for inferring properties of code.

Suppose the following code:

`def reverse[A]: List[A] => List[A]`

At first glance, it might appear that we must take a leap of confidence in inferring that the function reverses the list. However, we can infer the following fact — *this function definitely does not peel the first Banana in the list*. I can infer this because the function knows nothing about the element type `A`, so it cannot possibly manipulate the elements. For the same reason:

- The `reverse` function did not add `10` to every second list element.
- All elements in the list returned by `reverse` are contained in the input list.

We are able to infer these things simply by making the `reverse` function *parametric*. We are no longer reversing a list of bananas — although that might be our use-case — we are reversing a list of `A` for all values of `A`. One virtue of this *parametricity* is that we can infer a significant number of things that *do not occur*. This theme of learning what does not occur is ubiquitous when deploying these programming techniques and is described in more detail by Wadler^{2}.

Here is another example, using a programming language called SafeHaskell (very similar to Haskell):

`add10 :: Int -> Int`

By the name of the function, we might unreliably infer (OK, let’s be honest, we are making a bold guess) that the function adds `10` to its argument. However, looking at the type, we know for sure that the function *did not print its argument to the standard output stream*. We know this because had the library provider attempted it, the code would not have compiled. To be clear, it would not be a valid SafeHaskell program, so our assumption that we are looking at SafeHaskell fails, forcing us to unify by selecting one of the following:

- The function does not print its argument to the standard output stream.
- We are not looking at SafeHaskell source code.
- We are using an escape hatch, implied by the halting problem.

There are simply no other options. What other things can we reliably conclude this function does not do?

Here is yet another example:

`def constant[A, B]: A => B => A`

In this case, we can reliably infer that this function does one single thing. It ignores its second argument and returns its first. This might be protested:

- It might perform a side-effect first!
  - This is true, but assuming a pure subset of the language is useful for reasons other than this one.
- It might type-cast or type-case (`asInstanceOf` or `isInstanceOf`).
  - This is another unfortunate escape hatch in the Scala type system that conveniently permits unsafe code.
- It might recurse forever, return `null` or throw an exception.
  - This is yet another escape hatch.

So why dismiss these protests? They are inescapable implications of the halting problem. The more practical question is, “how conveniently does Scala make these escape hatches available?”, and the answer is an unfortunate one — it can often appear to be easier to exploit these escape hatches, but it won’t be too long before the penalty is paid. Although in practice it is easier, both short and long term, to avoid these escape hatches, the illusion of convenience persists in some cases.

If we are to concede these abilities, it is simply up to our own discipline to enforce that we have not attempted to take the illusory easy way out. This includes using a pure-functional subset of Scala, which is a lot easier than is often made out. For example, the `scalaz.effects.STRef` data type permits a pure-functional `var` that has all the capabilities of `var` while also maintaining all the aforementioned abilities (unlike `var` itself). This is a win-win. Rúnar Bjarnason goes into detail on this at QCon 2013.

By these methods of exploiting the type system, we are able to very reliably infer things that did not occur and, occasionally, infer and conclude the only thing that did occur. However, what about narrowing it down further? We know that the (parametric) `reverse` function doesn’t manipulate the list elements, but how do we know it reverses the list? Do we fall back to relying on the function name and simply hope so?

No.

We continue to use more reliable methods of code comprehension. Let us restate the definition of `reverse`; however, this time we will include Scala source code. All expressions must return `true` regardless of the value of any of their arguments:

- Reversing the empty list produces the empty list.
  `reverse(Nil) == Nil`
- Reversing a single-element list produces that same list.
  `element => reverse(List(element)) == List(element)`
- Taking two lists, `l1` and `l2`, appending them then reversing, produces the same list as reversing each, then appending the reverse of `l1` to the reverse of `l2`.
  `(l1, l2) => reverse(l1 ::: l2) == (reverse(l2) ::: reverse(l1))`

If we can be confident that these properties hold, then we can also be confident that our `reverse` function does, in fact, reverse the list. Indeed, there is no other possible function that satisfies both the type and these properties besides the one that reverses a list. Again, we have not resorted to the function name for code comprehension — we have inspected *algebraic properties of the code*.

So how do we increase confidence that these properties hold?

Unfortunately, an implication of the halting problem is that we cannot prove these program properties in the general case. This is not the end of the world though — we can still attempt to *disprove* them. That is to say, we can go to efforts to determine whether the function is *not* one which reverses the list. We can express our algebraic properties, which give away the full specification of the function, then automate the search for values for which a property fails to hold. This automation is precisely what ScalaCheck does; however, the expression of the properties is itself enough to rigorously specify the function’s behaviour without degenerating to faith in function names.
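As a sketch of what that automation looks like, the three properties can be written directly as Boolean-valued functions and checked over a small hand-picked set of inputs. This is a stand-in for ScalaCheck’s random generation; the names `prop1`, `prop2`, `prop3` and `samples` are invented for this example:

```scala
// Property 1: reversing the empty list produces the empty list.
def prop1: Boolean =
  List.empty[Int].reverse == Nil

// Property 2: reversing a single-element list produces that same list.
def prop2(element: Int): Boolean =
  List(element).reverse == List(element)

// Property 3: reversing an append is the reversed append of the reversals.
def prop3(l1: List[Int], l2: List[Int]): Boolean =
  (l1 ::: l2).reverse == (l2.reverse ::: l1.reverse)

// A small hand-picked sample space standing in for random generation.
val samples: List[List[Int]] =
  List(Nil, List(1), List(1, 2), List(3, 2, 1), List(1, 1, 2))
```

ScalaCheck would instead generate arbitrary lists and report a shrunk counterexample on failure; the shape of the specification is the same.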

The next time you see a function named `<-:` and you think to yourself, “OMG, how unreadable, what am I going to do now!?”, ask yourself if there are other tools — perhaps more robust than those familiar — to comprehend the code. What is its type? What are its algebraic properties? Are there parametric properties to exploit?

What if it has this type?

`def <-:[A, B](f: A => B): List[A] => List[B]`

Does it map the function across the list elements? Maybe. However, we definitely know that the elements in the resulting list came from running the given function on an element from the input list. You see? Parametricity, just like that, was 73 fuck-loads more reliable than looking at the function name to comprehend how this code works, and this is only the start of answering the question. We have many more tools at our disposal. Importantly, they are *reliable*. I like reliable, because I also like things that are true. Hopefully you do too!
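To make that concrete, here are two hypothetical inhabitants of this type (both names below are invented). The function name alone cannot distinguish them, yet parametricity already tells us, for both, that every element of the output is the given function applied to some element of the input:

```scala
// One inhabitant: apply f to each element, in order.
def mapVersion[A, B](f: A => B): List[A] => List[B] =
  _.map(f)

// Another inhabitant: apply f to each element, in reverse order.
// Same type, different behaviour; parametricity constrains both equally.
def reversedVersion[A, B](f: A => B): List[A] => List[B] =
  _.reverse.map(f)
```

Distinguishing the two is then a matter of algebraic properties, exactly as with `reverse` above.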

So what about these allegations of utility of identifier names? Do they have any merit at all?

No. Insistence on the value of identifier names for code comprehension has some glaring holes in it. Let us put aside that there are far more robust methods of comprehension. Let us also put aside that the claims are probably motivated by a lack of familiarity with superior tools.

Here is why this allegation is not just bullshit, but very obviously bullshit. Anyone who names a data type `AbstractAdvisorAutoProxyCreator` is just as committed to not using identifier names for meaning as anyone else. However, there is another level again — the staunch belief that this identifier name conveys meaning exposes just how confused that belief is. Any query such as, “What exactly does `AbstractAdvisorAutoProxyCreator` mean?” is met with handwaving. This is because **nobody knows what AbstractAdvisorAutoProxyCreator means** and the only practical implication here, in the world in which we all find ourselves, is one or more scatterbrains holding a belief otherwise.

From a naïve perspective, this situation appears to be a ripe learning opportunity. There appears to be a lot to be gained simply by sharing knowledge with a beginner — a trivial investment of effort. So why not take it? That question is fraught with complexity, but often, it is more constructive to have a giggle, a little lament, then dismiss the confused allegations.

This proposal is in the spirit of the typeclassopedia with the following differences:

- Use the Scala programming language for demonstration, for those who prefer it.
- Discussion need not concern itself with any kind of backward compatibility with existing libraries.

You will find a similar type-class hierarchy in the Scalaz library. That implementation is far more comprehensive and is aimed primarily for production use. A secondary goal here is to help document the Scalaz hierarchy in terse form, however, note that you will find some minor differences (improvements?) in the arrangement.

Discussion about addition or rearrangement of the proposed hierarchy is welcome.

```
trait ~>[F[_], G[_]] {
  def apply[A]: F[A] => G[A]
}
case class Id[A](x: A)
trait Semigroup[M] {
  def op: M => M => M
}
trait Monoid[M] extends Semigroup[M] {
  val id: M
}
trait Functor[F[_]] {
  def fmap[A, B](f: A => B): F[A] => F[B]
}
trait Apply[F[_]] extends Functor[F] {
  def ap[A, B](f: F[A => B]): F[A] => F[B]
}
trait Bind[F[_]] extends Apply[F] {
  def bind[A, B](f: A => F[B]): F[A] => F[B]
}
trait Applicative[F[_]] extends Apply[F] {
  def insert[A]: A => F[A]
}
trait Monad[F[_]] extends Applicative[F] with Bind[F]
trait Extend[F[_]] extends Functor[F] {
  def extend[A, B](f: F[A] => B): F[A] => F[B]
}
trait Comonad[F[_]] extends Extend[F] {
  def extract[A]: F[A] => A
}
trait Contravariant[F[_]] {
  def contramap[A, B](f: B => A): F[A] => F[B]
}
trait Distributive[T[_]] extends Functor[T] {
  def distribute[F[_]: Functor, A, B](f: A => T[B]): F[A] => T[F[B]]
}
trait Foldable[T[_]] {
  def foldMap[A, M: Monoid](f: A => M): T[A] => M
}
trait Foldable1[T[_]] extends Foldable[T] {
  def foldMap1[A, M: Semigroup](f: A => M): T[A] => M
}
trait Traversable[T[_]] extends Functor[T] with Foldable[T] {
  def traverse[F[_]: Applicative, A, B](f: A => F[B]): T[A] => F[T[B]]
}
trait Traversable1[T[_]] extends Traversable[T] with Foldable1[T] {
  def traverse1[F[_]: Apply, A, B](f: A => F[B]): T[A] => F[T[B]]
}
trait MonadTransformer[T[_[_], _]] {
  def lift[M[_]: Monad, A]: M[A] => T[M, A]
}
trait BindTransformer[T[_[_], _]] extends MonadTransformer[T] {
  def liftB[M[_]: Bind, A]: M[A] => T[M, A]
}
trait MonadTransform[T[_[_], _]] {
  def transform[F[_]: Monad, G[_]: Monad, A](f: F ~> G): T[F, A] => T[G, A]
}
trait BindTransform[T[_[_], _]] extends MonadTransform[T] {
  def transformB[F[_]: Bind, G[_]: Monad, A](f: F ~> G): T[F, A] => T[G, A]
}
trait ComonadTransformer[T[_[_], _]] {
  def lower[M[_]: Comonad, A]: T[M, A] => M[A]
}
trait ExtendTransformer[T[_[_], _]] extends ComonadTransformer[T] {
  def lowerE[M[_]: Extend, A]: T[M, A] => M[A]
}
trait ComonadHoist[T[_[_], _]] {
  def cohoist[M[_]: Comonad, A]: T[M, A] => T[Id, A]
}
trait ExtendHoist[T[_[_], _]] extends ComonadHoist[T] {
  def cohoistE[M[_]: Extend, A]: T[M, A] => T[Id, A]
}
trait Semigroupoid[~>[_, _]] {
  def compose[A, B, C]: (B ~> C) => (A ~> B) => (A ~> C)
}
trait Category[~>[_, _]] extends Semigroupoid[~>] {
  def id[A]: A ~> A
}
trait First[~>[_, _]] extends Semigroupoid[~>] {
  def first[A, B, C]: (A ~> B) => ((A, C) ~> (B, C))
}
trait Arrow[~>[_, _]] extends Category[~>] with First[~>] {
  def idA[A, B]: (A => B) => (A ~> B)
}
```
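To illustrate how the proposed hierarchy might be inhabited, here is a sketch of a `Monad` instance for the standard library’s `Option`, using only the slice of traits it needs (repeated from the listing above so the sketch stands alone; the instance itself is this author’s illustration, not part of the proposal):

```scala
trait Functor[F[_]] {
  def fmap[A, B](f: A => B): F[A] => F[B]
}
trait Apply[F[_]] extends Functor[F] {
  def ap[A, B](f: F[A => B]): F[A] => F[B]
}
trait Bind[F[_]] extends Apply[F] {
  def bind[A, B](f: A => F[B]): F[A] => F[B]
}
trait Applicative[F[_]] extends Apply[F] {
  def insert[A]: A => F[A]
}
trait Monad[F[_]] extends Applicative[F] with Bind[F]

// A Monad instance for scala.Option under this arrangement.
val optionMonad: Monad[Option] = new Monad[Option] {
  def fmap[A, B](f: A => B): Option[A] => Option[B] = _.map(f)
  def ap[A, B](f: Option[A => B]): Option[A] => Option[B] =
    oa => f.flatMap(ff => oa.map(ff))
  def bind[A, B](f: A => Option[B]): Option[A] => Option[B] = _.flatMap(f)
  def insert[A]: A => Option[A] = Some(_)
}
```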

The two functions `runOptions` and `runIntRdrs` each implement a specific function, with only a small difference between them. The duplication in their code bodies is noted in the comments and is denoted by asterisks.

**How might the problem of code duplication be solved for this case?**

The puzzle is designed to compile as-is, which means for some languages, support data structures need to be provided. For example, the C# programming language does not provide an immutable (cons) list data structure, so the bare minimum is supplied here. This makes the puzzle appear quite noisy for that specific programming language, however be assured it is the same code.

```
object RefactorPuzzle {
  case class IntRdr[+A](read: Int => A) {
    def map[B](f: A => B): IntRdr[B] =
      IntRdr(f compose read)
    def flatMap[B](f: A => IntRdr[B]): IntRdr[B] =
      IntRdr(n => f(read(n)).read(n))
  }
  object IntRdr {
    def apply[A](a: A): IntRdr[A] =
      IntRdr(_ => a)
  }
  // Return all the Some values, or None if not all are Some.
  def runOptions[A](x: List[Option[A]]): Option[List[A]] =
    x.foldRight[Option[List[A]]](Option(Nil))((a, b) => a.flatMap(aa => b.map(aa :: _)))
  // Apply an Int to a list of int readers and return the list of return values.
  def runIntRdrs[A](x: List[IntRdr[A]]): IntRdr[List[A]] =
    x.foldRight[IntRdr[List[A]]](IntRdr(Nil))((a, b) => a.flatMap(aa => b.map(aa :: _)))
  // Code Duplication
  // ******* ************* ******* ***********
  // def runOptions[A](x: List[Option[A]]): Option[List[A]] =
  // def runIntRdrs[A](x: List[IntRdr[A]]): IntRdr[List[A]] =
  // ************ *********** *************************************************
  // x.foldRight[Option[List[A]]](Option(Nil))((a, b) => a.flatMap(aa => b.map(aa :: _)))
  // x.foldRight[IntRdr[List[A]]](IntRdr(Nil))((a, b) => a.flatMap(aa => b.map(aa :: _)))
}
```
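One direction a solution to the duplication in the Scala version above might take, sketched with invented names (`MiniMonad`, `run`): abstract over the interface that `Option` and `IntRdr` share, and write the fold once. This is only a sketch under those naming assumptions, not necessarily the intended answer to the puzzle:

```scala
// Hypothetical type class capturing what Option and IntRdr have in common.
trait MiniMonad[F[_]] {
  def point[A](a: A): F[A]
  def map[A, B](fa: F[A])(f: A => B): F[B]
  def flatMap[A, B](fa: F[A])(f: A => F[B]): F[B]
}

// The previously duplicated fold, written once for any F with an instance.
def run[F[_], A](x: List[F[A]])(implicit F: MiniMonad[F]): F[List[A]] =
  x.foldRight(F.point(List.empty[A]))((a, b) =>
    F.flatMap(a)(aa => F.map(b)(aa :: _)))

// An instance for Option recovers runOptions.
implicit val optionMiniMonad: MiniMonad[Option] = new MiniMonad[Option] {
  def point[A](a: A): Option[A] = Option(a)
  def map[A, B](fa: Option[A])(f: A => B): Option[B] = fa.map(f)
  def flatMap[A, B](fa: Option[A])(f: A => Option[B]): Option[B] = fa.flatMap(f)
}
```

An analogous instance for `IntRdr` recovers `runIntRdrs`; the abstraction is, of course, a monad.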

```
module RefactoringPuzzle where
newtype IntRdr a =
  IntRdr {
    readIntRdr :: Int -> a
  }
mapIntRdr ::
  IntRdr a
  -> (a -> b)
  -> IntRdr b
mapIntRdr (IntRdr g) f =
  IntRdr (f . g)
bindIntRdr ::
  IntRdr a
  -> (a -> IntRdr b)
  -> IntRdr b
bindIntRdr (IntRdr g) f =
  IntRdr (\n -> readIntRdr (f (g n)) n)
applyIntRdr ::
  a
  -> IntRdr a
applyIntRdr =
  IntRdr . const
type Option = Maybe
mapOption ::
  Option a
  -> (a -> b)
  -> Option b
mapOption Nothing _ =
  Nothing
mapOption (Just a) f =
  Just (f a)
bindOption ::
  Option a
  -> (a -> Option b)
  -> Option b
bindOption Nothing _ =
  Nothing
bindOption (Just a) f =
  f a
applyOption ::
  a
  -> Option a
applyOption a =
  Just a
-- Return all the Some values, or None if not all are Some.
runOptions :: [Option a] -> Option [a]
runOptions = foldr (\a b -> bindOption a (\aa -> mapOption b (aa:))) (applyOption [])
-- Apply an Int to a list of int readers and return the list of return values.
runIntRdrs :: [IntRdr a] -> IntRdr [a]
runIntRdrs = foldr (\a b -> bindIntRdr a (\aa -> mapIntRdr b (aa:))) (applyIntRdr [])
-- Code Duplication
-- *** ****** ******* ****
-- runOptions :: [Option a] -> Option [a]
-- runIntRdrs :: [IntRdr a] -> IntRdr [a]
-- *** *********************** ************** ************ ****
-- runOptions = foldr (\a b -> bindOption a (\aa -> mapOption b (aa:))) (applyOption [])
-- runIntRdrs = foldr (\a b -> bindIntRdr a (\aa -> mapIntRdr b (aa:))) (applyIntRdr [])
```

```
using System;
namespace RefactoringPuzzle {
  class IntRdr<A> {
    public readonly Func<int, A> read;
    IntRdr(Func<int, A> read) {
      this.read = read;
    }
    public IntRdr<B> Select<B>(Func<A, B> f) {
      return new IntRdr<B>(n => f(read(n)));
    }
    public IntRdr<B> SelectMany<B>(Func<A, IntRdr<B>> f) {
      return new IntRdr<B>(n => f(read(n)).read(n));
    }
    public static IntRdr<A> apply(A a) {
      return new IntRdr<A>(_ => a);
    }
  }
  abstract class Option<A> {
    public abstract X Fold<X>(Func<A, X> some, X none);
    public Option<B> Select<B>(Func<A, B> f) {
      return Fold<Option<B>>(a => new Option<B>.Some(f(a)), new Option<B>.None());
    }
    public Option<B> SelectMany<B>(Func<A, Option<B>> f) {
      return Fold(f, new Option<B>.None());
    }
    public static Option<A> apply(A a) {
      return new Some(a);
    }
    public class Some : Option<A> {
      readonly A a;
      public Some(A a) {
        this.a = a;
      }
      public override X Fold<X>(Func<A, X> some, X none) {
        return some(a);
      }
    }
    public class None : Option<A> {
      public override X Fold<X>(Func<A, X> some, X none) {
        return none;
      }
    }
  }
  abstract class List<A> {
    public abstract X FoldRight<X>(Func<A, X, X> f, X x);
    // Return all the Some values, or None if not all are Some.
    Option<List<A>> runOptions(List<Option<A>> x) {
      return x.FoldRight((a, b) => a.SelectMany(aa =>
        b.Select(bb => bb.Prepend(aa))), Option<List<A>>.apply(new Nil()));
    }
    // Apply an Int to a list of int readers and return the list of return values.
    IntRdr<List<A>> runIntRdrs(List<IntRdr<A>> x) {
      return x.FoldRight((a, b) => a.SelectMany(aa =>
        b.Select(bb => bb.Prepend(aa))), IntRdr<List<A>>.apply(new Nil()));
    }
    public List<A> Prepend(A a) {
      return new Cons(a, this);
    }
    public class Nil : List<A> {
      public override X FoldRight<X>(Func<A, X, X> f, X x) {
        return x;
      }
    }
    public class Cons : List<A> {
      readonly A head;
      readonly List<A> tail;
      public Cons(A head, List<A> tail) {
        this.head = head;
        this.tail = tail;
      }
      public override X FoldRight<X>(Func<A, X, X> f, X x) {
        return f(head, tail.FoldRight(f, x));
      }
    }
  }
  // Code Duplication
  // ************* ******* *********
  // Option<List<A>> runOptions(List<Option<A>> x) {
  // IntRdr<List<A>> runIntRdrs(List<IntRdr<A>> x) {
  // ***********************************************
  // return x.FoldRight((a, b) => a.SelectMany(aa =>
  // return x.FoldRight((a, b) => a.SelectMany(aa =>
  // ********************************* ****************************
  // b.Select(bb => bb.Prepend(aa))), Option<List<A>>.apply(new Nil()));
  // b.Select(bb => bb.Prepend(aa))), IntRdr<List<A>>.apply(new Nil()));
}
```

```
package RefactoringPuzzle;
abstract class Func<T, U> {
  abstract U apply(T t);
}
abstract class IntRdr<A> {
  abstract A read(int i);
  <B> IntRdr<B> map(final Func<A, B> f) {
    return new IntRdr<B>() {
      B read(int i) {
        return f.apply(IntRdr.this.read(i));
      }
    };
  }
  <B> IntRdr<B> bind(final Func<A, IntRdr<B>> f) {
    return new IntRdr<B>() {
      B read(int i) {
        return f.apply(IntRdr.this.read(i)).read(i);
      }
    };
  }
  static <A> IntRdr<A> apply(final A a) {
    return new IntRdr<A>() {
      A read(int n) {
        return a;
      }
    };
  }
}
abstract class Option<A> {
  abstract <X> X fold(Func<A, X> some, X none);
  <B> Option<B> map(final Func<A, B> f) {
    return new Option<B>() {
      <X> X fold(final Func<B, X> some, X none) {
        return Option.this.fold(new Func<A, X>() {
          X apply(A a) {
            return some.apply(f.apply(a));
          }
        }, none);
      }
    };
  }
  <B> Option<B> bind(final Func<A, Option<B>> f) {
    return new Option<B>() {
      <X> X fold(final Func<B, X> some, final X none) {
        return Option.this.fold(new Func<A, X>() {
          X apply(A a) {
            return f.apply(a).fold(some, none);
          }
        }, none);
      }
    };
  }
  static <A> Option<A> apply(final A a) {
    return new Option<A>() {
      <X> X fold(Func<A, X> some, X none) {
        return some.apply(a);
      }
    };
  }
}
abstract class List<A> {
  abstract <X> X foldRight(Func<A, Func<X, X>> f, X x);
  // Return all the Some values, or None if not all are Some.
  static <A> Option<List<A>> runOptions(List<Option<A>> x) {
    return x.foldRight(new Func<Option<A>, Func<Option<List<A>>, Option<List<A>>>>() {
      Func<Option<List<A>>, Option<List<A>>> apply(final Option<A> a) {
        return new Func<Option<List<A>>, Option<List<A>>>() {
          Option<List<A>> apply(final Option<List<A>> b) {
            return a.bind(new Func<A, Option<List<A>>>() {
              Option<List<A>> apply(final A aa) {
                return b.map(new Func<List<A>, List<A>>() {
                  List<A> apply(List<A> bb) {
                    return bb.prepend(aa);
                  }
                });
              }
            });
          }
        };
      }
    }, Option.apply(List.<A>nil()));
  }
  // Apply an Int to a list of int readers and return the list of return values.
  static <A> IntRdr<List<A>> runIntRdrs(List<IntRdr<A>> x) {
    return x.foldRight(new Func<IntRdr<A>, Func<IntRdr<List<A>>, IntRdr<List<A>>>>() {
      Func<IntRdr<List<A>>, IntRdr<List<A>>> apply(final IntRdr<A> a) {
        return new Func<IntRdr<List<A>>, IntRdr<List<A>>>() {
          IntRdr<List<A>> apply(final IntRdr<List<A>> b) {
            return a.bind(new Func<A, IntRdr<List<A>>>() {
              IntRdr<List<A>> apply(final A aa) {
                return b.map(new Func<List<A>, List<A>>() {
                  List<A> apply(List<A> bb) {
                    return bb.prepend(aa);
                  }
                });
              }
            });
          }
        };
      }
    }, IntRdr.apply(List.<A>nil()));
  }
  List<A> prepend(final A a) {
    return new List<A>() {
      <X> X foldRight(Func<A, Func<X, X>> f, X x) {
        // List.this (not this): the fold must recurse on the outer list (the tail),
        // not on the anonymous class itself, which would loop forever.
        return f.apply(a).apply(List.this.foldRight(f, x));
      }
    };
  }
  static <A> List<A> nil() {
    return new List<A>() {
      <X> X foldRight(Func<A, Func<X, X>> f, X x) {
        return x;
      }
    };
  }
}
// Code Duplication
// *********** ************* ******* *********
// static <A> Option<List<A>> runOptions(List<Option<A>> x) {
// static <A> IntRdr<List<A>> runIntRdrs(List<IntRdr<A>> x) {
// **************************** ********** *********** **************
// return x.foldRight(new Func<Option<A>, Func<Option<List<A>>, Option<List<A>>>>(){
// return x.foldRight(new Func<IntRdr<A>, Func<IntRdr<List<A>>, IntRdr<List<A>>>>(){
// ***** *********** *********************** ********
// Func<Option<List<A>>, Option<List<A>>> apply(final Option<A> a) {
// Func<IntRdr<List<A>>, IntRdr<List<A>>> apply(final IntRdr<A> a) {
// **************** *********** **************
// return new Func<Option<List<A>>, Option<List<A>>>() {
// return new Func<IntRdr<List<A>>, IntRdr<List<A>>>() {
// ********************** **************
// Option<List<A>> apply(final Option<List<A>> b) {
// IntRdr<List<A>> apply(final IntRdr<List<A>> b) {
// ************************** *************
// return a.bind(new Func<A, Option<List<A>>>(){
// return a.bind(new Func<A, IntRdr<List<A>>>(){
// *****************************
// Option<List<A>> apply(final A aa) {
// IntRdr<List<A>> apply(final A aa) {
// ******************************************
// return b.map(new Func<List<A>, List<A>>(){
// return b.map(new Func<List<A>, List<A>>(){
// ***************************
// List<A> apply(List<A> bb) {
// List<A> apply(List<A> bb) {
// **********************
// return bb.prepend(aa);
// return bb.prepend(aa);
…
// *** ***********************
// }, Option.apply(List.<A>nil()));
// }, IntRdr.apply(List.<A>nil()));
…
```

Reconstructive surgery is required.

- Tipton, John Sison. “Obturator neuropathy.” *Current Reviews in Musculoskeletal Medicine* 1.3-4 (2008): 234-237. *(link)*

Unfortunately, many specialists are unaware of this condition and so may not recognise the correct diagnosis, as has been the case for me. Obturator neuropathy typically affects those engaged in kicking sports, especially Australian Rules football. Although I was playing Australian Rules football at the time of my injury, I live in Brisbane, where the sport is less popular than in other cities such as Melbourne and Sydney, where the condition is more recognised. I have recently learned that many more specialists in these cities are aware of this insidious condition, while I have yet to find anyone in Brisbane who even knows it exists — I suppose this set of circumstances came together to delay my correct diagnosis for many years.

This article is the first of my writings on this condition, as at this time it is still unresolved. I am planning to meet with many specialists in the next few days to discuss surgical management. Later, I intend to raise greater awareness of this condition in the hope that other athletes, professional or otherwise, needn’t go through the dramas that I have in these last few years.

My pain is now so severe that I am unable to perform my normal duties for my employer. Thankfully, my employer is awesome and understanding of my plight and I am committed to resolving this matter as soon as possible so that I can contribute back my highest level of performance. I am thankful to others in their understanding and I apologise to those who have sometimes been on “the wrong end of the stick” of my frustrations especially when I am having a “bad day” with pain. I am also usually taking narcotic medication during these times, so my recollection of the events is typically blurry to me.

If you are ever inflicted with this condition, I highly recommend these publications, with most recommended appearing first:

- Bradshaw, Chris, et al. “Obturator nerve entrapment: a cause of groin pain in athletes.” *The American Journal of Sports Medicine* 25.3 (1997): 402-408. *(link)*
- Bradshaw, Chris, and Paul McCrory. “Obturator nerve entrapment.” *Clinical Journal of Sport Medicine* 7.3 (1997): 217-219. *(link)*
- Tipton, John Sison. “Obturator neuropathy.” *Current Reviews in Musculoskeletal Medicine* 1.3-4 (2008): 234-237. *(link)*
- Harvey, Gregory, and Simon Bell. “Obturator neuropathy: an anatomic perspective.” *Clinical Orthopaedics and Related Research* 363 (1999): 203-211. *(link)*
- Busis, Neil A. “Femoral and obturator neuropathies.” *Neurologic Clinics* 17.3 (1999): 633-653. *(link)*
- Koulouris, George. “Imaging review of groin pain in elite athletes: an anatomic approach to imaging findings.” *American Journal of Roentgenology* 191.4 (2008): 962-972. *(link)*
- Sorenson, Eric J., Joseph J. Chen, and Jasper R. Daube. “Obturator neuropathy: causes and outcome.” *Muscle & Nerve* 25.4 (2002): 605-607. *(link)*
- Kitagawa, Ryan, et al. “Surgical management of obturator nerve lesions.” *Neurosurgery* 65.4 (2009): A24-A28. *(link)*
- Arnold, William David, and Bakri H. Elsheikh. “Entrapment Neuropathies.” *Neurologic Clinics* (2013). *(link)*
- Orchard, John, et al. “Pathophysiology of chronic groin pain in the athlete.” *Int J Sports Med* 1.1 (2000). *(link)*
- Anagnostopoulou, Sofia, et al. “Anatomic variations of the obturator nerve in the inguinal region: implications in conventional and ultrasound regional anesthesia techniques.” *Regional Anesthesia and Pain Medicine* 34.1 (2009): 33-39. *(link)*
- Falvey, Eanna Cian, Andrew Franklyn-Miller, and P. R. McCrory. “The groin triangle: a patho-anatomical approach to the diagnosis of chronic groin pain in athletes.” *British Journal of Sports Medicine* 43.3 (2009): 213-220. *(link)*

21 surgeries under a strict definition of the term “surgery”, but let’s stick with 19 :)