James Roper presents the Play Framework at Melbourne Scala group

James Roper, Typesafe's technical lead for the Play Framework, demonstrates building a webapp in 45 minutes at the Melbourne Scala user group. April 2013.

Designing with Types in Scala

Design issues when using Static Typing in Scala

This talk was given at the Melbourne Scala User Group in August 2012.

To get the most out of it, I recommend viewing the code that accompanies the talk:
https://github.com/benhutchison/ScalaTypeSystemTutorial/blob/master/src/main/scala/typestalk/TotalFunctions.scala

Reinventing Maybe/Option in Guava: get the basics right, please!

Google's Guava open source library provides a class Optional<T> that holds zero or one value, as a superior alternative to null references. It's certainly a good idea, but also a very mature, well-understood idea: Haskell has Maybe, Scala has Option.

For some inexplicable reason, Guava's designers turned their backs on prior art and wisdom with a lame dismissal: "Optional is not intended as a direct analogue of any existing 'option' or 'maybe' construct from other programming environments, though it may bear some similarities."

"Some similarities"? It sure does! It's the same damn abstraction. How many abstractions can you get that consist of exactly 0..1 values of some type T? One, that's how many.

And one of the most useful capabilities of this abstraction is that it can be a monad; essentially, it can support a flatMap operation (a flattenTransform, in Guava terms). The flattening part is useful when you have an Optional<T> value and a function from T to another Optional<S> value. If you simply transform the input, you get an Optional<Optional<S>> container as a result. Typically, you want to flatten any number of Optional wrappers down into a single Optional<S> layer. That is: either I have a result value of type S, or I don't.

Sadly, Guava hobbled their Optional class by leaving out a flatten operation.
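
For contrast, here is the same shape in Scala, whose Option the post mentions above. A minimal sketch; the User type and both lookup functions are hypothetical, purely for illustration:

```scala
case class User(name: String, email: Option[String])

// Hypothetical lookups, each of which may fail to produce a value.
def findUser(id: Int): Option[User] =
  if (id == 42) Some(User("Ada", Some("ada@example.org"))) else None

def primaryEmail(u: User): Option[String] = u.email

// Transforming alone nests the containers: Option[Option[String]]
val nested: Option[Option[String]] = findUser(42).map(primaryEmail)

// flatMap transforms and flattens in one step: Option[String]
val email: Option[String] = findUser(42).flatMap(primaryEmail)
```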

I find myself speculating on why they left it out, and suspect it resulted from the kind of ignorance of the outside world that seems too common in Java programming culture: an attitude that it's OK to reinvent an existing abstraction that has been widely used and well understood, and yet ignore or dismiss any wisdom that comes from outside the Java ecosystem.

Heroscape 2.0 Slides from Melbourne Scala user group presentation

HeroscapeMkIICaseStudy

Attached are the slides from my April 2011 Melbourne Scala user group presentation, on "Heroscape 2.0: A case study in using Scala for complex HTML document generation"

State of Scala 2010 slides from OSDC

StateOfScala

Here are the slides from a “State of Scala” talk I gave at OSDC in November 2010, summarizing where Scala is currently at.

http://2010.osdc.com.au/

In search of something better than Test-Driven Development

My previous post was perhaps a bit symbolic (or plain smart-arse) and in need of some explanation.

In simple terms, it is a lament about the shortcomings of Test- (aka Behavior-) Driven Development (TDD or BDD), a testing practice widely considered current best practice for software engineering, or at least about my experience of doing it.

In this method, software is constructed by first building executable tests, or specifications, that function somewhat like moulds in the casting of metals or plastics: they determine the form of the software by specifying very precisely how it must behave. This takes most of the time and effort; once the specifications are constructed, it is usually less effort to construct the software to satisfy them.

It's been discovered, by me empirically but also by many others, that using many fine-grained, non-overlapping unit tests yields software and tests that are easier and cheaper to build and maintain. However, an uncanny thing happens to unit tests as they become fine-grained: they start to look eerily similar to, or perhaps like mirror images of, the code they specify.

So, the mirrored halves of the photo in the previous post represent the two sides of a system (in this case, the text on the sign): the code (on the left), and the tests (on the right). What I was seeking to highlight is how the information content of the system has not been increased by adding the tests. We have said the same thing twice: once as code, and again as tests over that code.

What we have added is redundancy, which serves to detect errors, just as redundancy does in error correcting memory or RAID disks.

So, to me, the essence of the TDD practices is actually to ensure redundancy in a system’s specification. When the 2 copies of the specification, Code and Tests, do not agree, we have a test failure that is easily detected.

My problems with present-day TDD are that:

  1. This redundancy is currently achieved by enormous manual effort. In the most efficient cases I've seen where system behavior is 100% specified by tests, test code is typically 200% of the bulk of the code it specifies, and it can be far worse: 300-400% is not uncommon. In my experience, that consumes a great deal of effort/time/money.
  2. It seems very difficult to measure or control the level of redundancy introduced into the system by TDD practices. There seems to be a tendency to prescribe a "one-size-fits-all" approach of 100% unit-test redundancy (i.e. every line, every branch, every data case in the code covered by a unit test) as being appropriate for every project.
    In contrast, when we use redundancy to control errors for information storage or transmission, the amount of redundancy is a parameter that we can freely vary, to achieve a trade-off between bulky robustness and lightweight delicateness. We need to find a similar variable dial for software testing practices.
  3. When we develop software in the TDD style, we are writing down the same pieces of information twice in 2 ledgers as we work: Write a test. Write the matching code. Execute and verify matching. Rinse and Repeat.
    For me, that feels tedious and unnatural. My instinct is to write it once, and see it execute, to engage in a flowing conversation with the compiler and the runtime.

I've been thinking a lot lately about what could replace TDD, to give us reliable software at more lightweight redundancy levels, in the 10-50% of code size range, rather than the 100%+ redundancy of textbook TDD. Some early ideas:

  • Static Typing
  • Dependent Types
  • Design by Contract (which seems partly isomorphic to the very powerful Property-Based Testing approaches)
  • Automatic type-driven generation of function inputs, so that functions can be exercised without explicitly writing tests (the other half of Property-Based Testing; see the sketch after this list).
  • Inline executable Examples, expressed as annotations, that describe valid sample function inputs and corresponding outputs, much as tests do, but much closer to the code.
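
To make the Property-Based Testing idea concrete, here is a minimal ScalaCheck-style sketch: one property stands in for many hand-written example cases, and the framework generates the inputs. The properties shown are generic illustrations, not taken from any particular codebase:

```scala
import org.scalacheck.Prop.forAll
import org.scalacheck.Properties

// One property replaces many hand-written unit tests:
// ScalaCheck generates the input data automatically.
object ListProps extends Properties("List") {

  // Reversing twice is the identity; checked over many generated lists.
  property("reverse.reverse == identity") = forAll { (xs: List[Int]) =>
    xs.reverse.reverse == xs
  }

  // Concatenation preserves total size: a compact spec, far below
  // the 100%+ redundancy of enumerating cases by hand.
  property("size of concat") = forAll { (a: List[Int], b: List[Int]) =>
    (a ++ b).size == a.size + b.size
  }
}
```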

It should have a title like: "Man having serious doubts about whether saying everything twice is actually the best we software engineers can do"

Easy solutions for Caching in the JVM

Bounded LRU Cache

A common use case. A cache data structure that:

  • has a bounded maximum size and thus memory consumption (which in turn implies a cache eviction policy like LRU)
  • is safe and performant for concurrent use

AFAICT, there’s nothing in the JDK or the concurrency JSRs that hits both these requirements. I googled and found the open source Concurrent Linked Hashmap library, which does.
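
For a sense of what the library saves you from, the closest you can get with plain JDK parts is a LinkedHashMap in access-order mode behind a single coarse lock. A minimal sketch (the helper name is mine, not the library's API):

```scala
import java.util.{Collections, LinkedHashMap, Map => JMap}

// A bounded LRU cache from plain JDK parts: LinkedHashMap in
// access-order mode evicts the eldest entry once maxSize is exceeded.
// Collections.synchronizedMap makes it thread-safe, but every access
// contends on one global lock, which is exactly the bottleneck
// ConcurrentLinkedHashMap is designed to avoid.
def boundedLruCache[K, V](maxSize: Int): JMap[K, V] =
  Collections.synchronizedMap(
    new LinkedHashMap[K, V](16, 0.75f, true) {
      override def removeEldestEntry(eldest: JMap.Entry[K, V]): Boolean =
        size() > maxSize
    })
```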

[An incorrect comment about the project missing tests has been removed]

Unbounded Garbage Collectable Cache

If instead, you want a cache that will grow with use, but can be reclaimed if memory is short, then the use of a ConcurrentReferenceHashMap, configured with SoftReferences, is a good solution.
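
The idea, in a stripped-down sketch of my own (a real ConcurrentReferenceHashMap also purges cleared entries via a ReferenceQueue, which this toy version omits, so stale map entries linger until overwritten):

```scala
import java.lang.ref.SoftReference
import java.util.concurrent.ConcurrentHashMap

// An unbounded cache whose values the GC may reclaim under memory
// pressure: each value is held only through a SoftReference.
class SoftValueCache[K, V] {
  private val map = new ConcurrentHashMap[K, SoftReference[V]]()

  def put(key: K, value: V): Unit =
    map.put(key, new SoftReference(value))

  // None means either never cached, or cached but since reclaimed.
  def get(key: K): Option[V] =
    Option(map.get(key)).flatMap(ref => Option(ref.get))
}
```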

Escape Analysis changes the rules for performance critical code

Over the last few days it's been dawning on me how Escape Analysis changes the rules for performance-critical code.

It's a big deal. Short-lived objects become nearly free to use. Convenience rules. If you need to tuple-ize 2 Ints, then stick them in a List, to pass through an API, only to immediately unpack back to the Ints on the other side: go ahead. Don't worry. Those objects will probably never be created; they're only conceptually there.
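
For instance, a hypothetical illustration of the pattern described above, using a tuple as the short-lived wrapper:

```scala
// Pack two Ints into a Tuple2 just to cross an API boundary...
def minMax(xs: Array[Int]): (Int, Int) = {
  var lo, hi = xs(0)
  for (x <- xs) { if (x < lo) lo = x; if (x > hi) hi = x }
  (lo, hi)
}

// ...and immediately unpack on the other side. With escape analysis,
// the JIT can often scalar-replace the tuple: no heap allocation at all.
val (lo, hi) = minMax(Array(3, 1, 4, 1, 5, 9))
```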

Get back to what you ought to be doing: Abstracting and Composing.

Arcadia’s level editor lives again

This is a sight I haven't seen for 3 months: my game's level editor working. It's a relief to have it back (partially).

Heavy refactoring of game code, technical debt and experimentation conspired to break it. It's taken much of the past month to fix.

[Screenshot: the Arcadia level editor with NuWoodhaven.scenario.xml open in Eclipse, July 2009]
