Friday 27 June 2014

Not A Polymorphic Method

Since the first version of this post I hit on a much tidier solution, and the post has been updated to reflect this.
I was writing a renderer yesterday. I haven't done this in a long time, but ended up at a problem I remember not finding a nice solution to before (probably c. 2000!). I think that the solution I came up with yesterday was nicer than 14 years ago, which is good. However, Java 1.7 didn't let me do quite what I wanted...
I've got some features to draw. Using inheritance they derive from a base class. The features are created by parsing some input and may be numerous and long lived. The details are unimportant here, so we'll reduce them to:
abstract class Feature { ... }
class FeatureA extends Feature { ... }
class FeatureB extends Feature { ... }

I have a separate rendering class. This walks over the features and draws them, as you'd expect. I want to be able to decouple this from the building of the feature set. When I say decouple I mean separate machines, possibly repeated renderings, possibly using different rendering engines on the same data over time. One intuitive (to me) way to write this is:
class RenderEngine { 
  List<Feature> features;

  void render() {
    setUpGraphics();
    for (Feature f : features) {
      renderFeature(f);
    }
    outputResult();
  }

  private void renderFeature(Feature f) {
    // Feature is abstract, so this just catches unimplemented renderings
  }

  private void renderFeature(FeatureA f) { ... }
  private void renderFeature(FeatureB f) { ... }
}

Of course this doesn't work! Only renderFeature(Feature f) gets used. Java resolves overloads at compile time, against the static type of the argument, so it won't choose a method implementation by the dynamic type of the arguments within an object.
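A minimal, self-contained sketch of the problem (class and method names hypothetical, renderers returning strings so it runs standalone): because the loop variable's static type is Feature, every call binds to the Feature overload, whatever the object actually is.

```java
abstract class Feature {}
class FeatureA extends Feature {}
class FeatureB extends Feature {}

class Dispatcher {
    String renderFeature(Feature f)  { return "generic"; }
    String renderFeature(FeatureA f) { return "A"; }
    String renderFeature(FeatureB f) { return "B"; }

    String renderAll() {
        String out = "";
        for (Feature f : new Feature[] { new FeatureA(), new FeatureB() }) {
            // f's static type is Feature, so this always binds to renderFeature(Feature)
            out += renderFeature(f) + ";";
        }
        return out;
    }
}

public class OverloadDemo {
    public static void main(String[] args) {
        System.out.println(new Dispatcher().renderAll()); // prints "generic;generic;"
    }
}
```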

So, what to do?
I really don't want to put the rendering code into Feature - it is busy enough describing the feature and helping the parser abstract from the raw data.
The Decorator pattern isn't really applicable as Feature and rendering have different purposes and different base classes.

I could use a sequence of test, cast, calls:
if (f instanceof FeatureA) { renderFeatureA((FeatureA)f); }
It worked. But, to me, it also smells. A long list of class tests feels like I've used inheritance and then broken it, and it would be easy to get wrong when adding features later. Also, all these tests would be run on every pass through the loop.

The next solution I ended up with is influenced by the fact that all features of a given class will be rendered the same. No exceptions. The Command pattern is the starting point. I can refactor my rendering methods into rendering classes. The methods only use a very little state from the RenderEngine class so the objects will be easy to set up. There will be a lot of feature objects of few types so I don't want to tell each object about the renderer individually - especially as I can't be sure of the renderer at creation time. The trick was to use a static variable to hold the reference to the rendering object from the feature class. However, note that the following is also "broken" as a solution:
abstract class Feature {
  static FeatureRender render;
}

The sub-types share the one static variable in Feature, so both return whichever renderer was assigned last. This is also the downside of the approach: if the rendering of a feature depends somehow on the overall rendering task, and there may be multiple concurrent tasks, then the statics are broken.
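A stripped-down illustration of why the single static field fails (names hypothetical, with a string standing in for the renderer object): both subclasses read the same slot in Feature, so the second assignment wins for everyone.

```java
abstract class Feature {
    static String render; // one slot, shared by every subclass
}
class FeatureA extends Feature {}
class FeatureB extends Feature {}

public class SharedStaticDemo {
    public static void main(String[] args) {
        FeatureA.render = "renderer for A";
        FeatureB.render = "renderer for B"; // overwrites the same field
        // Both names resolve to Feature.render, so both see "renderer for B"
        System.out.println(FeatureA.render); // prints "renderer for B"
    }
}
```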
Java 8 adds static methods to interfaces, but a static still can't be abstract or overridden, and I'm not using Java 8 here (yet) anyway.

At the point the rendering object is assigned, the class of the feature is definitely known. I can just use a method in each feature sub-type to assign to a static field of that class. Using inheritance to avoid the duplication would be better, but the code is trivial. The getter is then declared abstract in Feature and written for each Feature sub-type. So I end up with code roughly as follows:
abstract class Feature { 
  abstract FeatureRender getRenderObject(); 
}

class FeatureA extends Feature { 
  static FeatureRender renderA;
  static void setRenderObject(FeatureRender ro) {
    renderA = ro;
  }

  @Override
  public FeatureRender getRenderObject() {
    return renderA;
  }  
  ...
}
class FeatureB extends Feature { ... }

abstract class FeatureRender {
  abstract void render(Feature f);
  ...
}

class RenderA extends FeatureRender { ... }

class RenderEngine { 
  List<Feature> features;

  void render() {
    setUpGraphics();
    RenderA ra = new RenderA( ... );
    FeatureA.setRenderObject(ra);
    ...
    for (Feature f : features) {
      f.getRenderObject().render(f);
    }
    outputResult();
  }
}

My Feature classes are now aware of rendering, but don't hold any of the code to make it happen. I have a sequence of calls to set the rendering before the render loop, and at each iteration I get the rendering object. I think that is more elegant than the sequence of test, cast, call that I had in the rendering loop before. Adding a new feature still involves adding a line of code to set the rendering object, although arguably cleaner and more efficient code than the instanceof test. One bonus of splitting the rendering methods out into their own classes is that the little collection of constants for each goes with it. None of these is a big step, but together they make RenderEngine much tidier than it was before.

The statics are a problem however, as this is now part of a web service, rather than a stand-alone application. So, I might be rendering with two different rendering classes at the same time - for different outputs. This required a change of direction. One which, having arrived at it, I quite like. In essence, the RenderEngine takes over responsibility for mapping feature objects to rendering objects - but without the clumsy instanceof tests. The solution looks roughly as follows:
class RenderEngine { 
    protected Map<Class<? extends Feature>, FeatureRender> renderingMap;
    ...
    public void setupRendering() {
        FeatureRender rf = new RenderFeatureA(sizing);
        renderingMap.put(FeatureA.class, rf);
        // and similar for other features; also set any parameters due to the overall document
    }

    private void renderFeatures(Graphics2D g2d) {
        for (Feature f : features) {
            renderingMap.get(f.getClass()).render(g2d, f);
        }
    }
}
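Stripped of the graphics so it runs standalone (names hypothetical, renderers returning strings instead of drawing), a minimal sketch of this map-based dispatch looks like:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

abstract class Feature {}
class FeatureA extends Feature {}
class FeatureB extends Feature {}

// Stand-in for the real renderer interface; the real one takes a Graphics2D too.
interface FeatureRender {
    String render(Feature f);
}

class RenderEngine {
    private final Map<Class<? extends Feature>, FeatureRender> renderingMap = new HashMap<>();
    private final List<Feature> features = new ArrayList<>();

    void setupRendering() {
        renderingMap.put(FeatureA.class, new FeatureRender() {
            public String render(Feature f) { return "drew A"; }
        });
        renderingMap.put(FeatureB.class, new FeatureRender() {
            public String render(Feature f) { return "drew B"; }
        });
    }

    void add(Feature f) { features.add(f); }

    List<String> renderFeatures() {
        List<String> out = new ArrayList<>();
        for (Feature f : features) {
            // One map lookup per feature replaces the chain of instanceof tests.
            out.add(renderingMap.get(f.getClass()).render(f));
        }
        return out;
    }
}

public class MapDispatchDemo {
    public static void main(String[] args) {
        RenderEngine engine = new RenderEngine();
        engine.setupRendering();
        engine.add(new FeatureA());
        engine.add(new FeatureB());
        System.out.println(engine.renderFeatures()); // prints [drew A, drew B]
    }
}
```

Because the map is an instance field of each RenderEngine, two engines rendering concurrently no longer interfere, which is what the static version couldn't offer.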

Also, this approach has greatly simplified the various feature classes: they no longer need to refer to rendering at all. This means that the rendering is in a more coherent group of classes and the coupling between the feature data and the feature drawing is much reduced; in particular the data doesn't need to be aware of the drawing any more. This decoupling would also have been present in my initial, impossible, approach - it was only the command-pattern detour that introduced the coupling.
And the obligatory testing note: I was writing tests as I went. But unit testing graphics is hard, so the tests didn't pick up failures to draw the right thing. Now I have it working I could test by example, but first time round I couldn't. Even so, I did have a test that ran the rendering, so refactoring through the various solutions was a tight cycle with the aid of a logger and debugger. I could quickly see when it worked and when it didn't, and when my changes broke something else.

Friday 20 June 2014

Ninja Web Framework: Hello World

In a little bit of spare time this week I've been trying out a new thing. We've been using the Play framework in one project, with some success. However that bit was led by someone else and while I had seen the basics I couldn't really claim to have used it myself. It was time for me to explore it a bit more deeply for something new, rather than straight servlets on Tomcat. However, it seemed to be hard work:
Java isn't first choice for Play. That's OK, but I wanted to use Java.
I have a mild preference for NetBeans, partly because I find the UI more intuitive, partly because I find getting NetBeans and Jenkins to use the same build file easier than in Eclipse. NetBeans has some Scala support but doesn't fully integrate with Play.
Finally, working in an environment where I run a VM on my laptop for development testing is a bit clunky in Play. (I have grown bored of untangling the software installs for a mix of projects at the MacOS level. New project, new local linux VM.) Play just seemed to be more involved in the writing step than felt natural to me - I like to be able to pull my code back from the infrastructure.
The "hello world" step is often the hardest with a new tool, but this was clunky enough for me to look around.

I soon came across the Ninja Web Framework. Their selling points are: simple to program in Java web framework; easy to use an IDE; works with testing and CI; talks JSON and html; built-in authentication and email. Particularly attractive was the model of development which doesn't need a lot of server setup, with Maven doing all the package management beyond Java and Maven itself. The "works with testing" point merits expanding a little. In the configuration there is a built in notion that you might want different configurations for development, test and production environments. I can see that making life easier.
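As I understand it, those per-environment settings live in Ninja's application.conf, with mode prefixes selecting the value for each environment. Roughly (the keys here are hypothetical), unprefixed entries are the defaults and %test. / %prod. entries override them in those modes:

```properties
application.name=demo
# default (dev mode) value
db.url=jdbc:h2:mem:dev
# overrides picked up when running in test or production mode
%test.db.url=jdbc:h2:mem:test
%prod.db.url=jdbc:postgresql://dbhost/demo
```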

There was quite a lot of package management for Maven to do - it spent a while being busy when creating the project, and again when first building and running. Checking back with mvn -o dependency:list reveals 116 packages from a variety of sources. This complex set of dependencies worries me slightly. Projects which die or become radically different will have a knock-on effect. The rate of regression testing that might arise as changes happen will make good configuration management a must. And if any of these dependencies is also used by our own, or some other, code on the same server, the prospect of having to run multiple versions of one 3rd party package isn't pretty. But I'll set this aside for now and see how it goes.

As I said above, "hello world" is often the hardest step with something designed to be very general. The overhead over the simple approach is wince-making. However, the documentation stepped me through the basics and I soon had a web page up. Then I changed the text. Then I added a page. Then I spoofed a session. And so on - little tweaks to learn my way around. Commenting out some HTML broke it for a while, but mostly it did what I expected! And in the end the structure is pretty logical and not over-complex, where over-complex is defined as "requires more than one page in my notebook":


Routes map URLs (optionally with regular expressions and variable parts) to controller methods.
Controllers can call other code and build a response. There are also lots of annotations: for filters, bringing in config file parameters, parsing JSON, setting session cookie data, etc.
The view is achieved using Freemarker templates. These can import other template files, making consistent headers and footers easy. They work with i18n properties files. The rendering of the result in the controller can pass in parameters which are picked up in the template.

Returning to the issues I had in the first paragraph:

Java just works.
NetBeans (and other IDEs) just work.
The IDE issue is at least in part fixed because of the looser coupling between code and a running Ninja environment. 

So, I haven't done anything very clever with it yet. But I think I know where I'll try using it on a proper project soon. I'll report back when I've got a fuller picture!


Friday 13 June 2014

Programming 102

Posts here have been conspicuous by their absence for a while, but things have been moving on. My energy has been focussed on delivering a revamped "programming 102", so development and business have been very much part time. I've just finished the marking and thought that would make a reasonable basis for a few observations.

The course teaches Java, building on a previous Java course. So the real basic basics have been covered. Whether everyone's got them or not is another thing. Any programming course has a tail of participants that, for various reasons, just don't quite click with the ideas. Like the difference between classes and instances, and how you go from one to the other. At the other end, some have pretty well developed skills already and need to be kept interested.

I mentioned a revamp; apart from updating against Java versions and my own style of presenting, the main change was to push test driven development. This isn't radical, but it was new for (I think) all of these students. The previous course had used BlueJ, with its very visual define / construct / inspect flow, which I think is great for getting started. Testing makes more sense with a complex program that changes - which was the end goal. However, it provides a useful analogy to the visual approach with our more basic problems. At some point constructing objects and checking what they do by hand gets long winded. The few that hung onto BlueJ hit this about half way through. Testing allows that "I want one of these, one of those, to set that variable, and see what this method returns" flow to be scripted. Making the leap that it's the same thing, done in a different way, needs a push - if you're finding coding hard then testing can feel like yet more code with a new set of commands to learn. If you've programmed before then for simple problems the value of getting out of the hacking rut can be a hard sell - and tests are written second for the marks. But I think a significant proportion of my class found problems where "what do I want this to do?" or "this test lets me try out a sequence of commands that can then be plugged in to the real code" was a useful step, and even more found problems where making their code work was complex enough that the JUnit approach was more efficient than an ad-hoc testing approach.
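The scripted version of that BlueJ flow might look like the following (the Account class and its methods are entirely hypothetical, and I've used a plain main with a check rather than JUnit, to keep it self-contained): make an object, poke it, look at the result.

```java
// Hypothetical class a student might otherwise build and inspect interactively in BlueJ.
class Account {
    private int balance;
    void deposit(int amount) { balance += amount; }
    int getBalance() { return balance; }
}

public class AccountCheck {
    public static void main(String[] args) {
        // The interactive flow, scripted: construct, call methods, check the result.
        Account a = new Account();
        a.deposit(50);
        a.deposit(25);
        System.out.println(a.getBalance() == 75 ? "pass" : "fail"); // prints "pass"
    }
}
```

In JUnit the check would become an assertEquals in a test method, but the "one of these, one of those, see what this method returns" shape is the same.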

So, for a first go, I think that worked. Plans for next time include videoed coding demos. Watch this space for a sample! All that teaching makes me want to stretch myself in a different way, and with a bit of luck learn something new. With a bit of luck something bloggable.