Scalable Ajax in Grails

Went to the Groovy and Grails Exchange at Skillsmatter in London this week. Great fun.
Got to meet some good people, see some great sessions, and was even honoured to present a session on one of my favourite subjects, HTTP messaging!

First time in a more formal setting at a tech conference.  Exciting! 

http://skillsmatter.com/podcast/java-jee/high-volume-scalable-ajax-with-grails (code resources)

It seemed to go ok. Scary, but a total blast!

David.

Grails upgrade pains and joys..

The title of this is probably unfair, as it’s not Grails itself I’m writing about. I thought I’d share with the world the recent pain of a Grails 1.2 to 1.3 app upgrade.

Leading on from my previous post, I’d just finished upgrading a mavenised Grails app. This is a fairly weighty beast, worked on by maybe 20+ people on and off. It’s a Grails/GWT combo, with pretty good test coverage and an extensive set of WebDriver tests (more on these further down!)

So, I start the application up, and it breaks straight away, complaining about a Tomcat ‘SESSIONS.ser’ file. A quick read around indicates a Tomcat version change, caused by the updated dependencies.

I forgot to clean. Ah well, an inauspicious start!

A ‘grails clean’ sorts me out and the app starts up.

Params

I flick to one of the main screens of the app (GWT, remember), and it renders correctly. I click a button to request some data (a GET request to a controller) and nothing happens… nada.
I trace the usual suspects: db changes, security filtering, bad HTTP requests. All seems fine, until I notice that a query parameter going into the db looks odd. (I was an hour in by this point.)
It’s there, but different. The value the user selects often includes a space, e.g. ‘Some Items’. By the time it gets into the service it’s now ‘Some+Items’.
Strange. Obviously some form of URL encoding, though not the normal kind, which would render as ‘Some%20Items’.

I’m hiding something. I’ll admit it.

The URL being requested by GWT looked like this: /workItemController/getItems/Some+Item?blah=blah

It’s the Grails params.id that is the problem. In 1.2.2 the above URL would mean that in your controller you could have params.id == ‘Some Item’.
In 1.3.4, you can’t. Slightly upsetting.

Changing the URL to /workItemController/getItems?id=Some+Item&blah=blah means that the behaviour is the same in both versions.
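The ‘+’ versus ‘%20’ distinction is standard form-encoding behaviour rather than anything Grails invented: form-style encoding (which the JDK’s own URLEncoder produces) uses ‘+’ for spaces, and that ‘+’ is only turned back into a space when decoding a query string, not when a container %-decodes a path segment. That is my reading of why the id came through mangled. A quick sketch with plain JDK classes, nothing Grails here:

```java
import java.net.URLDecoder;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class EncodingDemo {
    public static void main(String[] args) {
        // Form/query-string encoding: a space becomes '+'
        String form = URLEncoder.encode("Some Item", StandardCharsets.UTF_8);
        System.out.println(form); // Some+Item

        // Query-string decoding turns '+' back into a space...
        System.out.println(URLDecoder.decode("Some+Item", StandardCharsets.UTF_8)); // Some Item

        // ...whereas a path segment would need %20 to carry a space,
        // which is why a literal '+' in the path can survive into params.id
        System.out.println(URLDecoder.decode("Some%20Item", StandardCharsets.UTF_8)); // Some Item
    }
}
```

So moving the value from the path into the query string puts it back under the decoding rules GWT was encoding for.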

Fun eh?

No, I don’t think so either. Small, minute, even, but a little annoying. Moving on.

as JSON

In the same controller there was some code that used the ‘as JSON’ construct to serialise some objects to JSON. This stopped working.
All the googling I could find said that the ‘as JSON’ operator can’t be used on arbitrary objects. Well, that may be true in 1.3; it wasn’t in 1.2.
I may pluck up the energy to investigate why, but likely not… as the testing below soaked up most of it :-S

Tests

Lots of fun and games with the tests.

The unit and integration tests were mostly OK, I thought; nothing special about them. However, on running them I noticed a bunch of failures, in classes that I hadn’t seen before.
It turns out that some of the devs had felt constrained by Grails’ testing infrastructure, it being JUnit 3 based. So they decided to slip some JUnit 4 tests in there for their own personal use.

Now, I like JUnit 4; it’s much nicer than JUnit 3. I do, however, think that unless tests are run in CI, they should not be in source control. If they are, they will start rotting. Rotten tests are an abomination, and cause untold headaches for whoever is the maintenance programmer, i.e. me. So, all these tests you are tempted to write, be they performance checks, static data population, or just something in a special style that doesn’t integrate well: delete them. Go on… do it now…

It’s not big, and it’s not clever. Don’t do it again or I’ll track you down with my pet bear. He doesn’t like them either, and he is capable of ripping your arms off.

So, I think to myself, it’s a JUnit 4 test, Grails now supports JUnit 4… @Ignore??

Nope, doesn’t work. Grails then decides that it’s a JUnit 3 test suite and runs it anyway…
So I butcher up the rotten test classes and leave them with @Ignore as a marker for anyone else passing by.

Moving on to the WebDriver tests. What fun!
We’ve had lots of trouble with these, mostly boiling down to distinct differences in behaviour between an Eclipse invocation of a test and a test-app call.

Most are harmless. For example, lots of the tests use the assert keyword; with Grails 1.2 / Groovy 1.6 this meant the tests produced the most useless errors I’ve seen in a unit test. The cause? The Eclipse Groovy plugin imports Groovy 1.7, making the same tests useful. Ah well, fixed in the upgrade!

No, the real issue is that the Maven invocation kept on leaving Firefox windows hanging around, eventually causing the integration server to crash. This is caused by a resource leak in the handling of the WebDriver instances. We had this once before, caused by our code lazily instantiating the driver. Not a good idea.
This time, changes to the Grails test infrastructure meant that all the resource cleanup code was being ignored.

Be aware: Grails does its best to handle both JUnit 3 and 4 tests at the same time, but there are foibles. We were overriding the runBare() method in TestCase to do the resource cleanup (pulled from a heavily forked Grails WebDriver plugin, actually).
This stopped working in 1.3.4, so instead I’ve moved all the functional tests to JUnit 4 and created @BeforeClass and @AfterClass methods to manage the driver. This works well enough.
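For anyone hitting the same thing, the shape we ended up with looks roughly like this (a sketch rather than our actual code: the class name and URL are made up, but the annotations and WebDriver calls are the standard JUnit 4 and Selenium APIs):

```java
import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class WorkItemFunctionalTests {

    private static WebDriver driver;

    @BeforeClass
    public static void openBrowser() {
        // One driver per test class, created eagerly -- no lazy instantiation
        driver = new FirefoxDriver();
    }

    @AfterClass
    public static void closeBrowser() {
        // Runs once after every test in the class, whatever their outcome,
        // so no Firefox windows get left hanging around on the CI box
        if (driver != null) {
            driver.quit();
        }
    }

    @Test
    public void canFetchWorkItems() {
        driver.get("http://localhost:8080/app/workItemController/getItems"); // hypothetical URL
        // assertions against the rendered page would go here
    }
}
```

The important part is that the cleanup lives in @AfterClass rather than in an overridden runBare(), so it survives however Grails decides to run the class.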

So ended my saga…


Quality Software – Fact or Fiction?

I’m a software developer; I write software systems, big and small. I like to make things that work well, that are seen to be good. I want the software I write to be of good quality. I’m sure that any other software devs among you will agree, and those of you that use software will want the same from your developers…

This seems, on the surface, to be a worthy goal, a good ambition for writing software. Taken as is, however, it isn’t.

Without much more thought about its implications, it’s a recipe for evangelistic wars about testing tools, testing levels, frameworks, architecture patterns and so many other things.

Now, you may have figured out from the title where I’m going. The above declaration has a bit of a problem: what is the definition of quality software?

This is something that all of us have come across, but is often only discussed at the low level. JMock versus Mockito; TDD versus BDD; code comments versus documentation.

All of these are valid discussions, but they become evangelistic when removed from their context. If I were to write an article on code commenting versus self-documenting code (for the sake of argument), I would necessarily be quite abstract about the different advantages of each, and why each is good or bad. Given no context, the majority would probably side with self-documenting code as being of good quality and commented code as being of bad quality, and they may well be right.

However, until you take this idea and drop it into a codebase, you can’t know what will work and what will not. Context is the final arbiter of what is good in each particular case.

This leads to an interesting thought. If we can’t discuss something as apparently clear cut as comments versus self-documenting code without reference to the actual system we’re going to be putting it in, the context of the problem, then software quality is intimately tied up with its context.

This means that we can’t come up with an overarching theory of what software quality is!

We can’t create a pithy phrase, define a set of tools or create a prescribed code style that the world will follow.

Bugger!

At this point, I’m going to take a leaf from other practices in the world. In other spheres, quality is not defined by how something is done (as all the examples above are), but by the end product in its context. How the end product matches up to its requirements is the measure of quality.

Applied to software, what would that look like?

Let’s imagine a fictional product, a messaging system (because I like them!)

The business requirements would look something like:

  • Process 500 messages a minute.
  • Full audit.
  • Store messages for 30 days.
  • 99.9% uptime.

These are the explicit requirements. There would also be some implicit ones (commonly seen as expectations) that go with them, and these are often overlooked in planning. For example:

  • It should be quick to add new features.
  • Smaller team is better for the budget.
  • Functionality regressions must be avoided at all costs.

These will change with the environment, but there are always the two sets.

Now, take the above idea: we can measure the quality of our messaging app by testing it against its requirements, both implicit and explicit.

Anything we can do to meet those requirements will improve the quality of the software, as seen by the business and by the developers. ‘Quick to add new features’ may lead us to develop a highly modular system that is enjoyable to code, or a test-driven approach; ‘avoid regressions’ may lead us to implement a complete set of regression tests. All good things.

Given the context of this system, we can begin to piece together the tools, techniques and technology that will meet all the requirements. It gives us an analysis tool that helps us choose what will help us create quality software, because we have a common understanding of what is meant by that.

It allows us to be pragmatic in our choices, without resorting to simplistic generalisations about which tool/technique/whatever is best.

It does lead in interesting, and I think valid, directions, however.

Take the requirement above, ‘It should be quick to add new features’. This can have several implications, only some of which impact the code itself. If the code were being implemented in an agile environment, you’d be able to schedule work into the flow very easily, and have it completed quickly. Similarly, a very junior team working on an old codebase will take longer to add new features, while an experienced team will be much quicker. If you make efforts to spread knowledge widely around a team, then you’ll see your features being implemented more rapidly.

So, the software becomes a higher quality piece, when tested against its requirements, only by changing the team developing it; the software itself hasn’t changed. This is weird… but an expected outworking of my assertion that software quality is intimately tied up with its context.

So, is software quality fact or fiction? Is there one true way to write code that is ‘good quality’?

I would say no, there isn’t. Software Quality, when applied to the whole field of software development, is a unicorn: a creature that can’t be caught or tamed, and that ultimately doesn’t even exist…

Viewed pragmatically however, we are asking the wrong question.

Given my circumstances, given my team, given my code base as it is now, given my requirements: what would a quality piece of software look like? This is a powerful tool for evaluating all the tools, techniques and development methods that can help you on the way.

So the question becomes: what would quality look like in the FooBar Messaging App V1.2?

That’s real, and it’ll help tame the evangelists among us too!