
Crayon syntax highlighter

I’ve tried to install a couple of different syntax highlighters in the past and was thoroughly unimpressed. They either failed to install cleanly, didn’t work or had numerous issues that rendered them useless.

Anyway, after manually formatting source code snippets for years, I’m giving Crayon a go. The install was easy and the output looks good. Let’s see how we go :-)

You can find it here: https://wordpress.org/plugins/crayon-syntax-highlighter/

Data-driven testing tricks

It’s a fairly common occurrence — somebody wants to use NUnit’s data-driven testing, but they want to vary either the action under test or the expectation.  I.e. they’re not parametrising simple data, they’re parametrising the actions.

You cannot encode these things via normal data-driven testing (short of doing really nasty things like passing string names of methods to be invoked, or using enums and a dictionary of methods), and even if you use a hackish workaround, it’s unlikely to be flexible or terse.

Test readability is paramount, so if you write tests in an unfamiliar style, it’s doubly important to express their intent clearly.

NUnit’s data-driven testing

NUnit uses a few mechanisms to parametrise tests.  Firstly, for simple test cases, it offers the [TestCase] attribute, which takes a params object[] array in its constructor.  Each argument passed to the TestCaseAttribute’s constructor is stored, ready for retrieval by the framework.  NUnit does the heavy lifting for us and casts/converts each argument to the test method’s parameter types.  Here’s an example where three ints are passed, then correctly mapped to a test method’s parameters (the addition under test is just a stand-in for real production code):
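
    using NUnit.Framework;

    [TestFixture]
    public class AdditionTests
    {
        [TestCase(1, 2, 3)]
        [TestCase(0, 0, 0)]
        [TestCase(-1, 1, 0)]
        public void Add_ReturnsSum(int a, int b, int expected)
        {
            // NUnit maps each attribute's three ints onto a, b and expected.
            Assert.AreEqual(expected, a + b);
        }
    }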

The main limitation here is that we can only store intrinsic types: strings, ints, shorts, bools and so on.  We can’t new up classes or structs because attribute arguments must be compile-time constants.  How the devil do we do something more complicated?

Passing more complicated types

It would appear we’re screwed, but fortunately, we can use the [TestCaseSource] attribute.  There are numerous options for yielding the data, and one of them is to define an IEnumerable<TestCaseData> as a public method of your test class (it works if it’s private, but since it’s accessed via reflection it’s a good idea to keep it public so that ReSharper or other tools do not flag it as unused).  You can then fill up and yield individual TestCaseData instances in the same fashion as before.  Once again, NUnit does the mapping and the heavy lifting for us.
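
Here’s a minimal sketch; the date arithmetic under test is a stand-in, but DateTime is exactly the kind of non-intrinsic type a [TestCase] attribute can’t hold:

    using System;
    using System.Collections.Generic;
    using NUnit.Framework;

    [TestFixture]
    public class DateMathTests
    {
        // Non-intrinsic types can be newed up freely inside the source method.
        public static IEnumerable<TestCaseData> Cases()
        {
            yield return new TestCaseData(new DateTime(2010, 1, 1), new DateTime(2010, 1, 31), 30);
            yield return new TestCaseData(new DateTime(2010, 2, 1), new DateTime(2010, 2, 2), 1);
        }

        [TestCaseSource("Cases")]
        public void DaysBetween_ReturnsExpected(DateTime start, DateTime end, int expected)
        {
            Assert.AreEqual(expected, (end - start).Days);
        }
    }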

If you do not require any of the fancy SetDescription, ExpectedException etc. stuff associated with the TestCaseData type, you can skip one piece of ceremony by simply yielding your own arbitrary type instead (i.e. change the IEnumerable<TestCaseData> to IEnumerable<MyType> and then simply yield return new MyType()).
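
For instance, a hypothetical AdditionCase type could replace the TestCaseData yields in the fixture above:

    public class AdditionCase
    {
        public int A, B, Expected;
    }

    public static IEnumerable<AdditionCase> AdditionCases()
    {
        yield return new AdditionCase { A = 1, B = 2, Expected = 3 };
        yield return new AdditionCase { A = -1, B = 1, Expected = 0 };
    }

    [TestCaseSource("AdditionCases")]
    public void Add_ReturnsExpected(AdditionCase c)
    {
        // NUnit passes each yielded AdditionCase as the single test argument.
        Assert.AreEqual(c.Expected, c.A + c.B);
    }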

Passing a delegate as a parameter (simple)

The simplest case is that you want to vary which methods are called.  For example, if you have multiple types implementing the same interface or multiple static methods, encoding which method to call is very simple.

Here’s an example from Stack Overflow that I answered recently where the author wanted to call one of three different static methods, each with the same signature and asserts.  The solution was to examine the method signature of the call and then use the appropriate Func<> type (Funcs and Actions are convenience delegates provided by the .NET framework).  It was then easy to parametrise the test by passing in delegates targeting the appropriate methods.
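
I won’t reproduce the question verbatim here, but a sketch of the same shape might look like this (the static methods are illustrative stand-ins):

    using System;
    using System.Collections.Generic;
    using NUnit.Framework;

    [TestFixture]
    public class StaticMethodTests
    {
        // Stand-ins for the three static methods, all sharing one signature.
        private static int DoubleIt(int x) { return x * 2; }
        private static int SquareIt(int x) { return x * x; }
        private static int NegateIt(int x) { return -x; }

        public static IEnumerable<TestCaseData> Cases()
        {
            yield return new TestCaseData(new Func<int, int>(DoubleIt), 3, 6);
            yield return new TestCaseData(new Func<int, int>(SquareIt), 3, 9);
            yield return new TestCaseData(new Func<int, int>(NegateIt), 3, -3);
        }

        [TestCaseSource("Cases")]
        public void Method_ReturnsExpected(Func<int, int> method, int input, int expected)
        {
            // The delegate under test is just another piece of case data.
            Assert.AreEqual(expected, method(input));
        }
    }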

More advanced applications

Beyond calling simple, stateless methods via delegates or passing non-intrinsic types, you can do a lot of creative and cool stuff.  For example, you could new up an instance of a type T in the test body and pass in an Action<T> to call.  The test body would create an instance of type T, then apply the action to it.  You can even go as far as expressing Act/Assert pairs via a combination of Actions and mocking frameworks.  E.g. you could say “when I call method X on the controller, I expect method Y on the model to be called”, and so forth.
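
As a sketch of the first idea, with Stack<int> standing in for the type under test:

    using System;
    using System.Collections.Generic;
    using NUnit.Framework;

    [TestFixture]
    public class StackActionTests
    {
        public static IEnumerable<TestCaseData> Actions()
        {
            yield return new TestCaseData(new Action<Stack<int>>(s => s.Push(1)), 1);
            yield return new TestCaseData(new Action<Stack<int>>(s => { s.Push(1); s.Pop(); }), 0);
        }

        [TestCaseSource("Actions")]
        public void Action_LeavesExpectedCount(Action<Stack<int>> act, int expectedCount)
        {
            var stack = new Stack<int>();   // the test body news up the instance...
            act(stack);                     // ...and applies the parametrised action to it
            Assert.AreEqual(expectedCount, stack.Count);
        }
    }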

The caveat is that as you use more and more ‘creative’ types of data-driven testing, the tests get less and less readable for other programmers.  Always keep checking what you’re doing and ask whether there is a better way to implement the type of testing you’re doing.  It’s easy to get carried away when applying new techniques, but it’s often the case that a more verbose but familiar pattern is a better choice.

Debug.Assert vs. Exceptions

“When should I use Debug.Assert and when should I use exceptions?” — It’s a fairly sensible question to ask, but you’ve got to sift through a lot of articles to get anything resembling solid guidance on it (Eric Lippert’s Stack Overflow post is particularly enlightening).  I’ve wrestled with it quite a bit as a programmer and test engineer, so here’s my 2 pence.

Good rules of thumb I’ve arrived at:

  1. Asserts are not a replacement for robust code that functions correctly independent of configuration. They are complementary debugging aids.
  2. Asserts should never be tripped during a unit test run, even when feeding in invalid values or testing error conditions. The code should anticipate and handle these conditions without an assert occurring!
  3. If an assert trips (either in a unit test or during normal application usage), the class containing the assert is the prime suspect, as it has somehow managed to get into an invalid state (i.e. it’s bugged).

For all other errors — typically down to environment (network connection lost) or misuse (caller passed a null value) — it’s much nicer and more understandable to use hard checks & exceptions.  If an exception occurs, the caller knows it’s likely their fault.  This is what makes the .NET base class libraries a joy to develop with — it’s usually clear when you are misusing an API, resulting in fewer “select is broken” moments.  It fails early and clearly communicates the reason for failure.
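
A minimal sketch of that separation (MessageQueue is a made-up class):

    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    public class MessageQueue
    {
        private readonly Queue<string> messages = new Queue<string>();

        public void Enqueue(string message)
        {
            // Caller misuse: fail early with a hard check and an exception.
            if (message == null)
                throw new ArgumentNullException("message");

            messages.Enqueue(message);

            // Internal invariant: if this trips, MessageQueue itself is the prime suspect.
            Debug.Assert(messages.Count > 0, "Enqueue must leave the queue non-empty");
        }
    }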

You should be able to test and use your class with erroneous input, bad state, invalid order of operations and any other conceivable error condition, and an assert should never trip.  Each assert checks something that should always be true, regardless of the inputs or computations performed.  If something should always be true, then the number of asserts used shouldn’t be a barrier to thorough unit testing.  If an assert occurs, the caller knows it’s likely a bug in the code where the assert is located.

If you stick to this level of separation, things are a bit easier.

No longer a software engineer in test

… I’ve swapped jobs at Realtime Worlds; I’m now a plain ol’ software engineer.  As a result, there’ll be no more test engineering for me.

Whilst it’s true that I am changing jobs, what I learned as a test engineer has irrevocably changed the way I write software for the better.  I learned about the value of automation and wrote tools to automate processes, but the most satisfying thing I did was learn how to design for testability.  Not only do these principles aid us in automated testing, but I firmly believe that following them results in better code quality.  Testability and code quality have much in common and seem to converge naturally.

Any time I had to pick through some code, the more thought that had gone into its testability, the easier it was to work with and reason about.  As such, doing a two-year stint as a test engineer was possibly one of the best entry-level routes I could’ve hoped for.

I’m test infected and there is no going back on that.

Internals: To test, or not to test?

Prepare for some flimsy and strange analogies.

I’ve been reading a few Stack Overflow questions dealing with whether you should test the guts of a system as well as the public API.  Most of the people who advocated never testing anything but the main class APIs seemed to talk as if these APIs were extremely coarse-grained and any change to the internals would result in major breaking changes to the tests.  This strikes me as somewhat strange, as it’s often not the case in my (admittedly limited) experience.

On the other hand, many of the developers who advocated testing the implementation details also advocated test-driven development (TDD); that’s when the penny dropped.  To me, this is a good illustration of why designing for testability can make change less painful.  Sometimes I cringe when I hear the phrase “agile” bandied about, but it rings true here.

Little, bitty pieces

Designing for testability in conjunction with TDD tends to produce loosely coupled classes that have very few responsibilities.  You trade more complicated wiring and interactions for unit isolation and simplicity.  In the majority of cases, I feel it is an attractive proposition.  Instead of one 3,000-line class that does everything, I end up with 25 or so classes of ~50 lines each, and the odd bigger one here and there.

Complicated behaviour is usually achieved through composition (inversion of control using constructor injection) and delegation.  I consider most of my inner classes to be implementation details, because the user doesn’t get to do anything with them.  They’re off sitting in a library somewhere; they’re not exposed to the user.  The user gets a hold of the top level class that tells the innards to do the real work (instead of the 3k line class that does everything).  I can take any unit in the system off the shelf and test it without a struggle.
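
A sketch of that shape (all names are illustrative):

    // Small inner classes, each with a single responsibility.
    public interface IParser { int Parse(string raw); }
    public interface IValidator { bool IsValid(int value); }

    // The top-level class composes its collaborators via constructor
    // injection and delegates the real work to them.
    public class ImportPipeline
    {
        private readonly IParser parser;
        private readonly IValidator validator;

        public ImportPipeline(IParser parser, IValidator validator)
        {
            this.parser = parser;
            this.validator = validator;
        }

        public bool TryImport(string raw, out int value)
        {
            value = parser.Parse(raw);
            return validator.IsValid(value);
        }
    }

Each collaborator can be tested on its own, and the pipeline can be tested with fakes standing in for its dependencies.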

So, who is going to suffer more breaking changes and hardship if they test the internals?  Is it the developer who writes the all-singing, all-dancing monolithic class, or the developer who writes numerous, small, simple classes that can easily be tested in isolation without any fuss?  Since you’re reading a testing blog, I think you know what I’m going to say, and it’s not going to be the twisted brain-wrong of a one-off man mental.

Big is awkward

Large classes are harder to understand, maintain, refactor and test thoroughly.  Furthermore, it is my experience that the biggest classes tend to grow and grow.  The bigger it grows, the more ungainly it becomes.  Gangly limbs poke out every side; sharp edges are present in abundance.  It’ll probably stick the heid on you or punch you into paralysis.

Have you ever picked up a huge class and been tasked with adding new functionality and testing it while you go?  It’s painful.  By the time you’ve worked out which tightrope you’ve got to walk (and fallen off it 20 times due to the principle of most surprise), you’ve wasted a lot of time adding in the new functionality and even more time writing the tests.

Small is beautiful

Contrast that to a system where you just add a method or two to a simple class (or add a new one) and the maintenance headache is reduced to a dull ache.  If it’s easy to write, it’s easier to understand, test, maintain, refactor and — just as importantly — it’s even easier to throw away.  I don’t get attached to tiny classes or their respective unit tests.  They’re like tic tacs; if I lose one or five, I shrug.  Big deal.  I get some new ones.  Open for delete.  Don’t cry, you buffoon; it’s just a tic tac.

Misko Hevery recently posted something interesting on his testing breakdown and, while most of us won’t reach his level of testing efficiency, it’s an interesting read.  Misko states that the vast majority of his time is spent writing production code, not test code.  Yes, the ratio of test code to production code produced is almost 1:1, but the time invested is wildly different.  Test code is usually verbose, but it’s easy to write when your classes are small and you test in lockstep.

In summary, I believe testing the guts of your classes can be a worthwhile approach, but designing for testability is paramount when doing so.


Source control for the common man

Due to barely taking a day off during my first six months (my own choice), I accrued so many unused holidays that I found myself with a stretch of 17 or 18 days of holiday over Christmas.  Now, I’ve never been one to shy away from lying around for prolonged periods, but this time around I found myself restless after a week or so.  In the end, I cracked; I started programming quite a bit — just pottering around doing my own personal experiments.

It was fun, but I realised that developing at home wasn’t quite as fulfilling as it is at work.  I have a good PC at home, dual screens etc., so it wasn’t that — it was something that never used to bother me: a lack of source control.  I kept finding myself in situations where I wanted to undo something, but I’d closed the file and/or forgotten to back something up before making a change.  When you’re used to using source control every day, its absence is painful.  Now, this will likely result in me being labelled as a heathen by some (Hi, Mishets :P), but I really couldn’t be bothered downloading and setting up SVN.  I’ve used SVN before and it worked fine, but I use Perforce at work, so Perforce was the ideal solution.

Handily, Perforce offers a free server and client download for up to two users.  For personal work, this is perfect.  The reason I’m posting about it is that it was incredibly easy to set up; I was impressed at how simple the process was.  The client and server are tiny downloads, the configuration (for my needs, anyway) was cursory and the results were immediate.  Although Perforce can be a little cranky at times, I think it’s well worth a look.

Additionally, if you get a job programming (or doing anything related to programming), source control software is almost certainly going to be involved, so it pays to start now and get used to it.  While some will scoff and say “well, obviously!”, I will qualify in advance by saying that some programmers don’t use source control software before filling in job applications; some don’t even know what it is!

The Perforce download page has the server & P4V client files.