Tuesday Blues

18 11 2016

Well, not really blues, maybe a light aqua sort of shade…

“Amir” I hear you say, “you crazy beautiful bushy bearded programmer, I don’t think you’ve ever talked about managing the workload with Buck!” And you’re right, I haven’t written much about the project management side of things. So perhaps now is the time to draw the outlines of the hows, whys and wheres of managing a project the size of Buck.

Let me preface this post with a caveat: every team is different, and so in my opinion every methodology must be customized for its particulars. In essence, there is no magic “one size fits all” management process that will boost a team’s productive output; I remember spending a lot of time looking for one when I had less experience. With that said, let’s start.

[quick note at this point; I know the title says “Tuesday” and today is Friday, but I started writing this post on Tuesday]

Before getting into our process, and because our team is different from yours, let’s meet the team.

Buck has three people working on it: a programmer [me!], the artist and the designer (who does animations, sounds, music and the business side of things on top of it all… we love you Gal). As you can see we’re a small team. A very small team. We work separately except for one day a week, when we meet and put in a full office-like workday. We wanted three basic things from our process: simple, transparent and fast. We’ve tried several approaches that failed (and I might write about those later, because knowing what failed is also important, but I’m running out of time again).

So, what do we do?
Enter Trello.

Trello is a simple and free (yay for awesome free tools) online tool which allows its users to create vertical lists across a horizontal axis. Each list contains cards which can be moved between the lists or archived into an internal [hidden] list. As I said, simple. At the moment we’re using a single board for Buck and have broken it into two sections; one for the code/development and one for the art/design. I’m not familiar with the art/design section (I’m the programmer after all) so I’ll concentrate on my section.

Let’s take a small detour to the cards. We have two types: Feature cards and Bug cards. And, just as they sound, each does exactly what you’d expect. A feature card describes some game or editor feature that someone (me or the designer) wants and will usually take the form of a request; ie. “I want to do [X] because of [Y]”. Bug cards are broken into two parts: the title describes what the bug is, while the description will (ideally) list the steps required to reproduce it (called, in parlance, a repro).

Onward to lists; five lists. Backlog, Sprint, Doing, Review and Issues.

The backlog list is the catch-all for all the features we want in the game at some point or another. It isn’t organized in any particular way and we add to it constantly… too constantly… it always seems to grow 😀 (though not all of the cards will see the light of day; some will remain forever unimplemented – until the next project).

The issues list isn’t anything special; any issue we or one of our testers discover ends up there. And like the backlog it has a tendency to grow somewhat. Unlike the backlog, though, we do sort the bugs: into minor, major and game-breaking. Guess which ones we take care of first?

Then we have the Sprint list. This list contains all the cards we’ve decided to implement in the current sprint session. A sprint session is a static (ie. not changeable) length of time we’ve set to get from point A to point B, point B meaning Build. I can’t actually stress this enough:
Sprint times are immutable. Do not make them longer or shorter while sprinting. Stick to your sprint time during the sprint and Do. Not. Change.

You can change sprint times. In fact, iterating over the process is as important as anything else in it. So what did I mean? First, we do not end the sprint prematurely. And secondly, WE DO NOT PUSH BACK THE SPRINT’S END JUST TO GET A WORKING BUILD. If we arrive at the end of a sprint and can’t make a build? Tough. We don’t make a build. We end the sprint anyway. Failing to implement features means we’ve taken on too much and/or misjudged the workload on a particular card. This is our opportunity to learn; otherwise the process will fail over and over again because we’ll never get better at estimation. Trust me, I’m speaking from experience here. Don’t do this.

So what can you do?
Like all good software developers: iterate. Do a few sprints (unless some colossal failure happens first). Review: do you want more features in a build? Fewer features? Add a week, take off a week. Rinse/repeat. Small increments will yield stable results.

Our current sprints last about a week (fast iteration, only one programmer) and usually amount to one new feature implementation and bug fixes. They were somewhat longer when the project started because we needed more time to implement systems and set up the framework before we could implement game features. Experience and pain; get it and learn from it 🙂

Once the sprint list is set, work can begin. I pull the card I’ll be working on from the sprint list to the doing list and, when I think I’m done, I push it to the review list. After each build Gal will test and review each card in the review list while I’m working on the next sprint. Cards that pass review are archived; cards that fail are updated with information and put back into the backlog for the next available sprint. And that’s about it.

Nothing I said here is set in stone. Our team is small and very flexible. The planets do sometimes align to throw us a curve ball and we have to step outside the sprint. As a small team I suspect it’s easier for us, and at times too easy; keeping focus is also hard work. Anyway, I hope whoever reads these things of mine enjoys them, and until next time…

“I speak of madness, my heart and soul, I weep for people who ain’t got control”





Unit Tests v2.0

6 11 2012

Not the most profound title to be found out there, but I couldn’t think of anything better. I’m a big proponent of testing, and of test-driven development in particular. This enthusiasm, unfortunately, does not automatically make me good at writing tests or at driving my designs using tests – but I try. One of the things I find really hard about TDD is how to organize the tests into coherent units and, extrapolating from this, which tests I should write where. I’ve tried a few schemes and didn’t really feel comfortable with any of them (most likely an issue with me rather than the schemes themselves, by the way). So lately, and I mean within the last week or so, I’ve adopted the following pattern.

Quick aside: I’m using C# on .NET 4.0, along with NUnit and Visual Studio 2010.

First off, I’ve decided to symbolically separate test classes [TestFixture] and methods [Test] from the rest of my code by using, *gasp*, underscores and all lower-case letters. Secondly, I’ve started making instances of the class under test so that I can verify actual usage of the class itself. And finally, I’m making a namespace for each instance and a test fixture for each operation/property that I want to test under that instance.

It’s probably a bit hard to follow that explanation (and I probably will refine it to a set of written standards at some point) so let me give a real world example from my very own code!

/// <summary>
/// Testing the generated target configuration when using the
/// DefaultInternalBuildProperties.
/// </summary>
[TestFixture]
public class target_configuration
{
  [Test]
  public void should_be_release()
  {
    var instance = new ContentForTestProject();
    Assert.That(instance.Factory.GetTargetConfiguration(),
                Is.EqualTo("Release"));
  }
}

That’s the testing piece and it lives under the namespace of:

testing_default_internal_build_factory

Which means that when I’m reading the entire test description it generates the following string:

testing_default_internal_build_factory.target_configuration.should_be_release
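
To make the layout concrete, here’s roughly how that fixture sits inside the namespace (just a sketch of the structure; nothing here beyond the code already shown, plus the namespace wrapper and the NUnit using):

using NUnit.Framework;

namespace testing_default_internal_build_factory
{
  [TestFixture]
  public class target_configuration
  {
    [Test]
    public void should_be_release()
    {
      var instance = new ContentForTestProject();
      Assert.That(instance.Factory.GetTargetConfiguration(),
                  Is.EqualTo("Release"));
    }
  }
}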

Another point should also be made here concerning the following line in the testing code:

 var instance = new ContentForTestProject();

ContentForTestProject is a type of, ah, factory I guess you could call it, that generates for me an instance of the class being tested. The previous line of code will be repeated for each testing method simply because I’ve come to realize that this article is correct when using NUnit.

/// <summary>
/// Generates an instance of the DefaultInternalBuildProperties [Factory]
/// which will in turn fabricate those properties for some fake game
/// project in a folder called 'Test'.
/// </summary>
public class ContentForTestProject
{
  public readonly DefaultInternalBuildProperties Factory;

  public ContentForTestProject()
  {
    Factory = new DefaultInternalBuildProperties();
    Factory.Fabricate();
  }
}
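
A quick note on why that new ContentForTestProject() line gets repeated inside every test instead of being stashed in a field: NUnit creates a single instance of a [TestFixture] class and reuses it for all the tests within it, so anything built once in the constructor (or in a field initializer) becomes shared state between tests. Here’s a small sketch of the pattern the repetition avoids (a hypothetical fixture, written only to show the problem):

using NUnit.Framework;

[TestFixture]
public class target_configuration_with_a_shared_instance
{
  // NUnit constructs this fixture class once, so every [Test] below runs
  // against the very same _instance; state one test changes leaks into the next.
  private readonly ContentForTestProject _instance = new ContentForTestProject();

  [Test]
  public void should_be_release()
  {
    Assert.That(_instance.Factory.GetTargetConfiguration(),
                Is.EqualTo("Release"));
  }

  [Test]
  public void still_the_same_instance_in_this_test()
  {
    Assert.That(_instance.Factory, Is.Not.Null);
  }
}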

This entire scheme might not be ideal, or even useful, and maybe I’ll find it doesn’t scale very well down the line. But for now it sort of fits the way I view my code and it helps me keep track of the operations I’m testing (that is, the operations I want my classes to perform).

“And we only smoke when bored, So we do two packs a day”





put ’em up!

2 11 2011

Alright, this post is mostly about complaining. It started off as two posts complaining about different things, but I figured it’d be better to concatenate them into one and be done with it. So let’s roll up our sleeves, er… my sleeves; actually I’m not wearing a shirt, which is probably more information than anyone wants to know…

Agile development
I’ll be frank here, I’m not actually sure I know what being agile is. I try to be agile, I’ve read heaps about it and try to make sure I follow the spirit of it (as far as I understand it anyway) but it’s hard and I’ve got lots to learn. That said I think by now I do know what agile isn’t… The following are just some snippets I’ve picked up along the way.

“Don’t worry about formalities too much, we’re agile after all, we’ll just do a couple of spontaneous scrums”

“Story points… what’s that? You’ve got 12 hours just pick the stories you think you can do”

“Are you synced with the repository?” –> “What, no, it’s been like 3 days since I’ve done a commit”.

“We only unit test the really important parts of the system”

“TDD is stupid, you’ll just break all the tests when writing new code and then have to rewrite them all”

“Don’t write new unit tests because it’ll take too long to set up the ENTIRE system in order to run them”

There are a few more but these are enough to get it out of my system.
The next point I’m going to complain about is also the last one, so without further ado;

Unity3D
Let me preface this section with some positives. Unity is a very nice renderer and you’ll be hard-pressed to find a better solution for cross-platform deployment to mobile devices. That said,

Error/Warning Messages
It isn’t hard to write good error messages: they should be short and tell me what happened (and, if possible, suggest some probable ways to solve it). The following are not good messages:

  • “Impossible” – really… really… Is that the best we can do?
  • “This shouldn’t have happened” – well no shit Sherlock, I don’t need the program to tell me that do I???
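
Just to show what I mean by a good message, here’s a completely made-up example (not anything Unity actually prints): it says what failed, on which asset, and hints at a way out.

using UnityEngine;

public class LevelLoader : MonoBehaviour
{
  void Start()
  {
    // Hypothetical error message: short, names the asset, states the problem,
    // suggests a fix.
    Debug.LogError("Failed to load 'Levels/level_02.xml': no spawn points defined. " +
                   "Add at least one spawn point or remove the level from the build list.");
  }
}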

Stability
I know it’s hard to keep something as complex as Unity perfectly stable, but my workflow involves the Windows Task Manager and, in the words of a great poet, “it ain’t no good”.

Okay, I think that’s it. There is more of course (the magic methods Unity uses, the insane object model) but I’m tired and my system is cleared. Stay tuned for next week when I’m going to start writing about actual game dev stuff. Yay!

“But I’ll try to carry off a little darkness on my back, ‘Till things are brighter, I’m the Man In Black.”





Agile games

9 08 2011

Just watched the “Gamification” episode of Extra Credits and it got me thinking… Are Agile methodologies the gamification of programming? It all fits in the end: the achievements, the progress bars, levelling up and marauding zombie dinosaurs; well, maybe not the zombie dinosaurs, but you get what I mean.

“The soldier came knocking upon the queen’s door; he said, ‘I am not fighting for you any more’”





And I’m back!

16 03 2011

So, I’m not dead; not yet anyway. Let’s just say I took a very looooong vacation. Which is a lie, since I never take vacations and I’ve actually started full-time work, but hey, I’m allowed to dream.

The last time I saw this blog it was actually still 2010 and I was writing tests for a timer class, which has long since been tested to death and it works… Honest… Trust me… Okay, alright, I’ll post the rest of the series in the next couple of days once I reorganize and try to remember what the hell I was talking about. In the meanwhile I’m also glad to report that I’ve picked up two other projects to work on; both sort of have to do with Microsoft’s Dream.Build.Play (rather than Dream.Build.Debug, which is a completely different thing altogether) and the upside of that is that I get to work with C# (which I have grown to love in the last few months) and XNA (which I have grown to like in the last two weeks). Well, being the lovable programmer that I am, the first thing on my mind is, you guessed it, TDD!

And you know what, XNA is not the friendliest framework for tests, as I’ve discovered, at least from what I’ve gathered by trying a few bits and reading up online. See, the problem with a framework and the concept of unit tests is that I don’t think they mesh together quite perfectly. Unit tests rely on nothing but the classes; frameworks force compliance on everything they have, plus the kitchen sink (actually XNA isn’t that bad, I’ve seen worse, much much worse).

Let’s put things in perspective.

One of the projects in question using XNA is a 2D platformer type thingy, which I won’t go into in much detail since I’m not really at liberty to say much at the moment. I’ve been nabbed for Sprite duty. XNA comes with a lot of built-in functionality for loading and drawing pretty 2D pictures via the ContentManager and SpriteBatch objects, and having a central object to hold all the nice data and functionality will be really nifty; enter the mighty Sprite class. Let’s TDD!!

class TestNewSprite
{
  [TestMethod]
  public void name_is_NewSprite()
  {
    Assert.AreEqual("NewSprite", _sprite.Name); 
  }
}

Wooh, so many things break on that it isn’t even funny. For the rest of it you’ll have to imagine me building the code backwards.

[TestClass]
public class TestNewSprite
{
  public TestNewSprite()
  { 
    _sprite = new Sprite("NewSprite"); 
  }

  [TestMethod]
  public void name_is_NewSprite()
  { 
    Assert.AreEqual("NewSprite", _sprite.Name); 
  }

  private Sprite _sprite;
}

Alright, I know the drill; create the class, create the fields and watch everything go green. Now what?
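
(For completeness, the Sprite class at this point is about as bare as it gets; a minimal sketch of the obvious first version, just enough to turn the test green.)

public class Sprite
{
  public string Name { get; private set; }

  public Sprite(string name)
  {
    Name = name;
  }
}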

My first thought was that a Sprite needs an image. Basic, right? I mean, the whole point of it is to wrap a texture and describe how to draw it with extra data… Wrong! This is basically where I hit a brick wall. A Texture2D object requires the XNA framework to load, meaning I would need to start fiddling around with mocks and manually instantiating the parts of the framework which I need. While these things are possible, and in fact have been done by a couple of other people on the ‘net, it just doesn’t strike me as a unit test; an integration test, yes, but not a unit test. So after some thought, and talking to myself while doing the dishes, I’ve decided to change my perspective on the problem.

I’ll TDD my Sprite class as a data set which describes the drawing behaviour to apply to a texture. The class will hold a reference to a texture which will be created by a ContentManager object, but I won’t test that particular functionality; I’ve got to trust that XNA works, right?

I’ll most likely wrap XNA’s concept of a ContentManager in my own custom object which will automate the loading of a texture using the Sprite class data. And as for drawing, same thing: I’ll wrap the SpriteBatch in an object which will know how to use the Sprite, SpriteMap and whatever else I think of, and joy to the world. Unit tests will ensure not only that I have just the data I need but also that all my assumptions about manipulating it [like scaling, moving, etc.] are correct.
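
Something along these lines, maybe (a rough sketch of the direction only; the names and members here are hypothetical, I haven’t written any of this yet):

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.Graphics;

// The Sprite stays a plain, unit-testable data class...
public class Sprite
{
  public string Name { get; private set; }
  public string AssetName { get; set; }   // what the content wrapper should load
  public Vector2 Position { get; set; }
  public float Scale { get; set; }

  public Sprite(string name)
  {
    Name = name;
    Scale = 1.0f;
  }
}

// ...while a thin wrapper owns the only XNA-dependent bit: turning the
// Sprite's data into an actual Texture2D via the ContentManager.
public class SpriteContentLoader
{
  private readonly ContentManager _content;

  public SpriteContentLoader(ContentManager content)
  {
    _content = content;
  }

  public Texture2D Load(Sprite sprite)
  {
    return _content.Load<Texture2D>(sprite.AssetName);
  }
}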

Since I haven’t yet tried it I don’t know if it’ll work but I suspect that it will. I’ll write up a self-review tomorrow after the first attempt.

“Though I do like breaking femurs, You can count me with the dreamers”





Timer – Part 2

2 12 2010

Finally, Timers and all that Jazz. Let’s continue with a quick peek back at the previous post and the original requirements.

Another thing I would like to test is whether our timer is running or not.

TEST_FIXTURE(timer_not_running, _check_timer_is_not_running)
{
  CHECK_EQUAL(false, _timer.IsRunning());
}

Add the functionality and remove the duplication.

#ifndef _TDD_TIMER_
#define _TDD_TIMER_

class Timer
{
  public:
    typedef char* name_type;
    typedef double elapsed_type;

    bool IsRunning() const { return _isRunning; }
    name_type Name() const { return _name; }
    elapsed_type TotalElapsed() const { return _totalElapsed; }

    Timer(name_type name)
    : _name(name)
    , _totalElapsed(0)
    , _isRunning(false)
    {}

  private:
    bool _isRunning;
    name_type _name;
    elapsed_type _totalElapsed;
};

#endif/*_TDD_TIMER_*/

Compile, run, works; let’s stop for a moment and think. At the end of the last post I had to make a design decision regarding the Timer, and with the boolean test I’ve just written that decision comes to light. But what is it? The answer lies in the fixture itself. By allowing tests to be run against a Timer object which is not running, I’ve also stated that the Timer object will have to be started manually by the class users. It may seem a small part of the overall design, and probably a fairly intuitive and/or obvious one, but we have to remember that this behaviour emerged as a consequence of writing the tests first. Another benefit of these tests is their value as documentation: with just a glance at the name of this fixture we are already shown what can and needs to be done in order to use our Timer.

There’s one last thing I can think of that the Timer needs at the moment and that’s the ability to measure the time elapsed between updates. A timer that hasn’t been started and hasn’t updated should report that value as zero though.

TEST_FIXTURE(timer_not_running, _check_tick_elapsed_is_zero)
{
  CHECK_EQUAL(0, _timer.TickElapsed());
}

And we know the drill by now, so let’s just skip ahead and create the property and variable in the same step.

#ifndef _TDD_TIMER_
#define _TDD_TIMER_

class Timer
{
  public:
    typedef char* name_type;
    typedef double elapsed_type;

    bool IsRunning() const { return _isRunning; }    
    name_type Name() const { return _name; }    
    elapsed_type TickElapsed() const { return _tickElapsed; }    
    elapsed_type TotalElapsed() const { return _totalElapsed; }

    Timer(name_type name)
    : _isRunning(false)
    , _name(name)
    , _tickElapsed(0)
    , _totalElapsed(0)
    {}

  private:
    bool _isRunning;
    name_type _name;
    elapsed_type _tickElapsed;
    elapsed_type _totalElapsed;
};

#endif/*_TDD_TIMER_*/

Run the tests, see that everything works. And that’s all the functionality I can think of that our non-running timer should or can expose for us. We need to step it up, go crazy… Let’s make a timer that actually runs!

#include <UnitTests.h>
#include "Timer.h"

struct timer_is_running
{
  Timer::name_type _name;
  Timer _timer;

 ~timer_is_running(){}
   timer_is_running()
   : _name("IsRunning")
   , _timer(_name)
   {}
};

TEST_FIXTURE(timer_is_running, _check_timer_name_is_correct)
{
  CHECK_EQUAL(_name, _timer.Name());
}

Run the tests and everything is green. Now comes a little confession. I’m not sure whether I should re-test the name functionality, which is immutable in the sense that the name cannot change and has already been proven to work properly when I removed the code duplication in the previous tests. For now I’ll keep the test and the test-code duplication it [sort of] brings. Why? I’m not sure…

Let’s look at the running timer concept. I know that the fixture as it stands will not give me a running timer, because the constructor initializes the timer as not running (that’s exactly what the previous IsRunning() test verifies). My assumption, though, is that IsRunning() on a running timer should return true. Test time.

TEST_FIXTURE(timer_is_running, _check_is_running)
{
  CHECK_EQUAL(true, _timer.IsRunning());
}

Run and fail. Clearly we need a way to start the timer and change the value returned by IsRunning(). Unlike the previous tests, I can’t just hardcode a return value of true, since the other test would then fail. I’ve got to actually add a function to do that and call it before I test; to do this I’ll modify the fixture a bit.

#include <UnitTests.h>
#include "Timer.h"

struct timer_is_running
{
  Timer::name_type _name;
  Timer _timer;

 ~timer_is_running(){}
   timer_is_running()
   : _name("IsRunning")
   , _timer(_name)
   {
      _timer.Start();
   }
};

Compile, run… Hey, it’s not even compiling now; apparently we need to actually write the Start() function.

#ifndef _TDD_TIMER_
#define _TDD_TIMER_

#include <cassert>

class Timer
{
  public:
    typedef char* name_type;
    typedef double elapsed_type;

    bool IsRunning() const { return _isRunning; }    
    name_type Name() const { return _name; }    
    elapsed_type TickElapsed() const { return _tickElapsed; }    
    elapsed_type TotalElapsed() const { return _totalElapsed; }

    void Start();

    Timer(name_type name)
    : _isRunning(false)
    , _name(name)
    , _tickElapsed(0)
    , _totalElapsed(0)
    {}

  private:
    bool _isRunning;
    name_type _name;
    elapsed_type _tickElapsed;
    elapsed_type _totalElapsed;
};

inline void Timer::Start()
{
  assert(_isRunning==false);
  _isRunning=true;
}

#endif/*_TDD_TIMER_*/

I’ve jumped a step in there by already putting in the code that changes the _isRunning variable, because there’s no point in running the tests just for the sake of running them. I’ve also added an assert [I’ll assume anyone who reads this has also read my other posts about preconditions/postconditions and writing custom asserts]. The assert is another preliminary test that communicates the internal logic of the Timer. In this case I’ve chosen to say that trying to Start a running timer is cause for alarm, because it means there’s a logic misstep somewhere. Anyway: compile, run, test and pass.

This post is running a bit long and I’m running a bit tired but I’m happy with the direction this is taking…

“I pounded on a farmhouse looking for a place to stay, I was mighty mighty tired I had come a long long way”





Timer – Part 1

27 11 2010

Why part 1? Because arrays are zero-indexed for me.
Lame jokes aside, let’s continue… um… now what?

In the army, when we did navigation field exercises, they told us that whenever we got confused or lost the best option was to find higher ground and survey the land. That advice is just as relevant for me here as it was back then [although no matter how many times I climbed up I always got lost again on the way down, but that’s another story :D].

The equivalent of taking to higher ground here is looking back at the original answer to “what do I want?”. Ok, looked at it; now, what’s the simplest state in which a Timer object can exist? The simple answer [and simple is the only answer one should look for when testing first] is a Timer that isn’t running. Sounds good, let’s do that. I can already see that I’m going to end up with some duplicated setup code, so I’ll write the fixture code now [but keep the previous test in a different file; call this one timer_not_running_tests.cpp].

#include <UnitTests.h>
#include "Timer.h"

struct timer_not_running
{
  Timer::name_type _name;
   Timer _timer;

 ~timer_not_running(){}
    timer_not_running()
    : _name("NotRunning")
    , _timer(_name)
    {}
};

TEST_FIXTURE(timer_not_running, _check_timer_name_is_correct)
{
  CHECK_EQUAL(_name, _timer.Name());
}

Compile, run and watch it flame. Why? Timer::Name() returns a constant string to satisfy the previous test. Stop; let’s reflect here for a second before going on. If we look at the code from the last post we’ll also notice that the first test we wrote suffers from a bad case of code duplication: the “Timer1” string is duplicated both in the test and in the Timer class code. I’m going to take the time now to remove the duplication by refactoring the Name() function.

#ifndef _TDD_TIMER_
#define _TDD_TIMER_

class Timer
{
  public:
    typedef char* name_type;

    name_type Name() const { return _name; }

    Timer(name_type name)
    : _name(name)
    {}

  private:
    name_type _name;
};

#endif/*_TDD_TIMER_*/

Compile, run and watch both tests pass. What have I learnt? Removing code duplication as soon as possible leads to more general solutions. So from now on, obvious code duplication should be removed as soon as possible. The second test should always be written though, to verify that the solution is indeed what we want. The other approach for something like this is to use additional tests to force the functionality to change; in this case I would have had to hold the name string in an actual variable in order to have different names. Using more tests to zoom in on the correct implementation is called Triangulation in Agile circles, and I usually use it to flesh out complex functionality. Alright, less talk, more code. I’m testing a timer that isn’t running. If a timer isn’t running then the total time it has counted should be zero, right? Let’s test.

TEST_FIXTURE(timer_not_running, _check_total_elapsed_time_is_zero)
{
  CHECK_EQUAL(0, _timer.TotalElapsed());
}

Compile, run, things explode, let’s add the right function.

#ifndef _TDD_TIMER_
#define _TDD_TIMER_

class Timer
{
  public:
    typedef char* name_type;
    typedef double elapsed_type;

    name_type Name() const { return _name; }
    elapsed_type TotalElapsed() const { return 0; }

    Timer(name_type name)
    : _name(name)
    {}

  private:
    name_type _name;
};

#endif/*_TDD_TIMER_*/

Whoops, code duplication. Let’s refactor.

#ifndef _TDD_TIMER_
#define _TDD_TIMER_

class Timer
{
  public:
    typedef char* name_type;
    typedef double elapsed_type;

    name_type Name() const { return _name; }
    elapsed_type TotalElapsed() const { return _totalElapsed; }

    Timer(name_type name)
    : _name(name)
    , _totalElapsed(0)
    {}

  private:
    name_type _name;
    elapsed_type _totalElapsed;
};

#endif/*_TDD_TIMER_*/

Compile, run, all tests pass and I’m happy. The code itself isn’t very exciting to look at; after all of this I’ve just got two functions up and running. But the bigger picture is a bit more involved. I’ve learnt the importance of removing code duplication early, and I’ve had some more practice in making assumptions about functionality [a timer counts the total elapsed time, therefore a timer that isn’t running should report zero for it]. And last but far from least, I’ve started documenting the contract between the client [what is needed] and the developer [what the code does] through the tests; that is, the timer must have a name and will measure a total amount of elapsed time. There is one more design decision I’ve made here; it might not be as obvious though. Can anyone spot what it is??

“…Into the half light, Another velvet morning for me, yeah…”