Little trick

29 07 2010

This one’s going to be quick; it’s a little trick I came up with [not that I’m under the impression I’m the first or only one to come up with it].

The trick has to do with defining templated vectors. I'm not actually using it at the moment, since I've gone down a different route, but it struck me as nice enough to post up.

Basically what I had was a BaseData struct which defined a union and was used as both a Matrix base and a Vector base; here’s the code:
template
< const unsigned R
, const unsigned C
, typename T
>
class BaseData
{
  protected:
    union{
      T m[R*C];
      T mm[R][C];
    };
};

The real BaseData is a bit more complicated, obviously, having overloaded operators and constructors, but I've stripped those out for the sake of clarity.

The first thing I wanted to add to it was the ability to distinguish between RowMajor and ColMajor matrices [a valid distinction, as also noted recently by a friend of mine on his own blog].

So I’ve quickly whipped up a little enum;
enum MajorOrder
{
  ROW_MAJOR=0,
  COL_MAJOR=1
};

Take a quick note of the values assigned because that’s going to come in handy in a minute.

Now simply add that to the template arguments:
template
< const unsigned R
, const unsigned C
, typename T
, MajorOrder OrderType
>
class BaseData
{
  protected:
    union{
      T m[R*C];
      T mm[R][C];
    };
};

Matrix, taken care of. But I'm also using it for vectors, and those of you with a sharp eye [and you know who you are] have probably already noticed a bit of a problem: this setup lets me define a vector using R=3 and C=7, and that's not a vector at all, is it?

The question, as it turned out, wasn't too hard to answer; the Vector base class looks like this:
template
< const unsigned D
, typename T
, MajorOrder OrderType
>
class VectorBase : public BaseData<hmm, what goes in here>{};

Whoops, another problem. It's easy to say a vector can only be defined using a dimension value, but how will that translate into rows and columns? And how do I get the compiler [template, remember] to figure it all out?

Next on the agenda was to define the relationship between the rows, cols and order type in mathematical terms [remember that order can only be 1 or 0]. Eventually I came up with:
rows=(dimension*order)+(!order);
cols=dimension/rows;

A quick sanity check: a 3D row vector gives rows=(3*0)+!0=1 and cols=3/1=3, while a 3D column vector gives rows=(3*1)+!1=3 and cols=3/3=1.

And there you go, a compile-time decision on how many rows/cols the vector has, while still letting me decide whether I'm using row vectors or column vectors.
template
< const unsigned D
, typename T
, MajorOrder OrderType
>
class VectorBase
: public BaseData
< (D*OrderType)+!OrderType
, D/((D*OrderType)+!OrderType)
, T
, OrderType >{};
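
Just to see the whole thing end-to-end, here's a quick self-contained sketch [trimmed right down; the rows/cols enum I've exposed on BaseData is only there so the demo can print something, it isn't part of the real class]:
#include <cstdio>

enum MajorOrder
{
  ROW_MAJOR=0,
  COL_MAJOR=1
};

template
< const unsigned R
, const unsigned C
, typename T
, MajorOrder OrderType
>
class BaseData
{
  public:
    enum { rows=R, cols=C }; // demo-only: expose the deduced shape
  protected:
    union{
      T m[R*C];
      T mm[R][C];
    };
};

template
< const unsigned D
, typename T
, MajorOrder OrderType
>
class VectorBase
: public BaseData
< (D*OrderType)+!OrderType
, D/((D*OrderType)+!OrderType)
, T
, OrderType >{};

int main()
{
  // the compiler works the shape out from the order type alone
  typedef VectorBase<3, float, ROW_MAJOR> RowVec3; // ends up as a 1x3 BaseData
  typedef VectorBase<3, float, COL_MAJOR> ColVec3; // ends up as a 3x1 BaseData
  std::printf("row vector: %ux%u\n", unsigned(RowVec3::rows), unsigned(RowVec3::cols));
  std::printf("col vector: %ux%u\n", unsigned(ColVec3::rows), unsigned(ColVec3::cols));
  return 0;
}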





Traits as layers of indirection

25 07 2010

“We can solve any problem by introducing an extra layer of indirection” Quoted by Butler Lampson and attributed to David Wheeler.

This quote is true for nearly, nearly all problems I suspect; unless of course we have code with so many layers of indirection that it's just, um… impossible to maintain. I should probably emphasize here that I do mean indirection rather than abstraction, which are two different concepts.

My idea of traits is that of meta-information encoded within a class and used to describe the nature [properties] of the instanced attributes. And because that sentence is filled with too many near-synonymous words, I'm going to show some code, which will hopefully clarify my definitions of Properties, Attributes and Traits.
struct Vector3
{
  /*traits*/
  typedef float value_type;
  enum { size=3 };
  /*functions*/
  value_type& x();
  value_type& y();
  value_type& z();
  Vector3(value_type x, value_type y, value_type z);
  /*data*/
  value_type m[size];
};

Simple; but for what purpose? Isn't all that information available to us just by looking at the header file? Surely more typing is bad for the soul?

A little bit of typing here can save us more typing later, but I’ll get back to that. For now let’s also make a simple function which returns the sum of two vectors.
Vector3 operator+(const Vector3& lhs, const Vector3& rhs)
{
  return Vector3(lhs.x()+rhs.x(), lhs.y()+rhs.y(), lhs.z()+rhs.z());
}

Easy, straightforward and readable [also a bit of a waste of registers, but I'm not going to optimize anything unless I have to].

Easy, straightforward and readable. And not extensible at all. That is to say, I now want to add a Vector2, Vector4 and, just for the sheer joy of it, Vector6. That also means I would now have to write 3 other functions to sum those classes. Code bloat, anyone?

It’s obvious that the sum function has to have a loop which counts the elements and adds them, but how are we to push all of those classes into the same function and still expect that to work?

Duh, obviously, inheritance. If all vectors inherit from the same class [let's call it VectorBase] then we can treat them all as the base class and work on that, right?

Wrong. Let's introduce Matrix2, Matrix3, Matrix4, Quaternion, Color and some other classes which are different but are still summed the same way; and if you think that one giant base class is the solution, then we have a different understanding of OOP.

Inheritance is a powerful tool but I'd rather not use it unless I have to. Templates to the rescue [templates are of course another powerful tool, and as with all power, please wield with care!].
template<typename T>
T operator+(const T& lhs, const T& rhs)
{
  T result;
  for (int i=0; i!=T::size; ++i) { result.m[i] = lhs.m[i] + rhs.m[i]; }
  return result;
}

This piece of code is almost, but not quite, what I had in mind; the reason being the direct access into T [the type] in order to extract the size trait. It might look like a good idea, but consider what happens when I want to treat an object as a Vector/Matrix/Whatever using this function, but that object already has a size member which has nothing to do with this trait; or worse, what if it doesn't have a size trait at all…?

Here I introduce my next layer of indirection; the trait class.
template<typename T>
struct SizeTrait
{
  enum { size=T::size };
};

And for our new operator+:
template<typename T>
T operator+(const T& lhs, const T& rhs)
{
  T result;
  for (int i=0; i!=SizeTrait<T>::size; ++i) { result.m[i] = lhs.m[i] + rhs.m[i]; }
  return result;
}

Is there a difference? It seems like nothing has changed other than adding more complexity. Not true; although the SizeTrait class still reaches into T [the type] to extract its size trait, we can now specialize the trait in case another class is needed in the algorithm but doesn't have a size trait of its own. Like this:
template<>
struct SizeTrait<Matrix4>
{
  enum { size=Matrix4::rows*Matrix4::cols };
};

And voila, instant size. This of course is a very simple example but the method scales well to more complex objects [check out the STL’s trait classes].
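
To tie it all together, here's a minimal self-contained sketch of the whole mechanism [this Matrix4 is stripped down to just the bits the trait needs, so treat it as a stand-in rather than the real class]:
#include <cstdio>

struct Vector3
{
  enum { size=3 };   // the size trait lives inside the class itself
  float m[3];
};

struct Matrix4       // no size member here at all
{
  enum { rows=4, cols=4 };
  float m[16];
};

// primary trait: pull size straight out of the type
template<typename T>
struct SizeTrait
{
  enum { size=T::size };
};

// specialization: teach the trait about Matrix4 without touching Matrix4
template<>
struct SizeTrait<Matrix4>
{
  enum { size=Matrix4::rows*Matrix4::cols };
};

// one sum for everything; it only ever talks to the trait
template<typename T>
T operator+(const T& lhs, const T& rhs)
{
  T result;
  for (int i=0; i!=SizeTrait<T>::size; ++i) { result.m[i] = lhs.m[i] + rhs.m[i]; }
  return result;
}

int main()
{
  Vector3 a = { {1.0f, 2.0f, 3.0f} };
  Vector3 b = { {4.0f, 5.0f, 6.0f} };
  Vector3 c = a + b;
  std::printf("%.1f %.1f %.1f\n", c.m[0], c.m[1], c.m[2]);

  Matrix4 m1 = { {0.0f} };
  Matrix4 m2 = { {0.0f} };
  Matrix4 m3 = m1 + m2;  // compiles only because SizeTrait<Matrix4> is specialized
  (void)m3;
  return 0;
}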

Greensleeves was my heart of gold





Cookies & Maths

21 07 2010

Finished the Vectors side of the library today. Mostly, anyway; I've left it somewhat unoptimized for the moment, and ran some preliminary tests which told me that premature optimization is the root of all evil and that the vectors are performing within an acceptable performance range [in theory].

I’ve used templates quite heavily throughout the vector functions themselves, so that the functions would work on any object which inherits from the vector base. But maybe I’m jumping the gun here. Let me take a step back and just walk down the path of my concepts and prototypes.

One of the things I found the hardest to approach was the basic design of an extensible Vector class. I know that the most used objects are Vector2, Vector3 and Vector4, but there are also uses for Vector6 and Vector7 and having to rewrite every single function for every single object just struck me as, well, not something I’d endorse. Enter templates…

Now, making a template Vector class for all dimensions and for some value type is quite easy:
template<const unsigned Dimension, typename ValueType>
class Vector
{
  public:
    typedef ValueType value_type;
    enum { size=Dimension };
    /*operators*/
    /*Functions*/
  private:
    value_type m[size];
};

On the surface it looks good: easy to use, easy to define as many types of Vectors as I need. But there's a little, and not so little, annoying aspect to this class: there's no way to provide it with a custom constructor for different sizes, and there's no way to provide read/write data functions [or references, if that tickles yer fancy].

The other point I'd like to make about this class, before moving on, is the use of the enum and the typedef. Those of you who read this thing [other than me] and have had experience with templates and/or meta-programming will quickly recognize these as defining the traits of the class. Traits are quite important to any meta-program, as they allow us to query information about the class/object at compile time [and runtime of course] and are a central feature of writing proper meta-functions which work on objects that have these traits. I'll write more on this subject in the next post.

Back to my Vector.

My first idea was to extract the data itself out of the Vector object and into what I've termed the DataDescriptor, then inherit from that object when constructing the Vector template. The code would look like this:
template<typename DataDescriptor>
class Vector : public DataDescriptor
{
  public:
    typedef DataDescriptor data;
    /*operators*/
    /*functions*/
};

A data descriptor, for example, could be:
class VectorXY
{
  public:
    enum { size=2 };
    typedef float value_type;
    value_type& x() { return m[0]; }
    value_type& y() { return m[1]; }
    VectorXY(value_type x, value_type y);
  protected:
    value_type m[size];
};

Looks good, and supplies everything I need, right? Not quite. Having the main Vector class inherit from the data descriptor forced me to overload the Vector constructor to allow the usage of the DataDescriptor's custom constructor, which led to code that looked like this [a sketch of that constructor follows below]:
typedef Storm::Vector<VectorXY> vector2;
vector2 vec1(vector2::data(1.0f, 1.0f));
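
For the curious, the extra constructor itself is nothing clever; it ends up looking roughly like this sketch [just the shape of the plumbing, not the real library code]:
template<typename DataDescriptor>
class Vector : public DataDescriptor
{
  public:
    typedef DataDescriptor data;
    // the forwarding constructor: hand a ready-made descriptor to the base class
    explicit Vector(const data& d) : DataDescriptor(d) {}
    /*operators*/
    /*functions*/
};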

Not too bad, but still annoying. It also forced me to either provide a direct accessor for the underlying data of the DataDescriptor so that the vector functions could access it, or scope the underlying data as protected [as in the example]. I’m a great believer that data should either be public or private, and it shouldn’t be public. Any data at the protected scope is essentially half a step away from being public and I’d rather avoid using it altogether.

To conclude, it's clear that while this approach is more extensible than having a separate class for each vector type with its own specialized functions, it is still not quite there. Having the DataDescriptor as the base class still involves writing far more code than is needed, and also forces some not-so-desirable conventions on the DataDescriptor's writer. But what other alternatives did I find?

That’s an excellent question. But it’s 3:49 in the morning and the dishes aren’t done [again] so the answer will have to wait until my next post [which should be either tomorrow night, or Friday night].

We are the road crew





Vector access notation

18 07 2010

I wish I could have come up with a quirky and witty title for this post about vector math, but I just couldn't.

I've managed to keep up with a post every 3-4 days, which makes me happy. I've looked over the posts and most of them are fairly verbose, I would say, with very little code. Let's change that, shall we?

I’ve been retouching my math library… Thinking of some better ways of doing things, and generally tidying up and trying to generalize/organize stuff into templates. May not be your cup of Tea, but hey, I drink Coffee right?

The first little thought that came to mind was the fact that I was using functions as accessors to the Vector's underlying values array; now, I'm not particular about having these as functions, but I know other people who get annoyed at typing:
vec1.x() = 10.0f;

I admit, it does kind of look weird [although I got used to it]. So let's change that. How? References. A reference is a constant synonym for an object of some type [user-defined or built-in]; it isn't a pointer, it's a synonym which can only be bound to an object ONCE, when it's initialized. That's an important little point that took me a while to understand. So our new Vector class now looks like:
class Vector2
{
  public:
    float& x;
    float& y;
    inline Vector2(): x(m[0]), y(m[1]) {}
  private:
    float m[2];
};

And voila, that’s it, just type up:
vec1.x = 1.0f;

And have fun!

That’ll be it for tonight, short and with some code; next on the agenda, defining traits for the math objects and generalizing the math expressions [functions] and parameters…

I fly a starship across the Universe divide; And when I reach the other side; I’ll find a place to rest my spirit if I can; Perhaps I may become a highwayman again





Test Driving my Mage

15 07 2010

Mage being my favorite acronym for the moment: Math And Geometry Engine. Makes me laugh, anyway…

I've been thinking a lot about TDD [and writing a lot about it as well] and have noticed I've already gone through several stages of coding practices using it. The first of these was simply to write as many tests as possible, for every class, function and stuffed toy I had lying around. I even tried to convince my wife to test-drive her social science essays [it didn't work out; I guess assuming the essay is complete, writing a test to make sure and then trying to fill it in doesn't really work, but I digress]. In my readings of blogs I've come across Noel Llopis' excellent GamesFromWithin; in it he writes a lot about TDD and game development, and one thing that especially caught my eye was his recommendation against test-driving a math library. I wasn't sure why, and after speaking to another friend decided that if it doesn't help, at least it won't harm… It doesn't harm at all, but trying to test-drive it showed, I think, a distinct lack of understanding of the reason to test-drive anything.

In my opinion, and correct me if I'm wrong, test-driving should be used when the starting point is defined, and you know where you want to end up – but you don't know How. This means, in my book, that TDD should mainly be applied in places where functionality is not well defined, or not defined at all for that matter. Math code, vectors, matrices and their ilk have behaviour which is well defined; there's simply no need to pretend otherwise. Test-driving objects such as a custom MovieList makes sense: we know what it eventually needs to have, but not how to achieve it, so we make assumptions and test-drive it; anything else is superfluous.

Keeping in mind all that, I would like to add that I still think testing the library is very important. Making sure that we've achieved the correct precision levels and have properly implemented all the code is imperative to being able to optimize/refactor later without worrying about accidentally adding the 3rd component rather than the 2nd.

Well, that’s it for tonight, I go back to my templates, my slowly evolving codebase and I still need to do the dishes!

One for sorrow, two for joy, three for boys and four for girls, five for silver, six for gold; seven for a secret never to be told…





Do I need a license to test-drive?

13 07 2010

I’ve spent the last few days writing tests then erasing them then writing them again. I’ve started, restarted and restarted so many times I could probably type most of the classes with my eyes closed. I should be frustrated by it all, I really should [to put it into perspective I think I wrote about 500 lines of code over the last couple of days and not one of them has been kept in the project]. But I’m not. Why?

I love learning. I really mean it, there is nothing I like more than doing something, getting it wrong then doing it again only better and really understanding the logic behind it.

I love coding. I really really do, I sometimes can write code for hours before even noticing that I’m hungry [many a-lunch do I eat cold].

I love coffee. I just, you know, love it. And nothing offers a better opportunity to slowly sip my coffee then programming.

But most of all I've got this itch, this little voice that tells me that understanding TDD, even just the bare minimum at the moment, is of immense value. So I keep at it, throwing myself at the wall, trying over and over again. And I think that I've slowly come to understand some aspects of it and the reason I've been missing the mark: assumption.

Never assume, I've been told, and I have followed that in life. Make sure, ask, document and design; never assume. Good advice, but not for TDD, the way I see it anyway. I found it absolutely HARD to make assumptions about my code, to use/test it in advance. I kept falling back into writing the classes first, writing the bare functionality that I KNEW had to be there, and then writing the tests. Each and every time I over-designed the system and wrote the objects in advance I ended up the same way: changing the tests to match the code and getting locked up in complexity. It wasn't TDD, and it didn't feel right.

3 days and 500 lines of code later [and who knows how many deleted header/source files] I've come to the conclusion that I should just bite the bullet and write the tests first. Write the #include in the test file, use namespaces without having them in place, declare and define objects which do not exist, and just ignore the annoying little squiggles Visual Studio uses to tell me that something isn't going to compile. Who cares; postpone writing code until you absolutely, positively have to.

Writing tests is still hard; there’s no magic to replace experience and I just need more of it [experience not magic]. But test-driving feels better now, it flows better and it shapes my code rather than the other way around. There’s still a long way to go with this project [which only started, luckily] but as long as I get to learn, code and drink coffee I guess I’ll be alright.

“Dreams aren’t broken here, they’re just walking with a limp”





Rainbows and Lollipops

9 07 2010

I’ve done a wonderful and terrifying new thing today… I’ve changed the color scheme on my IDE…

Wow, that's a pretty strange thing to be excited over, but I am. I haven't actually done anything to the layout of my IDE in over 3 years now, with dimming the green comments somewhat being the only exception. Changing the entire scheme is hard. It's also rewarding. I always assumed that the plain functional, normal, document-like colors were quite sufficient for me and my code. I was wrong.

From a health point of view the white background was always far too bright, especially at night [my normal developing hours], and I always found myself playing with the screen's brightness, twiddling it to manage, if not comfortable levels, then at least the realm of "please, please, don't burn me anymore". That was the first thing that went: from white to a deep grey [I didn't like pure black; it swallows too much of the font color range].

For the rest of the scheme I tried to choose low, cold colors, as I found that bright colors seem to "sink" into the grey and disappear. My comments are still green, my keywords are a nice dull ice-blue and the preprocessor directives became deep purple. All in all I'm happy: it's easy for me to read, it doesn't strain my eyes and, more than that, it gives me an even greater sense of ownership/control over my tools.

It also has the benefit of kinda looking like a roguelike, so yeah, I’m happy. Maybe I should post a picture.