object.h is dead, long live object.h!

Ugh, sorry for the lack of updates, I feel like I’ve been clawing my way through a giant pile of manure and have barely gotten my head out into the air.

If you recall, my current task on SpyParty has been to get the folks at the party to be able to pick things up, like books, magazines, martinis, cigarettes, top secret plans for a nuclear device, etc.  Well, steps 3 and 4 on that task were the following:

3.  Add non-character dynamic items as a concept to the code.
4.  Add a simple attachment system to the character AI and rendering code.

Oh boy.  These look pretty simple, but since I’ve been 100% focused on getting the gameplay prototyped, the actual game code had gotten a little, shall we say, funky smelling.  It was never very clean, having started from the Indie Game Jam 3 engine source code, which was not in great shape itself, but my singular focus on getting gameplay in without regard to how I got it in has made matters worse.

It became clear that I needed to clean things up before I could move forward.  My friend and executive producer at Maxis, Lucy Bradshaw, used to bristle whenever a programmer would mention the word “refactoring” because it had a perfect trifecta of badness¹, but it was time.  I couldn’t figure out how to attach an object to another object in the old system without hacks even more heinous than I was willing to attempt.

This kind of task gets to the heart of the “game object system” issue.  I’ve got a good powerpoint by Doug Church on my site on the topic, and I’m going to write more about my new system soon, but I wanted to at least post something to indicate that I’m not dead.

Briefly, I chose a fairly simple component-based object system architecture.  There are really two basic options when you’re doing object systems:  inheritance-based and component-based.  I’ll have much more to say about this later after I’ve used it for a bit and worked out the kinks, but for my usage I thought components were the way to go.  These terms are pretty loose; they don’t necessarily directly correspond to C++ (or whatever) definitions when you use them in this general way.
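
To give a flavor of what I mean, here’s a tiny sketch of the component-based shape of things; the names are made up for illustration, not pulled from the actual SpyParty source:

    // Illustrative only; these names are made up, not SpyParty's actual code.
    #include <vector>

    struct Component {
        virtual ~Component() {}
        virtual void Update(float dt) = 0;
    };

    struct RenderComponent : public Component {
        virtual void Update(float dt) { /* submit the mesh for drawing */ }
    };

    struct GameObject;

    struct AttachmentComponent : public Component {
        GameObject *parent;  // e.g. a martini attached to a character's hand
        AttachmentComponent() : parent(0) {}
        virtual void Update(float dt) { /* copy the parent's transform */ }
    };

    // An object is just a bag of components rather than a node in a deep
    // inheritance tree, so "a book you can pick up" is composition, not a
    // new subclass.
    struct GameObject {
        std::vector<Component *> components;
        void Update(float dt) {
            for (unsigned i = 0; i < components.size(); ++i)
                components[i]->Update(dt);
        }
    };

The inheritance-based alternative gets you the same book by deriving it down a class hierarchy, which is exactly where attaching one object to another starts demanding the kinds of hacks I mentioned above.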

The other thing I am finally starting to test out is my new namespace-centric coding style I’ve been simmering on for the past 5 years.  I’ll write more about that soon as well.  I’ve been styleless for a long time, after becoming dissatisfied with my previous programming style and not seeing any others I really liked.
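
I’ll save the details for that future post, but purely to give a taste of what “namespace-centric” might mean, here’s a throwaway sketch (don’t hold me to any of it):

    // A throwaway sketch of the general flavor only; the real style
    // writeup will have to wait for its own post.
    namespace character {

    struct Character {
        float position[3];
        int   held_item;  // index of an attached object; names hypothetical
    };

    void Update(Character &c, float dt);
    void Attach(Character &c, int item_index);

    }  // namespace character

    // call sites read character::Update(bob, dt) rather than bob.Update(dt)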

The final thing I wanted to mention was that even after 20 years of programming, I still need constant reminders to figure out ways to keep changes small.  I started writing the new object system and porting all the code over to it in one big change, and after bogging down in that for days and days, I finally rolled back to the old object system and figured out a way to incrementally change over.  It’s more hacked code in the interim with the two object systems fighting for attention, but having a compiling and working piece of software you can test incrementally is so important it’s hard to overstate.  I had two object.h’s in the project for a few days (the old one renamed to _object.h), but it let me move things over piecemeal and keep testing, and I was finally able to delete the old object.h last night.
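
For the curious, the interim state amounted to something like this caricature (the type names are approximate, not the real headers’ contents):

    // Both object systems alive at once during the port.
    #include "_object.h"  // old system, renamed so the new header could take its name
    #include "object.h"   // new component-based system

    // The character type dragged both representations around for a while:
    struct Character {
        OldObject *old_object;  // legacy side, still feeding pathing and animation
        Object    *object;      // new side, picking up responsibilities piecemeal
    };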

Also, source code control is your friend and it saved me when I realized I’d bitten off more than I could chew in one bite during the refactor.  I currently use Subversion and have for years, but have been thinking about switching to Mercurial or Bazaar.  I wish these distributed systems dealt with large binary files like those found in games better, though.

  1. 1) refactoring takes a long time, 2) it adds bugs to the code, and 3) even when it works perfectly, it has no visible improvement on the game.

22 Comments

  1. jordy says:

    Thanks for the update, I can’t help myself but check your page constantly, even though I know the game is still so far away, and it’s always nice to read some updates. Although I don’t really understand most of it, I hope you feel you are on the right track now.

    • checker says:

      Yeah, I’m never sure how much technical detail to put into posts. Oh well, I figure people can skip stuff if they don’t care about it. :)

  2. Rinaldo says:

    you should at least track your code in git, svn is a dinosaur.

  3. Tiibiidii says:

    the other day i stumbled on this page:

    http://blog.extracheese.org/2010/05/why-i-switched-to-git-from-mercurial.html

    one of the problems he laments is that if you rename a 20MB directory, in mercurial you’ll get a 40MB repository…

    so maybe git or bzr would indeed be better choices for you

    (for my own little projects, without any big files, i’m used to bzr and i love the way it manages renames… that is: it manages them… after trying git i was underwhelmed to discover that for it a rename is just a delete+add… on the other hand after a git-gc, git seems much more efficient in managing space)

    • checker says:

      Thanks for the link. Apparently bzr switched to a similar repo format as git last year. I worry about git and win32, which is my main development platform right now. The main problem with large binary files and dvcs is that every time you check in a file with high entropy you basically get a copy of the file, which is fine in a centralized situation where the repository grows on the server (so for p4 you have only your local copy, for svn you have 2x locally), but if everybody gets an ever-growing local repository, that’s a huge problem. Plus, most dvcs’s don’t allow partial checkouts, it seems, so you can’t even just decide not to have those directories locally. And, for unmergable binary asset files, svn and p4 have the concept of exclusive checkouts, but there’s no dvcs that has something similar (it’s not even clear how that would work in the dvcs model).
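
      To put rough, made-up numbers on it: say a game has 500MB of high-entropy textures and audio, and each file gets touched ten times over the life of the project. With essentially no delta compression on that data, that’s on the order of 5GB of history in every single developer’s clone, versus 5GB on one server (plus a working copy or two) in the centralized case.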

      Anyway, I’m still back-burner evaluating and trying to figure out what the right thing is. I really would like to switch away from svn, but I keep finding issues with the alternatives.

    • Jeff says:

      I’ve been using Mercurial for a while and love it, but I’ve also run into the “asset” problem of distributed version control. There are a few hacky solutions I occasionally use to try to get around the problem, and also some potential long-term solutions in development.

      There are three solutions currently in development from the hg developers to try to solve the problem:
      bfiles: http://mercurial.selenic.com/wiki/BfilesExtension
      Bigfiles: http://mercurial.selenic.com/wiki/BigfilesExtension
      and External Binaries: http://wiki.netbeans.org/HgExternalBinaries

      I think bfiles is the most interesting. It uses a web server to hold big files and just uses your local repositories and clones to track which of those files belong to which revisions. Since hg is based on SHA hashes, you’re assured that even if you’re branching assets they won’t conflict. From there, it only ever downloads the copy it needs for the particular revision you’re updating to. The extension needs work (e.g. using an actual source control repository as its back end rather than just the file system, and better command integration with hg) but it’s a potential solution to the problem that could easily be developed on top of.
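
      As I understand it, the core trick is that the repository itself only versions a tiny stand-in per big file, keyed by content hash; something like this sketch of the update step (my own illustration, not actual bfiles code):

          // My own illustration of the idea, not actual bfiles code.
          #include <string>

          struct BigFileStandin {
              std::string sha1;  // content hash, the only thing versioned in the repo
              std::string path;  // where the real bytes belong in the working tree
          };

          // Hypothetical helpers standing in for the local cache and blob server:
          bool LocalCacheHas(const std::string &sha1);
          void DownloadFromBlobServer(const std::string &sha1);
          void CopyIntoWorkingTree(const std::string &sha1, const std::string &path);

          void MaterializeOnUpdate(const BigFileStandin &f) {
              if (!LocalCacheHas(f.sha1))
                  DownloadFromBlobServer(f.sha1);  // fetch only what this revision needs
              CopyIntoWorkingTree(f.sha1, f.path);
          }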

      At my current job, the developers use Subversion, and I’ve found hgsvn to be super useful. Since we keep source and assets in separate folders, I check the folders out separately. Then, I can use svn for all asset modification and hgsvn for all source modification. It means two checkins / changelists when I need to check in code and assets, but it does allow me to clone temporary copies in hg without impacting other developers. I do find it works, at least as a stop-gap measure until someone develops a good asset repository for use with the bfiles extension.

    • checker says:

      Yeah, Mercurial seems to have the most active development towards solving this problem, but the directions seem pretty hacky to me. They’re also missing the concept of a lock, which is important so two artists don’t modify the same Photoshop file and then have to figure out how to “merge” them.

      What you really want is a hybrid dvcs. You want to be able to mark some files as centralized/unmergable, and some as distributed. You need to be able to mix these arbitrarily in a source tree, and not have to run two different commands. You want to be able to have svn-style local revert and whatnot, but you don’t need/want to have full history for these files, or better yet, you want to set how much history you keep for them. And, they need to be lockable, so an artist can check one out, modify it, and another artist is prevented from checking out the same file while this is happening. If you’re offline, it doesn’t let you check it out by default, but you can override it, and on your head be it (you have to revert if you reconnect and it’s been locked, say).
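
      The offline locking rule I have in mind is about this simple (a sketch, obviously, not any real system’s API):

          // Sketch of the offline locking rule described above; illustrative only.
          enum LockResult {
              Granted,
              Denied,                  // someone else holds the lock
              GrantedOfflineOverride   // on your head be it: revert if it turns
                                       // out someone held the lock after all
          };

          LockResult TryLock(bool online, bool locked_on_server, bool force_offline) {
              if (online)
                  return locked_on_server ? Denied : Granted;
              if (force_offline)
                  return GrantedOfflineOverride;
              return Denied;  // offline default: you don't get the lock
          }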

      I don’t know any dvcs that is even contemplating real support for mixed code/content development like this, but it sure would be awesome.

    • Tim Ambrogi says:

      I myself have been lusting after a hybrid DVCS for game development, though I suspect there are hidden complexities that will only reveal themselves when someone actually gets in there and starts implementing it. Right now I split my work between Hg and SVN, which results in a loss of atomicity for any checkins that straddle the line (such as changes to my asset data formats).

      On the subject of VCS, one feature I would like out of a centralized lockable asset repo is a mobile-accessible interface for locking files, so that the minimum technology required to lock an asset is a smart phone with signal. Other than trans-oceanic travel, I’m hard-pressed to contrive many use cases where I would be unable to lock my files. It would be kludgy, but also very useful, I suspect.

    • checker says:

      A mobile interface is an interesting idea, but I think people would forget to lock the file a lot. In other words, the flow would be “in photoshop, want to edit this image, oh right, it’s read only, go to sccs, oh right, I’m offline, okay, now I have to find the file via my mobile browser to unlock it, ugh, a pain”. Hmm, I wonder if the unlock command in offline mode could generate some token or one of those qr code things that would allow you to not have to find the file or type anything on the phone…

    • Jeff says:

      I haven’t used it yet (it’s on my list of software to evaluate) but the Plastic SCM folks report having “solved the problem” by allowing their central SCM to operate in distributed mode.

      I can’t say that it’s good or does what you expect, but it might be worth looking into.

    • checker says:

      I looked at Plastic SCM a bit a while ago, but their comparison grid thing doesn’t compare against any of the modern OSS DVCS’s (TLA FTW), and their page that mentions them starts with a bit of FUD, so I kinda soured on it. Also, the fact that it’s not free kinda sucks.

      Where do you see them claiming to have solved the problem we’re talking about, though? I couldn’t see anything about it in my brief look.

    • Jeff says:

      Sorry for the late response…

      This was in conversations with them, so I can’t confirm it. It’s buried pretty deep, even in their own documentation; I had to get one of their engineers to explain it to me.

      From what I gather (though again, haven’t tested) Plastic works as a Perforce replacement until you tell your local client to work in a distributed mode. It then grabs everything and allows you to work from there, I assume similarly to how hgsvn works.

      Now that I’ve been using hgsvn for a few weeks, I will say it works fairly well, but not without its faults. I find I have to collapse my change sets at every commit, and force edit commit messages, because it’s not completely intuitive how it will commit things. Just thought you should know.

    • checker says:

      Sounds like Plastic would have the same local-copy issues, sadly.

  4. It’s awesome that you’re going to talk in more detail about the code side of things. It’s always interesting to see how other people are handling their code architecture. I was going to be cheeky and ask for more insight into the more technical side of Spy Party but I guess I don’t have to anymore. I’ll save my one wish for another day :D

  5. I’m with you on the big refactor issue — it is really really hard to do bite-size chunks. I can think of at least three examples for me in the last year where I had to roll back and start from scratch because I couldn’t find the one straw that broke the camel’s back. For me, at least, the problem is one of chaining — I will happily follow a chain of small changes as far as it leads because each individual change is pretty small, so there can’t be any risk with that, right? Soon I’ve ended up changing 90% of the internals of a system and somewhere in there is a small mistake that I simply can’t find.

    I think the problem is how we think of the tasks. It’s refactoring, right? So it ends up being all fair game. I’ve found myself switching to thinking of my refactors in the narrowest possible ways as a result. (“I am refactoring this method in this and its derived classes to be const.”) Having Martin Fowler’s Refactoring book on hand is helpful, since it gives lots of little recipes for refactors. It’s also worth bearing in mind his dictum that you write test cases and make sure they pass both before and after your change; that seems tough to do in game development, but it helps me keep the scope of my changes small.
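
    As a toy example of how narrow “narrow” can be, that const refactor plus its dumb test might amount to just this:

        // Toy example of a narrow refactor: one accessor made const, in the
        // base class and the one derived class that overrides it.
        #include <cassert>

        struct Entity {
            float health;
            Entity() : health(100.0f) {}
            virtual ~Entity() {}
            virtual float Health() const { return health; }  // "const" is the whole change
        };

        struct Guard : public Entity {
            float armor;
            Guard() : armor(1.5f) {}
            virtual float Health() const { return health * armor; }
        };

        // The test can be as dumb as an assert that passes before and after:
        int main() {
            Guard g;
            assert(g.Health() == 150.0f);
            return 0;
        }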

    With your change over to components, maybe you need to make a sane interface but only cut one bit of functionality over into a component at a time. Or maybe you need to bundle up a bit of functionality into a smaller struct or class to make it removable from your bigger object, etc.

    • checker says:

      Yeah, that’s basically what I did once I rolled back. I first got the static objects with no gameplay moved over, then the static objects with only “you can look at these”, then the ones with interactions, then I did the pathing system, which meant the character type had both new and old object systems in it at the same time, then the animation system, and then I was able to delete the old object system. I’m still not done, but at least the old code is gone, so now it’s a case of code going away as opposed to being piled on. :)

  6. Lee says:

    Here’s my contrived metaphor when it comes to explaining refactoring to an executive producer :o)

    Imagine you’re a plumber and your job is to sit inside a room routing pipes from one location on a wall to another. Initially you can route pipes wherever without a problem. After a while you have to route pipes around other pipes in order to connect them. At some point the act of adding one more pipe becomes very tortuous (and inefficient). If you don’t need to add more pipes then you’re fine. But if you do, then a refactor is beneficial in order to reduce the cost of installing future pipes. Of course, the final additional cost is the cost of the refactor plus the cost to route the future pipes. So the trick is predicting how many more additional pipes will likely be needed before the job’s done (game ships). If you don’t know, then it might be conservative to do a partial refactor, but if you end up needing more pipes than you anticipated and have to do yet another refactor, then it will likely cost more than if you’d done a bigger one initially.

    It’s a difficult problem that relies heavily on experience and is always tough to sell to non-programmers :o)

    • checker says:

      Oh, she knows exactly what a refactor is and why it’s desirable, but she’s got a lot of hard experience that points towards programmers underestimating the cost of those three points in the footnote, and I have to say, if I’m being honest with myself, I have very little data that refutes her opinion. :)

  7. Tom Bui says:

    I would counter and say I have a lot of experience that points towards producers/project managers/development directors underestimating how long it actually takes to get anything done, the cost of sticking with an existing infrastructure, or whether all the work even fits within the schedule. Also, there are lots of exceptions to #3. It’s just that we don’t often say, “Look, we wouldn’t have been able to do this at all, or nearly as fast, without the refactoring.”

    • checker says:

      Oh, definitely, I’m not saying never to refactor, just that programmers (including me) do tend to underestimate the negatives. Refactoring tends to have a high cost, so you need to be honest with yourself about whether you actually need it.

  8. lqtink says:

    I CAN’T WAIT TO PLAY THIS GAME!!!

  9. Albpoolshark says:

    ok this game looks amazing and i’ve been following your work for a while now. i can’t wait to buy this and play it with my friends

I have temporarily disabled blog comments due to spammers, come join us on the SpyParty Discord if you have questions or comments!