Pages

Sunday, December 12, 2010

Book Review: Peopleware

The book Peopleware: Productive Projects and Teams (2nd edition) by Tom DeMarco and Timothy Lister should be required reading for anyone in charge of or working with software projects. So much of it is plain good sense, and yet so much of it will make you shake your head and ask "why are we still doing it like this?" If you feel like you can't get any real work done between 9 and 5, read this book.

Two of the major concepts in the book are teams "jelling" and programmer "flow".

A team "jelling", or coming together, is vitally important and yet nearly impossible to plan for. DeMarco and Lister note that you can create the conditions for a team to jell, but whether it actually happens depends on the people involved. There are, however, elements that amount to "teamicide" and can kill a team: defensive management, bureaucracy, physical separation, fragmentation of time, quality reduction of the product, phony deadlines, and clique control.

Some of their recommendations: seat people together. Groups tend to go quiet at the same time, and you want them to interact and trust each other. If team members do not sit together, they are more likely to interact with people on other projects and become friends with them instead of the people on their own team. Trust your people. Get the right people and turn them loose; hiring the right people is the most important thing. Develop a feeling of permanence: train people to move up through the ranks, and make it a place people are expected to stay, not just pass through.

The authors use the imagery of a team as a choir (or an orchestra), with everyone playing their part in a complex symphony of sound and harmony, which differs from the traditional comparison to a sports team, where the focus often falls only on the star players. The book also talks about "free electrons", people whose judgement and work ethic make the decisions they make better than those managing them, which again comes down to trusting your team. If you are a bad manager (and aware of your shortcomings), the best thing you can do is hire the best people you can find, get out of their way, and avoid changing too much.

You want your team to work together, to not compete against each other and to be happy and productive... how that happens is always complex, because it depends on the individuals in your team, but making it work is your job as a leader.

The other major element is programmer "flow", and the idea that good work is like sleep: it is best done in long periods of quiet with minimal interruptions. Try to minimize interrupting people; programmers (and all knowledge workers) need quiet time to think and concentrate. Even with music or white noise a programmer is not using their full mental capacity: they can still perform logical functions, but may not be able to think as creatively. The book also looks at noise itself: how lots of people packed into cubicles make a lot of noise (the smaller the space, the more noise they make), and how a PA system is almost insulting to the occupants, disrupting the masses in order to find one person.

The book talks a bit about the Capability Maturity Model (CMM) and how so many companies are in the lowest levels (often with no sign of improvement).

Another part of the book that struck a chord with me is the huge cost of turnover. Even if the chair stays warm with a new body coming in, the new hire is less than useless on the first day, and takes other people's time to get up to speed. If you are lucky it will take less than two years for them to replace a senior person, but there is still no guarantee they will "jell" with the team, be as reliable as the previous person, or even stay long enough to reach the level the previous employee had managed. The cost of replacing people can be enormous: if it takes six months to get up to speed and be productive on a project that has seven months left, you have paid someone seven months' worth of pay for one month of good work. A quote from the book suggests it can be even higher than that: "One of our clients, a builder of network protocol analyzers and packet sniffers, estimates that it takes more than two years to bring a new worker up to speed (pg 207)."

The book also suggests there is a "slumbering giant" in every corporation: a collective stand against silliness, a shared load of responsibility for problems, and a never-ending focus on how to improve the workplace. Hopefully it awakens soon.

"The ultimate management sin is wasting people's time (pg 215)."

Great book, get it for your loved (or not so loved) managers for Christmas :P
Michael Hubbard
http://michaelhubbard.ca

Saturday, December 4, 2010

Book Review: Herb Schildt's C++ Programming Cookbook

Cookbook-style books are interesting in that they cover a wide range of topics in a short amount of time and space. Herb Schildt's C++ Programming Cookbook handles a number of topics including string handling, STL containers, I/O, formatting data, overloading operators (such as subscript, new and delete, and the increment and decrement operators), and the use of typeid for RTTI (runtime type information). The book has useful information, given in a clear and concise format, with code examples to help explain the details. It has been structured to make finding things easy and, as is the case with most cookbooks, works well as a reference for how to accomplish a given task (along with explanations of the steps and the logic behind each approach).
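None of the following code is from the book; as a rough sketch of the kind of material it covers, here is a small class of my own invention that overloads the subscript and pre-increment operators, plus a helper showing typeid-based RTTI:

```cpp
#include <cassert>
#include <cstddef>
#include <typeinfo>

// A tiny fixed-size tally that overloads the subscript and
// pre-increment operators -- two of the topics the cookbook covers.
class Tally {
    int counts[4] = {0, 0, 0, 0};
public:
    int& operator[](std::size_t i) { return counts[i]; } // subscript access
    Tally& operator++() {                                // bump every slot
        for (int& c : counts) ++c;
        return *this;
    }
};

// typeid gives RTTI: type_info objects can be compared at runtime.
bool same_type_as_int(const std::type_info& t) {
    return t == typeid(int);
}
```

The usual convention, also discussed in the book, is that operator overloads should behave the way the built-in operators do, so readers are not surprised.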

This book is especially interesting for beginners, as it is carefully explained and gives thorough example code for a number of tasks. For intermediate to advanced programmers there are likely a few tricks that are new or interesting too. I wouldn't say the book is targeted at beginners, but they will likely be the ones that benefit the most from this type of book. It is not the kind of book you want to learn your first basic programming techniques from, but it provides a good reference along the way.

Keep on cooking that code,
Michael Hubbard
http://michaelhubbard.ca

Monday, November 22, 2010

Book Review: Essentials of Interactive Computer Graphics

Essentials of Interactive Computer Graphics by Kelvin Sung, Peter Shirley and Steven Baer is a good book on the theory of working in and with interactive graphics. The chapters cover things like event-driven programming, model-view-controller architecture, GUI APIs and working with the graphics APIs. I especially liked how they brought in real-world examples, like dealing with 3D modeling packages (Maya) and abstracting the behaviour of game elements, as well as the sheer number of examples available.

The book is a mix of DirectX and OpenGL, but there are lots of examples that make it worthwhile. It is targeted more at beginner-to-intermediate readers, but I especially liked seeing the concepts of event-driven programming in a book about graphics. So often, I find programmers from other disciplines and backgrounds (with the exception of real-time programmers) are not as concerned with event-driven programming. In application programming, events may only be used for GUI elements, which in most cases is all that is needed. In a game, however, there are often many elements (objects colliding, proximity triggers, network sync events, etc.) generating events simultaneously, and having a good grasp of how to handle these events, and design code around them, becomes very important.
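The book's own examples are DirectX/OpenGL based; as a language-neutral illustration of the event-driven idea (my own sketch, not code from the book), here is a minimal event queue where systems subscribe handlers by event name, gameplay code posts events, and the main loop drains the queue once per frame. The event name "collision" and the int payload are invented for the example:

```cpp
#include <functional>
#include <map>
#include <queue>
#include <string>
#include <utility>
#include <vector>

// A minimal event queue: handlers are registered per event name,
// posted events are buffered, and dispatch() drains them in order.
class EventQueue {
    std::map<std::string, std::vector<std::function<void(int)>>> handlers;
    std::queue<std::pair<std::string, int>> pending;
public:
    void subscribe(const std::string& name, std::function<void(int)> fn) {
        handlers[name].push_back(std::move(fn));
    }
    void post(const std::string& name, int payload) {
        pending.push(std::make_pair(name, payload));
    }
    void dispatch() { // call once per frame
        while (!pending.empty()) {
            std::pair<std::string, int> ev = pending.front();
            pending.pop();
            for (auto& fn : handlers[ev.first]) fn(ev.second);
        }
    }
};
```

Buffering events and draining them at a fixed point in the frame keeps handler execution order predictable, which matters when many events fire in the same frame.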

Good book overall, especially for those new to the concepts (and hopefully everyone has at least heard of model-view-controller).

Best of luck,
Michael Hubbard
http://michaelhubbard.ca

Saturday, November 20, 2010

Unity 3.0 ShaderLab

Everybody loves shaders, and Unity 3.X's ShaderLab http://unity3d.com/support/documentation/Components/SL-Reference.html is interesting in what it has done (ShaderLab is similar to CgFX scripts, but unique to Unity).

Unity 3.X has updated its rendering model, and specifically changed how shaders are written. The newly structured surface shaders, through a combination of ShaderLab syntax (often pragma settings) and the Cg programming language, allow shaders to be created that work with both Unity's forward and deferred rendering lighting models. The pragma basics include the Lambert and BlinnPhong lighting models, as well as more complex bump, cubemap and emission shader properties and logic. The Unite talk I attended also mentioned a specific "gotcha" related to Unity's use of the specular color: the specular color parameter must be named _SpecColor for Unity to access the separate specular highlight (this appears to be hardcoded, and differs from the Unity 2.X standard).

As a whole, Unity still renders average scenes with minimal lights much faster in forward rendering (at least in the test scenes I have been working with). Deferred rendering will likely be useful for very specialized scenes with lots of lights, but it is nice to have options.

Unfortunately the biggest issue for large-scale projects is the incompatibility between Unity 2.X shaders and Unity 3.X. If you have written custom shaders and have exported assetBundles using those shaders, you will have to rewrite the shaders using Unity 3.X techniques and re-export all the assetBundles (hopefully you have an automated build process for your assets to help with this, but not everyone will). If you notice assetBundles coming in with bright pink colors, it means your shaders are incompatible and need to be updated (which at least makes QA on these assets much easier).

Overall, there is no turning back, and you should strive ahead with Unity 3.X and all the other great features it has. The ShaderLab language is much simpler, will allow more artists to contribute their ideas for shaders, and will greatly improve the writing and debugging of shaders in general. Still, it would have been nice if the shaders were backwards compatible and not pink :P

Hopefully you have minimal pink,
Michael Hubbard
http://michaelhubbard.ca

Sunday, November 14, 2010

Unite 10

I was at Unity's UNITE conference in Montreal, and it was both informative and fun: http://unity3d.com/unite/ UNITE is an annual conference to showcase Unity technologies and to educate the users and developers working with them. It is the largest annual Unity developer conference, and this was the tenth such conference. UNITE is often the platform Unity uses to announce new releases and features, as well as to gain feedback from and interact with Unity developers.

Unity appears to be gaining popularity with more mainstream game and interactive media companies. They have begun to announce more prominent companies, such as EA, Disney, Marvel and Nokia, developing software using Unity. Unity has carved out a niche market with its attempt to support as large a range of platforms as possible (from mobile devices to high-end consoles) and has attracted more attention because of this business model. With fiercer competition in the game and web/browser market, it is important to get additional training and knowledge directly from the source.

The conference allowed developers to have one-on-one talks with the Unity developers, as well as see demos from competing vendors and hear about alternative implementations and solutions to development problems. The networking opportunities also provided additional information about how Unity is being used.

The main keynote presentation was broken into two parts. The first was a talk by David Helgason (CEO of Unity Technologies) and Brett Seyler, who presented some current Unity stats and upcoming Unity enhancements (of course mentioning the release of Unity 3.1). The stats included the current number of Unity plugin installs at 40 million, and over one thousand iPhone games produced using Unity's iPhone license (including some top-selling games). Helgason also showed Unity's commitment to the indie market with what would become the common phrase of the conference, "democratization", which, when used by the Unity team, described a philosophy of providing more large-scale production tools and options to indie developers. The major democratization addition was the Unity Asset Store, which allows developers to buy, sell and share content through the Unity editor (similar to the Apple App Store, but embedded in the editor). The focus on the indie developer can also be seen in the announcement of Unity Union, which allows companies to enter a contractual agreement with Unity for custom licenses and options for developing Unity games on platforms not currently supported by Unity (such as Nokia phones).

The second part of the talk was by Jesse Schell, a game designer (previously a creative director at Disney's virtual-reality Imagineering studio). The focus of Schell's talk was not specific to Unity, but more on game characters and virtual characters. Schell provided a number of his predictions on the direction of game and character interaction, including a focus on: facial expression tracking; persistent databases; speech recognition; natural language understanding; emotion sensing; integrated multi-platform games; interfaces to everything; cognitive tutors; intelligent actors; and augmented reality. Schell also mentioned that he is working on 'The Mummy Online' in collaboration with Universal and Bigpoint, for launch in Winter 2010.

I learned some new things about Unity and how it is used, although much of the conference focused on high-level introductions, and since I have been using Unity since 2007 in many forms (Mac, PC, Unity source code, web and iPhone) I would have liked a more in-depth look at the low-level details.

A number of talks were focused on developing tools for Unity that more commercial engines (such as Unreal or the Hero engine) already have and maintain. It did not appear that many companies were using Unity for large scale MMOs (outside of the mention of the upcoming Mummy MMORPG) and Unity's immediate focus still appears to be indie (over large scale commercial) support.

I really enjoy Unity and feel like the company has a bright future; with Unity 3.X and its support for consoles, more and more companies will look at Unity as a potential engine. As always, it is important to use the right tool for the job, and I can't stress that enough. Unity covers a wide range of options, and I feel it is a great engine for an indie to intermediate-sized project. Unity has a great and helpful community, and you can make some fun games with it. It is always worthwhile to do your homework on any engine: look at what kinds of games have been produced with it, and make sure those are the approximate types of results you would expect. One caveat, though: Unity also deals in additional licenses for undocumented features, which may give some companies a competitive advantage, and some companies also purchase the Unity source code and develop their own plugins for features that do not exist "out of the box". There are some fun games and applications made with Unity, and I wish you the best of luck with yours.

Unite Montreal,
Michael Hubbard
http://michaelhubbard.ca

Thursday, November 4, 2010

Book Review: The Art of Concurrency

One of the books I picked up recently is The Art of Concurrency: A Thread Monkey's Guide to Writing Parallel Applications by Clay Breshears. The book is an interesting read, and Breshears is a funny writer; I especially liked the quote: "No evil threads, just threads programmed for evil". He also has some interesting examples. One especially vivid illustration of how code can interact is to think of two hands as threads and each finger as a line of code, and consider how many different ways those fingers can be intertwined when clasping your hands together.

Breshears presents threading as the future of applications, with multicore becoming more and more popular, and concurrent programming eventually becoming the norm for anything that wants to take advantage of all those additional processors.

The book approaches concurrency and threading with eight simple rules:

1. Identify Truly Independent Computations.
2. Implement Concurrency at the Highest Level Possible.
3. Plan Early for Scalability to Take Advantage of Increasing Numbers of Cores.
4. Make Use of Thread-Safe Libraries Whenever Possible.
5. Use the Right Threading Model.
6. Never Assume a Particular Order of Execution.
7. Use Thread-Local Storage Whenever Possible or Associate Locks to Specific Data.
8. Dare to Change the Algorithm for a Better Chance of Concurrency.
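As an illustration of rule 1 (my own sketch, not code from the book, which works with OpenMP, TBB and explicit thread libraries): summing the two halves of an array is a truly independent computation, so each half can run on its own thread with no shared mutable state and therefore no locks.

```cpp
#include <numeric>
#include <thread>
#include <vector>

// Each thread writes only to its own local accumulator, so the two
// computations are truly independent and need no synchronization
// beyond the final join().
long parallel_sum(const std::vector<int>& v) {
    auto mid = v.begin() + v.size() / 2;
    long lo = 0, hi = 0;
    std::thread left([&] { lo = std::accumulate(v.begin(), mid, 0L); });
    std::thread right([&] { hi = std::accumulate(mid, v.end(), 0L); });
    left.join();
    right.join();
    return lo + hi;
}
```

The same decomposition scales to more cores by splitting the range into more chunks, which is exactly what rule 3 asks you to plan for.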

There is also some talk of various threading libraries: OpenMP (implicit threading) and Intel Threading Building Blocks, as well as explicit threading such as Pthreads and Windows Threads.

The book also shows threading examples for sorting, searching and graphs, as well as some information on Intel's VTune Performance Analyzer, which sounds pretty useful for finding the hotspots that are potential candidates for parallel computing.

Lots of interesting stuff, and the book covers quite a few different areas. I especially liked the eight rules and found the overarching view of the different areas of threading interesting.

Best of luck managing your threads,
Michael Hubbard
http://michaelhubbard.ca

Monday, October 18, 2010

Coding Style and Stylecop

How important is coding style? This debate is something that really draws out the "artist" and critic in every programmer. In many ways the style of writing code is similar to writing poetry, and what makes the subject so hotly debated is that the beauty of the code, sentence or poem is in the eye of the beholder. Just like critics of prose and poetry, the serious critics of coding style come from understanding the limits and history of the form: practice reading and writing lots of code, in different languages if possible, and acknowledge how many ways there are to write the same functionality.

In the same way that experimentation in poetry and prose makes for a better understanding of the overall form, understanding different coding standards can make for better (or worse) code. With every coding standard the code will be limited, but also potentially clarified in a new way.

Organizing code one way can make code reviews easier but maintainability more difficult: code may be separated into sections, ordered in a way that causes significant scrolling, opening of multiple files and jumping through layers of code, where a different structure might minimize the bouncing around.

Coding standards for teams are especially important, and should be adhered to as strictly as possible for the overall benefit of the team. Consistency matters: being able to jump into a new class, module or section of the code "should" be as easy as jumping into a section you wrote yourself. Coding standards are an agreement between programmers to create a cohesive work; much like any collaborative effort, the final product should be a uniform work of art.

The most diplomatic way to handle this is often to adopt a tool that catches errors quickly and consistently. This also frees up code reviews to focus on more important aspects like architecture, re-usability, error checking and design patterns, since non-adherence to coding standards often ends up becoming the main focus if the code strays too far from the standard. These tools should also be part of the commit process, so that code is rejected if it does not pass the coding standards (in certain cases some of these tools may need slight adjustments, as desired, but with a focus on consistency).

For those C# Unity or XNA projects, some developers really like ReSharper (30-day trial at http://www.jetbrains.com/resharper/). For a free style-specific checker I like StyleCop: http://stylecop.codeplex.com/ There is a good amount of customization, and it embeds easily into Visual Studio. The information provided by the code checks makes it easy to understand the issue and change the code to follow the coding standards that have been set up.

For Python there is pep8 http://pypi.python.org/pypi/pep8 and pychecker http://pychecker.sourceforge.net/ as well as a few more.

In general, style and syntax checkers are becoming quite popular (especially with scripting languages that don't compile), but regardless, a good "style cop" will improve code quality and the readability of all sections of the code, and lets everyone focus on writing great code instead of what it looks like in the end.

Until next time, StyleCop: that is all.
Michael Hubbard
http://michaelhubbard.ca

Saturday, October 16, 2010

Book Review: Agile Software Engineering

Agile Software Engineering by Orit Hazzan and Yael Dubinsky works largely as an undergraduate textbook, dealing a lot with how to teach Agile methodologies in a learning environment. The book is broken up into progressively more involved steps, with each chapter ending in a summary and reflective questions. There are also breakdowns of running an agile workshop and dealing with planning, time estimation and reflection. As a whole, the book is quite useful as a beginner-to-intermediate approach to agile, and would likely work well for teachers and students, as a lot of its structure relates to setting up and teaching agile in a classroom.

One of the more unique parts of the book is how it references different aspects of agile using the HOT (Human, Organizational and Technological) perspectives. I found looking at different agile concepts through these filters to be a useful way to see how agile works on many different levels. The book is also good at giving examples regarding globalization and how organizational culture can be handled in projects that involve outsourced or contracted work, and how agile attempts to address these issues by providing additional transparency and interaction. The HOT perspectives were especially pertinent in those areas.

My favorite example, however, was in the chapter on trust, which uses elements of game theory and applies the Prisoner's Dilemma to a work environment. It is applied specifically to where developers choose to cooperate or compete with one another, resulting in three possible scenarios:

1. If both developers cooperate, they are more likely to succeed.
2. If only one developer attempts to cooperate (by learning the other's code, helping the other, etc.), the project may succeed, but the developer who chose to cooperate may not receive as much recognition (since time was spent cooperating instead of competing).
3. If both developers compete (do not help each other, do not learn each other's code), there will be no integration of efforts and the project will likely fail.
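To make the three scenarios concrete, the dilemma can be written as a tiny payoff function. The numeric payoffs below are my own illustrative values, not from the book; only their ordering matters, in the classic Prisoner's Dilemma shape.

```cpp
enum class Move { Cooperate, Compete };

// Payoff to the first developer given both moves. Competing against
// a cooperator yields the most individual recognition, but mutual
// cooperation yields the best combined outcome, and mutual
// competition is worst for everyone.
int payoff(Move mine, Move theirs) {
    if (mine == Move::Cooperate && theirs == Move::Cooperate) return 3;
    if (mine == Move::Cooperate && theirs == Move::Compete)   return 1;
    if (mine == Move::Compete   && theirs == Move::Cooperate) return 4;
    return 0; // both compete: the project likely fails
}
```

The management lesson is to change the payoffs, rewarding cooperation so that the individually tempting "compete" move stops being the rational choice.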

Obviously the ideal scenario is where both developers cooperate with one another and are rewarded equally. As a lead or manager it is important to recognize when and where this type of scenario presents itself, and to attempt to foster cooperation over competition.

The book also points to a code of ethics at the ACM (Association for Computing Machinery). While I was not able to find the original link provided in the book at http://info.acm.org/server/se/code.htm, I was able to find a similar document on the same site at http://www.acm.org/about/code-of-ethics

Please allow me a small tangent here: while I personally prefer not to have to mention ethics directly to my team (in the hope that everyone plays nice), there is often a significant amount of emotion that developers have for their work, and some developers can be very defensive about their work, to the point of being difficult to deal with. This attitude is unfortunate, as every piece of code can be improved, every single piece of code! There is no such thing as perfect code, just as there is no such thing as a perfect poem, or a perfect painting, or anything else that requires design. Things like structure, patterns, organization of the code, variable names, comments, formatting, optimization and more are all subject to interpretation of what is good, and thus can vary greatly from developer to developer. As a team lead, I need the developers to cooperate on the coding standards, but even then there can be a significant amount of hubris in some developers, which can cause internal clashes. I have been lucky in my attempts to be like "lukewarm water" between "fire" and "ice", but from my experience I feel a code of ethics may also be a necessity, especially in larger teams.

Overall, the book definitely fits its niche as a good learning tool for those interested in Agile methodologies, and it is probably best suited to a group environment, as many of the reflective questions would likely garner some very interesting responses.

Until next time, I remain,
Michael Hubbard
http://michaelhubbard.ca

Thursday, October 14, 2010

Creating a Great Team

Comparison time: Great Hockey Team and Great Programmer Team.

Though I am living in Toronto, I will focus on the Vancouver Canucks as my example of a great team. What do they have?

Star Players: Henrik and Daniel Sedin, Roberto Luongo.
Solid Reliable Players: Ryan Kesler, Alex Burrows, Christian Ehrhoff.
Good to Decent Players: the rest of the Canucks
Up and Coming Rookies: Cody Hodgson, Cory Schneider
Parting ways with those that don't fit: Kyle Wellwood
Decent management: Mike Gillis, Alain Vigneault

Some explanations:

Star Players: These are your guys leading the charge. They will have the most experience, most skill and most dedication to the project. Having a few of these guys will inspire the rest of the team to work harder and better, and will likely be the ones you have to rely upon in times of need.

Solid Reliable Players: While the stars are working their magic, these are the workhorses of the team: dependable, reliable, and dedicated. While they might not have the skill and experience of the stars, it is unlikely you can get stars in every position. The better skilled these lines are, the better chance you will have of completing the project successfully.

Good to Decent Players: These are the rest, and they should be of varying skill levels, but all capable in some way; notice I didn't include any bad players. To be in the NHL you have to have some talent, and it should be exactly the same at your company. A player with no talent will not survive in the NHL or on any good team, and your team should be no different. Give people a chance, give them training, but if they don't have the capability (or the heart), they will forever be a weak link in a spot that could be filled by someone who would help the rest of the team more.

Up and Coming Rookies: These are your juniors and new team members; there is always potential that they could develop into your next stars, or alternatively be sent back down to the minors. A sports team will often give a person an extended tryout, but that doesn't mean they have to make the team if they aren't getting the job done.

Parting ways with those that don't fit: Sometimes a player can have good skills but just not be the right fit for the position. Sadly, at these times it is important to do what is best for the team, and that often means parting ways with individuals who could be useful elsewhere (maybe on another project at the company) but not on the team as it is now.

Decent management: Just decent management? Sure, while I like the management in Vancouver, it is really the players that are the stars, not the management team. You need smart people at the top helping make the decisions, but unless they are helping score goals (or commit bug fixes) often it is better for them to just let the players do their job.

So what does this mean? Basically, the best thing you can do is hire the best people you can at every position and turn them loose. If you build a team of just rookies and a couple of fourth-liners, don't expect them to be able to compete in the big leagues. In the end, you always get what you pay for.

I keep hoping that Toronto will turn it around, but I can't help but wonder where their superstar players are.

Go Canucks, Go Leafs!
Michael Hubbard
http://michaelhubbard.ca

Tuesday, October 5, 2010

Twitter API, tweetsharp

For me, the most interesting thing about Twitter is that it gives those with interesting (albeit bite-sized) data streams as wide an audience as they can reach, without the need for any sort of friend invite/accept protocol. Because it is almost completely one-way communication, there are not as many simultaneous conversation threads to deal with (between two users); most information is either absorbed by followers or lost in cyberspace. There is a ton of data, some of it almost exclusive (until it is linked elsewhere), and the number of celebrities (or other famous people) the average Joe could never otherwise connect with seemingly breaks down the barriers and lets the twitterer get information straight from the horse's mouth, or whatever the case may be. This is much different from the Facebook model, where you can only really see your friends' data streams. On Twitter, you could follow more people than you could potentially meet in a lifetime. Naysayers should note: if Schwarzenegger is on Twitter, that should be good enough for almost everyone :P

Anyway, the big idea would be to code something using the Twitter API, and then tweet about it.

The first thing I noticed was how many examples on the web were out of date. Twitter switched to OAuth on August 16, 2010; before that, authentication was much simpler, but safe is good too.

Some links I found useful. The material from twipler is especially helpful, and I use it to show how to do a simple post based on the tweetsharp libs:

MAIN: http://blog.twipler.com/A34_Simple_Twitter_web_client_example.html
TWITTER SETUP: http://blog.twipler.com/A32_Registering_a_new_Twitter_application.html

LIBRARY:
http://dev.twitter.com/pages/basic_to_oauth
http://apiwiki.twitter.com/OAuth-Examples


Taking the twipler example we can add a small button to tweet.

1. In the interface file, add another button called btnTweet with the following values. If placed after the timeline button it will show up at line 18 of Default.aspx, and it will modify the designer page as well (go through the UI designer to avoid any issues):

<asp:Button ID="btnTweet" runat="server" Text="Tweet" OnClick="btnTweet_Click" />

2. Add the following code to Default.aspx.cs:
protected void btnTweet_Click(object sender, EventArgs e)
{
    string response = Helper.Request(t => t.Statuses().Update(txtBox.Text), false);
    litOutput.Text = "<h3>Tweet</h3>" + response;
}
3. Change the TweetSharpHelper.cs class CreateRequest method to public (instead of private).
public IFluentTwitter CreateRequest()
4. Set the ConsumerKey and ConsumerSecret if you haven't already.

5. Run the application and tweet away (type the message in the text box and hit tweet)!

Many will ask what this has to do with games... well, how many Facebook games are there now? How many more with PlayStation Home and Miis running around? Nearly every device will eventually attempt to talk with every other, and it will only be a matter of time before every headshot you register shows up in your Twitter feed, Facebook wall and RSS feed.

Follow me on twitter: http://twitter.com/#!/mayekol maybe I'll post something interesting sometime (but no promises :P)

Michael Hubbard
http://michaelhubbard.ca

Monday, October 4, 2010

Day in the Life of a Game Programmer/ Team Lead

In order to improve my time (and hack my life) I have to know what is in it. Here is my breakdown of an average (quiet) day, although it still runs 9:00-6:30. We have been pretty good about keeping 9:00-5:00 hours, and that is what most of our guys do; I would prefer to keep it that way. For me, every day is slightly different: some days I have five hours' worth of meetings, other days I do not leave the office until 8:00 PM or come in the next day at 7:00 AM. It is still better than the hours I have spent at other companies, where I was a member of the 100-hours-a-week club more times than I would like. But this is a pretty good indication of what I would consider average for now, with the hope that I can get back to more 9:00-5:00 days once the major deadlines have passed.

The day starts for me (usually) at 7:00 AM; I have included the time each task takes in parentheses.

07:00-07:09 (00:09) Wake up, and then really wake up.
07:09-07:32 (00:23) Morning routine, get dressed.
07:32-07:53 (00:21) Make and eat breakfast while checking email.
07:53-07:59 (00:06) Cleanup, brush teeth, put shoes on.
07:59-08:17 (00:18) Leave apartment start walking to bus stop.
08:17-08:54 (00:37) Get on bus to work, start reading a book.
08:54-08:58 (00:04) Get off bus walk to office and my desk.
08:58-09:29 (00:31) Update code, check email, handle admin duties.
09:29-09:45 (00:16) Stand up meeting for half the dev team.
09:45-10:00 (00:15) Stand up meeting for other half of the devs.
10:00-10:30 (00:30) Assist devs requiring clarification on tasks.
10:30-10:36 (00:06) Write emails to other departments on tasks.
10:36-10:39 (00:03) Grab a glass of water.
10:39-10:45 (00:06) Help a developer find a bug they are fixing.
10:45-11:01 (00:16) Read source code and documentation.
11:01-11:35 (00:34) Code.
11:35-11:38 (00:03) Answer a few questions on a feature.
11:38-12:05 (00:27) Code.
12:05-01:00 (00:55) Lunch.
01:00-01:57 (00:57) Code and test, submit some updated APIs.
01:57-02:10 (00:13) Discuss bug fixes and design strategy.
02:10-02:23 (00:13) Code.
02:23-02:33 (00:10) Work with developer finding bugs.
02:33-02:40 (00:07) Code.
02:40-03:01 (00:21) Work with developer solving issue.
03:01-03:09 (00:08) Code.
03:09-03:21 (00:12) Help developer fix bugs.
03:21-03:27 (00:06) Get some tea.
03:27-03:37 (00:10) Code.
03:37-03:41 (00:04) Answer questions about code base.
03:41-04:00 (00:19) Reply to email and write additional email.
04:00-04:12 (00:12) Work on bug fixes with developer.
04:12-04:28 (00:16) Code.
04:28-05:01 (00:33) Read code review from large feature drop.
05:01-05:06 (00:05) Discuss some updates to a part of the code.
05:06-05:31 (00:25) Finish code review and add comments.
05:31-05:55 (00:24) Read updates to code base from other devs.
05:55-06:24 (00:29) Code and submit.
06:24-06:33 (00:09) Shut down, start walking to bus stop.
06:33-07:15 (00:42) Get on bus, read book.
07:15-07:24 (00:09) Get off bus, start walking home.
07:24-07:31 (00:07) Freshen up.
07:31-07:53 (00:22) Check mail and email.
07:53-08:36 (00:43) Make dinner, eat and cleanup.
08:36-09:56 (01:20) Personal code, while listening to music or tv.
09:56-10:41 (00:45) Get changed for gym, pump iron and shower.
10:41-11:45 (01:04) Play around on computer, code, art or games.
11:45-11:51 (00:06) Get ready for bed.
11:51-12:05 (00:14) Read a bit in bed.
12:05-06:59 (06:55) Sleep (perchance to dream).


Breakdown:

Work:
Administration (email, delegating tasks): 1 hour 53 minutes.
Breaks (lunch, coffee): 1 hour and 4 minutes.
Coding: 4 hours and 2 minutes.
Helping Others: 1 hour and 56 minutes.
Meetings: 31 minutes.

Home:
Computing (web surfing, project coding): 3 hours 7 minutes.
Exercise: 45 minutes.
Personal Maintenance (food, shower, etc.): 1 hour 34 minutes.
Reading (including bus travel time): 1 hour 33 minutes.
Sleep: 6 hours 55 minutes.
Travel (not including bus reading time): 40 minutes.

Other Facts:
Read 39 emails.
Sent 5 emails.
Added 37 comments to code review.

A few noteworthy points to consider about the data. My travel time is actually 1 hour and 59 minutes total every day, but I only counted the parts where I do not do anything else (i.e. not the reading time). Looking over the numbers, it is interesting to see how much time is spent on helping others and on administrative tasks. When I first joined the company as a game programmer, I was able to spend almost all of my time coding and sent very few emails. As time went on and more people were hired, I spent more and more time helping the other programmers. Still, it is interesting to see how the day breaks down, and to investigate where I can improve my use of time. Maybe try exercising in the morning as part of the regular routine? Take a 30-minute lunch instead of the full hour and leave at 6:00 PM? Come in earlier instead of leaving later?

Another thing to consider is how long it takes a game developer to get "in the zone". Some people at work estimate it can take up to ten minutes to get into the zone, but only one second to get out of it. While it would be great to minimize interruptions, it is far better for me to help remove the other developers' roadblocks whenever possible.

Anyway, I will think over the data and may post an update on any changes I make to my day in the future.

Until then,
Michael Hubbard
http://michaelhubbard.ca

Tuesday, September 28, 2010

Game performance and optimization

The question of performance, and of what can benefit from additional optimization work (in code or art), is a very complex problem. To grossly oversimplify, it is worth looking at a few of the terms used to describe what is actually being measured. When talking about a game's performance in simplistic terms people will usually focus on fps (frames per second), but there are many other considerations: memory usage, texture memory usage, CPU usage, GPU usage, network usage (if applicable), as well as things like load times and the number of loads. To measure any of these it is worth testing the different types of components, the platform and/or engine, and establishing minimum specs for what you need (size of art assets, number of triangles, animations, size of textures, number of lights, number of postprocess fx, complexity of shaders, etc.); the list goes on and on.

One distinction that is often talked about is being CPU bound vs. GPU bound, meaning that one of these systems is the bottleneck for the other.

A game can be CPU bound from heavy processing: physics calculations (on the CPU) or other complex work such as searching or sorting large amounts of data, or AI calculations like pathfinding.

A game can be GPU bound from a high polygon count, high resolution textures, or a lot of complex shaders (often with multiple lights and shadows).

The best approach is to find the largest bottleneck and try to minimize it, then the next largest, and so on. While it is sometimes worthwhile to look at the code and try to optimize it (replacing divides, multiplications, etc. with bit shifts, and precalculating as much as possible before runtime), this won't solve the problem if the game falls over from huge art assets that the hardware does not easily support. Premature optimization is widely considered a huge mistake, since it is difficult to know whether you are optimizing the right part of the code (and optimized code often becomes significantly more complex and harder to maintain). It is still worthwhile to write code that is as efficient as possible (considering the order of growth of your algorithms (big O notation)), but if you don't actually test with real data, you may waste huge amounts of time optimizing the wrong thing.

In terms of performance and optimizations, I almost always come back to an example that a colleague of mine gave me that he saw on a previous project. The project was on a 3D handheld and the team wanted to add a depth of field postprocess render fx to the game, but were worried that the performance hit would be too expensive. They decided that the look was so cool, they had to try it and see what it would cost. What they found was that by adding this fx they actually got better performance! On this particular 3D handheld both the CPU and GPU shared the same bus, so by adding this postprocess fx the bus was freed up to handle a lot of CPU processing, while the GPU was dealing with this more complex rendering. Not an easy thing to estimate or plan for :P

There are a few useful tools to help debug performance: PerfStudio for AMD http://developer.amd.com/gpu/PerfStudio/Pages/default.aspx and PerfHUD for NVIDIA http://developer.nvidia.com/object/nvperfhud_home.html are great for seeing the low-level API calls to the graphics card, and their texture visualization and override tools are great for shader debugging, checking mipmap levels, etc.

One of the things that really impressed me in the UDK was the performance and memory tooling built right into the editor, along with views like mipmap levels and shader complexity, which are great ways of helping artists debug scenes and texture sizes. Unity3D offers a few tools (like the stats window and profiler) to help see where memory is being used, but I haven't seen anything at the level of UDK anywhere else (and I still need to look at XNA, though without an editor it may be difficult to provide these kinds of tools).

In the end, it all comes down to trade-offs: what performance you can live with for your game, usually at the expense of something else, and hopefully not too much of the visual look. It is worthwhile to develop tests that check for significant changes in memory usage, video memory usage, frame rate, etc. on every commit (or at least every day) to help catch these issues as soon as possible (especially on a large project).

Best of luck optimizing,
Michael Hubbard
http://michaelhubbard.ca

Book Review: Lifehacker 88 Tech Tricks to Turbocharge Your Day

With all the books on agile methodologies and on planning and managing time, I thought I would try to improve some of the regular aspects of my day-to-day activities to find "more time". There are still only 24 hours in a day, but it is helpful to realize how and where you are spending them, and how to improve your use of that time. I enjoy the website http://lifehacker.com/ and just finished reading Lifehacker: 88 Tech Tricks to Turbocharge Your Day by Gina Trapani. Overall, the tips are quite interesting and range from the simple (like focusing your attention and emailing yourself things to do) to the more complex (such as setting up a personal wiki server).

I am a big fan of automating things, and love a really slick and complete pipeline. But sometimes the trick isn't knowing how to automate something, it is knowing what to automate. It would be easy to say "everything", but you really need to start somewhere.

The Lifehacker book really got me thinking about examining my day and asking what could be improved, and what could be automated. This almost requires an outsider's perspective to really examine how tasks are accomplished and where time is spent in a given day. It can range from the complex (like how to push the game & server to another environment) to the trivial (every time you start your computer, do you always open the same three programs?). While the Lifehacker book and website provide good tips (like carrying your applications around on a flash drive, "firewalling" your attention to prevent distractions during a set of tasks, or tuning your computer and search habits), the real benefit of this kind of book is taking some time to reflect on your own habits and think about improvements.

I will be keeping track of where I spend my time "in the life of a game programmer/technical artist/team lead" and will look into where my time goes in a follow-up blog post, examining how my time can be improved to increase overall productivity (and possibly sanity :P).

Until then,
Michael Hubbard
http://michaelhubbard.ca

Sunday, September 26, 2010

Book Review: C++ In Action Industrial Strength Programming Techniques

With all the posts on Agile methodologies I decided to take a break and follow up on more programming aspects. C++ In Action: Industrial Strength Programming by Bartosz Milewski is, as the preface states, "... not a language reference or a collection of clever tricks and patterns. This book is about programming." The book offers some useful insights for those coming to C++ from a C or Java background, but ramps up with some important programming theory, with useful examples for function pointer routers, techniques for data encapsulation with templates, and the Windows API.

One of the ideas that struck a chord with me was the case against "defensive programming". Often programmers want to be as careful as possible, but by writing very defensive code they end up writing nice little places for nasty bugs to hide. Milewski recommends asserting on parameter input instead of attempting to handle invalid input, which in turn prevents bugs from "covering their tracks". He goes on to recommend that programmers not try to write code that works no matter what: such defensive programming makes the code more obscure and prone to errors. Instead, programmers should validate parameters (using asserts) and not attempt to handle invalid data by passing on more invalid/garbage data.

I can really get behind this concept, and often try to write code that does its error checking up front with quick returns. Defensive programming often goes hand in hand with lots of nested if statements and tree logic that ends up complicating the code: the programmer wants to be defensive but does not check the parameters ahead of time, and instead complicates the overall structure. Here are two simple examples of the kind of thing I see all the time. The "Foo" style (below) is not uncommon, even though "Bar" is easier to read and does the same thing. By validating parameters first you end up with shorter, cleaner code that is much easier to understand and debug.

// Foo function (with valid inputs, do the foo).
int Foo(Object* obj, int param1, int param2, bool param3)
{
    if(obj != NULL)
    {
        if(param1 < param2)
        {
            if(param3)
            {
                // Do the foo.
                return 0;
            }
            else
            {
                return -1;
            }
        }
        else
        {
            return -1;
        }
    }
    else
    {
        return -1;
    }
}

// Bar function (with valid inputs, do the foo).
int Bar(Object* obj, int param1, int param2, bool param3)
{
    if(obj == NULL || param1 >= param2 || !param3)
    {
        return -1;
    }

    // Do the foo.
    return 0;
}

The book is also available at http://www.relisoft.com/book/index.htm, and while it covers a lot of different elements (like Windows specifics, teamwork and porting code), the end result is an interesting read with some good insights on program transformation and improved structure.

I hope to get more real code examples up soon and will hopefully have some interesting content of my own. But until then I remain...
Michael Hubbard
http://michaelhubbard.ca

Thursday, September 16, 2010

Book Review: Agile Game Development With Scrum

Staying on this Agile game development interest, I was at the local book store the other day and ran across the book Agile Game Development With Scrum by Clinton Keith and thought, "how appropriate". Needless to say I was intrigued about the aspects of applying Agile directly to game development and immediately started reading it.

One of the elements that Keith keeps repeating throughout the book is that there are "no silver bullets" and adopting Agile won't save a team already on a death march schedule, nor will it allow a team to handle an impossible schedule. What Keith proposes instead, is that Agile will give a team a framework to learn about scheduling issues, current velocity and burn down charts as well as processes to detect potential issues with the schedule or design, far earlier in the project.

The main point that I found crucial to success was the idea of cross-domain teams. A couple of artists, level designers, game programmers, server programmers, etc. would form a single team responsible for part of the game, handling all aspects of that feature while iterating over the different stages every two-week sprint. Adopting this team structure would, I feel, be a huge boon to any game dev workplace, as the slowest part of any game is always (always!) cross-team integration. Here are two examples of what can happen without cross-domain teams:

1. An artist creates an asset and has no developer support to test it. Without integration they go on to make multiple assets the same way, spending weeks (or months) of effort, only to find out later they need to modify all their assets once dev begins testing them.

2. The server team creates APIs for the client, but no client team is available to test them. The server developers go off and create new APIs. Once the client begins testing, they find they need additional parameters and/or bug fixes. This requires days of rewrites from a different developer, since the original author is now working on another critical section and is not available to update the code.

I see these kinds of issues happening over and over again in large, isolated teams, where individuals make decisions that should require cross-domain knowledge but instead move forward without it, often wasting significant time when their mistakes have to be fixed. This rework can also result in crunch time later to catch up on the previously wasted effort.

One of the other important things that Keith mentions is the issue of burnout during a crunch; he estimates that only the first two to three weeks actually show any improved velocity. Once a month (or more) of crunch has been put into effect, the overall velocity is actually worse than non-crunch velocity! People get tired, frustrated and upset with extended overtime, and Keith points to the IGDA Quality of Life white paper (http://www.igda.org/quality-life-white-paper-info), which measures the quality of life of game developers across the industry; it is a good read I will probably comment more on in a later post.

The book also goes on to describe a number of key aspects regarding Agile practices (running a scrum, user stories, minimizing up front documentation, continuous integration and working builds every sprint) but also concepts for implementing and introducing Agile and scrum to the workplace, shifting emphasis to a team responsibilities for setting schedules and working with Agile for Art, QA and Producers.

Overall the book offered some valuable insights and interesting experiences. I would love to try to implement more Agile processes at my current workplace, but we shall see how resistant the culture is to change. There are a number of jokes about trying to change culture, like the Angry Monkeys story if you are not familiar with it (the link points to RTP Scrolls, which gives a good example). It is a shame really, since I do feel that Agile is the way to go for any game, and that all management styles eventually move closer to Agile anyway, since it allows the most flexibility and the creative changes that make the game more fun. I guess it is like that saying: the only thing harder than learning something new is forgetting something old.

But when it all comes down to it, game development should be fun too...
Bye for now,
Michael Hubbard
http://michaelhubbard.ca

Saturday, September 11, 2010

Extreme Programming

For those of you following along, I have been delving into more and more practices and thought I would look into Extreme Programming (XP), created by Kent Beck (one of the original authors of the Agile Manifesto). Some of you (like me) may feel that the term "extreme" is a bit overused (do our developers have to do barrel rolls while coding at their desks?). All kidding aside, my interest in Agile development (especially Agile game development) is currently piqued and I will continue to delve into more and more aspects of it.

I decided to pick up Sams Teach Yourself Extreme Programming in 24 Hours by Stewart Baird, even though I had never really picked up a book that promised to teach something so quickly (I remember the old post Teach Yourself Programming in Ten Years by Peter Norvig :P). Nevertheless, I always feel it is important to start somewhere, and a book like Baird's gives a distilled view and framework for extreme programming in a very concise and organized manner.

Baird's book breaks down XP into twenty-four "one-hour" chapters, each dealing with a different aspect. A few chapters are Java or .NET specific (especially those on unit testing), so their relevance may vary depending on your interests, but the underlying principles are still important. Of course, the question remains: what makes it "EXTREME" Programming? Like many Agile methodologies, XP focuses on "simplicity, communication, feedback and courage" (I feel "planning" would also be important to add to the list, but XP focuses more on the developer than on the leader/manager). XP has a lot of similarities to other Agile approaches but handles them in a slightly different manner, the most obvious difference being the focus on pair programming.

The argument for pair programming is the focus on quality: by having two sets of eyes on code as it is being written, the developers end up with constant feedback, improved code quality, and fewer bugs and design flaws. In Hour 11, Baird argues that the time lost up front is recovered later because fewer bugs have to be found and fixed, actually saving development time overall. The trade-off is 15% slower development (using pairs) but 15% fewer defects. Baird gives an example of how this saves time:

50,000 lines of code takes one individual 1,000 hours (at 50 lines per hour).
50,000 lines of code takes a pair 1,150 hours (15% more time).

Baird uses Watts Humphrey's figure from A Discipline for Software Engineering that 100 defects occur for every 1,000 lines of code, but estimates that 30% still remain after some software development "rigor" has been applied, meaning 1,500 defects in 50,000 lines of code. Baird estimates it could take 10 hours to find and fix each defect.

15,000 hours spent fixing the 1,500 defects from an individual developer.
12,750 hours spent fixing the 1,275 defects from a pair of developers (15% fewer defects).
= A saving of 2,250 hours in defect fixing (2,100 hours net, after the extra 150 hours of paired development).

While I encourage the devs on my team to work together as much as possible, I am not as sure about implementing pair programming. In my opinion, the most useful situations for it would be when there is a lack of design, when research is required to solve the problem, when there is a lack of motivation from one or more developers, or when there is a real need for information sharing on a certain section of code. If the design is already in place and the developer knows the path to take, I would be less inclined to use pair programming.

I will keep some of the Extreme Programming ideas in my back pocket and think them over. I prefer the Scrum approach for organizing, but do like some aspects of the book (including some benefits of pair programming). For me, simplicity is one of the best focuses, and in a professional work environment it is important to focus on YAGNI (You Ain't Gonna Need It) to prevent over-engineering (although I would encourage those interested in that kind of engineering to experiment on their own home projects). I also like the idea of collective ownership of the code and, of course, the 40-hour work week (which is especially hard in games, as there are a lot more iterations than in other domains, like instrumentation logic or security systems, and I imagine some applications and systems too).

I will continue investigating Agile methodologies and will mention some of the "extreme" ideas to see if any of the other leads are interested in exploring them. I would be curious to see how they work at the company, but think I may shelve them for the immediate future.

Until next time, I remain,
Michael Hubbard
http://michaelhubbard.ca

P.S. I am following up on the Extreme Programming RoadMap and will see if I can find much in the way of Extreme Game Dev (although I still think it may be hard to top Keith's Agile Game Development with Scrum (previous post)).

Art Books

Something a little different today: I'm having a look at more art-related books. I feel like I always want to use "both sides" of my brain and enjoy the artistic aspects and results of the tools and games I create. For me, it was always important to get technical, so that when I had a great idea for something, I could cover all aspects of it without technical or art abilities really getting in the way. That being said, I should focus a bit more on the artistic side (hard with so much programming these days), but I got a few books from the library and had a good read.

Webcomics: Tools and Techniques for Digital Cartooning by Steven Withrow & John Barber does a good job describing the workflows of a number of cartoonists. I was interested in how some of the artists used Poser, and while I have only played with demos, I realize it is still quite a useful tool for rapidly prototyping many different characters, clothes, poses, etc. From what I remember it is one of those programs whose cost can add up with add-ons, but it could be useful as a virtual artist's dummy for things like lighting on a character, proportions, foreshortening and composition. One of the other interesting aspects of the book is its focus on the business side, such as page rates from print publishers, subscription services, micropayments and merchandise. There is still some Flash work, and I feel that while there are a lot of competitors (HTML5, or 3D-in-the-browser programs like Unity) it is still going to be a while before something really competes with Flash on the web.

Drawing and Painting Fantasy Worlds by Finlay Cowan is a good book to inspire young artists or those who need a bit more motivation. Quotes from other disciplines, such as Thomas Mann's "Order and simplification are the first steps towards mastery of any subject", are sprinkled throughout the book (sidenote: for those who read Mann's Death in Venice, I also liked the quote "It is most certainly a good thing that the world knows only the beautiful opus but not its origins, not the conditions of its creation", which is also a good warning for those considering Death in Venice). Anyway, Cowan focuses on research and inspiration: using the internet, libraries, books, museums, mentors and other artists. I liked his idea of a "sacred space" that keeps you in the mindset for work and minimizes distractions (like the internet, or blog posts :P). There is a focus on figure drawing and life drawing (which I am a big fan of (here for some of my stuff)) and also work with 3D aspects using Poser. There are more technical aspects too, such as starting a digital painting in greys and layers, importing different images to create depth in the work, and collecting things with unusual patterns (sand dollars, tree bark, poppyseed muffins, etc.) and incorporating them into the work. There are also the necessary elements of one-, two- and three-point perspective and using grids for cities. I also really enjoyed the words "THERE ARE NO EXCUSES" (pg 118): adapt to whatever situation you have and try to be productive wherever you are. Be creative whenever and wherever you can.

Digital Manga Workshop: An Artist's Guide to Creating Manga Illustrations on Your Computer by Jared Hodges and Lindsay Cibos covers a number of techniques in Photoshop and Corel Painter, with a few nods to Paint Shop Pro, openCanvas and GIMP. The tech for the book largely revolves around a Wacom tablet, and there are a few good tips: choosing 300 DPI, dealing with transparent line art, and digital inking in Photoshop with a round hard brush (5-pixel diameter, 0 angle, spacing < 25%, roundness and hardness 100%, minimum diameter 1%, other options set to 0 and off, mode color burn, size control pen pressure, count 1). Tips like using hue and saturation to change hair color or shadows and highlights are probably overlooked by those who use hue/saturation only for initial color correction. Hodges and Cibos recommend four colors for cel shading hard materials (a base, shadow, dark shadow, and highlight) and only two-tone shading for soft colors. The book also goes into airbrush blending of more colors for a painting style (traditional or watercolor), and mentions coloring some lines (not all black) to give a softer look.

It was nice stepping away from the programming books, and while sometimes with art books the best part is looking at the artwork and getting ideas, it is still nice when some of the technique is mentioned on how to accomplish some of these visual results.

Until next time,
Michael Hubbard
http://michaelhubbard.ca

Agile Retrospectives (inspect and adapt)

Well, I said I would try to minimize the leadership aspects of this blog, but with deadlines looming I have been focusing on helping my team more. One thing I will be doing with the team is trying to get them to help me help them :P I also suspect I will be focusing on more and more Agile workflow models, as I feel there are still a lot of hints of waterfall in the current system; thus my venture into trying to understand and apply these new ideas. I have some understanding of Agile from my previous work at Leap In Entertainment, where management did a good job of applying Agile methodologies (without having to label everyone pigs or chickens).

I, however, need a bit more information about Agile as a whole (and will delve into other books on management, extreme programming, etc.) and have begun absorbing knowledge from online resources and library books. One that caught my attention was Esther Derby and Diana Larsen's book Agile Retrospectives: Making Good Teams Great, after I stumbled upon their video presentation, the Agile Retrospectives Talk. Of course, "making good teams great" is exactly what I want to accomplish...

The purpose of retrospectives is to "inspect and adapt" the project and processes before the project is finished (i.e. while there is still time to improve them). Derby and Larsen recommend committing to a retrospective every iteration (in my case every two weeks) so that we can fix things along the way instead of waiting until the end (when it is too late). Some of the things to investigate are the patterns in how we work together: determining what works and what doesn't, bringing up problems to solve, and finding and reinforcing team strengths.

With this in mind I will try to focus on both the productivity and the satisfaction of the team, and experiment, experiment, experiment. With each iteration we can hopefully see what worked from the previous retrospective and what did not. By finding out what does not work and recording it, we will at least be less likely to keep repeating the same mistakes.

The breakdown that the book (and video) suggest is in five stages (page 19):

Set the Stage
Gather Data
Generate Insights
Decide What to Do
Close the Retrospective

The book also recommends allocating some shuffle time as well for very long iterations.

My focus for our next sprint/retrospective will be trying to examine and explain what works and what does not: how well we are applying coding standards, how well we are refactoring, and how well we are working as a team and meeting our deadlines.

Wish me luck,
Michael
http://michaelhubbard.ca

P.S. The book is also on Scribd: Agile Retrospectives on Scribd

Monday, September 6, 2010

Herding Cats... as a Team Lead

Been quite busy, herding cats :P I won't be updating my title from Game Programmer/Technical Artist to Team Lead/Game Programmer/Technical Artist, but will be mentioning a few new aspects of being a new lead in a few blog posts. My main focus is still art and programming, but I have also been focusing on trying to run a good team. My favorite saying is that leading programmers is like herding cats... "You can get a cat to go anywhere you want, as long as the cat wants to go there too". This can be a lot trickier than it sounds, and in a lot of ways code can be easier than people :) I really enjoy the team I am with and have some really good guys, but dealing with people (especially with the often conflicting priorities across teams) can be tricky. I try and stay as close to the code as possible, and still write as much as I can, but find I need to delegate more of the smaller details as I am responsible for more of the overall project and the structure of the code base.

In order to help with my new responsibilities I have been reading up on a few management books, and am hoping to try a few of the techniques to improve what I can. I finished reading Herding Cats: A Primer for Programmers Who Lead Programmers by J. Hank Rainwater, which holds some interesting ideas about the challenges of managing programmer teams. Rainwater's main focus is "thinking and adapting": constantly reading to learn new techniques and keeping your passion for the craft (on both the programming and management side). I really enjoy reading (so that is not a problem) but will extend my focus to management (agile, extreme programming etc.) to try and stay informed of these aspects as well.

One of the things that Rainwater mentions is "Don't let your email inbox govern your day". He recommends that an email thread longer than three emails should be moved to the phone or an in-person conversation, and that feature creep should be caught early, as it can initially appear as email clarifications. He also mentions, however, that it is important to only hold meetings that are useful; meetings that do not produce any action items could usually be better accomplished through an email or wiki page update.

The book goes on to cover hiring practices, working with the different levels of an organization, and looking in the mirror and evaluating yourself as a leader. I expect that I will re-read chapters of this book as it is filled with a lot of good ideas and common sense. When I was taking Business Management and Organization courses in university (oh so long ago) I often felt that, compared with the technical (programming) books, they did not offer as much new material as they did common sense. I think I was a little unfair in that assessment, as I now feel it is important to put those common sense theories into practice, which is not without its own challenges.

I have started reading more agile books (we are trying to get away from waterfall development as much as we can) and have begun implementing quick daily 10-minute stand-up meetings in the morning to try and increase communication and sort out action items for the day (the big three questions: what did you do, what are you going to do today, and what (if anything) is blocking you). I find that there are often more blocks than is easily anticipated. I have mentioned to the team a few times not to wait until the morning if they discover blocks, and I always have something else for them to work on in the meantime if a block requires work from other teams first (often server or art). Hopefully the meetings are useful for these guys; I always get something out of them, and feel a few of them do too, but either way I try and keep them quick.

Lately I have been working a lot of overtime (leading being my day job and programming being my night job), but after the last few months of leading I am (hopefully) getting a better hang of it and will try and put some more of these ideas (and hopefully some from the new books on my desk) into practice and get more coding/leading done in the day.

Until next time,
Michael
http://michaelhubbard.ca

Saturday, June 12, 2010

Book Review: Data Structures for Game Programmers

I finished reading Data Structures for Game Programmers by Ron Penton recently, and while I enjoyed aspects of it, I felt it was targeted more at beginner to intermediate programmers.

Penton offers many good tips for using template classes and shows some good examples for setting them up. The focus is mainly on arrays, lists, trees and graphs, as well as some of the algorithms for sorting and searching these structures. The most advanced algorithm, and the most useful for game programmers, is the A* pathfinding algorithm and the heuristics associated with setting up AI to use the pathfinding.
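To give a feel for the kind of pathfinding the book covers, here is a minimal A* sketch over a 4-connected grid. This is my own illustrative version (in JavaScript for brevity), not Penton's code; the function names and the Manhattan heuristic choice are mine.

```javascript
// Admissible heuristic for 4-connected movement: Manhattan distance.
function manhattan(a, b) {
  return Math.abs(a.x - b.x) + Math.abs(a.y - b.y);
}

// grid: 2D array, 0 = walkable, 1 = blocked. Returns a list of {x, y}
// steps from start to goal, or null if no path exists.
function findPath(grid, start, goal) {
  const key = (n) => n.x + "," + n.y;
  const open = [{ x: start.x, y: start.y, g: 0, f: manhattan(start, goal), parent: null }];
  const closed = new Set();

  while (open.length > 0) {
    // Pop the node with the lowest f = g + h (a real implementation
    // would use a binary heap instead of sorting every iteration).
    open.sort((a, b) => a.f - b.f);
    const current = open.shift();
    if (current.x === goal.x && current.y === goal.y) {
      const path = [];
      for (let n = current; n !== null; n = n.parent) path.unshift({ x: n.x, y: n.y });
      return path;
    }
    closed.add(key(current));

    for (const [dx, dy] of [[1, 0], [-1, 0], [0, 1], [0, -1]]) {
      const nx = current.x + dx, ny = current.y + dy;
      if (ny < 0 || ny >= grid.length || nx < 0 || nx >= grid[0].length) continue;
      if (grid[ny][nx] === 1 || closed.has(nx + "," + ny)) continue;
      const g = current.g + 1;
      const existing = open.find((n) => n.x === nx && n.y === ny);
      if (existing && existing.g <= g) continue; // already have a better route
      const node = existing || { x: nx, y: ny };
      node.g = g;
      node.f = g + manhattan(node, goal);
      node.parent = current;
      if (!existing) open.push(node);
    }
  }
  return null; // goal unreachable
}
```

The heuristic is what separates A* from plain Dijkstra: as long as it never overestimates the remaining cost, the first time the goal is popped the path is optimal.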

The book itself is quite long and fairly thorough (the coverage from arrays to graphs spans nearly six hundred pages); however, I felt it would be more useful if there was more of a comparison between the Standard Template Library structures and the custom-built ones, and why certain choices were made for the custom-built data structures (whether to improve performance, usability or something else). There are lots of examples throughout the book using Simple DirectMedia Layer (SDL), and while I was interested in how quickly the examples could be put together, it sometimes felt that a fair bit of the code was related to SDL rather than the data structures.

The book used in my second-year university data structures course was Algorithm Design: Foundations, Analysis, and Internet Examples by Michael T. Goodrich and Roberto Tamassia, and while it is more advanced, it is also a more complete look at data structures and algorithms. The Algorithm Design book actually covers the AVL trees and red-black trees that are mentioned in Penton's book but not discussed. While Algorithm Design does not have extensive pathfinding examples or A* implementations (instead focusing on Dijkstra's shortest path algorithm), it gives a broader view of the data structures themselves, as well as computer science topics like Big O notation and NP-completeness related to the different structures and algorithms.
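For comparison, Dijkstra's shortest path algorithm is A* without the heuristic: it expands nodes purely by distance from the source. A compact sketch (my own illustration; the adjacency-object graph shape and function name are assumptions, not from either book):

```javascript
// graph: { node: { neighbor: weight, ... }, ... }
// Returns the shortest distance from source to every node.
function dijkstra(graph, source) {
  const dist = {};
  const visited = new Set();
  for (const node of Object.keys(graph)) dist[node] = Infinity;
  dist[source] = 0;

  while (visited.size < Object.keys(graph).length) {
    // Pick the unvisited node with the smallest tentative distance.
    // A linear scan makes this O(V^2); a heap gets O((V + E) log V).
    let current = null;
    for (const node of Object.keys(graph)) {
      if (!visited.has(node) && (current === null || dist[node] < dist[current])) {
        current = node;
      }
    }
    if (current === null || dist[current] === Infinity) break;
    visited.add(current);

    // Relax every edge out of the chosen node.
    for (const [neighbor, weight] of Object.entries(graph[current])) {
      if (dist[current] + weight < dist[neighbor]) {
        dist[neighbor] = dist[current] + weight;
      }
    }
  }
  return dist;
}
```

This is the version a data structures course tends to analyze for Big O behavior, which is exactly the angle Goodrich and Tamassia emphasize over game-specific heuristics.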

For the beginner and intermediate (game) programmer, Penton's book does have some good insights, especially those related to pathfinding, which is not covered as often in data structure books. However, the more advanced or professional game dev would be better served by a more advanced book, or, if pathfinding is the only interest, an AI book would be more appropriate.

Until next time,
Michael Hubbard
http://michaelhubbard.ca

Thursday, June 10, 2010

Terragen 2 (free version) Test



Finally got a chance to post some of the Terragen 2 tests I was looking into. I was very impressed with the quality that could be achieved with this program with minimal effort, thanks to an intuitive interface. The image is from following the "First Scene" example in the Online Documentation. The galleries are certainly better than my final result, but for only reading a few of the values and playing around a bit with the settings, I felt the result was of good quality and could likely be improved with practice.

Setting up the scene (following along with the documentation) took only a couple of minutes. If I were to do the scene again, I would estimate around five minutes of initial setup and then however long you need to tweak values and colors.

It is not all silver lining, unfortunately. I feel like the close-up "grass" (just a green color) and some of the water would likely have to be touched up in Photoshop, as would maybe some of the contrast and color. I think with some additional practice this could be improved in Terragen, but I would need some trial and error to set appropriate color values for different lighting and atmosphere conditions.

On the whole, I think the result is very strong. The rendering time was a bit long (I had a number of other processes going simultaneously): the total time for the 800x600 image was 1:16:34 with 2,363,295 micro-triangles. I am sure a faster machine would greatly improve this, although the Terragen documentation suggests the rendering time can still take 30+ minutes even on newer machines.

I will definitely be playing more with this tool in the future and will see what kind of results are generated from the mesh data that is exported into other 3d renderers.

Definitely worth a try if you enjoy terrains, and it is free for non-commercial use.

Until then, I remain.
Michael Hubbard
http://michaelhubbard.ca

Saturday, May 22, 2010

Book Review: Beginning XNA 3.0 Game Programming

Just finished reading Beginning XNA 3.0 Game Programming: From Novice to Professional by Alexandre Santos Lobão, Bruno Pereira Evangelista, José Antonio Leal de Farias and Riemer Grootjans, and would definitely recommend it for the beginner looking to learn XNA.

The book dives right into examples that are well explained, and I liked how quickly they introduced networking into the game, a topic that is often overlooked in game programming books. In my experience it is far easier to incorporate networking calls as early as possible in the development cycle than to try and add effective networking on top of a game that is nearly complete.

The book also deals with a number of other factors regarding the XNA content pipeline, as well as referencing how to set up terrain and skeletal animations... XNA gives a lot of control for setting these things up, but appears to do less of the importing work for other assets than Unity (or UDK) does. I suspect this may be one of the more time-consuming aspects of XNA development if there needs to be a lot of custom support for importing 3D assets into the engine.

I would have liked to see a bit more information in the shader tutorials (as shaders are one of my favorite things to play around with), so I will have to delve into the source code examples. I did play a bit of their "Rock Rain" example and the 3D FPS example versus spiders (I found the control setup a bit unusual, but it works once you get into it).

One of the nicest "gems" from the book was the reference to the terrain generator Terragen, which I will be playing with a bit more this weekend. It has been used in Star Trek: Nemesis, The Golden Compass, Serious Sam (the game) and The Imaginarium of Doctor Parnassus. I have to say, from what I've seen of it in the galleries it looks to be a fairly impressive program, and it also has a free version for non-commercial use :)

Looks like some terrain generation tests ahead and I'll see if I can post some XNA examples.

Bye for now,
http://michaelhubbard.ca

Wednesday, May 19, 2010

UDK vs XNA: Part 2

Continuing my testing of the Unreal Development Kit and XNA, this post focuses a bit more on the development process between these two packages: tutorial and community strength, and some asset pipeline use.

Given that the UDK is a much newer release than XNA, it is somewhat surprising how quickly tutorials, youtube videos and articles are being written about UDK and all of its features. While the literature on UDK is still fairly small compared with XNA, I imagine that given this initial output it will not be very long before more people begin trying the UDK and expanding its knowledge base.

XNA has a number of useful books and demos written for it (still one chapter to go on Beginning XNA 3.0 Game Programming: From Novice to Professional; likely writing a review tomorrow). The XNA community also seems to be quite established, with lots of beginner tutorials as well as some more complex examples. Many of the examples demonstrate the ease of 2D elements in XNA as well as the usefulness of the content pipeline for loading fonts, images, audio and models. The pipeline is quite intuitive and makes it easy to set up your own images and content in the game using the customizable XNA content pipeline.

The UDK is a bit different. The Content Browser looks to be fairly flexible in managing assets (as well as coming with a significant number of import options) and has a more complex overall structure for the elements it displays. This, I imagine, has its strengths and weaknesses: it is easy to find things by grouping for larger projects, but more rigid in having to use such a complex system (instead of a hierarchical directory structure). The Content Browser also supports a number of more complex file types, like Photoshop .psd files, as well as all the Unreal file packages.

All in all, XNA is likely the easier choice for the beginner programmer to get started with, based on the community and content pipeline (if you don't know you are looking for the Content Browser button in the UDK, you might not find it straight away). The UDK editor has a larger focus on giving artists the flexibility they are used to in 3D modeling packages, but this appears to add some complexity for the programmer as well.

Still lots of stuff to explore with both systems. I will see if I can read up on some UDK literature soon (and not just online tutorials) to expand my understanding of each.

Until next time,
http://michaelhubbard.ca

Tuesday, April 27, 2010

Book Review: Professional Refactoring in C#

I would recommend the book Professional Refactoring in C# & ASP.NET by Danijel Arsenovski for any developer interested in refactoring code. Arsenovski gives clear and well-documented refactoring improvements for a variety of common code structures, as well as some specific C# 3.0 features. The book deals strongly with the concept of code "smells", which are simple heuristics for finding and improving parts of code.

The code smells include such things as dead code, overexposure, overburdened temporary variables, long methods, procedural design and many others. The book explores setting up a unit testing framework with NUnit to verify that functionality is unchanged after refactoring has taken place. This draws some parallels with the Foundations of Programming book I reviewed earlier in its approach to unit testing. While I do enjoy the appeal of a unit-testing-driven design and programming strategy, it is a bit more difficult to apply to all aspects of game programming (such as player/object synchronization etc.), and it can also be an issue to test if you are dealing with a tool set that does not take the code directly (like the Unreal scripting).
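To illustrate the "long method" smell, here is a tiny before/after extract-method sketch (in JavaScript rather than the book's C#, with made-up names and rules). A unit test asserting both versions agree is exactly the kind of check the book builds with NUnit to prove a refactoring preserved behavior.

```javascript
// Before: one function mixing validation, summing and rounding.
function invoiceTotalLong(items, taxRate) {
  let subtotal = 0;
  for (const item of items) {
    if (item.quantity <= 0 || item.price < 0) throw new Error("bad line item");
    subtotal += item.quantity * item.price;
  }
  const tax = subtotal * taxRate;
  return Math.round((subtotal + tax) * 100) / 100;
}

// After: each responsibility extracted into a small, separately testable function.
function validateItem(item) {
  if (item.quantity <= 0 || item.price < 0) throw new Error("bad line item");
}

function subtotalOf(items) {
  let subtotal = 0;
  for (const item of items) {
    validateItem(item);
    subtotal += item.quantity * item.price;
  }
  return subtotal;
}

function invoiceTotal(items, taxRate) {
  return Math.round(subtotalOf(items) * (1 + taxRate) * 100) / 100;
}
```

The extracted pieces can now be tested (and reused) in isolation, which is the payoff the smell catalog is pointing at.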

This book would be useful for any intermediate to expert developer with an interest in C#, as it will surely give you food for thought on how to improve your code (which I will see if I can apply to the XNA project I will be mentioning later).

Bye for now,
http://michaelhubbard.ca

Saturday, April 17, 2010

Book Review: Foundations of Programming

I finished reading the ebook Foundations of Programming: Building Better Software by Karl Seguin, which is available at the previous link. The book focuses on improving coding practice using some Agile methodologies, incorporating ideas from both the .NET (MSDN) and ALT.NET (abstract concepts with more specific implementations) camps. Seguin brings up good points on Domain Driven Design vs Data Driven Design, where the focus is on the behavior of the system rather than on the data, which allows for a more flexible system that can be refactored for different data as needs arise. The book also focuses on Dependency Injection as a way of decoupling systems and making them easier to unit test. Seguin also suggests that if something is difficult to unit test, that method or function is likely tightly coupled and could be refactored to produce better code (code that is difficult to test is likely difficult to use).
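A minimal sketch of the dependency injection idea (my own illustrative names, not Seguin's code): the service receives its storage dependency through its constructor instead of constructing it, so a test can inject an in-memory fake in place of a file or server.

```javascript
// The service depends on an abstract "storage" with a write(player, score)
// method; it never news up a concrete file/server class itself.
class ScoreService {
  constructor(storage) {
    this.storage = storage; // injected dependency
  }
  saveHighScore(player, score) {
    if (score < 0) throw new Error("score must be non-negative");
    this.storage.write(player, score);
  }
}

// In a unit test, a trivial in-memory fake stands in for the real storage.
class FakeStorage {
  constructor() { this.records = {}; }
  write(player, score) { this.records[player] = score; }
}
```

A test can then do `new ScoreService(new FakeStorage())` and exercise the scoring logic without touching disk or network; production code injects the real storage instead. This is the decoupling that makes the "hard to test means tightly coupled" heuristic actionable.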

While the book does deal with more application-specific elements (Object Relational Mapping using NHibernate, and Rhino Mocks for the .NET framework), the concept of unit testing when applied to games does bring up some interesting comparisons. The data within games is difficult to test for a variety of reasons (if you ever talk to game testers/QA, they spend a lot of their time repeating the same things over and over again with only slight alterations on each iteration). One of the difficulties is the complexity of data that comes in at runtime during a game: often relying on art for collision information, state information from a sync server, and lots of potential for race conditions that a unit test would obviously not cover. This does not mean that we should dismiss unit testing, but instead focus on testing what we can; by writing code that can be easily unit tested we are likely writing simpler and better encapsulated code than would be produced without unit testing. This is also a large benefit for testing (and refactoring) the core elements of the system that do not rely on as much art, server or runtime information. While testing does take time, it is invaluable in helping find issues before they become bugs, and incredibly useful in verifying the same behavior during any refactoring.

As a whole it was a quick and informative read, and I would recommend it to anyone interested in improving their code and in unit testing (plus the price is right :P).

This got me interested in more specific refactoring so I may be looking at more code quality books in the near future.

Bye for now,
Michael Hubbard
http://michaelhubbard.ca

Sunday, April 11, 2010

XNA C# Item Class Template

I thought I would share a simple class template for XNA classes that are not Game Components (for classes that do not require an update etc.). Copy the code below into a new empty class called Class1.cs, then go to File->Export Template, choose Item Template in the wizard, and select the class you copied the template into. After restarting Visual Studio you can use the template (Add->New Item->My Templates).

//-----------------------------------------------------------------------
// Copyright (c) "$safeitemname$". All rights reserved.
// $guid1$
//
// $username$ $time$
//
// $safeitemname$ :
//
//-----------------------------------------------------------------------

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Audio;
using Microsoft.Xna.Framework.Content;
using Microsoft.Xna.Framework.GamerServices;
using Microsoft.Xna.Framework.Graphics;
using Microsoft.Xna.Framework.Input;
using Microsoft.Xna.Framework.Media;
using Microsoft.Xna.Framework.Net;
using Microsoft.Xna.Framework.Storage;

namespace XNAGame
{
    class Class1
    {
    }
}

The above "code" is free and can be modified, updated and used without consent.

Bye for now,
Michael Hubbard
http://michaelhubbard.ca

Tuesday, April 6, 2010

UDK vs XNA: Part 1

In order to give this blog (and my own inane ramblings) some focus, I will be comparing the Unreal Development Kit and XNA from the perspective of someone in the professional industry who is coming at both of these packages as a newb (my professional experience has mostly been in C/C++ and OpenGL, and C# with Unity).

Coming from a Unity background, I found the XNA project fairly intuitive, with only slight modifications (mostly in options and casing). Unity uses lower case for accessors and variables (Vector2.zero), while XNA uses uppercase (Vector2.Zero). I was surprised by the number of options the XNA Color type had (like Color.Wheat); otherwise I felt very at home with Visual Studio (Express) and the C# code. Working through some tutorial code I was able to set up the build of the game to render a simple image in the game window. The major difference up to this point is workflow: without an in-game editor, developing for XNA will likely require iterative testing of values to tweak certain elements of the game play instead of runtime tweaking (we could of course write functionality that would expose setting some of these values at runtime, but it would have to be done for every case).
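One way to claw back a little of Unity's runtime tweaking in code-only XNA is a small registry of named values that gameplay code reads each frame and a debug console or key binding can set. This is entirely my own sketch (shown in JavaScript for brevity), not an XNA API, and every name in it is made up.

```javascript
// A tiny registry of tweakable gameplay values. Gameplay code calls
// getTweak() each frame; a debug console/key binding calls setTweak().
const tweaks = new Map();

function defineTweak(name, initial) {
  if (!tweaks.has(name)) tweaks.set(name, initial);
}

function setTweak(name, value) {
  if (!tweaks.has(name)) throw new Error("unknown tweak: " + name);
  tweaks.set(name, value);
}

function getTweak(name) {
  return tweaks.get(name);
}
```

The point of the central registry is that each value only has to be wired up once (define it, read it in the update loop) rather than writing bespoke runtime-editing code for every case.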

Getting into the UDK was a bit more of a learning curve. The editor itself was fairly intuitive (it is similar in some ways to Maya), but as a newb to Unreal there were of course things I had to look up in the documentation. The UDK is far less of a jump-in-and-start-coding experience, and much more about exploring the editor, figuring out the directory structure of a project (as well as installing nFringe) and learning the different tools (Unreal Frontend versus Unreal Editor). The lack of a debugger in the UDK (per the documentation) is a bit of an issue, which slightly favors XNA development (with breakpoints etc.) over a lot of debug statements. Having to learn the syntax of a new language, UnrealScript, will also take some getting used to.

For someone coming from a Unity background, XNA seems like a much easier initial conversion to begin coding (especially if using C# with Unity). However, Unreal does offer some intriguing options that both XNA and Unity do not appear to have. With a bit more practice with the build process and Unreal editor setup, I will try and create some small examples to compare the differences in creating different game play elements.

Until next time...
Michael Hubbard
http://michaelhubbard.ca

End of Part 1.

Monday, April 5, 2010

Book Review: The Making of Second Life

Greetings (true believers :P),

I thought I would include some book reviews of things I have been reading related to my technical interests. I try and read a book a week, although sometimes they are more fluff than tech. Somewhere in-between is The Making of Second Life: Notes from the New World by Wagner James Au. The book is an anecdotal approach to the world of Second Life, often giving stories of the more interesting players (those who play as a group in a nursing home, those who make a steady income as in-game Second Life designers, and a lot of talk about furries as well).

I would have preferred a more technical approach to the subject (like how to manage some of the user-generated content/customization), but the book does deal with some aspects of 3D worlds that many other projects are trying to replicate, namely: what kind of (and how much) user customization should be included? The book does cover some of what happens when players are given complete customization control (like the red-light districts or unusual character customizations), and there is a definite trend towards more 3D virtual world interaction/customization in many projects. Some similar (yet different) projects, besides the obvious Sims, include JustLeapIn, Hangout.Net and parts of the Google Lively experiment.

While the book does not specifically tackle the issue of how much customization is enough (or too much), it does state that the game is based around this model. Second Life provides options for downloading and modifying textures through skin templates, modeling buildings and setting up new animations, as seen in their Creation Portal. Some Second Life players can in fact make their entire income through in-game design (and potentially through in-game trading as well).

This book did raise a few concepts I have been thinking about personally as well, such as the industry trend of more levels of user customization. It is difficult (from a developer's perspective) to draw a line at how much technical artist expertise is expected of the users... do players really want to modify vertices and set blend weights on their character? If so, which players would be willing to spend the time doing this, and would they want to use in-game tools, or a third party (professional or freeware) package they are already familiar with? With this growing trend and the impressive technological results of a game like Spore, it could only be a matter of time before we see games ambitious enough to try and recreate Maya, XSI or 3ds Max in the browser. While such a feat would be quite interesting, unless the game was specifically centered around creativity, that level of control would likely be wasted on all but the smallest minority of players (likely the same percentage that are already familiar with the other tools).

All-in-all the book was an interesting read, perhaps suited more to those readers interested in the history of Second Life and some of its odder characters than to those interested in the technical challenges of creating such a virtual world.

Bye for now,
Michael Hubbard
http://michaelhubbard.ca

Sunday, April 4, 2010

Tortoise SVN Client Hooks

Hello,

Been a while: a move (to warm Toronto from cold Vancouver), a new job, a place to live, adventure etc. As a game programmer/technical artist I have an interest in improving pipelines and automating any repetitive tasks. Needless to say, working on a computer a lot, it sometimes takes a bit of thinking to know where you can save time and where you need to spend it. For any home project I do (be it big or small) I usually like to keep track of my changes, and I spend a lot of time looking at subversion logs.

I have been using TortoiseSVN as a Subversion client for a while (though I still like the old command line Subversion for Linux), and I will be looking at some of the other (mostly free) version control systems in a later post. Tortoise is great in that it is simple to use and embeds smoothly in the Windows Explorer window/menu system once it has been installed. I don't usually spend much time on the server side (although I imagine that will eventually change) and work with the VisualSVN standard server for setting up a simple svn server on my home machine (it would be really good to have something on a different machine though).

Anyway, there are lots of client-side svn hooks I would like to hook up (updating a wiki page, triggering a test build, and formatting svn messages). In order to do this I started from the example at the TortoiseSVN source website: Pre-Commit Hook Example. Thanks to YNot for the SyntaxHighlighter tip.


// License: This code is free software and can be modified, updated, commented
// and sold verbatim without consent.
// The code has no warranty and the author accepts no liability for any issues
// or damage this code may cause, including: hardware, software, wetware,
// financial, psychological, emotional, mental or physical.
//
// Author: Michael Hubbard
//
// This script is a client side tortoise svn pre-commit hook script.
// The script creates a commit message of the following format:
// The FolderDirectory structure is not the whole path and
// splits on the SplitFolderPath (currently set to "Scripts/").
//
// "_FolderDirectory/SubDirectory(s)_"
// "- Message1"
// "- Message2"
// " ... "
// "- MessageN"
//
// To use the script go to TortoiseSvn->Settings->Hook Scripts->Add
// and set a client-side pre-commit hook
// (this will set the four command line arguments).
// Set the Command Line To Execute to:
//   WScript absolutePathToThisScript/SvnFormatMessageHook.js
// (the WScript is important (Windows Script Host)).
// Check the "Wait for the script to finish" checkbox.
//
// Code based on tortoise svn examples at:
// http://code.google.com/p/tortoisesvn/source/browse/branches/1.6.x/
//   contrib/hook-scripts/client-side/PreCommit.js.tmpl

// The root path of the scripts (remove any path information
// before this path when formatting directories).
var SplitFolderPath = "Scripts/";

// The minimum number of characters for a title description.
var MinTitleLength = 3;

// Read the file at path into an array.
function ReadPaths(path)
{
    var retPaths = new Array();
    var fs = new ActiveXObject("Scripting.FileSystemObject");
    if (fs.FileExists(path))
    {
        var file = fs.OpenTextFile(path, 1, false);
        var i = 0;
        while (!file.AtEndOfStream)
        {
            var line = file.ReadLine();
            retPaths[i] = line;
            i++;
        }

        file.Close();
    }

    return retPaths;
}

// Write the formatted message to the message path.
function WriteFormattedMessage(formattedMessage, formatMessagePath)
{
    var fs = new ActiveXObject("Scripting.FileSystemObject");
    if (fs.FileExists(formatMessagePath))
    {
        var file = fs.OpenTextFile(formatMessagePath, 2, false);
        file.Write(formattedMessage);
        file.Close();
    }
}

// Check if the message is already in the formatted type.
function IsValidMessage(commitMessages)
{
    if (null == commitMessages || 0 == commitMessages.length)
    {
        return false;
    }

    var titleMessage = commitMessages[0];
    if (titleMessage.length < MinTitleLength)
    {
        return false;
    }

    return '_' == titleMessage.charAt(0) && '_' == titleMessage.charAt(titleMessage.length - 1);
}

// Get the formatted message.
function GetFormatMessage(commitMessages, formattedDirectories)
{
    if (null == commitMessages || null == formattedDirectories)
    {
        return null;
    }

    // Make sure the title has a _ before and after it.
    var message = "_" + formattedDirectories[0] + "_\n";
    for (var i = 0; i < commitMessages.length; i++)
    {
        // Prefix each line with "- " if it is not already.
        if ("- " != commitMessages[i].substring(0, 2))
        {
            message += "- ";
        }

        message += commitMessages[i] + "\n";
    }

    return message;
}

// Get the formatted directories.
function GetFormattedDirectories(commitFiles)
{
    var formattedDirectories = new Array();

    // Format the directory of each file to commit.
    for (var i = 0; i < commitFiles.length; i++)
    {
        formattedDirectories[i] = GetDirectoryName(commitFiles[i]);
    }

    return formattedDirectories;
}

// Get the formatted sub directory name (not including the file name).
function GetDirectoryName(file)
{
    var splitIndex = file.indexOf(SplitFolderPath);
    if (splitIndex == -1)
    {
        return file;
    }

    // Get the subdirectory (after SplitFolderPath).
    var subDirectory = file.substring(splitIndex + SplitFolderPath.length);

    // Remove the file name (just keep the subdirectory).
    subDirectory = subDirectory.substring(0, subDirectory.lastIndexOf("/"));
    return subDirectory;
}

// Write a valid message from the commit files (Quit with 0 for success, 1 for error).
function Main()
{
    // The script command line arguments from tortoise.
    var objArgs = WScript.Arguments;

    // Check the number of script arguments is valid or exit.
    if (objArgs.length != 4)
    {
        WScript.Echo("Usage: [CScript | WScript] SVNFormatMessageHook.js " +
            "path/to/pathsfile depth path/to/messagefile path/to/CWD");
        WScript.Quit(1);
    }

    var commitFiles = ReadPaths(objArgs(0));
    var commitMessages = ReadPaths(objArgs(2));

    // Get the formatted directories.
    var formattedDirectories = GetFormattedDirectories(commitFiles);

    if (!IsValidMessage(commitMessages))
    {
        var formattedMessage = GetFormatMessage(commitMessages, formattedDirectories);
        WriteFormattedMessage(formattedMessage, objArgs(2));
    }

    WScript.Quit(0);
}

// Entry point to the program.
Main();

Bye for now,
Michael Hubbard

michaelhubbard.ca