Friday, June 16, 2006

About the world of Video Game Testing

(Before you start reading, please accept my apologies...this turned into a sort of a rant and is quite lengthy...maybe I'll break it out into smaller blogs later to explore different points)

This article is actually a pretty good representation of what it's like in the video game testing world. They get opinions and quotes from a handful of different testers, each sharing their views.

I must say, I honestly enjoyed my time doing games QA. A lot of the process and situations were frustrating, but the people were (usually) great to work with and the job itself was very enjoyable to me. QA outside of games (as I've been learning for the past ~month) is a different world, but a lot of things are still similar. The instincts and basics of the job are the same. The "crunch time" mentality may still exist, but deadlines are generally different outside of the game world, and the overall mentality seems a little different. Case in point: even the thought of working more than a 40-hour week here is taboo...it happens from time to time, but it's definitely not the norm and not expected on a project.

One thing I definitely agree with from the article (which can be applied to all QA jobs...and actually outside of QA too) is the blurb about staffing. They talk about publishers bringing on a hundred "kids" to "test" the game in the last weeks/months of the project. This actually seems to be the route a lot of the big studios are looking at. The point that you can do BETTER with 20 skilled testers over the life of the project than with 100 ad hoc junkies at the end of it "should" be common sense, but sadly it's not.

Just look at the real world and compare Amped 3 & Top Spin 2 with the 2K Sports lineup for the same time frame...Amped & Top Spin were each tested with teams of 6-10 VERY skilled testers over the life of the project (team size grew as the project neared release) while receiving "help" from a resource QA force from 2K Sports...the other 2K games were tested by HUGE teams of ~40 people for only the last few weeks of the project. The 2K games had tons of reports of hard locks, crashes, corrupt data and other issues at launch. Amped & Top Spin had reports of a handful of collision issues or bad-looking textures...minuscule by comparison. The 2K games had to deal with the bad rep of having games that crashed...and then had to devote dev time to releasing "patches" for the issues they decided they could fix. Yet strangely, the "shared resource" testing methodology won out, and QA responsibility for all the projects was given to the same team that put out shoddy games last fall.

I do need to back up just a little bit and apologize a touch...because I do also agree with the painful truth brought out in the article that QA is blamed for EVERY problem with the game, regardless of whether or not it was entirely their fault. Ideally, a QA team WILL catch all major and minor issues. Admittedly, we miss some things. However, sometimes some of the minor (and even some of the major) issues we DO find are allowed to ship because of deadlines, resources and other considerations. The decision is usually made by a committee, and QA (sadly) agrees, knowing all along that it will reflect on them. So, I do apologize if it sounds like I'm ignorantly ranting against the QA team that will be taking over for all of 2K Games testing. I'm sure there were some issues that were beyond their control and shipped despite their efforts.

That said, I am dubious of their efforts and the efforts of ANY consolidated/shared test team...including (and based mainly on knowledge of) the 2K Games shared team (a lot of the issues I did see come through were laughable...though admittedly, some of the tedious tasks they took on were a relief for me, since it meant I could devote my team to more important work). [Two examples: 1) Allowing resource QA to test collision/level issues freed my team up to do deep abuse testing against the code, while we could trust that the levels would get a good looking-over. 2) On a different note, there was some testing going on where internal and external test teams were running the same tests...their team of 20, our team of 2...their team reporting 100% success over the course of 4-5 attempts, our team reporting 75% failure (all different high-profile issues) over 50+ attempts.]

Their lack of knowledge of the product...their lack of knowledge and training for the job...their lack of good supervision and interaction with the dev team...their lack of ability...their lack of instinct...their lack of adequate tools/resources/time...ALL of these lead to the same result: a bunch of monkeys sitting in a room, banging on controllers and telling the dev team things they already know or don't care about...if they tell them anything at all (as in the case above, where they found NO PROBLEMS while we found dozens in the same area).

I cringe at the thought of most major publishers going with this ideology...because it is far from ideal and means we will likely see decreased quality in future releases. Sad...
