I've always admired small, highly effective teams. The reality is that it's much simpler to be effective on a small team; the smaller the team, the more effective it tends to be. Taken to the ultimate extreme, a team of one has no arguments, and the work is done and understood by everyone on the team.

In reality, most projects aren't done by just one person, and communication and understanding degrade rapidly as headcount grows, because every person you add multiplies the pairwise conversations that have to stay in sync. Imagine running a project with 50 people: that's over a thousand pairwise communication paths. Regardless of all the design documentation, meetings, and communication, there are bound to be misunderstandings and an assortment of problems.
That is why it's important to keep the code base constantly in check. Sure, that's what QA is there for. However, what I've found is that, with very few exceptions, QA teams put a heavy emphasis on functional testing, and often that testing is performed manually, over and over again.
There is undisputed value in functional testing. But what is often overlooked is the need to capture those functional tests and automate them. If you are on a large project and haven't implemented test automation, you are wasting money... lots of it.
However, that alone isn't enough. The old saying about quality is that you have to build it into your product, not just test it at the end. And so we arrive at the modern-age tester: the developer, armed with test-driven development (TDD) and continuous integration.
TDD has developers write tests before the code itself is ever written. Naturally, if you run a test before anything is implemented, it should fail. Once the unit of code is complete, the developer runs the same test and proves that the code works.
To accomplish this, developers, especially those on large projects where code is flying around everywhere, need to write xUnit tests (e.g., JUnit for Java) that prove to them, and to others, that their code works. Once the unit of code is developed and the JUnit tests pass, they should migrate the code to another development environment: the continuous integration environment (if you don't have one of these, go procure one).
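Here's a minimal sketch of that cycle in JUnit 4; the Calculator class and its add() method are purely illustrative, not code from any particular project:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    // Written first, per TDD: this test fails until Calculator.add()
    // below exists, then passes and proves the unit of code works.
    public class CalculatorTest {

        @Test
        public void addsTwoNumbers() {
            assertEquals(5, new Calculator().add(2, 3));
        }
    }

    // The unit of code, written only after the test was in place.
    class Calculator {
        int add(int a, int b) {
            return a + b;
        }
    }

That passing test then travels along with the code, which is exactly what makes the next step work.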
The continuous integration environment is used to repeatedly run the entire suite of tests developed to prove the code works. This is critical on large projects because, undoubtedly, at some point someone will change existing code and introduce regression errors somewhere else in the application. In fact, this is generally why teams begin to develop "snowflakes" (see my blog post on snowflakes), which is even worse.
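To picture how that suite accumulates, here's one way it might look as a JUnit 4 aggregate suite; the member classes are hypothetical stand-ins for each promoted unit of work:

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    // The one regression suite the integration environment runs on every
    // scheduled build. A test class is added each time code is promoted,
    // so a later change that breaks older code fails right here.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({
        CalculatorTest.class,
        OrderTotalTest.class,      // hypothetical
        InvoiceServiceTest.class   // hypothetical
    })
    public class RegressionSuite {
        // Intentionally empty: the annotations tell the runner what to execute.
    }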
Regression errors can be so damaging to your team's reputation that you would be wise to invest in continuous integration and avoid them altogether. No one likes to hear from users that something that was working fine no longer works.
Developers should promote their code into the continuous integration environment immediately after completing a unit of work and making sure the xUnit tests pass. Continuous integration should run the tests repeatedly, many times per day. The person in the role of continuous integration tester should promote the code to the next scheduled run and include the tests the developer created to verify it. This way, the developer can be assured that the continuous integration tester will know if other developers break his code.
If the test run passes, all is good and the developer moves on to the next task. If not, the continuous integration tester demotes the code and sends the developer back to the drawing board.
Now, one common problem I've seen is testers changing tests to accommodate sub-optimal (i.e., crappy) code. HUGE MISTAKE... DON'T DO IT! It will ultimately ruin your project. It's absolutely critical to keep the code pristine. Reject the bad code, and get that integration build right back to being pristine... Ostracize that developer like a medieval noble tossing a scarlet-fevered serf over the castle walls. Out! Now! Before he infects everyone.
Obviously, you're not going to fire the guy (at least not yet), but he needs to get the message that the code has to work without breaking the pristine code already in place.
Occasionally, you will come across legitimate changes that cause breaks (which is also normal, and part of the cost of having automation and integrated testing in place). These will require the continuous integration tester to remove tests. However, never do that carte blanche. Each test needs to be tagged with the person who created it, and serious discussion needs to occur before any test is removed from the continuous integration suite. Bring in the person whose test has been broken and get their buy-in, along with any analyst who can validate the necessary change. Never just listen to the guy who broke the test... we know what he'll tell you!
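One lightweight way to keep that ownership visible is to tag it right on the test class itself; the author name and the placeholder arithmetic below are purely illustrative:

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    /**
     * @author jsmith
     * The test's creator. The continuous integration tester checks this
     * tag and brings jsmith into the discussion before this test is ever
     * changed or removed from the suite.
     */
    public class OrderTotalTest {

        @Test
        public void totalIncludesTax() {
            double subtotal = 100.0;
            double taxRate = 0.08;
            // Placeholder arithmetic standing in for the real unit under test.
            assertEquals(108.0, subtotal * (1 + taxRate), 0.0001);
        }
    }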
What challenges have you experienced with your code when working on large projects? I'd like to know.