Writing multithreaded code is like juggling chainsaws: amazing when it works and truly sucky when it doesn't.
Right now at my job I am writing the foundation for a transaction processing cluster in Java, so I'm immersed in lots and lots of threads and interacting applications. When you are processing 8000 of something per second, any problem in your approach or in your choice of frameworks is magnified.
In job interviews, a popular question is "what is the major problem you have to solve in writing multithreaded code?" Generally, if they have read a little about it, they say "avoiding deadlocks". If they have done a bit of thread coding, maybe in Swing, they might say "protecting shared data". Only the truly experienced in complex threaded coding will say "avoiding doing nothing".
What's so bad about nothing? Assuming you can avoid deadlocks (generally not hard if you're disciplined) and understand when to protect data, when working in a complex high speed system you want to accomplish both of those basic requirements without slowing down the real work your threads are doing. In a Swing app, it's no big deal if a couple of threads block for a while, or even sit around waiting for something to happen. In a large cluster of servers, threads sitting around doing nothing are a waste of money. If Google is processing millions of searches daily, and half of their CPU capacity is basically blocked waiting for some resource, it costs a lot of money and power: instead of 50,000 servers you need 100,000. That's a hefty price to pay for poor thread coding.
Sure, my company will probably only use 100 servers, and could afford 200, but why waste resources just to save a few brain cells?
For example, one of my trials used Jetty on one application, and Apache HttpClient on another. With two worker threads on the HttpClient side I was able to process my test transactions. When I increased the worker threads to 7, they fell behind; at 10 it was almost dead in the water. WTF? Doing some debugging (with YourKit Profiler, very nice), I found the Jetty side mostly sitting around waiting, while the worker threads in the other app were mostly blocking (as much as 95% of the CPU time). Ah, the joys of threaded coding.
The issue turned out to be a synchronized method in HttpClient that deals with HTTP parameters: every parameter of every call on every thread passed through this method, creating a common sync point for all the threads. Something that might seem OK for a couple of threads was fatal with 10 threads running full tilt at 1000 somethings per second. The chainsaws can be really brutal.
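The shape of the problem is easy to reproduce. Here's a minimal sketch (not HttpClient's actual code, just the same pattern): ten threads hammering one global synchronized method versus the same work done through a `java.util.concurrent.atomic.LongAdder`, which stripes its state so threads don't funnel through a single lock. Class and method names are mine, made up for the demo.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

public class SyncBottleneck {
    // One global lock: every thread funnels through here, just like a
    // synchronized parameter-lookup method shared by all the workers.
    private static long sharedCount = 0;
    private static synchronized void contendedIncrement() { sharedCount++; }

    // No common sync point: LongAdder spreads updates across cells,
    // so ten threads don't serialize behind one monitor.
    private static final LongAdder adder = new LongAdder();

    static long run(int threads, int perThread, Runnable work) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        long start = System.nanoTime();
        for (int t = 0; t < threads; t++) {
            pool.execute(() -> { for (int i = 0; i < perThread; i++) work.run(); });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return System.nanoTime() - start;
    }

    static boolean countsMatch() { return sharedCount == adder.sum(); }

    public static void main(String[] args) throws InterruptedException {
        int threads = 10, perThread = 200_000;
        long syncNanos  = run(threads, perThread, SyncBottleneck::contendedIncrement);
        long adderNanos = run(threads, perThread, adder::increment);
        System.out.println("synchronized: " + syncNanos / 1_000_000 + " ms");
        System.out.println("LongAdder:    " + adderNanos / 1_000_000 + " ms");
        System.out.println("counts equal: " + countsMatch());
    }
}
```

Both versions do the same counting, but under contention the synchronized version typically spends most of its wall-clock time parked on the monitor, which is exactly the "doing nothing" the profiler was showing me.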
I finally settled on a couple of technologies that work pretty well; on my developer PC (4-core Core Duo something) I was handling 8000 somethings per second at about 85% CPU.
That's another sign of properly avoided nothing: if you slam your creation with unfettered requests and the CPU rises to 80-95%, you are seeing proper behavior. In my HttpClient test the CPU rarely got above 25%; mostly a lot of nothing was going on. This was clear in YourKit. Nothing is not your friend. Don't stop coding when nothing is broken; that's only step one.
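You don't even need a profiler to catch threads doing nothing; the JDK's own `ThreadMXBean` will tell you when a thread is parked on a monitor. Here's a small sketch (class and thread names invented for the demo) where the main thread holds a lock and then watches a worker sit in the BLOCKED state trying to get it:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class BlockedWatch {
    static volatile boolean sawBlocked = false;

    public static void main(String[] args) throws Exception {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        // Contention monitoring gives per-thread blocked-time totals (off by default).
        if (mx.isThreadContentionMonitoringSupported())
            mx.setThreadContentionMonitoringEnabled(true);

        final Object lock = new Object();
        Thread worker = new Thread(() -> {
            synchronized (lock) { /* finally got in */ }
        }, "worker");

        synchronized (lock) {              // main holds the lock...
            worker.start();
            ThreadInfo info;
            do {                           // ...until the worker is observed BLOCKED on it
                Thread.sleep(10);
                info = mx.getThreadInfo(worker.getId());
            } while (info == null || info.getThreadState() != Thread.State.BLOCKED);
            sawBlocked = true;
            System.out.println("state:         " + info.getThreadState());
            System.out.println("blocked count: " + info.getBlockedCount());
            System.out.println("blocked ms:    " + info.getBlockedTime());
            System.out.println("blocked on:    " + info.getLockName());
        }
        worker.join();
    }
}
```

Polling this over all live thread IDs in a loaded system gives you a crude version of what YourKit showed me: which threads are accumulating blocked time, and on which locks.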
Multithreaded coding in a complex application requires a lot of discipline (never assume anything, test everything), experimentation (there is no one way to make it work), and patience (sometimes lots of negative results lead you to the solution). It's not easy and it's not quick (never mind what the project manager thinks).
Then again, neither is juggling chainsaws, and the downside is pretty nasty.