I came across a Wikipedia page on the Fallacies of Distributed Computing. Read these and tell me if they ring any bells:

1. The network is reliable.
2. Latency is zero.
3. Bandwidth is infinite.
4. The network is secure.
5. Topology doesn't change.
6. There is one administrator.
7. Transport cost is zero.
8. The network is homogeneous.
Great, aren’t they? If you’ve worked in IT for any measurable amount of time, you’re bound to have assumed, or seen someone assume, one of the fallacies above. And they will always come back to haunt you!
Nigel, one of my colleagues, is a firm believer that things were better in the mainframe days. Things were simpler, more controlled and infinitely more reliable. His favourite anecdote recounts a speaker at a conference who went up on stage bouncing a basketball. He compared this to a mainframe: easy to control. He then pulled out a bucket of ping pong balls, threw it into the audience and asked them how they could control that. I think that comparison is a bit harsh; however, it does illustrate the non-linearity that distributed computing suffers from when it comes to issues of control, manageability and fault handling.
Personally, I think it’s a matter of market/technology granularity. Yes, a mainframe may seem more dependable, but that’s mainly because you’re dealing with one supplier who gets called in when there’s an issue. The end-user never sees the individual components that make up the system; it’s a black box that someone else manages, so yes, it’s surely going to look like an easier environment. Distributed apps are likely to be running on disparate hardware, across multiple operating systems, supplied and supported by different vendors, all with their own support procedures. And this is where non-linearity steps in. In mathematics, a nonlinear system is one whose behavior can’t be expressed as a sum of the behaviors of its parts (or of their multiples) (more on Wikipedia). The challenge software/system/technical architects face today is expressing this non-linearity in terms of risk so that the business can understand it. It still is a thorny issue though.
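That definition can be sketched in a few lines of Python. The two functions below are illustrative stand-ins, not models of any real system: one obeys superposition (the whole is the sum of its parts), the other doesn’t — think of coordination overhead that grows with the square of the number of components.

```python
def linear(x):
    """A linear response: f(x + y) == f(x) + f(y) always holds."""
    return 2 * x

def nonlinear(x):
    """A nonlinear response, e.g. pairwise coordination cost."""
    return x * x

# Superposition holds for the linear system...
print(linear(3) + linear(4) == linear(3 + 4))        # True: 6 + 8 == 14

# ...but fails for the nonlinear one: the parts don't add up.
print(nonlinear(3) + nonlinear(4) == nonlinear(3 + 4))  # False: 9 + 16 != 49
```

This is why adding one more node to a distributed system rarely adds "one more node's worth" of risk — the interactions between parts contribute their own behavior on top.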
Do you come across these problems in your day job?