Fallacies of Distributed Computing

I came across a Wikipedia page on the Fallacies of Distributed Computing. Read these and tell me if they ring any bells:

  1. The network is reliable.
  2. Latency is zero.
  3. Bandwidth is infinite.
  4. The network is secure.
  5. Topology doesn’t change.
  6. There is one administrator.
  7. Transport cost is zero.
  8. The network is homogeneous.

Great, aren’t they? If you’ve worked in IT for any measurable amount of time, you’re bound to have assumed, or seen someone assume, one of the fallacies above. And they will always come back to haunt you!
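Fallacy #1 alone is enough to bite in practice. A minimal sketch, assuming a hypothetical `flaky_service` standing in for a remote call, of the kind of retry wrapper that distributed code ends up growing once you stop assuming the network is reliable:

```python
import time

def call_with_retries(fn, attempts=3, delay=0.1):
    """Call fn, retrying on failure -- because the network is NOT reliable."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts:
                raise  # out of retries: surface the failure to the caller
            time.sleep(delay)  # crude fixed backoff before trying again

# Hypothetical stand-in for a flaky remote call: fails twice, then succeeds.
calls = {"count": 0}

def flaky_service():
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("network glitch")
    return "ok"

print(call_with_retries(flaky_service))  # prints "ok" on the third attempt
```

Real systems layer on exponential backoff, jitter and idempotency checks, but even this toy shows how much ceremony disappears the moment you pretend fallacy #1 away.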

Nigel, one of my colleagues, is a firm believer that things were better in the mainframe days: simpler, more controlled and infinitely more reliable. His favourite anecdote recounts a speaker at a conference who went up on stage bouncing a basketball, comparing it to a mainframe: easy to control. He then pulled out a bucket of ping pong balls, threw it into the audience and asked them how they could control that. I think the comparison is a bit harsh; however, it does illustrate the non-linearity that distributed computing suffers from when it comes to control, manageability and fault handling.

Personally, I think it’s a matter of market/technology granularity. Yes, a mainframe may seem more dependable, but that’s mainly because you’re dealing with one supplier who gets called in when there’s an issue. The end-user never sees the individual components that make up the system; it’s a black box that someone else manages, so of course it looks like an easier environment. Distributed apps are likely to be running on disparate hardware, across multiple operating systems, sourced from multiple suppliers and supported by different vendors, each with their own support procedures. And this is where non-linearity steps in. In mathematics, a nonlinear system is one whose behaviour can’t be expressed as a sum of the behaviours of its parts (or of their multiples) (more on Wikipedia). The challenge software, system and technical architects face today is expressing this non-linearity in terms of risk so that the business can understand it. It remains a thorny issue, though.
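For the mathematically inclined, linearity is just the superposition property: a system $f$ is linear when

```latex
f(x + y) = f(x) + f(y), \qquad f(\alpha x) = \alpha f(x)
```

and a distributed system's failure behaviour conspicuously breaks both conditions: adding a second unreliable component does not simply add its risk to the first, it multiplies the ways the pair can fail together.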

Do you come across these problems in your day job?

3 comments

  1. bells ringing – I perfectly agree with you on this one. I still remember the fallacies of distributed computing from uni :). What I want to add is that despite being distributed, the distribution should remain transparent at all times! That is one of the requirements of a distributed system. Def: distributed systems should hide their distributed nature from their users, appearing and functioning as a normal centralized system 🙂 Transparency is a big topic and it incorporates access transparency, as well as location, migration, concurrency, persistence, failure and security transparency. These are worth thinking about when designing a distributed system.

  2. Dear sir – When I saw your first name, at a comment at the latest entry at Venomous Kate’s weblog, for a moment I thought that it’s possibly perhaps a possibility that Owen Courreges’s weblog was back online… after like 3 to 4 years. As I implied in the comment posted at that entry of Kate’s – and as I’ve stated before – I am not a fan of change — and that includes the Blogosphere and WWW.

    Is this a weblog focused on computing and technology issues? I am interested in that, and have been involved for a while… but not at the level that some people, in the Blogosphere, are.
