[BBLISA] "How to Really Scare Microsoft"
Tom Metro
tmetro+bblisa at vl.com
Thu Nov 10 22:21:20 EST 2005
Adam S. Moskowitz wrote (off list):
> I think you missed one of Marcus' central points . . .
> For the average user -- and I don't mean "average technical user" but
> rather your mother or your department's secretary -- Linux is an even
> bigger pile of crap than Windows is! Hand-edited X config files?
Perhaps it is, but a big source of that mess is the need to support a
near infinite variety of hardware, combined with a lack of friendly
automation that would let the system magically figure out how to work
with all of it.
If we're starting with the premise that these systems will be built upon
a small set of possible hardware variations, then the software can ship
with an X config burnt onto the CD that is known to work with that hardware.
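For what it's worth, the sort of pre-built config I'm imagining could be
quite small. Here's a rough sketch, assuming the generic vesa driver and
a single known monitor (the identifiers and ranges are just placeholders,
not taken from any real product):

    Section "Device"
        Identifier "Card0"
        Driver     "vesa"        # the one video chipset chosen at build time
    EndSection

    Section "Monitor"
        Identifier  "Monitor0"
        HorizSync   30-70        # safe ranges for the one monitor we ship
        VertRefresh 50-75
    EndSection

    Section "Screen"
        Identifier   "Screen0"
        Device       "Card0"
        Monitor      "Monitor0"
        DefaultDepth 24
        SubSection "Display"
            Depth 24
            Modes "1024x768"
        EndSubSection
    EndSection

Every value gets decided at build time, so the user never has to see the
file, let alone hand-edit it.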
Once you get past making the software run correctly on the user's
hardware, I think the desktop GUI experience is comparable between the
two platforms, at least for a limited set of applications. There are
Linux distros available today that would let a non-technical user run
Firefox, Thunderbird, and OpenOffice, and browse a directory, without
really knowing which OS they were using.
> ...I tried several times to install either FreeBSD
> or SuSE or Ubuntu on my desktop box -- and it never worked.
But again, you're asking the software to support an infinite combination
of hardware.
>> As a friend pointed out after the talk, that extra layer of abstraction
>> is of great benefit to programmers and the ability to maintain the code.
>
> A bigger benefit and more helpful to maintainability is reducing the
> code base by ~50 percent...
OK, so let's throw out the drivers for all but the two video cards that
we intend to support, all but the three Ethernet cards, etc. That should
equal or exceed the 50% mark.
> Making it easier for the programmer is
> the wrong thing to do: The programmers are the smart ones -- let *them*
> "suffer" through programming without an abstraction layer. *All* that
> matters is how easy the system is for the end user.
But the piece that is missing in this argument is how making the job
harder on the programmer (and more costly for the vendor developing the
code) makes it easier on the end user. I'm not seeing the connecting
thread that leads to greater stability or greater performance.
Any reduction in code can theoretically lead to greater stability, but
well-defined APIs can also be a tool to reduce bugs and discourage bad
programming (layer violations that lead to buggy, unmaintainable code).
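As a made-up illustration of the kind of API I mean, picture a storage
interface where the only thing the upper layers ever see is a handful of
calls; the controller registers stay private to the driver, so there is
no way to reach around the layer. (All of the names below are invented
for the example.)

    /* Hypothetical block-device API: the struct is opaque to callers,
     * so file systems and applications can't poke controller registers
     * directly, which is exactly the sort of layer violation that
     * breeds unmaintainable code. */
    #include <stddef.h>
    #include <stdint.h>

    struct block_dev;                    /* defined only inside the driver */

    struct block_dev *blk_open(const char *name);
    int  blk_read(struct block_dev *dev, uint64_t sector,
                  void *buf, size_t count);
    int  blk_write(struct block_dev *dev, uint64_t sector,
                   const void *buf, size_t count);
    void blk_close(struct block_dev *dev);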
Performance is hardly even worth mentioning as a benefit when we're
talking about CPUs that are several times faster - if not an order of
magnitude faster - than what the typical user needs.
> Why, because for every programmer...there will be...maybe even
> 100,000 end users...
The real party that should "suffer" in this equation is the CPU.
Programming labor is expensive. CPU time is cheap. As long as stability
is maintained, optimize for programmer efficiency.
> I'd *much* rather have a small system to have to learn than a gigantic one
> that supposedly makes it easier for me to write my code.
True, provided it is a *much* smaller system.
Something to consider here is whether the abstraction layers are
excessively bloated by their need to support a near infinite combination
of hardware. If so, then ditching the unnecessary drivers doesn't help
much. But I don't see this as necessarily the case, if the abstraction
layers are well designed.
An appropriate amount of abstraction doesn't necessarily make the
programmer-visible API larger. In fact, once you do start introducing
some hardware variation, it makes it smaller: one driver interface is a
smaller thing to learn than two register-level programming models.
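To make that concrete with another invented example: if the caller-facing
interface is a single table of operations, adding a second supported
video card means adding another implementation behind the table, but the
API the application programmer sees doesn't grow at all.

    /* Hypothetical video abstraction: one small, stable interface,
     * no matter how many cards sit behind it. */
    #include <stdint.h>

    struct video_ops {
        int  (*set_mode)(int width, int height, int depth);
        void (*blit)(const uint32_t *pixels, int x, int y, int w, int h);
    };

    /* Each supported card supplies one table; the right one is
     * selected once at boot. */
    extern const struct video_ops card_a_ops;   /* first supported card  */
    extern const struct video_ops card_b_ops;   /* second supported card */

    /* Toolkits and applications are written against struct video_ops
     * only; one driver or five, the programmer-visible API is the
     * same size. */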
> Besides, if there's only one kind of hardware, who needs an
> abstraction layer? Bah!
I'm sure my friend who has worked in embedded development will back me
up on this... I believe it is not unusual to use abstraction layers on
embedded systems, even when there is little chance that the hardware
will change. It's just a matter of good design.
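For example (and this is just a sketch, not any particular vendor's
API), even a board with exactly one LED and one revision benefits from
a thin layer like this:

    /* Hypothetical one-board HAL: application code only ever calls
     * led_init()/led_set(); the register address and bit position
     * live in exactly one place, even though the hardware never
     * varies. */
    #include <stdbool.h>
    #include <stdint.h>

    #define GPIO_OUT_REG ((volatile uint32_t *)0x40020014u)  /* made-up address */
    #define LED_BIT      (1u << 5)                           /* made-up pin     */

    static inline void led_init(void)
    {
        /* A real board would also configure the pin direction here. */
        *GPIO_OUT_REG &= ~LED_BIT;
    }

    static inline void led_set(bool on)
    {
        if (on)
            *GPIO_OUT_REG |= LED_BIT;
        else
            *GPIO_OUT_REG &= ~LED_BIT;
    }

When the next board revision moves the LED to a different pin, the
application code doesn't change, which is the whole argument for the
layer.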
In any case, unless there is a strong argument that the stability of the
currently available Linux kernel and drivers is inadequate for this
concept, the idea of an ideal OS with an optimally small (or
nonexistent) abstraction layer isn't that important. Constraining the
hardware choices largely makes the issue moot.
That's not to say that someone shouldn't go off and develop such an OS,
or that there might not be justifiable gains from it, but in the short
term you can get most of the benefits without that development effort,
which raises the question of whether the effort will ever be justified.
-Tom
--
Tom Metro
Venture Logic, Newton, MA, USA
"Enterprise solutions through open source."
Professional Profile: http://tmetro.venturelogic.com/