Peter da Silva wrote:

>               Publicly   Publicly     Easily
> System        Defined?   Controlled?  Implemented?
>
> C# and .NET   Yes        No           No
> C and UNIX    Yes        Yes          Yes

I don't know exactly how .NET is controlled, but when a definition is publicly available, the control only affects your ability to call the definition with your changes "a .NET version", not your ability to make changes. For example, I think most people today mean "C89", not "C99", when they say "C". The reason is not who controls the meaning of the word "C" - a single vendor or a bunch of them - but the fact that people prefer compatibility with C89 implementations to the new features of C99.

As to implementation simplicity, one reason Unix is easy to implement (a single person can do it) is the small amount of functionality. This is also true for C. Both are big achievements and landmarks as far as portability is concerned. But I find C way more useful than Unix: a machine with a C compiler is way more useful than bare metal, there's very little overhead, and there's no competition (except maybe Forth, which is slower and more toxic if your project really scales - which it shouldn't, according to most Forth philosophies).

On the other hand, today (as opposed to, maybe, the 70s) there are a lot of smaller systems with better support for things like real-time scheduling, so I wouldn't litter an embedded target with a copy of embedded Linux. So much for the "Unix is the kernel" definition. And if you have a sizable machine with a display and a mouse attached to it, you see X and a shitload of "windowing toolkits" or the like, and these are not easy to implement, barely specified, and worse than any competition. That's what I have to say about the "Unix is what I see when I run Unix" definition. So Unix is either easy to implement but not that useful, or hard to implement and still not that useful. Unless what I need is a stable TCP stack, in which case I might litter an embedded target with a copy of embedded Linux.
But I never cared about that and thus never bothered to check the competition. Oh, and there's the compatibility issue. Perl runs on what they call "75 platforms". Most of these call themselves "Unix". It's easy to implement "a Unix", all right; I wonder what "the Unix" is, though. AFAIK SVR4/BSD is just the major fork. And I'm talking about things like mmap, not the location of /bin/env - which is maybe even more important, since, for God's sake, I deal with the system, not the kernel.

> This is not fantasy. It's documented in material from the trials and
> independently by people like Marlin Eller (ex-Microsoft exec). People are
> concerned about Microsoft's control of C# and .NET not because of what
> Microsoft could *theoretically* do, but because lock-in has been Microsoft's
> business model for over 20 years.

Of course lock-in is their business model. If it weren't, they'd publish the formats of MS Office documents. I also heard of all kinds of barely legal things, like Windows 3.11 auto-detecting an underlying DR-DOS and refusing to work. Most vendors dream about lock-in. Some can afford locking customers into their products and some can't. Sometimes an evil vendor decides to go for a lock-in. And sometimes the evil vendor can't get away with it, so it goes for the next best option: an open standard defined by the evil vendor, so that the evil vendor will be the best at implementing it. I think x86 is a good example of an open standard which everyone with a 2 billion project budget can freely implement.

And then again, "the vendor is evil" is one thing, and "the product sucks" is a different thing. The road to the open system Perl, you know, is paved with good intentions. And why did Windows win the desktop market despite its lock-in strategy (you can normally afford to go for a lock-in when people have no choice, not when you're only starting to gain market share)? Because it displayed *windows*.
On top of a horrible, bug-ridden, dysfunctional, primitive "kernel" (no, make that "a shitty filesystem and an ISR"). But before it crashed, the user could work with those windows at a sane speed. And what did Unix give its users? A kernel, and X Windows running on top of its stable TCP stack at speeds that made it a little bit too costly for a PC of the early 90s. Or was there some other reason that about 20 Unix implementations couldn't compete with a single shitty filesystem with an ISR? The attitude of "the OS is the kernel, not what the user sees" is very productive indeed.