The thing is that it's a bad idea to use the same code to display remote content from the Internet as for the local user interface. Remote content, by nature, can't be trusted - you don't know who provided it or what it does. So programs displaying that content need to be really paranoid and careful, and you ought to run them in a self-contained, isolated process that can't affect anything else that's going on.
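The principle is easy to sketch. Here's a toy illustration in Python (my own sketch, nothing to do with any real browser's code): the "renderer" runs as a separate child process, so if hostile content hangs or crashes it, only the child dies and the main program carries on. A real design would also drop the child's privileges and deny it access to local files, which this sketch doesn't attempt.

```python
import subprocess
import sys

# A stand-in "renderer" run in a separate process. Here it just strips
# tags from the input; imagine it were a full, complex HTML engine.
UNTRUSTED_RENDERER = r"""
import re, sys
html = sys.stdin.read()
sys.stdout.write(re.sub(r"<[^>]+>", "", html))
"""

def render_in_sandbox(html, timeout=5):
    """Render untrusted content in a child process.

    If the child hangs (malicious content) or crashes (an exploited bug),
    the caller gets None back and carries on - the main process's own
    state is never at risk.
    """
    try:
        done = subprocess.run(
            [sys.executable, "-c", UNTRUSTED_RENDERER],
            input=html,
            capture_output=True,
            text=True,
            timeout=timeout,  # a hung renderer is killed, not waited on
        )
    except subprocess.TimeoutExpired:
        return None
    return done.stdout if done.returncode == 0 else None

if __name__ == "__main__":
    print(render_in_sandbox("<b>hello</b> world"))  # -> hello world
```

The point of the design is that the boundary between the two processes is enforced by the operating system, not by the renderer's own code being bug-free.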
Local content, on the other hand - like a list of the contents of your own hard disk - you have to trust. The user interface of the machine must be able to view and manipulate anything on the machine, or it can't do its job.
These are two different roles. Using the same code for both is a bad plan, for several reasons.
For one, any exploits or vulnerabilities in the web browser automatically become vulnerabilities of the whole machine when that browser is part of the OS and always running. If the baddies can somehow sneak a dodgy file onto your computer, then merely opening that file gets your box owned, even if you're disconnected from the Internet; the vulnerability is always present. You can't really firewall a computer from itself.
Secondly, the two roles demand different functions and different ways of working. The local interface should be fast and slim and sleek, because there's no delay in retrieving the information to be displayed: it's right there on the machine already, and up to a point, it can therefore be trusted. Faster PCs are making it harder to spot, but on Windows 98 machines, you could often catch a glimpse of Explorer showing a grid of blank icons before it fetched and rendered the actual images - just like watching a web page display slowly over dial-up. The fact that better performance hides it doesn't excuse its being there in the first place.
The remote interface needs to be careful, paranoid, isolated from local storage, and it needs to cope with delays in stuff appearing. It needs to be complex and capable, to handle elaborate, fancy web sites, whereas the local file browser is only going to be displaying simple, known quantities - lists of files, previews of images and so on.
It may sound like having two sets of code to render and display images and so on is needless duplication, but that happens a lot on computers - it's a fact of life. Your kidneys and liver both perform functions of filtering your blood and removing nasties from it, but they're different enough versions of the superficially similar job that it's worth having two different types of organ to do it.
The web browser ought to be a separate subsystem with no connection to the machine's own user interface, freeing it to be large and clever. The local file browser should be simple, fast and responsive. There's no need to turn the view of the local filesystem into HTML, then pass that HTML through the web browser. Yes, it makes it easier to have fancy, customisable views with task-specific bars down the side and so on, but this is an inefficient way of doing it.
(And yes, I know that KDE does the same thing with local and remote browsing, but then, firstly, KDE runs on a proper, secure OS with restricted privilege levels; secondly, it has no ActiveX to contend with; and thirdly, its developers are considering replacing Konqueror as the file manager in the next version. Personally, I shifted to GNOME years ago, anyway.)
Windows 95 didn't include IE, so the Windows Explorer didn't do any of this HTML stuff. And the same Explorer powered NT4, too. Yet it was the same basic GUI as we have today - task bar, Start menu, folder windows and all. The Windows 98 Explorer brought in lots of handy extras - customisable toolbars and window views, thumbnail icons, JPEG wallpapers, drag and drop Start menu editing, all that sort of thing. Some of its features are really hard to live without, like multithreaded file operations - while it's copying or moving files, you can get on with something else. The old Win95/NT4 Explorer froze up until the operation was finished. The old progress bars didn't work, either - they showed the progress of each file, not the whole operation, so all you saw during multi-file operations was a blurred, flickering bar that was constantly being redrawn - telling you precisely nothing about how far the job had got.
But fixing the old Explorer wouldn't be that hard. Adding working progress bars, multithreading, thumbnails, customisation and so on does not have to mean using IE to display everything. Netscape is dead and gone and the Mozilla Foundation doesn't care; nobody is going to force Microsoft to remove IE from Windows now, or make it prove that IE is an integral part of the OS. By now it is, and has been for nearly a decade.
A simpler, cleaned-up IE with no ActiveX, which played no part in the local GUI, would make Windows safer against attack. A simpler renderer for local JPEG and HTML content, so that HTML Help files and so on still work, would be an easy job. At the same time, the use of the IE renderer to display everything from Windows Messenger chats to emails in Outlook and the display in Media Player brings IE's vulnerabilities to all those programs as well. They ought to be using a simpler, safe renderer too, rather than the full-on IE browsing code.
In an ideal world, I'd like to see IE completely replaced. There are enough HTML rendering engines out there - Mozilla's Gecko, KDE's KHTML (the basis of the WebKit engine used in Apple's Safari browser), Opera's code and so on. No need to use an open-source one - just buy Opera or hire the developer of the Mac browser iCab, say. There were too many bad decisions made in the course of IE's development, some of them motivated by commercial concerns like the Netscape lawsuit rather than the technical considerations which should rule such a sensitive, critical piece of software.
Yes, getting rid of IE would break some websites, but then, IE7 does that anyway, and IE8 will doubtless break more. It would be a price worth paying.
These are the sorts of changes that could have made a really big difference to Vista. Not bolting on new code to hedge around the dangerous bits with warning messages and reduced execution privileges, but adopting the same models that everyone else uses - limited user accounts, hidden or inaccessible admin logins, and strict isolation of untrusted remote content from local files, handled by different rendering engines.
These are big changes and they would have caused lots of problems, but that always happens anyway; it's unavoidable. They wouldn't by any means have fixed Windows altogether and made it a completely safe system, but they would have been big steps in the right direction.
It's never going to happen now - it's too late for Vista, and after this, there will probably never be such a big change in Windows again, until it's replaced with something new.
But here's a fun thought. What if Microsoft were held legally responsible for all those vulnerable, insecure Windows installations out there? You may not have heard of it, but there's a special lightweight edition of XP for turning old PCs into thin clients, called Windows Fundamentals for Legacy PCs. It's the core of XP with almost all the features removed - it can't even "ping" an IP address - but it's still the same old XP. This version, though, will run on pretty much any old box that will run Windows 98: 64MB of RAM and a few hundred meg of disk is enough.
How about a special edition of Windows for all those people who won't or can't upgrade to Vista, given away as a free upgrade - just as Outlook 98 was given away on magazine cover disks to get people to replace the woefully buggy Outlook 97? An enhanced "Vista Fundamentals", with a fixed, safe Explorer, no privileged accounts and a sanitised IE, and none of Vista's whizzy new features, free to anybody with an existing version of Windows. That would go a long way toward persuading people to finally abandon all the old versions of Windows and upgrade. And lots of those people might then be tempted into buying the full Vista. No, I know, it'll never happen. But wouldn't it be nice if...?
The INQuirer