In another thread, yet another person has a broken Windoze machine, this one because of a virus. There was also an earlier thread about problems with Vista. Both are painfully familiar refrains.
Quite a few people in China cannot do Microsoft updates because their "pirate" copy does not pass MS-update's validation checks (though some do, I think). Windows can be tolerably secure with the patches, but wandering about the net with an unpatched Windows box is distinctly risky, approximately as sensible as frequenting whorehouses without taking condoms. If you must run Windows, you absolutely must get access to the updates! If that means paying Microsoft, rough luck.
To me, the obvious solution is to get rid of Windows. One way is just to get a Mac; I'd say almost anyone who is buying a new computer anyway should consider that.
But running a Microsoft-free PC is also quite possible. There are many choices of free operating systems for PCs, all in some ways superior to Microsoft products, and all free or cheap. Combine one of these with cheap PC hardware and you have the best bang-for-the-buck solution in sight.
For anyone who wants to cut to the chase, what I use and would recommend for non-technical home or laptop users is Xubuntu. http://www.xubuntu.org/
This post describes most of the available choices in possibly excruciating detail. Every opinion I express below is debatable; some would start a "religious war" in hacker circles.
You can form your own opinions, too. Every system described below has a web site; look at a few and you may see one or more where you say "I like their style." Most provide a "live CD" version of their software; you can download it, burn a CD and boot from the CD to try it without changing anything on your hard drive. If you don't like it, or if some hardware in your PC does not work with it, give away or throw away the CD. Or go back to the web site looking for help with your system/hardware issue; maybe it is easily fixed.
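It is worth checking the download before you burn it; the distros publish checksum lists for exactly this. A sketch of the idea (the file names here are made up, and to keep it self-contained the "ISO" is a stand-in file we generate ourselves; with a real download you fetch the checksum list from the distro's site instead):

```shell
# Check a downloaded live-CD image against a published checksum list
# before burning it. "xubuntu.iso" is a made-up name for illustration;
# we create a stand-in file and its checksum so this runs anywhere.
echo "pretend this is an ISO image" > xubuntu.iso
sha256sum xubuntu.iso > SHA256SUMS      # normally downloaded, not generated
sha256sum -c SHA256SUMS                 # prints "xubuntu.iso: OK" on a match
# Burning would then be something like (commented out here):
#   wodim -v dev=/dev/cdrw xubuntu.iso
```

A corrupted download is the most common reason a live CD fails to boot, so this one check saves a lot of wasted blank discs.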
The free Unix systems are all complete solutions for common computing needs. With Windows, you may be able to get a cheap "Home" version, or may find it included with your laptop. However, that won't work right on a Windows-based office network; you need to "upgrade" to the "Professional" version to be able to use network authentication. The machine that provides the authentication service has to have the "Server" version. In contrast, every system described below (except perhaps Mac; I do not know how Apple packages things) comes with all the above built in. If you need some "server" features, including a database manager, you can just turn them on.
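To make "just turn them on" concrete, here is a rough sketch for a Debian-style system: each "server" feature is an ordinary package plus an init script. The package names below are the usual Debian/Ubuntu ones, but check your own distro's catalogue; the install lines need root, so they are shown commented out:

```shell
# Turning on "server" features on a Debian-style system: each one is
# just a package plus an init script. Needs root, hence commented out:
#   sudo apt-get install openssh-server   # remote logins
#   sudo apt-get install samba            # file service for Windows clients
#   sudo apt-get install postgresql       # a database manager
# A service is then started with its init script, e.g.:
#   sudo /etc/init.d/ssh start
echo "server features: install the package, start the service"
```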
No version of Windows comes with a programming language, or database management software, or a suite of office tools. Those are separate products at a few hundred dollars each. There are package deals that include Windows and one or more other products, cheaper than buying everything separately but still a significant chunk of change.
For any Unix, if you need a programming language, there are several built in and many others are easily added. Many distributions include office tools and a database; if not, adding them is straightforward, often just a few menu clicks.
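For instance, awk and the shell itself are on any Unix out of the box, and heavier languages are one package away (install lines need root so they are commented; the package names are real Debian/Ubuntu ones, but check your distro):

```shell
# Several languages come with any Unix; awk and the shell are two of them.
# Extra ones are a package install away, e.g. on Debian/Ubuntu:
#   sudo apt-get install gcc      # C compiler
#   sudo apt-get install perl     # Perl (often already present)
awk 'BEGIN { print "awk is built in" }'
```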
Unix has been around in various forms since the early 70s, and Microsoft has copied a lot from it. The original Unix was written at Bell Labs, the research facility that invented the transistor and the laser. It was designed for research work, in a computer science research group; there was plenty of innovation and some of the best documentation of the era, but no hand-holding for amateurs. The old joke "Unix is user-friendly; it is just really choosy about who its friends are" has more than a grain of truth.
On the other hand, if you have to muck about at low levels of the system, Windows is arguably even worse; ever tried to edit the registry?
Unix is a wonderful system for software developers, a good platform to build other things on top of. Quite a few have built various user-friendly environments. The most obvious example is the Mac, which has FreeBSD (a Unix) under the hood. Every system discussed below has some sort of nice window-based front end as well; most offer a choice of several different ones.
There are three big groups of Open Source Unices (plural of "Unix"). Linux is best-known, but various *BSD distributions and Open Solaris are also worth considering. To describe the differences, I need to wax historical for a while. Bear with me, or skip paragraphs as seems appropriate.
Unix started off at Bell Labs, early 70s. By the 7th Edition, 1979, Unix was quite widespread throughout AT&T and in universities (who got it cheap) and was in use elsewhere. All modern Unices have more-or-less everything 7th Edition did, but things diverge from there.
ARPA, the Advanced Research Projects Agency of the US Defense Department built something called the ARPAnet, starting in 1969. Prime contractor was BBN, a consulting company that started off as acoustical physics profs designing opera houses but -- via wartime sonar work and postwar research on the sound barrier -- eventually ended up with a stable of top computer people and lots of military contracts.
Eventually, ARPAnet evolved into the Internet. A key change was replacing BBN's original low-level protocols -- designed to link a few dozen computer centers -- with a much more flexible system called TCP/IP. That's still what we use today.
The first operating system with TCP/IP support, around 1980, was Berkeley Unix. It was by no means all written at Berkeley; ARPA paid for it, and BBN and several other universities were involved, but it was a Berkeley Software Distribution, BSD 4.2.
BSD quickly became the standard for the many vendors springing up with engineering workstations; networking was essential for them. These were powerful machines for the day: since it started in the early 80s, Sun has never shipped a monitor with resolution lower than 1152*864 or a machine without ethernet, and they had 64-bit CPUs by the early 90s. Silicon Graphics was founded by the guy whose thesis invented the graphics co-processor. PCs have generally been roughly 10 years behind the workstation market, though the gap seems to be shrinking. Arguably, if you're going to use hardware based on innovations pioneered in those systems, you might as well use the software too. That would be a BSD Unix.
Today, there are several free BSD descendants available. There has been some fragmentation; when developers cannot agree on where the system should go next, they sometimes just form two teams which each take it in a different direction.
One split was over whether to try and be very portable and run on lots of platforms (NetBSD) or concentrate more on making it work really well on PCs (FreeBSD). The current MacOS is based on FreeBSD, and I'd say FreeBSD is the obvious choice among BSDs for PCs.
Consider NetBSD if you want to use anything but a PC.
OpenBSD, a NetBSD spinoff, really emphasizes security and correctness; they won't include code they have not audited. When Whizbang 8.0 comes out and everyone switches over for the wonderful new features, OpenBSD does not include it until the audit is done and it seems mostly bug-free. They might continue with Whizbang 7.3 or they might just not support Whizbang at all, leaving it to users to download and build it if they need it. If a manufacturer of a graphics card, ethernet card or whatever does not release source code of drivers, or at least detailed device specs so a driver can be written, then OpenBSD does not support the device. Not something I'd want for my desktop, but definitely the first choice for an office firewall.
Dragonfly BSD is a new spinoff of FreeBSD, redesigning much of the low-level OS to better support multiple processors and load sharing over the network. For now, anyone except developers interested in those issues should steer well clear, but it may be more broadly interesting later. Intel's next generation, due late this year, goes up to 8 cores on a chip, each hyperthreaded so the OS thinks it has 16 processors. By the time those become cheap and common, or the next generation turns up with even more cores, Dragonfly may be just the thing to run on them.
For more on any of those, see freebsd.org, netbsd.org etc.
Back to 1980 or so: AT&T decided Unix could be a product, and it should be spelled UNIX. They came up with System III and then System V. I remember a Unix conference mid-80s where AT&T marketers all had buttons on their suits "System V: Consider it Standard". They did pretty well with that line, too; quite a few big players had sysV-based products. However, others kept with BSD-based systems. Somewhere I think I still have a "4.2 > V" poster, from the same conference; you do need networking and sysV did not have TCP/IP built in.
There were efforts to sort the conflicts out. The IEEE came up with a standard called Posix, listing everything any Unix should have. Also, Sun and AT&T co-operated to produce System V Release 4, a sort of "best of both worlds" hybrid.
Sun's SysVr4 product was called Solaris, replacing the pure BSD SunOS. Today, Open Solaris is free for download http://www.sun.com/software/solaris/index.jsp
Definitely the choice if you feel better having a big company behind a product, though on the other hand HP and IBM are heavily involved in Linux, and Sun is a player there too. Worth considering in any case.
Then there's Linux, also out of academia but by a somewhat different route. In the early 80s most universities had Unix, including the source code, but the Unix license does not let you put Unix source on your slides or in your textbook. If you are teaching an operating systems course, you need code there. So an American prof in Amsterdam named Andrew Tanenbaum wrote Minix, basically a 7th Edition clone, that ran on PCs. His textbook, using Minix for all the examples, was very widely used. Minix was not free software, but you could buy the disks from his publisher.
The original PC and AT had 16-bit CPUs. Minix ran on those, or in 16-bit mode on Intel's 80386, and on the 32-bit Mac, but not in 32-bit mode on the 386. Tanenbaum showed little interest in fixing that; it would run just fine on a 386 in 16-bit mode, all you need in a teaching tool. Why make major changes to the code, especially if they mean you have to revise the book?
Nobody could distribute a 386 Minix; the publisher's copyright prevented that. What you could distribute were patch files that let anyone with a copy of the 16-bit version (presumably duly licensed) convert it to the 386 version. This was complicated and inconvenient in a bunch of ways.
Then a student doing a course based on Tanenbaum's book wrote a standalone 386 kernel that used the Minix file system and utilities. He announced it on the comp.os.minix newsgroup in 1991. Some of the reactions were along the lines of "Go away, kid. We're trying to do serious work here, getting these blasted patches to install", but other people reacted much better, in fact pitched in and started improving it. The kid was named Linus, so he called his kernel Linux.
Meanwhile, the Free Software Foundation had been developing a system called GNU (GNU's Not Unix), designed to be Unix-like and Posix-compliant but entirely free. Free in the sense of "free speech" not "free beer"; the issue is the user's freedom to modify, not cost. Their kernel project was bogged down in complexity, nowhere near ready, but they had a nice collection of other tools. These were designed for portability to more-or-less any 32-bit system, but most would not work on 16-bit systems.
Linux was a free 32-bit system. Add the GNU stuff (much of which the BSDs also use), some other things like the X Window system developed at MIT, and some borrowed BSD stuff. Presto; you've got a complete system.
Various organisations build Linux distributions, neatly packaged systems with all the above and maybe some of their own add-ons. Each has a somewhat different method, combination of features and target market. To some extent, they all co-operate and quite a few use code developed by other distros.
Debian, Gentoo and Slackware are distros that seem to be more BSD-like, aiming at something small, clean and solid that you can add to as needed rather than a big complex system with everything you might need. Most experts seem to prefer a BSD, Solaris or one of these.
Other distros aim more at plug-and-play completeness. Red Hat, whose current system is called Fedora, appear to be biggest in the US. The German company SuSE (recently bought by American Novell) have a big piece of the market in Europe. Mandriva are an amalgamation of European Mandrake and Conectiva, the main vendor in Latin America. The Japanese distro Turbolinux sells all over Asia. There's even at least one Chinese distro, Red Flag. http://www.redflag-linux.com/eindex.html
Then there's Ubuntu (http://www.ubuntu.com/), a relatively new distribution based on Debian. It uses the Debian package management system, so adding more-or-less anything you need is straightforward. There are three variants using different window managers, so here comes a digression to explain what a window manager in X is.
X is a distributed Window system; the X server runs on your workstation, controls the keyboard and mouse, and provides display services to client programs. Each window can have a different client and the clients can be anywhere. You can be at your desktop but running your word processor on some other machine, compiling code on another and running a simulation on a third; all can display on your desktop machine.
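A sketch of what that looks like in practice (host names here are invented; with ssh, the -X flag is the usual way to get the forwarding set up for you):

```shell
# Running a client on another machine but displaying locally.
# Host names are invented; ssh -X forwards the X connection:
#   ssh -X build-box xterm &     # xterm runs on build-box, window appears here
# Under the hood, each client reads the DISPLAY variable to find its server:
DISPLAY="mydesktop:0.0"
echo "a client started now would draw on $DISPLAY"
```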
The X documentation says "We provide mechanism, not policy." None of the choices about how to display each window -- borders, buttons, etc. -- are hardwired into the X server. Most systems use a Window manager, a program that accepts requests from client programs, adds all the extra details and passes them on to the X server. There are several dozen different Window managers available, including ones that emulate Windows or the Mac quite well. A catalog is at http://xwinman.org/
Beyond that, there are a dozen or so more ambitious systems available, more-or-less complete desktop environments for X each with all the accessories -- a calendar program, an email handler, MP3 player and so on -- plus a programmer's interface so people can build more.
The GNU project have a desktop environment called Gnome, large and complex but popular; the normal version of Ubuntu uses that. Kubuntu uses KDE, somewhat smaller but still a rich and complex system, also quite popular.
Neither of those runs well on slower or smaller machines. For those, Ubuntu provide Xubuntu, which uses XFCE, a much smaller and lighter desktop environment built with the same GTK toolkit as Gnome, so most or all Gnome add-ons can be used with it.
I have quite a powerful PC, but Xubuntu is a good compromise for me. On good hardware it is fast enough, though not nearly as quick as just picking a lightweight window manager and dispensing with all the overhead of a desktop environment. That's what most hackers would do, but I want the convenience.