
Executive Summary

Much ado has been made about whether or not Linux is truly more secure than Windows. We compared Windows vs. Linux by examining the following metrics in the 40 most recent patches/vulnerabilities listed for Microsoft Windows Server 2003 vs. Red Hat Enterprise Linux AS v.3:

The severity of security vulnerabilities, derived from the following metrics:

1. damage potential (how much damage is possible?)

- exploitation potential (how easy is it to exploit?)

- exposure potential (what kind of access is necessary to exploit the vulnerability?)

2. The number of critically severe vulnerabilities

We queried the United States Computer Emergency Readiness Team (CERT) database ... 39 of the first 40 entries in the CERT database for Windows are rated above the CERT threshold for a severe alert. When we queried the database about Red Hat, only 3 of the first 40 entries were above the threshold. When we queried it about Linux, only 6 of the first 40 entries were above the threshold.

So why have there been so many credible-sounding claims to the contrary, that Linux is actually less secure than Windows? There are glaring logical holes in the reasoning behind the conclusion that Linux is less secure. It takes only a little scrutiny to debunk the myths and logical errors behind the following oft-repeated axioms:


1. Windows only suffers so many attacks because there are more Windows installations than Linux, therefore Linux would be just as vulnerable if it had as many installations

2. Open source is inherently less secure because malicious hackers can find flaws more easily

3. There are more security alerts for Linux than for Windows, therefore Linux is less secure than Windows

4. There is a longer time between the discovery of a flaw and a patch for the flaw with Linux than with Windows

Myth: There's Safety In Small Numbers

This myth holds that Windows suffers more attacks only because malicious hackers concentrate on the largest installed base. While this may be true, at least in part, the intended implication is not necessarily true: that Linux and Linux applications are no more secure than Windows and Windows applications, and that Linux is simply too trifling a target to bother attacking.

This reasoning backfires when one considers that Apache is by far the most popular web server software on the Internet. According to the September 2004 Netcraft Web site survey, [1] 68% of Web sites run the Apache web server. Only 21% of Web sites run Microsoft IIS. If security problems boil down to the simple fact that malicious hackers target the largest installed base, it follows that we should see more worms, viruses, and other malware targeting Apache and the underlying operating systems for Apache than for Windows and IIS. Furthermore, we should see more successful attacks against Apache than against IIS, since the implication of the myth is that the problem is one of numbers, not vulnerabilities.

Yet this is precisely the opposite of what we find, historically. IIS has long been the primary target for worms and other attacks, and these attacks have been largely successful.

Perhaps this is why, according to Netcraft, 47 of the 50 Web sites with the longest running uptimes (times between reboots) run Apache. [2] None of the top 50 Web sites runs Windows or Microsoft IIS. So if it is true that malicious hackers attack the most numerous software platforms, why are hackers so successful at breaking into the most popular desktop software and operating system, and at infecting 300,000 IIS servers, yet unable to do similar damage to the most popular web server and its operating systems?

Astute observers who examine the Netcraft Web site URL will note that all 50 servers in the Netcraft uptime list are running a form of BSD, mostly BSD/OS. None of them are running Windows, and none of them are running Linux. The longest uptime in the top 50 is 1,768 consecutive days, or almost 5 years.

This appears to make BSD look superior to all other operating systems in terms of reliability, but the Netcraft information is unintentionally misleading. Netcraft monitors uptime based on how each operating system keeps track of it. Linux, Solaris, HP-UX, and some versions of FreeBSD record only up to 497 days of uptime, after which their counters reset to zero and start again. So every Web site running on Linux, Solaris, HP-UX and, in some cases, FreeBSD "appears" to reboot at least every 497 days. The Netcraft survey can never record more than 497 days of uptime for these operating systems, even if the machines run for years without a reboot, which is why they never appear in the top 50.

That may explain why Linux, Solaris and HP-UX cannot post numbers of consecutive days of uptime as impressive as BSD's -- even if these operating systems actually run for years without a reboot. But it does not explain why Windows is nowhere to be found in the top 50 list. Windows does not reset its uptime counter. Obviously, no Windows-based Web site has been able to run long enough without rebooting to rank among the top 50 for uptime.

Given the 497-day rollover quirk, it is difficult to compare Linux and Windows uptimes from publicly available Netcraft data.

Two data points are statistically insignificant, but they are somewhat telling, given that one of them concerns the Microsoft website. As of September 2004, the average uptime of the Windows web servers that run Microsoft's own Web site (www.microsoft.com) is roughly 59 days. The maximum uptime for Windows Server 2003 at the same site is 111 days, and the minimum is 5 days. Compare this to www.linux.com (a sample site that runs on Linux), which has had both an average and maximum uptime of 348 days. Since the average uptime is exactly equal to the maximum uptime, either these servers reached 497 days of uptime and reset to zero 348 days ago, or these servers were first put on-line or rebooted 348 days ago.

The bottom line is that quality, not quantity, is the determining factor when evaluating the number of successful attacks against software.

Myth: Open Source is Inherently Dangerous
The impressive uptime record for Apache also casts doubt on another popular myth: That open source code (where the blueprints for the applications are made public) is more dangerous than proprietary source code (where the blueprints are secret) because hackers can use the source code to find and exploit flaws.

The evidence begs to differ. The number of effective Windows-specific viruses, Trojans, spyware, worms and malicious programs is enormous, and the number of machines repeatedly infected by any combination of the above is so large it is difficult to quantify in realistic terms. Malicious software is so rampant that the average time it takes for an unpatched Windows XP to be compromised after connecting it directly to the Internet is 16 minutes -- less time than it takes to download and install the patches that would help protect that PC. [3]

Myths: Conclusions Based on Single Metrics

The remaining popular myths regarding the relative security of Windows vs. Linux are flawed by the fact that they are based only on a single metric - a single aspect of measuring security. This is true whether the data comes from actual research, anecdotal information or even urban myth.

One popular claim is that, "there are more security alerts for Linux than for Windows, and therefore Linux is less secure than Windows". Another is, "The average time that elapses between discovery of a flaw and when a patch for that flaw is released is greater for Linux than it is for Windows, and therefore Linux is less secure than Windows."

The latter is the most mysterious claim of all. It is an imponderable mystery how anyone can conclude that Microsoft's average response time between the discovery of a flaw and the release of a fix is superior to that of any competing operating system, let alone superior to Linux. Microsoft took seven months to fix one of its most serious security vulnerabilities (Microsoft Security Bulletin MS04-007, the ASN.1 vulnerability; eEye Digital Security documents the delay in advisory AD20040210), and there are flaws Microsoft has openly stated it will never repair. Microsoft Security Bulletin MS03-010, about a denial-of-service vulnerability in Windows NT, states that the flaw will never be repaired. More recently, Microsoft stated that it would not repair Internet Explorer vulnerabilities for any operating system older than Windows XP. Statistically speaking, seven months between discovery and fix might not have an overly dramatic effect on the average response time, provided one can find enough samples of excellent response times to offset anomalies like this, assuming they are anomalies. But it takes only one case of "never" to upset the statistical average beyond recovery.

This unsolvable mystery aside, consider whether it is meaningful to suggest that Linux is a greater security risk than Windows because the average time between the discovery of vulnerability and the release of a patch is greater with Linux than with Windows. Ask yourself this question: If you experienced a heart attack at this very moment, to which hospital emergency room would you rather be taken? Would you want to go to the one with the best average response time from check-in to medical treatment? Or would you rather be taken to an emergency room with a poor record for average response time, but where the patients with the most severe medical problems always get immediate attention?

One would obviously choose the latter, but not necessarily because the above information proves it is the better emergency room. The latter choice is preferable because it includes two metrics, one of which is more important to you at that precise moment.

It would be inexcusably irresponsible to recommend an emergency room for a heart attack based only on a single metric such as the average response time for all medical emergencies, especially when the other important information that would lead to a more ideal choice is readily available.

It is equally irrational and irresponsible to make a recommendation or a serious business decision based solely on a single metric such as the average elapsed time between a flaw's detection and fix for a given operating system, or the number of security alerts for any given product.

Any single metric is misleading in terms of importance. Let's consider the statement that there are more alerts for Linux software than for Windows. This statistic is meaningless because it leaves the most important questions unanswered. Of all the security alerts, how many of the reported flaws represent a tangible risk? How severe are those risks? How likely are they to expose your systems to serious damage? These questions are important. Which is preferable: an operating system with 100 flaws that expose your systems to little or no damage and cannot be exploited by anyone except local users with a valid login account and physical access to your machine? Or an operating system with one critical flaw that allows any malicious hacker on the Internet to wipe out all of the information on your server? Clearly, the number of alerts alone is not a meaningful metric for the security of one operating system over another.

Windows vs. Linux Design
It is possible that email and browser-based viruses, Trojans and worms are the source of the myth that Windows is attacked more often than Linux. Clearly there are more desktop installations of Windows than Linux. It is certainly possible, if not probable, that Windows desktop software is attacked more often because Windows dominates the desktop. But this leaves an important question unanswered. Do the attacks so often succeed on Windows because the attacks are so numerous, or because there are inherent design flaws and poor design decisions in Windows?

Many, if not most, of the viruses, Trojans, worms and other malware that infect Windows machines do so through vulnerabilities in Microsoft Outlook and Internet Explorer. To put the question another way: given the same type of desktop software on Linux (the most commonly used web browsers, email clients, word processors, etc.), are there as many security vulnerabilities on Linux as on Windows?

Windows Design
Viruses, Trojans and other malware make it onto Windows desktops for a number of reasons endemic to Windows and foreign to Linux:

1. Windows has only recently evolved from a single-user design to a multi-user model

2. Windows is monolithic, not modular, by design

3. Windows depends too heavily on an RPC model

4. Windows focuses on its familiar graphical desktop interface

Windows Has Only Recently Evolved from a Single-User Design to a Multi-User Model
Critics of Linux are fond of saying that Linux is "old" technology. Ironically, one of the biggest problems with Windows is that it has never escaped its own "old" legacy: a single-user design. Windows was originally built to allow both users and applications free access to the entire system, which means anyone could tamper with a critical system program or file. It also means viruses, Trojans and other malware could tamper with any critical system program or file, because Windows did not isolate users or applications from these sensitive areas of the operating system.

Windows is Monolithic by Design, not Modular
A monolithic system is one where most features are integrated into a single unit. The antithesis of a monolithic system is one where features are separated out into distinct layers, each layer having limited access to the other layers.

While some of the shortcomings of Windows are due to its ties to its original single-user design, other shortcomings are the direct result of deliberate design decisions, such as its monolithic design (integrating too many features into the core of the operating system). Microsoft made the Netscape browser irrelevant by integrating Internet Explorer so tightly into its operating system that it is almost impossible not to use IE. Like it or not, you invoke Internet Explorer when you use the Windows help system, Outlook, and many other Microsoft and third-party applications. Granted, it is in the best business interest of Microsoft to make it difficult to use anything but Internet Explorer. Microsoft successfully makes competing products irrelevant by integrating more and more of the services they provide into its operating system. But this approach creates a monster of inextricably interdependent services (which is, by definition, a monolithic system).

Interdependencies like these have two unfortunate cascading side effects. First, in a monolithic system, every flaw in a piece of that system is exposed through all of the services and applications that depend on that piece of the system. When Microsoft integrated Internet Explorer into the operating system, Microsoft created a system where any flaw in Internet Explorer could expose your Windows desktop to risks that go far beyond what you do with your browser. A single flaw in Internet Explorer is therefore exposed in countless other applications, many of which may use Internet Explorer in a way that is not obvious to the user, giving the user a false sense of security.

This architectural model has far deeper implications than most people realize, one being that a monolithic system tends to make security vulnerabilities more critical than they need to be.

Perhaps an admittedly oversimplified visual analogy may help. Think of an ideally designed operating system as composed of three spheres: one in the center, a larger sphere that envelops the first, and a third that envelops the inner two. The end user only sees the outermost sphere. This is the layer where you run applications, such as word processors. The word processors make use of commonly needed features provided by the second sphere, such as the ability to render graphical images or format text. This second sphere (usually referred to as "userland" by technical geeks) cannot access vulnerable parts of the system directly. It must request permission from the innermost sphere in order to do its work. The innermost sphere has the most important job, and therefore has the most direct access to all the vulnerable parts of your system. It controls your computer's disks, memory, and everything else. This sphere is called the "kernel," and is the heart of the operating system.

In the above architecture, a flaw in the graphics rendering routines cannot do global damage to your computer because the rendering functions do not have direct access to the most vulnerable system areas. So even if you can convince a user to load an image with an embedded virus into the word processor, the virus cannot damage anything except the user's own files, because the graphical rendering feature lies outside the innermost sphere and does not have permission to access any of the critical system areas.

The problem with Windows is that it does not follow sensible design practices in separating out its features into the appropriate layers represented by the spheres described above. Windows puts far too many features into the core, central sphere, where the most damage can be done. For example, if one integrates the graphics rendering features into the innermost sphere (the kernel), it gives the graphical rendering feature the ability to damage the entire system. Thus, when someone finds a flaw in a graphics-rendering scheme, the overly integrated architecture of Windows makes it easy to exploit that flaw to take complete control of the system, or destroy the entire system.

Finally, a monolithic system is unstable by nature. When you design a system that has too many interdependencies, you introduce numerous risks when you change one piece of the system. One change may (and usually does) have a cascading effect on all of the services and applications that depend on that piece of the system. This is why Windows users cringe at the thought of applying patches and updates. Updates that fix one part of Windows often break other existing services and applications. Case in point: Windows XP Service Pack 2 already has a growing history of causing existing third-party applications to fail. This is the natural consequence of a monolithic system - any change to one part of the machine affects the whole machine, and all of the applications that depend on it.

Windows Depends Too Heavily on the RPC Model
RPC stands for Remote Procedure Call. Simply put, an RPC is what happens when one program sends a message over a network to tell another program to do something. For example, one program can use an RPC to tell another program to calculate the average cost of tea in China and return the answer. The reason it's called a remote procedure call is because it doesn't matter if the other program is running on the same machine, another machine in the next cube, or somewhere on the Internet.
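
To make the mechanism concrete, here is a minimal sketch of an RPC exchange using Python's standard xmlrpc modules. The tea-price function, port number, and returned figure are all hypothetical, invented only to mirror the example above; the server and client would normally run as separate programs, possibly on separate machines.

    import threading
    from xmlrpc.server import SimpleXMLRPCServer
    from xmlrpc.client import ServerProxy

    def average_tea_price_china():
        """Stand-in for 'calculate the average cost of tea in China'."""
        return 1.42  # hypothetical figure, for illustration only

    # Server side: expose the procedure to anything that can reach the port.
    server = SimpleXMLRPCServer(("127.0.0.1", 8000), logRequests=False)
    server.register_function(average_tea_price_china)
    threading.Thread(target=server.serve_forever, daemon=True).start()

    # Client side: invoke the remote procedure as if it were a local call.
    # The client could just as easily be on another machine on the network.
    proxy = ServerProxy("http://127.0.0.1:8000")
    print(proxy.average_tea_price_china())  # prints 1.42

The point to notice is that the caller neither knows nor cares where the procedure actually runs - which is precisely what makes a flaw in an RPC-enabled service reachable from afar.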

RPCs are potential security risks because they are designed to let other computers somewhere on a network tell your computer what to do. Whenever someone discovers a flaw in an RPC-enabled program, there is the potential for someone with a network-connected computer to exploit the flaw and tell your computer what to do ... Ironically, some of the most serious vulnerabilities in Windows Server 2003 (see table in the section below) are due to flaws in the Windows RPC functions themselves, rather than in the applications that use them. The most common way to exploit an RPC-related vulnerability is to attack the service that uses RPC, not RPC itself.

Database servers are a case in point. The Slammer worm, one of the most profoundly dangerous worms ever to hit the Internet, exploited one of the most inappropriate uses of RPC-like network communications ever implemented by Microsoft. Slammer infected so many systems so quickly that it practically brought the Internet to a standstill.

The Slammer worm caused havoc by exploiting two flaws in Microsoft SQL Server, a client/server database engine. One of them involved a most improbable feature of Microsoft SQL Server: the ability to run more than one instance of the database server at a time on a single machine. Here is why it is improbable. If you are not familiar with database servers, picture it this way. Under normal conditions, it makes no sense to run multiple instances of a database server on a single machine, because one instance is all that is needed, even if many different applications use it. One would be about as likely to want to run two copies of Windows XP on a single machine at the same time as to want to run multiple database servers on a single machine at the same time. One rarely runs multiple instances of a database server on purpose, except in high-end applications or for testing and development. [4]

The easy way to allow multiple instances of SQL Server to run simultaneously without interfering with one another is to create an RPC mechanism that sorts out requests for data, so that a fax application queries its own copy of SQL Server, and a time-billing application queries yet another copy of SQL Server. To complicate matters, Microsoft development tools encourage the same monolithic approach Microsoft uses, so a broad range of applications - time-billing software, fax software, project management - almost 200 applications, many of them desktop applications, use the unnecessarily vulnerable SQL Server engine. As a result, hundreds of thousands, if not millions, of people use desktop applications that depend on the SQL Server engine with multiple network services enabled, many of which are exposed to the Internet. One could hardly concoct a better recipe for disaster.

As a result, Slammer found countless machines to attack because these features are enabled by default on every SQL Server engine. While SQL Server is not yet integrated into Windows, its ubiquity across applications from fax software to time-billing software made it effectively part of a larger monolithic system, opening an attack path symptomatic of such a design. Unfortunately, SQL Server is likely to be tightly integrated into WinFS, the new Windows file system originally slated for Longhorn. Anyone who thinks integrating SQL Server into the operating system is a good idea should consider what happened with the Slammer worm.

Windows focuses on its familiar graphical desktop interface
Microsoft considers its familiar Windows interface as the number one benefit for using Windows Server 2003. [5] To quote from the Microsoft Web site, "With its familiar Windows interface, Windows Server 2003 is easy to use. New streamlined wizards simplify the setup of specific server roles and routine server management tasks..."

By advocating this type of usage, Microsoft invites administrators to work with Windows Server 2003 at the server itself, logged in with Administrator privileges. This leaves the Windows administrator most vulnerable to security flaws, because using vulnerable programs such as Internet Explorer at the server exposes the server to security risks.

Linux Design
According to the Summer 2004 Evans Data Linux Developers Survey, 93% of Linux developers have experienced two or fewer incidents where a Linux machine was compromised; 87% have experienced at most one such incident, and 78% have never had a cracker break into a Linux machine. In the few cases where intruders succeeded, the primary cause was inadequately configured security settings.

More relevant to this discussion, however, is the fact that 92% of those surveyed have never experienced a virus, Trojan, or other malware infection on Linux.

Linux is based on a long history of well fleshed-out multi-user design
Linux does not carry the legacy of a single-user system; it was designed from the ground up to isolate users from applications, files and directories that affect the entire operating system. Each user is given a home directory where all of the user's data files and configuration files are stored. When a user runs an application, such as a word processor, that application runs with the restricted privileges of the user. It can write only to the user's own home directory. It cannot write to a system file, or even to another user's directory, unless the administrator explicitly grants the user permission to do so.
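
The effect of these restricted privileges is easy to demonstrate. The following minimal sketch, assuming it is run as an ordinary (non-root) user on a Linux system, shows the kernel permitting writes inside the user's home directory while refusing writes to a system file; the file names are illustrative.

    import os

    # Writing inside the user's own home directory succeeds.
    own_file = os.path.expanduser("~/example.txt")
    with open(own_file, "w") as f:
        f.write("user data\n")

    # Writing to a system file is refused by the kernel's permission
    # checks, no matter how the application (or malware) was written.
    try:
        with open("/etc/passwd", "a") as f:
            f.write("mallory::0:0::/:/bin/sh\n")
    except PermissionError as err:
        print("Blocked by file permissions:", err)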

Even more important, Linux provides almost all capabilities, such as the rendering of JPEG images, as modular libraries. As a result, when a word processor renders JPEG images, the JPEG rendering functions will run with the same restricted privileges as the word processor itself. If there is a flaw in the JPEG rendering routines, a malicious hacker can only exploit this flaw to gain the same privileges as the user, thus limiting the potential damage. This is the benefit of a modular system, and it follows more closely the spherical analogy of an ideally designed operating system (see the section Windows is Monolithic by Design, not Modular).

Given the default restrictions in the modular nature of Linux, it is nearly impossible to send an email to a Linux user that will infect the entire machine with a virus. It does not matter how poorly the email client is designed or how badly it may behave - it only has the privileges to infect or damage the user's own files. Linux browsers do not support inherently insecure objects such as ActiveX controls, but even if they did, a malicious ActiveX control would run only with the privileges of the user who is running the browser. Once again, the most damage it could do is infect or delete the user's own files.

In sharp contrast, Windows was originally designed to allow all users and applications to have administrator access to every file on the system. Windows has only gradually been re-worked to isolate users and what they do from the rest of the system. Windows Server 2003 is close to achieving this goal, but the methodology Microsoft has employed to create this barrier between user and system is still largely composed of constantly changing hacks to the existing design, rather than a fundamental redesign with multi-user capability and security as the foundational concept behind the system.

Linux is Modular by Design, not Monolithic
Linux is for the most part a modularly designed operating system, from the kernel (the core "brains" of Linux) to the applications. Almost nothing in Linux is inextricably intertwined with anything else. There is no single browser engine used by help systems or email programs. Indeed, it is easy to configure most email programs to use a built-in browser engine to render HTML messages, or launch any browser you wish to view HTML documents or jump to links included in an email message. Therefore a flaw in one browser engine does not necessarily present a danger to any other application on the system, because few if any other applications besides the browser itself must depend on a single browser engine.

The Linux kernel supports modular drivers, but it is essentially a monolithic kernel in which services are interdependent. Any adverse impact of this monolithic approach is minimized by the fact that the Linux kernel is designed to be as small a part of the system as possible. Linux adheres, almost to the point of fanaticism, to the philosophy: "Whenever a task can be done outside the kernel, it must be done outside the kernel." This means that almost every useful feature in Linux ("useful" as perceived by an end user) does not have access to the vulnerable parts of a Linux system.

In contrast, bugs in graphics card drivers are a common cause of the Windows blue-screen-of-death. That's because Windows integrates graphics into the kernel, where a bug can cause a system failure. With only a few proprietary exceptions (such as the third-party NVidia graphics driver), Linux forces all graphics drivers to run outside the kernel. A bug in a graphics driver may cause the graphical desktop to fail, but not cause the entire system to fail. If this happens, one simply restarts the graphical desktop. One does not need to reboot the computer.

Linux is Not Constrained by an RPC Model
As stated above in the section on Windows, an RPC (Remote Procedure Call) lets one program tell another program to do something, whether the second program is running on the same machine, a machine in the next cube, or somewhere on the Internet.

Even when Linux applications use the network by default, they are most often configured to respond only to the local machine and ignore any requests from other machines on the network.
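
A minimal sketch of what "respond only to the local machine" means in practice: a service bound to the loopback address 127.0.0.1 is unreachable from any other host, while binding to 0.0.0.0 would expose the same service to every host that can route to it. The port number here is arbitrary.

    import socket

    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Loopback-only: connections from other machines never reach us.
    # Binding to ("0.0.0.0", 9000) instead would accept them all.
    srv.bind(("127.0.0.1", 9000))
    srv.listen()
    print("Listening (local connections only) on", srv.getsockname())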

Unlike Windows Server 2003, you can disable virtually all network-related RPC services on a Linux machine and still have a perfectly functional desktop.

Linux servers are ideal for headless non-local administration
A Linux server can be installed, and often should be, as a "headless" system (no monitor connected) and administered remotely. This is often the ideal type of installation for servers, because a remotely administered server is not exposed to the same risks as a locally administered one.

For example, you can log into your desktop computer as a normal user with restricted privileges and administer the Linux server through a browser-based administration interface. Even the most critical browser-based security vulnerability affects only your local user-level account on the desktop, leaving the server untouched by the security hole.

This may be one of the most important differentiating factors between Linux and Windows, because it virtually negates most of the critical security vulnerabilities that are common to both Linux and Windows systems, such as the vulnerabilities of the Mozilla browser vs. the Internet Explorer browser.

Realistic Security and Severity Metrics
One needs to examine many metrics in order to evaluate properly the risks involved in adopting one operating system over another for any given task. Metrics are sometimes cumulative; at other times they offset each other.

There are three very important metrics, represented as risk factors, which have a profound effect on one another. The combination of the three can have a dramatic impact on the overall severity of a security flaw. These three risk factors are damage potential, exploitation potential, and exposure potential.

Elements of an Overall Severity Metric
Damage potential of any given discovered security vulnerability is a measurement of the potential harm done. A vulnerability that exposes all your administrator passwords has a high damage potential. A flaw that makes your screen flicker would have a much lower damage potential, raised only if that particular damage is difficult to repair.

Exploitation potential describes how easy or difficult it is to exploit the vulnerability. Does it require expert programming skills to exploit this flaw, or can almost anyone with rudimentary computer experience use it for mischief?

Exposure potential describes the amount of access necessary to exploit a given vulnerability. If any amateur attacker on the Internet (commonly referred to as a "script kiddie") can exploit a flaw on a server you have protected by a firewall, that flaw has a very high exposure potential. If the flaw can be exploited only by an employee within the company, with a valid login ID, using a computer inside the company building, its exposure potential is significantly less severe.

Overall Severity Metric and Interaction Between the Three Key Metrics
One or more of these risk factors can have a profound effect on the overall severity of a security hole. Assume for a moment that you are the CIO of a business that depends on an eCommerce Web site. Your security analyst informs you that someone has found a flaw in the operating system your servers run. A malicious hacker could exploit this flaw to erase every disk on every server on which the company depends.

The damage potential of this flaw is catastrophic.

Worse, he adds that it is trivially easy from a technical perspective to exploit this flaw. The exploitation potential is critical.

Time to press the panic button, right? Now suppose he adds this vital bit of information: someone can only exploit this flaw with a key to the server room, because this particular vulnerability requires physical access to the machines. This one key metric, if you'll pardon the pun, makes a dramatic difference in the overall severity of the risk associated with this particular flaw. The extremely low exposure potential shifts the needle on the severity meter from "panic" to "eminently manageable".

Conversely, another security vulnerability might be exposed to every script kiddie on the Internet, yet still be considered of negligible severity because its damage potential is inconsequential.

Perhaps you can begin to appreciate why it is misleading, if not outright irresponsible, to measure security based on a single metric like the number of security alerts. At the very least, one must also consider these three risk factors. Would you rather rely on an operating system with a history of hundreds of flaws of negligible severity, or one with a history of dozens of flaws of catastrophic severity? Unless you factor the overall severity of the flaws into the evaluation, the number of flaws is irrelevant at best and misleading at worst.

The Exception To The Rule
The overall severity metric has the three aforementioned main ingredients. We have seen how a low damage potential or a low exposure potential can effectively negate the other high risk factors. The exploitation potential is an exception to this rule: a flaw that requires expert programming skills to exploit does far less to offset a high damage potential or a high exposure potential.

The reason for this is simple. If one must break into a computer room in order to exploit a flaw, not only is that obstacle difficult to overcome, but any attempt to break in increases the intruder's risk of getting caught. That is also why a flaw that can be exploited only by an employee who must log in to a local computer with a valid login ID is less severe than a flaw that can be exploited by any script kiddie on the Internet: the employee is far more likely to get caught.

On the other hand, anonymous malicious hackers with only mediocre programming skills can spend weeks or months developing a program to exploit a security hole with little or no risk of getting caught. The only significant challenge presented to such an intruder is how to activate the malicious program without having its origin traced back to its creator.

One look at the current state of malicious software should make this exception self-evident. Not many people blast their way into a computer room with a bazooka in order to crack into the servers. But there are countless Trojans, worms, and viruses that are still infecting systems today, in part because programmers, talented or not, were willing to tackle the technical challenge of writing malicious code or re-writing the malicious code of others. Technical difficulty obviously does not necessarily offset an otherwise high-risk flaw.

Applying The Overall Severity Metric
Once you can evaluate the overall severity of any given flaw, you can begin to add meaning to metrics such as "how many security alerts does Windows have vs. Linux", or "how long does one have to wait for a fix after a flaw is discovered when using Windows vs. Linux".

Suppose one operating system has far more security alerts than another. The only reason that metric may have meaning is if it also has more security alerts that point to flaws with a high overall severity level. It is one thing to be plagued on a regular basis by a myriad of minor low-risk annoyances, quite another to be plagued on a regular basis by only a few flaws that put your entire company at risk.

Suppose one operating system has a better record for time to delivery of a fix once a flaw is discovered. Once again, the only reason this metric may have meaning is if the delays are related to flaws with a high overall severity level. It is one thing to wait months for a fix to an exploit that would cause little or no damage on a few computers. It is quite another to wait months for a fix for a flaw that puts your entire company at risk.

Means Of Evaluating Metrics

Exposure Potential
This metric takes into account the measures one must take to access a machine in order to exploit security vulnerabilities.

Exploitation Potential
This metric takes into account the technical difficulty involved in exploiting a security flaw.

Damage Potential
This metric is the most difficult to quantify. It requires at least two separate sets of categories. First, it takes into account how much damage potential a flaw presents to an application or the computer system. Second, the damage potential must be measured in terms of "what it means" to the company affected.

Overall Severity Risk
Given the above three factors, the overall severity risks range from minimal to catastrophic.
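
This report does not prescribe a formula for combining the three factors, but a toy scoring rule can make their interplay concrete. In the sketch below, the 0-10 scales, the weights, and the rule that exploitation difficulty only mildly discounts the result are all our own illustrative assumptions, chosen to mirror the "exception to the rule" discussed above.

    def overall_severity(damage, exposure, exploitation):
        """Toy overall-severity score; each factor rated 0 (low) to 10 (high).

        Damage and exposure multiply, so a near-zero value in either one
        can negate the other. A hard-to-exploit flaw (low exploitation)
        only mildly discounts the result.
        """
        base = (damage * exposure) / 10.0             # 0..10
        discount = 0.7 + 0.3 * (exploitation / 10.0)  # never below 70%
        return base * discount

    # Catastrophic damage, trivially exploitable, but requires a key to
    # the server room (negligible exposure): severity stays manageable.
    print(overall_severity(damage=10, exposure=1, exploitation=10))   # 1.0

    # Catastrophic damage, open to the whole Internet, but technically
    # hard to exploit: difficulty barely softens the score.
    print(overall_severity(damage=10, exposure=10, exploitation=2))   # 7.6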

Additional Considerations

Application Imbalance
One factor that is often overlooked in the grand debate about the superiority of one operating system over another hinges on the fact that security vulnerabilities almost always revolve around applications. This presents a problem when comparing Windows to Linux, because the two are not at all equal with respect to application portability and availability.

On the one hand, most of the popular Microsoft Windows applications are Microsoft applications, and they only run on Windows. When a flaw is found in Microsoft Exchange, one can be reasonably certain that this problem only affects Windows customers. Microsoft Exchange does not run on Linux, Solaris, or anything else but Windows.

The Apache web server, on the other hand, may be most often associated with Linux, UNIX or other UNIX-like systems, but Apache does run on Windows, as well. So when one compares the overall security of Windows vs. Linux, is a flaw in Apache a blemish on Linux only? Or does it reflect negatively on both Linux and Windows?

To complicate matters, there are several cases where a flaw in Apache poses little or no danger on Linux, but is a serious vulnerability on Windows. The reverse is rarely, if ever, the case. Should the overall security ranking of Windows suffer because it is more adversely affected than Linux when using software that is most commonly associated with Linux?

One is obliged to ask whether any of these factors have been considered when comparing the security of Windows to that of Linux.

Setup and Administration
Finally, the difference between the Linux philosophy of server setup and administration and the Windows philosophy is, as stated earlier, perhaps the most critical differentiating factor between the two operating systems.

Windows encourages you to use the familiar graphical interface, which means administering Windows Server 2003 at the server itself. Linux does not rely on or encourage local use of a graphical interface, in part because running a graphical desktop at the server is an unnecessary waste of resources, and in part because it increases security risks at the server. Any server that encourages you to use the graphical interface at the server machine also invites you to perform similar operations there, such as using the browser, which exposes the server to any browser security holes. A server administered remotely removes this risk: if you administer a Linux server from a desktop user account, a browser flaw exposes only that remote user account to security holes, not the server. This is why a browser security hole in Windows Server 2003 is potentially more serious than a browser security hole in Red Hat Enterprise Linux AS.

A Comparison of 40 Recent Security Patches
The following sections document the 40 most recent patches to security vulnerabilities in Windows Server 2003 (arguably the most secure version of Windows) and Red Hat Enterprise Linux AS v.3 (arguably the competitive equivalent of Windows Server 2003). The data for the Windows Server 2003 patches and vulnerabilities was taken directly from the Microsoft Web site, and the data for Red Hat Enterprise Linux AS v.3 from the Red Hat Web site.

Of the two, Windows Server 2003 has experienced the more severe security holes.

In sharp contrast, of the 40 vulnerabilities listed by Red Hat, only 4 are rated as Critical by our metrics (Red Hat does not assign a severity rank to its alerts).

Patches and Vulnerabilities Affecting Microsoft Windows Server 2003

The following table contains information about the vulnerabilities from the 40 most recent security patches made available by Microsoft.

http://www.theregister.co.uk/security/security_report_windows_vs_linux/#comparison

Patches and Vulnerabilities Affecting Red Hat Enterprise Linux AS v.3

The following table contains information about the vulnerabilities from the 40 most recent security patches made available by Red Hat.

http://www.theregister.co.uk/security/security_report_windows_vs_linux/#comparison

CERT Vulnerability Notes Database Results
The United States Computer Emergency Readiness Team (CERT) uses its own set of metrics to evaluate the severity of any given security flaw. A number between 0 and 180 expresses the final metric, where the number 180 represents the most serious vulnerability. The ranking is not linear. In other words, a vulnerability ranked 100 is not twice as serious as a vulnerability ranked at 50.

CERT considers any vulnerability with a score of 40 or higher to be serious enough to be a candidate for a special CERT Advisory and US-CERT technical alert.

We queried the CERT database using the search terms "Microsoft", "Red Hat", and "Linux". [9] The CERT web search capabilities do not produce perfectly desirable results in terms of granularity or longevity, and this is especially true of the search results for "Red Hat" and "Linux". The "Linux" search results include a number of Oracle security vulnerabilities that are common to Linux, UNIX, and Windows. The details of the most severe "Red Hat" entry do not even list Red Hat as a vulnerable system. The results for the "Microsoft" search appear to be almost entirely accurate, inasmuch as both the details and the entries refer to flaws in Microsoft-specific software. The results are therefore somewhat unfairly skewed against Linux and Red Hat. Nevertheless, even if one takes the results at face value and ignores the skew, Microsoft still produces the most entries in the CERT database, and its list contains the most severe flaws.

The CERT results for "Microsoft" returned 250 entries, with the top two entries each carrying a severity metric of 94.5. Thirty-nine entries have a severity rating of 40 or greater. The average severity rating for the top 40 entries is 54.67. (We chose to average 40 entries instead of 50 or more because the Red Hat search returned only 46 results.)

The CERT results for "Red Hat" returned 46 entries. The top entry has a severity metric of 108.16. Only 3 (vs. 39 for Microsoft) entries have a metric of 40 or greater. The average severity for the top 40 entries is 17.96.

The CERT results for the "Linux" search returned 100 entries. The top entry has a severity metric of 87.72. Only 6 of the entries carry a severity metric of 40 or greater. The average severity for the top 40 entries is 28.48.
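
For readers who wish to replicate the tally, here is a minimal sketch of the computation described above, assuming the severity scores have already been extracted from the CERT search results into a plain list. The sample scores in the example call are stand-ins, not actual CERT data.

    def summarize(scores, threshold=40, top_n=40):
        """Count entries at/above the severe-alert threshold and
        average the top_n highest scores (CERT metric: 0-180)."""
        ranked = sorted(scores, reverse=True)
        severe = sum(1 for s in ranked if s >= threshold)
        top = ranked[:top_n]
        return severe, sum(top) / len(top)

    # Stand-in data; real input would be the scores scraped from the
    # CERT Vulnerability Notes search results for each query term.
    sample = [94.5, 94.5, 63.0, 40.2, 28.1, 17.3, 9.9]
    severe_count, avg_top = summarize(sample, top_n=5)
    print(severe_count, round(avg_top, 2))  # 4 64.06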

These results cannot be expected to mirror our own analysis of recent vulnerability patches. The CERT search criteria and date ordering are different, and the CERT search does not confine the products to Windows Server 2003 and Red Hat Enterprise Linux AS v.3. But the CERT results do reflect that Windows security flaws tend to be severe far more frequently than those of Linux, which echoes our conclusions.

- "Security Report: Windows vs Linux," Nicholas Petreley, October 22, 2004, http://www.theregister.co.uk/security/security_report_windows_vs_linux/

Footnotes
[1] See References section below for the Netcraft URLs from which this data was drawn.

[2] See References section below for the Netcraft URL for this data.

[3] Unpatched PC "Survival Time" Just 16 Minutes, by Gregg Keizer, TechWeb News. See References section below for URL.

[4] We suspect we know why Microsoft chose to implement this as the default behavior of SQL Server. Many third-party applications use the SQL Server engine by default. If everyone who wrote applications for SQL Server assumed that there would be a single instance of SQL Server running on the machine, Microsoft would have to provide an easy way for the installation programs to detect that SQL Server was already installed and running, and then provide an easy way to install, integrate and administer the applications' specific requirements for its own database and tables running on the existing server. This is the elegant solution, and it uses up a minimum of resources because only one instance of SQL Server is ever needed. But this approach would require a good deal of extra work on the part of Microsoft or on the part of the third-party developers. It was much easier to design a way to allow third party applications to avoid bothering with the issue of whether or not SQL Server is already installed. Given the design Microsoft implemented, any third party can simply install its own copy of SQL Server without worrying about whether or not SQL Server already exists on the target machine, what version of SQL Server is already installed, or how the SQL Server is already configured. In short, rather than do things right, and in an effort to entice third parties to use SQL Server, Microsoft took the lazy way out and designed a system where any application could install its own private copy of SQL Server without its operation interfering with the other copies of SQL Server running on the same system. This led to the desire to run several instances of SQL Server with RPC enabled, which should actually have a very narrow audience. This lazy approach had terribly unfortunate consequences. If Microsoft had designed SQL Server to run as a single instance without network connections by default, the Slammer worm would not have been able to find enough machines running SQL Server to do any significant damage.

[5] See References section for URL to the "Top 10 Benefits of Windows Server 2003" page at the Microsoft Web site.

[6] See References section for the URL of the page from which data was extracted

[7] See References section for the URL of the page from which text is quoted

[8] See References section for the URL of the page from which data was extracted

[9] See the References section below for the full URLs we used to perform these searches.

References
Netcraft Web Survey for September 2004 http://news.netcraft.com/archives/2004/08/31/September_2004_seb_server_survey.html

Netcraft Top 50 Servers With Longest Uptime (results may differ since the information changes daily) http://uptime.netcraft.com/up/today/top.avg.html

Unpatched PC "Survival Time" Just 16 Minutes, Gregg Keizer, TechWeb News http://www.internetweek.com/breakingNews/showArticle.jhtml?articleID=29106061

Top 10 Benefits of Windows Server 2003 http://www.microsoft.com/windowsserver2003/evaluation/whyupgrade/top10best.mspx

Microsoft Security Bulletin, Current Downloads http://www.microsoft.com/technet/security/CurrentDL.aspx

Default Settings Different on Windows Server 2003 - These settings are enumerated on several alert pages under "Frequently Asked Questions: What is Internet Explorer Enhanced Security Configuration?" The following is one such URL: http://www.microsoft.com/technet/security/bulletin/ms03-032.mspx

Red Hat Enterprise Linux Advanced Server v.3 Security Advisories https://rhn.redhat.com/errata/rhel3as-errata-security.html

CERT search for Microsoft Alerts http://www.kb.cert.org/vuls/bymetric?searchview&query=microsoft&searchorder=4&count=100

CERT search for Red Hat Alerts http://www.kb.cert.org/vuls/bymetric?searchview&query=red*hat&searchorder=4&count=100

CERT search for Linux Alerts http://www.kb.cert.org/vuls/bymetric?searchview&query=linux&searchorder=4&count=100