Why Linux Is More Secure Than Windows
PC World ^ | Tue Aug 3, 2:49 pm ET | Katherine Noyes

Posted on 08/04/2010 10:58:19 PM PDT by Ernest_at_the_Beach

"Security through obscurity" may be a catchy phrase, but it's not the only thing that's catching among Windows users.

The expression is intended to suggest that proprietary software is more secure by virtue of its closed nature. If hackers can't see the code, then it's harder for them to create exploits for it--or so the thinking goes.

Unfortunately for Windows users, that's just not true--as evidenced by the never-ending parade of patches coming out of Redmond. In fact, one of Linux's many advantages over Windows is that it is more secure--much more. For small businesses and other organizations without a dedicated staff of security experts, that benefit can be particularly critical.

Five key factors underlie Linux's superior security:

****************************EXCERPT*****************************

1. Privileges

2. Social Engineering

3. The Monoculture Effect

4. Audience Size

5. How Many Eyeballs

(Excerpt) Read more at news.yahoo.com ...


TOPICS: Business/Economy; Computers/Internet
KEYWORDS: linux; malware
To: NVDave

Can’t disagree with anything that you said...


21 posted on 08/05/2010 12:42:07 AM PDT by Deagle
[ Post Reply | Private Reply | To 18 | View Replies]

To: Deagle

A really secure OS and working environment will require someone to manage it. There’s no way around this.

The reason is that the OS, applications, etc. can’t do anything more than give you tools to implement your security model. There’s no way to automate the creation and maintenance of your security model on a computer.

I’ll give an example: Let’s say we work together on some project. Doesn’t matter what. Let’s say you’re my boss, and you’re a member of the design/review team of some classified product or something where we don’t want everyone in the company seeing what we’re working on.

OK, we need to set up a group for all the members of the project group, so that all members of the group can read/write the files. Let’s say that someone high up in marketing wants to be able to read the files, but we don’t want to allow him to make changes. Well, we now need to have read/write/modify privs for every member of the group, and he doesn’t get write/modify privs.

You, as a manager, need to be able to see management stuff, performance reviews, etc - so those files and programs need another security ID that the worker bees don’t have.
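Here’s a rough sketch in C of that file setup, using the standard Unix calls (the file and group names are invented for the example - at a shell you’d normally just use chgrp and chmod):

#include <stdio.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <grp.h>

int main(void)
{
    /* File and group names are invented for this example. */
    const char *file = "design_spec.doc";
    struct group *proj = getgrnam("projteam");  /* look up the project's group */

    if (proj == NULL) {
        fprintf(stderr, "no such group\n");
        return 1;
    }

    /* Hand the file to the project group (owner stays unchanged)... */
    if (chown(file, (uid_t)-1, proj->gr_gid) != 0) {
        perror("chown");
        return 1;
    }

    /* ...and grant read/write to owner and group, nothing to anyone else.
       The read-only marketing exec would get an ACL entry (setfacl) or a
       separate read-only group on top of this. */
    if (chmod(file, S_IRUSR | S_IWUSR | S_IRGRP | S_IWGRP) != 0) {
        perror("chmod");
        return 1;
    }
    return 0;
}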

With this trivial example, you can see why security will never be something that can be delivered as a turn-key solution, no matter the OS. Both Windows and Unix can do what needs to be done in the example above, with very similar concepts, but someone needs to implement this security model using the tools provided by the OS. The difference is that so many Windows systems run with the user as an Administrator to make sure everything “just works” - and with every user an Administrator, much of the security can be bypassed, and any exploit code handed to them runs with full access to everything. In Unix, most of the time I’m running as a plain user, and when I need root access, I “su” or “sudo” to perform what needs doing.

Most of the exploits on Windows would be stopped or mitigated if more people set up the Windows security model to eliminate users running as Administrator. Microsoft could help out a lot here too, if they’d make it more difficult (or annoying) for users to run as Administrator by default.

When it comes to vulnerabilities and exploits, I view many of these as a software reliability issue. The #1 thing I’d change, if I had a magic wand to wave over the US software industry to improve s/w reliability (and therefore reduce security exploits), is to banish C, C#, C++, etc. from the industry and require that a language like Ada be used. In the old days, we had a saying that expressed how we felt about the software reliability of various programming languages:

“C is a language for consenting adults, Pascal is a language for children and Ada is a language for hardened criminals.”

In today’s computing environment, there are lots of for-real criminals with for-real criminal gangs behind them. If we really want reliable software, we need to ditch the “hippy languages[1]” of the late 60’s and early 70’s and start using modern languages with very tight and explicit specification requirements.

What the US needs now is a model OS written in Ada, with the source code available to all comers. Just put it out there as an easily licensed product of the US government, much as TCP/IP started in the 80’s.

[1] I call C/C#/C++/etc “hippy languages” for their lack of enforcing discipline on the programmer. Want to use an array name as a pointer? “Do it if it feels good, man!” Want to cast a number into a pointer? Groovy, baby. Both Unix and Windows are built on languages that fail in this respect. VMS (the older VMS, before OpenVMS) was written in a proprietary language called “Bliss-32,” which was more explicit about what was happening to pointers. The most secure OS I’ve known of, Multics, was written in IBM’s PL/I, a powerful language that predates Ada but had the type, bounds, and other checking that I associate with software reliability.


22 posted on 08/05/2010 1:14:15 AM PDT by NVDave
[ Post Reply | Private Reply | To 19 | View Replies]

To: Ernest_at_the_Beach

>> Why Linux Is More Secure Than Windows

Because Windows users are more insecure.


23 posted on 08/05/2010 1:17:01 AM PDT by Gene Eric (Your Hope has been redistributed. Here's your Change.)
[ Post Reply | Private Reply | To 1 | View Replies]

To: NVDave

Wow... never thought of Ada as any kind of secure language..heh. Now, C, C++ maybe - fewer people seemed to know their way around them, but Ada, much like Pascal - too English-like... Not really sure how you can compare that to security problems...but hey... Each to their own perceptions...

Now you are correct that an Administrator needs to have control for security reasons, and does in most large companies (I was one of those awful fellows with a bunch of C, C++, and Assembly programmers), but I did manage to keep things safe (now maybe it was because I was a programmer myself). Not so sure that it would have been any different in a Unix world.

You do seem to be talking a bit out of line...or maybe out of your butt here...


24 posted on 08/05/2010 1:27:56 AM PDT by Deagle
[ Post Reply | Private Reply | To 22 | View Replies]

To: NVDave

Hey, where did you go, NVDave? I was looking forward to a good conversation on programming languages, etc... You seem so knowledgeable about them; I was hoping to pick up some additional experience.

After all, I have no experience in Ada, only Assembly, C, C++, Pascal, Fortran, DBMS, Databus, and a few others (oh, and I have written and had published - or should I say sold - programs in several languages). Might be interesting to talk.


25 posted on 08/05/2010 1:51:15 AM PDT by Deagle
[ Post Reply | Private Reply | To 22 | View Replies]

To: Ernest_at_the_Beach

for later


26 posted on 08/05/2010 2:28:34 AM PDT by politicianslie (A taxpayer voting for Obama is like a chicken voting for Colonel Sanders)
[ Post Reply | Private Reply | To 1 | View Replies]

To: Deagle

Many of the most pernicious security exploits in modern software arise from buffer overflows, especially those buffers or variables that are automatically allocated on stacks.

The problem in C (et al) is the concept of a “data-bounded” string, ie, the NULL termination of strings. This runs counter to how strings are allocated in most higher-order languages, where there is a string descriptor or array that the compiler knows is “X so big,” and attempts to cram too many characters into the string are caught both at compile time and at run time. In C, there’s no actual bound to the size of a string that the compiler can really enforce. If you allocate an array type for the string, you can use a pointer reference to access the string and write off the end of it without any warning or exception.
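Here’s a minimal sketch of the problem - this compiles without a peep, and nothing at run time stops the copy:

#include <string.h>

void greet(const char *name)
{
    char buf[16];
    /* No bounds check anywhere: a 'name' longer than 15 characters
       writes past the end of 'buf' and onto the stack, where the
       saved return address lives. */
    strcpy(buf, name);
}

int main(int argc, char **argv)
{
    if (argc > 1)
        greet(argv[1]);   /* attacker-controlled input, straight in */
    return 0;
}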

This gives you an idea of how widespread the problem is:

http://web.nvd.nist.gov/view/vuln/search?execution=e2s1

Enter “buffer” in the “Keyword search” box and hit “Search All” - and wait a little while, because it is searching a pretty big DB. You’ll see two things:

1. There are buffer overflows all over the product/platform space. MS, Apple, Google... everyone has buffer overflows.

2. The severity level is usually “high” - meaning that the error can usually be exploited to gain access.

When a buffer overflow is coupled with a system like Windows where the typical user is running as an Administrator.... it’s game over. The exploit can then do pretty much anything it wants.

So how did we get here?

In the 80’s, software vendors were looking for a language that would allow rapid development of software that could touch hardware on microprocessor-based systems. Enter C, which was really nothing more than a high-level assembler on Unix’s initial implementation on PDP-11’s. For those of us who started on PDP-11’s, you can show us a line of C and we can write out what the assembly code will look like as a result of that line - and 99% of the time, I’ll have it nailed down to the instruction and order of instructions. With modern chipsets, I can’t do that, but C started on PDP-11’s, and the PDP-11’s addressing modes influenced the design of C quite a bit.

Being that C is little more than a high-level assembler, there is little to no checking of constraints at run time, because there are few constraints encoded in the source. The whole of C/Unix was sort of an “up yours” response to the Multics OS, which was much larger, much more complicated, and written in PL/I. Multics was, however, quite secure.

There is no run-time array bounds checking in C and C-derived languages unless you add in the facility yourself. You can add this with some programming in C++, but it is clunky and not a part of the base language. If we’re going to have run-time bounds checking, it has to be mandatory - not a compile-time switch that has to be enabled (or that can be disabled), and it has to be uniform. We can’t have one shop define their C++ bounds-checked strings one way, and another shop do their “smart strings” differently.
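To make that concrete, a toy sketch - both stores below are accepted by the compiler, and nothing in the C run time checks the index:

int main(void)
{
    int a[4];
    int *p = a;

    /* Both writes silently scribble on whatever happens to be next
       in memory.  A bounds-checked language like Ada would raise
       Constraint_Error here instead. */
    a[10] = 42;   /* off the end of the array */
    p[20] = 42;   /* same thing, through a pointer */
    return 0;
}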

Ada provides run-time constraint checking; when you try to write past the end of a string, the run-time system raises an exception. The language also gives the programmer a good exception-handling facility for dealing with other failures within the software - in C, there’s only setjmp()/longjmp(), and in C++ there’s try/catch... both very mediocre (at best) exception handling.

Here’s a great little example of how mediocre:

http://stackoverflow.com/questions/167199/what-c-c-tools-can-check-for-buffer-overflows

A half-dozen different approaches for something the compiler just does in many other languages.
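And on the exception-handling side, here’s roughly everything standard C gives you - a bare setjmp()/longjmp() sketch:

#include <setjmp.h>
#include <stdio.h>

static jmp_buf on_error;

static void parse(const char *input)
{
    if (input[0] == '\0')
        longjmp(on_error, 1);   /* the "throw": jump back to the setjmp point */
    /* ... normal parsing would go here ... */
}

int main(void)
{
    /* setjmp() returns 0 on the initial call, and the value passed
       to longjmp() when the "exception" fires.  No types, no stack
       unwinding of resources, no nesting discipline - it's all on
       the programmer to get right. */
    if (setjmp(on_error) != 0) {
        fprintf(stderr, "parse failed\n");
        return 1;
    }
    parse("");
    return 0;
}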

What Ada brings to the table is strong type checking, strong run time checking and strong exception handling. For a C programmer converting to Ada, getting his first program to compile and link is like getting a root canal. Ada wants things specified down to a gnat’s ass, and C programmers are used to playing fast and loose with type conversions/promotions, interfaces, etc, getting their programs to compile quickly and easily.... and then they start discovering their errors at run time.

When you develop with Ada, the compiler barks at you again and again, requiring that the interfaces, data visibility, types, etc are consistent throughout the program. Very tight compile-time checking is the rule of the day. It can take quite a long time to get the source to compile... but then when it finally does, the program often “just works” without the usual experience C programmers are used to, where it starts running and then there’s a run-time blowup that has to be found and debugged.

As I said, I view security exploits as software reliability failures. There was a bug, someone found it and exploited it to their end. In Windows, for example, there are buffer overruns a-plenty, and again, coupled with the number of people running as Admin... it makes for a very easily exploited system. Unix certainly has the same buffer overflow issues, but unless the exploited program is running with suid, or the user is running as root, the escalation of the exploit is limited.
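That containment is also why well-written suid programs drop privileges as early as they can. A sketch of the standard idiom:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* A suid-root program does its one privileged operation up front
       (say, binding a port below 1024), then permanently drops back
       to the invoking user's uid before touching any untrusted input.
       An exploit after this point runs as a plain user. */

    /* ... privileged setup would go here ... */

    if (setuid(getuid()) != 0) {   /* give up root for good */
        perror("setuid");
        return 1;
    }

    printf("now running as uid %d\n", (int)getuid());
    return 0;
}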

I’ve developed s/w for more than 25 years, in over a dozen languages and on a dozen-plus operating systems. I’ve been around this block a few times. Software errors become security problems very quickly now, and on better systems with better programming languages, the types of exploits we commonly see today were rare.


27 posted on 08/05/2010 3:11:35 AM PDT by NVDave
[ Post Reply | Private Reply | To 24 | View Replies]

To: Deagle
The expression is intended to suggest that proprietary software is more secure by virtue of its closed nature. If hackers can't see the code, then it's harder for them to create exploits for it--or so the thinking goes.

A Trojan Horse made out of glass would be useless, and open source means that all the planks needed for building one are glass.

There's more to security vulnerabilities than market dominance. Windows, at its heart of hearts, is a glorified DOS shell that was never intended to be secure, because the only way into the system was through the PS/2 keyboard and CRT monitor.

Its security efforts amount to layers of band-aids over this fundamental design flaw, and band-aids over the band-aids. The DoD basic security configuration guide for Windows is about 135 pages long, because of the incredible Rube-Goldberg nature of the Windows registry and the DLL-blob of an OS which it governs.

Windows: [image]

Linux: [image]

Okay, okay, this is probably an exaggeration, but not by too much.

28 posted on 08/05/2010 3:30:09 AM PDT by mvpel (Michael Pelletier)
[ Post Reply | Private Reply | To 6 | View Replies]

To: George Varnum

TCP/IP networking is TCP/IP networking. If you plug the right IPs in the right places, it shouldn’t matter one bit what OS you’re using to connect, so long as you have a working NIC with drivers and it supports TCP/IP correctly.

Our old ISP “didn’t support Macs”... by which they meant their installer CDs didn’t. If you set up the connection manually, it worked just fine.


29 posted on 08/05/2010 4:05:22 AM PDT by Fire_on_High (Stupid should hurt.)
[ Post Reply | Private Reply | To 8 | View Replies]

To: re_nortex

Just noticed that kernel update on my Arch system; have been holding off downloading it till I find out more.

That sounds good about being able to turn off things that aren’t needed.


30 posted on 08/05/2010 4:37:47 AM PDT by Harold Shea (RVN `70 - `71)
[ Post Reply | Private Reply | To 4 | View Replies]

To: Fire_on_High
If you plug the right IPs in the right places, it shouldn’t matter one bit what OS you’re using to connect, so long as you have a working NIC with drivers and it supports TCP/IP correctly.

Windows 98 might not have L2TP/IPsec drivers available, for example. Or maybe the ISP is using IPv6, which has only the barest sliver of support on Windows 98.

31 posted on 08/05/2010 5:28:00 AM PDT by mvpel (Michael Pelletier)
[ Post Reply | Private Reply | To 29 | View Replies]

To: NVDave
Linux is a derivative of Unix

Actually, though Linux is said to be "unix-like," it's more accurate to say that Linux is a derivative of Minix (Andrew Tanenbaum's original microkernel-based teaching OS), and the history between the two is a good read:

http://en.wikipedia.org/wiki/MINIX

Minix might be short for "mini Unix," but it's not derived from Unix (Unix started out as a monolithic kernel, not a microkernel):

http://en.wikipedia.org/wiki/Microkernel

32 posted on 08/05/2010 5:49:02 AM PDT by krb (Obama is a miserable failure.)
[ Post Reply | Private Reply | To 17 | View Replies]

To: rdb3; Calvinist_Dark_Lord; GodGunsandGuts; CyberCowboy777; Salo; Bobsat; JosephW; ...

33 posted on 08/05/2010 5:50:35 AM PDT by ShadowAce (Linux -- The Ultimate Windows Service Pack)
[ Post Reply | Private Reply | To 1 | View Replies]

To: D-fendr
Now what would be more of a challenge is to write an article about an OS that is *not* more secure than Windows.

Here you go

34 posted on 08/05/2010 6:00:56 AM PDT by ShadowAce (Linux -- The Ultimate Windows Service Pack)
[ Post Reply | Private Reply | To 20 | View Replies]

To: krb

FWIW, I’m talking in broad terms of the functionality and command line interface for general users. They see the same broad functionality and issues regardless of whether we’re talking Linux, *BSD, OS X (at the command line), Solaris, HP-UX, SysVR2, etc. For the user who is used to GUI-based environments like Windows, Mac, etc - the first time they’re tasked with doing anything significant on a Unix system without a GUI, the sounds they make might as well come from someone who has been kicked in the groin.

The sounds don’t change at all (in my experience) if you change the particular flavor of Unix, or the shell (zsh, bash, sh, csh, tcsh, etc).

That said, re: your uKernel comments: even though Linux took inspiration from Minix, it ended up a pretty conventional monolithic Unix kernel, and I remember all of the flames flying between Tanenbaum and Torvalds the first time around. There are certainly differences in the exact implementation and source code between Linux and an AT&T or BSD kernel, but on the whole, if you have crawled around inside SysV or BSD kernels, the Linux kernel doesn’t look at all alien. About the only other thing I can say is that Linux favors POSIX APIs, and therefore would appear to favor the SysV approach to interfaces and functional partitioning.

The “monolithic vs. uKernel” debate is an old one in operating systems, one I’ve had to personally fight in my career. The advantages of the uKernel are really nice, on paper and in reality if you’re willing to give up some performance.

But when it comes time to count up the cost of all the context switching and message passing latency.... well, let’s just say that the crap hits the fan the first time one comes up to the boss with a benchmark of a uKernel in a memory mapped environment.

Linus undoubtedly saw the same sorts of issues come up, so that’s why he tossed the Minix uKernel design overboard and went back to the conventional monolithic Unix kernel design.

If you ever get a chance, have a look at the source for Seventh Edition Unix on a PDP-11. Very clean, neat, sparse, and to-the-point.

http://minnie.tuhs.org/cgi-bin/utree.pl

After V7, the Great Schism happened and the flamewars started in earnest.


35 posted on 08/05/2010 6:16:06 AM PDT by NVDave
[ Post Reply | Private Reply | To 32 | View Replies]

To: mvpel

Didn’t think any ISPs were *enforcing* use of IPv6 yet....I’ll grant the rest, but...router?


36 posted on 08/05/2010 6:29:21 AM PDT by Fire_on_High (Stupid should hurt.)
[ Post Reply | Private Reply | To 31 | View Replies]

To: NVDave
When you get these tarballs, you uncompress them (if they were compressed) and then feed them into the tar program to unpack all the files out onto the disk, with a command like:

"$ tar xvf tarball.tar"

and you’ll see the ‘tar’ program extract all the files.

If the tarball were compressed (eg, with a .gz suffix), you’d uncompress the tarball and then feed it into tar:

"$ zcat tarball.tar.gz | tar xvf -"

and so on. The “zcat” command uncompresses the .gz file and writes the result to the “standard output.” If you didn’t redirect that output into the tar program with the pipe “|”, you’d get a huge screenful of stuff, and then you’d be left with the original .gz file and no net result.


Do you think zcat and a pipe are needed? They’re not, and they’re unnecessarily complicated: tar itself handles .gz files, avoiding zcat and piping altogether.

tar's z option first uncompresses the gzipped tarball (actually gunzips it, since compress creates .Z files, not .gz files). This is true for GNU tar, FreeBSD tar, Mac OS X tar, Solaris tar, and probably others.

tar's z option avoids the need for zcat and a pipe, or for running gzip -d on the file first.

Similarly, if the tarball was bzipped, tar's j option will bunzip it.

So, in the case of a .tar.gz, a simple

$ tar zxvf tarball.tar.gz

does the trick.

And, if you have a bzipped tarball

$ tar jxvf tarball.tar.bz2

does the trick.


37 posted on 08/05/2010 6:32:31 AM PDT by Mike Fieschko (et numquam abrogatam)
[ Post Reply | Private Reply | To 17 | View Replies]

To: Mike Fieschko

On Linux, you’re absolutely right.

I’m an old-school Unix guy, and when I’m tossing off examples like this, I’ll use what I know are methods that will work on most any flavor of Unix, going back years and years...

In a couple of years, I’ll be suitably aged to wave my cane at kids and yell at them to “get off my network!”


38 posted on 08/05/2010 6:40:00 AM PDT by NVDave
[ Post Reply | Private Reply | To 37 | View Replies]

To: NVDave; George Varnum
I don't think it's helpful to compare 80s Unix systems to current desktop Linux distros. I agree about the user-friendliness of Unix (I started with BSD 4.2 on a VAX ca. 1983), but that experience is very different from today's desktops. Sort of like comparing the starter systems of a Model T Ford and a current Hyundai [whatever]: Ford bad, Hyundai good!

It sounds like George has some older Linux distro; I've not had to deal with tarballs for many years now. In fact, given that he's trying to run it on an old Win98 machine, it's very likely an old distro, because more recent distros require more capable hardware - though I'm running Xubuntu 9.04 on an old corporate discard machine.

And I actually use command line (or Run) on Windows more often than I open a Terminal window in Linux.

I've been trying out Linux desktops from 1998 on. At first it was on really underpowered hardware, which didn't help. And it required an expert to install -- there were tons of questions you had to answer. SuSE got close with 9.0 or 10.0; once up it was fine but installation still had problems. My first seamless install came with Kubuntu 7; no expert knowledge required, it identified the hardware and came up and ran.

Will Linux displace Windows? I think not, for reasons other posters have given. Not to mention there is one (series of) Windows and a wide variety of Linux distros, though it seems that the Ubuntu family (including Mint) is becoming the dominant Linux desktop system.

But something else I've seen in the corporate world is a gradual encroachment by Apple. Eight years ago all the laptops in my technical committees' meetings were Windows. But the number of Apples has been increasing, to maybe 10% of the machines today. (One Linux laptop for at least six years now and no, it's not mine.)

39 posted on 08/05/2010 6:45:26 AM PDT by sionnsar (IranAzadi|5yst3m 0wn3d-it's N0t Y0ur5:SONY|TV--it's NOT news you can trust)
[ Post Reply | Private Reply | To 17 | View Replies]

To: George Varnum; NVDave
A friend gave me a LINUX emulator CD, and it was pretty interesting - connected on line, too. When I tried to install a version of LINUX that I downloaded to CD though, I was too old and stupid to install it; something about “tarballs” (???).
I guess Linux and such is only for smart people, and that leaves me out. Pity; had my hopes up for a while there.
I may be too old and stupid to set up Linux - my son did set up a version of it on a memory stick for my netbook - but I'm not too poor to get a Mac. And OS X has Unix underneath.

Works for me . . .


40 posted on 08/05/2010 6:49:33 AM PDT by conservatism_IS_compassion ( DRAFT PALIN)
[ Post Reply | Private Reply | To 8 | View Replies]


