Posted on 07/25/2003 7:16:17 PM PDT by HAL9000
Apache and Perl are both available for Win32, and I use both. I've heard that PHP and MySQL are also available for Win32.
Ugly surprises await the PHP developer who tests on Windows and deploys on UNIX or Linux, or vice versa. There are subtle differences in library calls that jump out and bite you at deployment time. There are also functions available on one system but not the other, due to different functionality in the underlying OS. There's not a whole lot of that, but "the best surprise is no surprise."
I happen to like HomeSite, which is a Win32-only program. The emacs zealots have already explained to me that I'm hosed, but I don't care. So what I do is use HomeSite's built-in FTP to edit files that are on a Linux box ten feet away, on a 100Mbit LAN. No surprises at deployment time that way, because the testing is all Apache-on-Linux, which is what the deployment servers are.
You have it exactly 180 degrees backwards. At the SANS Institute courses and seminars, the instructors stress that open source has extremely STRONG advantages over closed source security-wise, and they gave multiple examples where that was the case in the area of cryptography.
In order for a cryptographic algorithm to be accepted, it MUST be open source, and there are prizes offered to whoever can detect flaws in it and break it. The actual code that breaks the algorithm MUST be published as well...
I've found that writing code with multiple platforms in mind from the start is what actually makes it portable. Even uglier surprises await those who write their code in a way that locks them into one platform.
Perl on Win32 behaves differently than Perl on Linux. But with a little extra effort, a program can be written to work identically on both platforms.
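Here's a minimal sketch of what that "little extra effort" looks like (the file name is hypothetical); File::Spec and the $^O variable are standard Perl, so the same script behaves the same on Win32 and Linux:

    use strict;
    use warnings;
    use File::Spec;

    # Build the path portably instead of hard-coding "/" or "\" separators
    my $logfile = File::Spec->catfile('logs', 'app.log');   # hypothetical file

    # Branch on the OS only where the OS genuinely differs
    my $null_device = $^O eq 'MSWin32' ? 'NUL' : '/dev/null';

    open my $fh, '<', $logfile or die "Cannot open $logfile: $!";
    while (my $line = <$fh>) {
        $line =~ s/\r?\n\z//;    # strip CRLF or LF line endings, whichever we got
        print "$line\n";
    }
    close $fh;

The point is the same one made above: isolate the platform differences in one or two places instead of scattering them through the program.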
BTW... The Linux zealots are always quick to point out that quirks in open source tools (like Perl and PHP) can be easily fixed. Where are those zealots now? Zealots? Where are you?
Thank you very much for clarifying this. I was disappointed when I first saw that, simply because of the large number of personal attacks one must endure when questioning anything Linux on message-board websites.
The "bad" people in your scenerio are people who keep their findings private for their own exploitation, and that can just as easily happen in the closed source world as well-- we know closed source is frequently milked for its vulnerabilities, all you need is a debugger/disassembler and time.
Agreed about the "bad" people as the most severe risks are those that have not or will not be reported and corrected. However I think your statement of "can just as easily happen" was possibly contradicted by the rest of your statement that continued on to say "all you need is a debugger/disassembler AND time", time being an additional element (REQUIRED to disassemble and/or debug) that is not required to access open source products as you can immediately download the source code (and have it in your hands as soon as the download completes)?
If code review is suspended in a closed source project (and there is no economic reason to continue funding it after the product is deemed bug-free), it's not very likely an undivulged vulnerability will ever be discovered; not enough people go hunting for those kinds of things in compiled code.
I'm not sure that is necessarily so, since there is no conclusive proof that open source programmers are attracted to poring over old code to look for vulnerabilities. It is a boring and time-consuming process, and as the quantity of code continues to multiply, it is questionable whether a volunteer force can keep supplying enough programming resources to match the number of "bad eyes" viewing the product. As my earlier posts indicate, there is a tipping point at which the quantity of code becomes overwhelming for volunteer security experts trying to keep pace with hacker reviews and exploits. This potential problem is further exacerbated by the limited income Linux distributions generate, which makes continually providing sufficient free "good eyes" increasingly unlikely.
As for outproducing: there is a mighty difference between user interface programming and core functionality. Microsoft puts a large focus on the user interface side, often letting core functionality slip. Open source, with its many-cooks approach, tends to have more complete core functionality with many useful options, but the hacker mentality often lacks a thoroughly planned user interface.
While I agree with your assessment that M$ products certainly offer a far more polished interface, I disagree that the internal mechanisms of open source's flagship product, Linux, are superior to Windows. Yes, you can run a stripped-down Linux to streamline a single, isolated process, but those applications are rare. More common is the requirement to embed additional functionality through applications and/or programming interfaces that interact directly with the OS and require kernel interaction. Linux uses a standard monolithic kernel, whereas the latest versions of Windows use a hybrid micro-kernel design, which offers not only faster and more efficient processing but also allows users to integrate new functions into the kernel much more freely, without the requirement for manual recompiles. I believe there are ongoing attempts to add micro-kernel capability to Linux, such as in some networking components, but what could be a benefit seems to be clouded by the fact that networking configuration options across the various versions of Linux are surprisingly dissimilar.
More so, open source allows affordable software customization, whether done in-house or contracted; it's much more likely to happen than begging the closed source/shrinkwrap vendor to tweak its product. (Though you're likely to have much more success with customizations the smaller the closed source company. I've had great success persuading small 2-3 man companies to implement my ideas, whereas I doubt a human even read the mail I sent to some of the larger ones.)
Thanks for your clarification regarding this, because as you apparently know, deploying ad-hoc core OS modifications in a large enterprise environment can easily bite you in the rear when other applications or processes become upset, quite possibly manifesting as apparent 'bugs' in other, unrelated areas of the software that are difficult to diagnose or correlate. This is an important point, because many OSS advocates are quick to claim this capability as a 'feature'. It may be one in some cases, such as the small deployments you cite, or in enterprises so large that their support structure approaches the capability of the original software vendor, but in actuality it is a rarely used method that can have significant drawbacks in most other environments.
Oh, rough edges? Have you used Microsoft's command line? Rough around the edges is the best you could say for it. Give me a bash prompt and a good terminal program any day. I've gone as far as installing Cygwin (Unix for Windows), bash, sshd, and using SecureCRT to ssh into my Windows XP box, just to avoid that blasted Microsoft command line and its awkward DOS box.
The command line is a required feature in some cases (I still have to use it occasionally for SQL and/or network analysis), but the future of computing will continue to evolve within the GUI. The GUI tools that originated with Xerox and Apple's WYSIWYG work are what nudged me into the IT industry full time all those years ago; as an EE I started out purely in hardware, supporting text-only RS-232 connections back to VMS, but hacked on my Apple II at home constantly. Those tools are much easier to navigate, and having a complete enterprise list readily available in something like Active Directory makes organizing your objects, as well as wide-scale distribution of changes, much more manageable. Command lines are still required, but they are limited in what they can sometimes accomplish, and managing wide-scale enterprise resources via command line would be particularly crippling for my current operations.
Outsourcing: don't be surprised if in the coming years most software development scurries off to the far corners of the earth. There is no economic reason it will not.
Whether it's inevitable is an entirely different question, but if it is, why should we hurry it along by promoting an operating system that can (knowingly or unknowingly) take currently confidential US technology away from our private sector businesses and 'GPL' it for worldwide public use? I am much more interested in protecting the US commercial software market than I am in providing free code to the rest of the world.
Thank you very much for your time, I certainly enjoyed your comments and look forward to hearing from you again.
If obscurity is the only security method employed, an application (or OS) isn't going to be secure. But it doesn't naturally follow that an application's obscurity makes it insecure, just as OSS's openness doesn't make it secure.
IMO, MS's security problems have traditionally related to the following:
1. Trusting user input too much, resulting in buffer-overrun vulnerabilities.
2. "Everything runs as root", resulting in small breaches becoming major breaches.
3. A largely untrained user base.
The fixes, as I've followed them over the past few years, look this way to me...
1. MS's software, like most closed-source and open-source software, had plenty of problems with buffer overrun vulnerabilities. Programmer training is the best way to stop these problems; it doesn't matter how many eyes are reviewing a program if they don't know what to look out for. (There's a short input-validation sketch after this list.)
2. Windows2000 and WindowsXP are set up with multiple privilege levels for users, processes, and files... much like UNIX.
3. The OS vendor only has limited control here... but turning off unnecessary features and setting strong security defaults in Windows2000/XP/MSIE/Outlook helps. Linux hasn't suffered from this much, since its difficult user interface keeps the riff-raff away.
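On point 1, the fix is the same regardless of platform: treat user input as hostile until it matches a strict whitelist. Here's a minimal sketch in Perl (the language already under discussion in this thread, not MS's code) using taint mode to enforce that habit:

    #!/usr/bin/perl -T
    use strict;
    use warnings;

    my $input = defined $ARGV[0] ? $ARGV[0] : '';

    # Untaint the value only after it matches a strict whitelist pattern
    if ($input =~ /\A([A-Za-z0-9_]{1,32})\z/) {
        my $clean = $1;            # $1 is the only untainted copy we keep
        print "accepted: $clean\n";
    }
    else {
        die "rejected untrusted input\n";
    }

The same whitelist-first discipline is what prevents buffer overruns in C code; the language differs, but the training point doesn't.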
Regarding your point about the openness of encryption... To my knowledge, MS uses industry-standard encryption algorithms. They have not implemented their own 'obscure' encryption algorithms. If you have evidence otherwise, please post.
While I won't deny that cryptography is (apparently) currently successful with its peer review process, cryptographic algorithms are, in general, relatively small amounts of code that can be sufficiently analyzed by a large group of engineers.
However, that comparatively tiny process cannot be compared to the millions of lines of code contained in current Linux operating systems, especially considering the vast number of flavors available.
Just because a small cup of water can be easily microanalyzed does not mean a large lake or ocean can be analyzed to anywhere near the same level of thoroughness.
Because I'm a Perl junkie. :-)
Don't know VB. I like Perl because its text-processing capability is incredible, and it has so many CGI tools. Also, it is highly portable across platforms.
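As a small illustration of what I mean by "incredible text-processing capability" (the log file name is hypothetical), here's the kind of one-screen crunching Perl makes trivial:

    use strict;
    use warnings;

    # Count hits per IP address in a web server log (hypothetical file name)
    my %hits;
    open my $fh, '<', 'access.log' or die "Cannot open access.log: $!";
    while (my $line = <$fh>) {
        # Capture the leading IPv4 address with a regular expression
        $hits{$1}++ if $line =~ /^(\d{1,3}(?:\.\d{1,3}){3})/;
    }
    close $fh;

    # Print the busiest addresses first
    for my $ip (sort { $hits{$b} <=> $hits{$a} } keys %hits) {
        print "$ip\t$hits{$ip}\n";
    }

The same script runs unmodified on the Win32 and Linux Perl ports, which is what I mean by "highly portable across platforms."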
Instead of throwing a spear at you for this, I am demanding a free pass on the next stupid thing that I say.