Free Republic
Browse · Search
General/Chat
Topics · Post Article


1 posted on 05/16/2013 6:39:16 AM PDT by ShadowAce


To: rdb3; Calvinist_Dark_Lord; Salo; JosephW; Only1choice____Freedom; amigatec; stylin_geek; ...

2 posted on 05/16/2013 6:39:35 AM PDT by ShadowAce (Linux -- The Ultimate Windows Service Pack)

To: ShadowAce

“it’s”?


3 posted on 05/16/2013 6:42:52 AM PDT by Clint N. Suhks (Barack Hussein 0bama: Benghazi was a bump in the road a sideshow and a circus.)

To: ShadowAce
" That Graham’s solution for building systems that scale for the next decade is to bypass the OS entirely and talk directly to the network and hardware"

Sounds like DOS.

5 posted on 05/16/2013 6:46:19 AM PDT by Paladin2

To: ShadowAce

LOL!

By way of comparison, Windows XP’s kernel has been patched so much that it is almost 95% unavailable except through a few secure-channel APIs. This is a 15-year-old OS. Unix, on the other hand, has been around for a very long time and is reaching the end of its useful lifetime due to an inherent programmatic shortfall at its very core.

The latest generation of Windows and Apple kernels are actually isolated behind an abstraction layer that performs all of the calls to the kernel, essentially freeing up the kernel for higher-order processing. I’m amused that Unix would have this shortcoming considering how successful it’s been as an OS, but I suppose even giants must eventually fall.


7 posted on 05/16/2013 6:50:20 AM PDT by rarestia (It's time to water the Tree of Liberty.)

To: ShadowAce

Yes, assembly/machine language can provide the best performance, but it was a nightmare to write and maintain. Sounds like someone is just fishing for funding. Why would you not scale out to support that many connections, with the added benefit of load balancing and redundancy?


9 posted on 05/16/2013 6:52:04 AM PDT by xenob

To: ShadowAce

In my not so humble opinion, this is a bad approach. The vast majority of errors, overruns and causes for BSOD are in the driver layer. The better approach is stability first and then performance improvements. To that end, Minix 3 would be the better approach.

As for the socket-to-app issue, there are a number of approaches that can be taken. An index of sockets, dynamically reordered by most recent traffic, would shorten the lookup time in most high-concurrency situations.

However, I would argue that putting such a load on a single box is not at all a wise choice. Lose that box and lose 10,000 connections !?!?!? Sounds risky to me. Far better to break the load up over a larger base of servers.


12 posted on 05/16/2013 7:00:57 AM PDT by taxcontrol

To: ShadowAce

Genode is interesting.

http://genode.org/documentation/architecture/index

http://genode.org/documentation/general-overview/index


14 posted on 05/16/2013 7:04:23 AM PDT by Ray76 (Do you reject Obama? And all his works? And all his empty promises?)

To: ShadowAce

Hey I know! Replace that old worn out operating system with one of the new modern ones, you know, like Windows 8. That should fix all the problems, right?


15 posted on 05/16/2013 7:10:22 AM PDT by catnipman (Cat Nipman: Vote Republican in 2012 and only be called racist one more time!)

To: ShadowAce

There is a reason why we are where we are now. This particular theoretical approach would eventually morph into an OS as people incorporated more base support into each iteration. I think we are doing a grave disservice to new technologists by not teaching them the technological history of the last four decades.


17 posted on 05/16/2013 7:20:30 AM PDT by Durus (You can avoid reality, but you cannot avoid the consequences of avoiding reality. Ayn Rand)

To: ShadowAce

This article is an enormous pile of crap.

It’s talking about a problem that has no practical significance.

ANY excuse to beat up on Unix, because there are FREE versions of it.

The real problem is on the business side. Computers are so fast today that massive applications can be run on a $500 PC. Well, IF (big IF) they are well-written.

We find business customers simply NOT NEEDING more computing power.

What to do ?

Create new buzzwords, create new things for them to do.

Everyone needs a “data warehouse”.

Buy new servers, create copies of your database on them in different forms. Mix and mash the data six ways to Sunday. Write all sorts of reporting tools, do all sorts of analysis (“business intelligence,” in buzzword terms). Spend millions on hardware and consultants to do things like tweak your pricing structure. Now the customer can come up with math to verify for senior management that they are improving top- and bottom-line results. And global warming is definitely man-made.

Back to just how fast computers now are.

Let’s say you have a machine that executes a BILLION instructions every second. Oh, and it has 4 of those processors on its one chip.

It has a network card that will transfer a GIGABYTE every second.

It has 96 GIGABYTES of main memory, i.e., RAM.

It has a TERABYTE of disk storage and can transfer HUNDREDS of MEGABYTES per second to and from it.

And the dang machine costs $600.

It sounds to me like a single person could write ridiculously high-volume applications compared to even ten years ago, and run them on their PC.

Once you get into the realm of businesses that have some capital to throw around, you can EASILY and CHEAPLY have racks of servers, with LOAD BALANCING, to support millions of simultaneous online users.

Now, if your programming staff is a bunch of FREAKING MORONS, you’ll be buried in problems, to be sure. But then again, that’s always been true and always will be.

But if you have a small team of SMART PEOPLE, who RTFM and program accordingly, performance SHOULD BE a non-issue.

It boggles the mind how stupid people can be.


20 posted on 05/16/2013 7:28:51 AM PDT by PieterCasparzen (We have to fix things ourselves)

To: ShadowAce

Today’s application programmers are not capable of writing their own device drivers or handling the stack.

They wouldn’t know malloc() if it bit them in the @ss and they couldn’t free() themselves from a paper bag.


21 posted on 05/16/2013 7:29:01 AM PDT by BuckeyeTexan (There are those that break and bend. I'm the other kind. ~Steve Earle)

To: ShadowAce

Here’s an idiot excerpt...

“The talk touches on a concept I’ve been mulling over for months, the inherent complexity of modern data centers. If you are virtualizing, and you probably are, for your application to get to the hardware there are most likely several layers of abstraction that need to be unpacked before the code it is trying to execute actually gets to the CPU, or the data is written to disk. Does virtualization actually solve the problem we have, or is it an approach built from spending far too long in the box?”

A data center needs to be as complex as it needs to be, no more, no less. Operating systems today all essentially do the SAME things at the bottom end; they allow for sharing of hardware resources between multiple user processes.

Some Information Technology (IT) shops are better managed than others. Shops with serious problems have human-management problems that make it very difficult to overcome the hurdles of managing their servers. As far as server administration goes, while M$ products will “run right out of the box,” it’s a costly mistake to think that managing them will be easier than managing Unix servers: M$ products have historically shipped with default settings and functionality that are inherently the wrong choice, while Unix basically requires the server administrator to visit the configuration, understand all the options, and make his own choices. If one could have a purely Unix server environment, and one spent the time to have every configuration choice well thought out instead of neglecting the “details,” the pure Unix environment would be far more secure than the pure M$ environment. Inevitably, though, today’s server environments are mixed, as dictated by the needs of the particular applications that IT is required to support. This, of course, makes the labor cost of server administration far greater in smaller shops.

The impetus behind virtualization...

It used to be that IT shops would gradually keep adding servers. Some departments in the company would have their own file server. There were email servers. Then applications would be purchased, and with each one a new set of servers: development, test, and production for the app. This is just life.

But you’d find mistakes being made. Performance problem? Don’t tune the application, revisit the design, and work out what you’re trying to achieve and how best to do it - no, just buy a faster server.

Since MOST CPU time is spent IDLE, we wind up with millions in capital investment in server hardware sitting there depreciating: unable to run fast enough to satisfy users when the poorly-tuned apps run, but sitting idle the rest of the time.

With software advertising being ubiquitous, every department started screaming for new applications that they just had to have - and getting approval directly from the top, with IT all but cut out of the loop. Thus the crucial questions of “what capabilities and plans do we have in terms of our existing IT staff and infrastructure” and “what external directions are there and how will they affect our shop” (i.e., should we be moving in this or that technology direction, in terms of both hardware and IT training) are all too often not given enough consideration; perhaps lip service, or perhaps IT actually liked the idea of the new apps and architectures themselves. But instead of preparing by ensuring IT staff expertise FIRST, the business would plunge into new technology unaware, outsource the required core expertise, and (typically) allow selected IT staff to have at the juicy new project from a backseat role. All too often these staffers would then turn around and leave, catapulting their careers higher with their newfound “expertise”.

So the “glass tower” of IT was overrun. No longer could IT dictate when changes or new reports were to be completed, who had access to what, etc.

Now, every department finds out what the most popular software is for its tasks and asks senior executives, “Why aren’t we doing that?” The senior executives all start asking the same question. The salesman is called in for the dog-and-pony show, IT gets their marching orders, and new servers come rolling in.

Thus we have IT shops with hundreds, and oftentimes thousands, of physical servers; maintaining them represents work that must be done (installing upgrades, installing new machines, removing old machines, etc.).

Thus we see the drive for virtualization of servers.

You want hundreds, thousands of servers ? Well, IT went out and bought virtualization software, so they can provision you a set of new servers without having to purchase, wait for, and set up new hardware. Just clickety-click, bada bing, there’s your new servers, let’s install this new software.

Is there overhead to virtualization? Sure. But, sorry to say for the people who wrote this article, it’s no showstopper with today’s hardware performance.

The inevitable downside? Of course - since it’s now so much easier to create servers, the decision to create new ones is made MUCH more casually today, with the predictable result that the number of virtual servers grows much faster than the number of physical servers ever did. So IT departments continue buying hardware and continue struggling to keep up. Virtualization itself provides no direct help with keeping software updates applied to all these virtual servers, so IT can get buried trying to maintain them all. And to solve this problem there is the age-old remedy of software-based automation, plus good old-fashioned figuring out of efficient ways to manage the configurations of the applications running on all those virtual servers.


29 posted on 05/16/2013 8:22:23 AM PDT by PieterCasparzen (We have to fix things ourselves)

To: ShadowAce
Sounds great, as long as you don't have to worry about:

Hmm ... come to think of it, seems like he is suggesting Windows to me ;)

30 posted on 05/16/2013 8:23:12 AM PDT by SecondAmendment (Restoring our Republic at 9.8357x10^8 FPS)

To: ShadowAce

An article whose author reveals his illiteracy in its very title is not worth reading.


45 posted on 05/16/2013 1:48:29 PM PDT by Revolting cat! (Bad things are wrong! Ice cream is delicious!)

To: ShadowAce
Mirage is, as stated, very academic, and currently very alpha quality,

What's the big deal? Most of Linux is alpha quality, too.

56 posted on 05/17/2013 9:18:41 AM PDT by Moonman62 (The US has become a government with a country, rather than a country with a government.)

To: ShadowAce
"Our prototype (dubbed Mirage) is unashamedly academic; it extends the Objective Caml language with storage extensions and a custom run-time to emit binaries that execute as a guest operating system under Xen."

To a typical Windows user, it reads like this:

Πρωτότυπο μας (που ονομάστηκε Mirage) είναι απίστευτα ακαδημαϊκή? Επεκτείνει το στόχο Caml γλώσσα με επεκτάσεις αποθήκευσης και ένα προσαρμοσμένο run-time για να εκπέμπουν εκτελέσιμα που εκτελούν ως φιλοξενούμενο λειτουργικό σύστημα κάτω από το Xen.

57 posted on 05/17/2013 9:38:06 AM PDT by Rebelbase (1929-1950's, 20+years for full recovery. How long this time?)



FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson