Posted on 03/26/2015 8:27:11 PM PDT by Utilizer
It's a commonly held belief among software developers that avoiding disk access in favor of doing as much work as possible in-memory will result in shorter runtimes. The growth of big data has made time-saving techniques such as performing operations in-memory more attractive than ever for programmers. New research, though, challenges the notion that in-memory operations are always faster than disk-access approaches and reinforces the need for developers to better understand system-level software.
These findings were recently presented by researchers from the University of Calgary and the University of British Columbia in a paper titled When In-Memory Computing is Slower than Heavy Disk Usage. They tested the assumption that working in-memory is necessarily faster than doing lots of disk writes using a simple example. Specifically, they compared the efficiency of alternative ways to create a 1MB string and write it to disk. An in-memory version concatenated strings of fixed sizes (first 1 byte, then 10, then 1,000, then 1,000,000 bytes) in memory, then wrote the result to disk in a single write. The disk-only approach wrote the strings directly to disk (e.g., 1,000,000 writes of 1-byte strings, 100,000 writes of 10-byte strings, etc.).
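The two approaches described above can be sketched roughly as follows. This is not the paper's actual benchmark code; it is a minimal Python illustration of the same idea, with chunk sizes and the 1 MB target taken from the excerpt. Note that CPython sometimes optimizes repeated string `+=` in place, so the quadratic copying cost the paper observed may or may not appear depending on runtime and version.

```python
import os
import tempfile
import time


def in_memory(chunk: str, total: int, path: str) -> float:
    """Concatenate chunks in memory, then write the result in one call."""
    start = time.perf_counter()
    data = ""
    for _ in range(total // len(chunk)):
        data += chunk  # each concatenation may copy the whole string so far
    with open(path, "w") as f:
        f.write(data)  # single write of the full 1 MB string
    return time.perf_counter() - start


def disk_only(chunk: str, total: int, path: str) -> float:
    """Write each chunk straight to disk (the runtime/OS buffer the writes)."""
    start = time.perf_counter()
    with open(path, "w") as f:
        for _ in range(total // len(chunk)):
            f.write(chunk)  # many small writes instead of one big one
    return time.perf_counter() - start


if __name__ == "__main__":
    total = 1_000_000  # build a 1 MB string, as in the paper
    for size in (1, 10, 1_000):
        chunk = "a" * size
        with tempfile.TemporaryDirectory() as d:
            t_mem = in_memory(chunk, total, os.path.join(d, "mem.txt"))
            t_dsk = disk_only(chunk, total, os.path.join(d, "dsk.txt"))
        print(f"chunk={size:>5} B  in-memory={t_mem:.3f}s  disk-only={t_dsk:.3f}s")
```

The "disk-only" version can win because small writes land in a userspace buffer and the page cache, while naive concatenation of an immutable string can repeatedly copy everything built so far.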
(Excerpt) Read more at itworld.com ...
So cache is not necessarily king?
True, but since some systems use SSD drives I think this article brings up some valid points.
Especially considering how fast hard drives of today have become.
Good post. I have to ask, out of all the devs you’ve run into, what percentage would you say fall into the “good coders” group?
This has been a continuing problem over the years. More and more coders are relying upon greater amounts of memory to accomplish their goals instead of streamlining the processes.
Wait, you mean writing smart, efficient code rules the day? Imagine that.
I STILL code in Assembly. :) Thanks for the plug, mate. You want fast, small, and efficient? Assembly code is the way to go!
Not sure if they’ve improved much since then, but my experience with SSDs has been that they’re significantly slower than disks. That was a few months ago, though...
Yep, assembler is wonderful unless you need the software to be ready before the device is obsolete ;)
It helps to have an OS that knows not to write overmuch to the disk.
If millions of instances are going to be run for years, it makes sense to spend more time on the programming to get it efficient. Management is often concerned with having just one or two guys knowledgeable about the magic.....
Spaghetti code was done for good reason.
SSDs are faster than disk now? I’ve been out of the loop too long...
L1<L2<L3?
I’ve been in the programming racket for 25 years now and have only met about half a dozen programmers who know how to produce efficient code. The volume of kludges that get rolled out in mission-critical applications is truly frightening.
Game programmers had all sorts of tricks to speed up frequently run code.
Exception handling is always an issue.
Yep, the worst are the exception handlers that include the “ignore” option.
There is one off-brand SSD in the case and it gets no software benefits.
Regardless of that, clocking my i7 CPU to 4.2 GHz from even 4.0 GHz gives a noticeable increase in speed. The bottleneck for video cards is still the same as it's always been: the CPU.
Road to ruin.....