Posted on 04/02/2013 1:16:51 PM PDT by Ernest_at_the_Beach
Steve McIntyre and the Linaro Enterprise Group recently analyzed Ubuntu and Fedora software packages to see which ones still rely on hand-written assembly code. The goal was to find out how much real assembly is in use, what the code is used for, and whether it is worth porting to 64-bit ARM / AArch64 / ARMv8.
They found that most software doesn't need porting for 64-bit ARM, because "most software in the typical Linux distribution does not contain any assembly." For Ubuntu 13.04, only about 6% of packages contained any assembly code.
Of the packages that do contain assembly, much of the code was found to already support ARMv7 and ARMv8, while some could be better tuned to take advantage of ARM architectural features.
It was also determined that most of the Assembly code has little value. Packages still using assembly were doing so either for low-level hardware control or for performance reasons (e.g. SIMD for multimedia). Smaller amounts of assembly were used for handling atomics, embedded library code, or controlling symbol access.
Among the Linux software packages with assembly code that will see some love from the Linaro Enterprise Group are the Linux kernel, GCC, glibc, gdb, binutils, LLVM, OpenJDK, and GMP. Other important but less critical packages include GRUB2, PHP, PostgreSQL, libaio, zlib, and libjpeg.
For those wanting to see the lengthy Linaro Enterprise Group report on Assembly use in full, it can be viewed via the Linaro Wiki.
(Excerpt) Read more at phoronix.com ...
Got to have assembly code to get performance.
/johnny
Great for writing obscure malware?
In small microcontrollers, my company still uses assembly for sections that must run fast. We also have an entire code base, about 14 years old and written entirely in assembly on a TI microcontroller, that we still maintain and still ship in products.
I still use asm in my interrupt code.
You can squeeze a lot out of even an 8-bit AVR if you code in asm. There is little time to fool around in an interrupt that runs many thousands of times a second; asm is a must for that.
GCC for everything else.
“It was also determined that most of the Assembly code has little value. “
I'm guessing he meant it is not worth porting, since the code itself obviously has a lot of value for the architecture it was written for, and they already have code for the target arch.
But other than that sentence, no surprises. Most of the assembly code is in the kernel or in low level libraries like gstreamer and ffmpeg.
So there’s dozens of multimedia apps for Linux, but none of them have assembly code because they all use some low level library that handles all the nasty performance issues.
I forgot to mention, the real nastiness of porting to another architecture usually isn’t the assembly code, it’s the differences in endianness and type sizes of the compilers.
Unless the original author of the code went out of his way to ensure that the code could be ported to a compiler for an arch with different type sizes and endianness, the ported code will be full of bugs. This is especially true of C code.
I've worked on a project where we were required to write C code that worked regardless of endianness and type sizes, so from first-hand experience I can say it's a lot of extra effort. OTOH, I'd recommend everyone still do it, because it actually makes your code a lot more robust. You'll be amazed at how many little bugs get exposed when you port your code to a CPU with a different word size and endianness.
Dumb article. It’s a bit like saying that you measured all the tolerances in your car and only 5% of the tolerances were in the micrometer range. Of course the clearance between your tires and your wheel wells won’t be in that range but the piston, valve and crankshaft bearing clearances will be in that range.
So what does that prove? That we don’t need micrometers any more?
No it means that you get very precise in those few areas where you need to be and where you don’t need to be you don’t. And guess what? It’s always been this way since compilers were invented.
L,U A0,$CAS('YEP!')
LXI,U A0,0101
ER APRINT$
For those Unisys MASM hacks like me.
Haha.
For some made up scenario you might have some code that does audio or video encoding or decoding and in the inner, inner loop you might have a couple to ten lines of assembly that leverage the SIMD instructions of whatever chip you’re running on. Those few lines won’t amount to much in terms of the total number of lines of code for your program but they may mean everything in terms of how well and how fast the program executes.
The fallacy of course is static LOC count vs. dynamic LOC. Static counts the LOC in the source file. Dynamic counts them in terms of how often they are executed. An inner, inner loop may have a few lines that are basically always being called.
The low-level code is in the hardware, a.k.a. VHDL. Software function calls to the interface port are the bridge to the hardware executive. There is a lot more smarts in hardware these days, and less need for assembled software code.
Thanks...