Posted on 11/20/2017 9:43:38 PM PST by dayglored
Such efforts to avoid ruining the EQNEDT32.EXE binary are time-consuming, and no sane developer would have taken this route if he still had access to the source code.
...
I worked at a huge computer company in operating systems, and there were a couple of guys who would have preferred to fix every defect this way.
Microsoft is now an Indian owned and operated company. The idiocy of Indian programmers is well known in the software industry.
Yeah, especially if there are 2.2E145 versions of the source code, as in Microbloat stuff.
One of my students has a t-shirt that reads: “!FALSE is funny because it is true.”
I used that yesterday in a text to a buddy having computer problems. It never gets old.
Mika saw that t-shirt and thought it was a discriminatory dog whistle
This happened recently with the New Horizons Pluto probe. They started the process to wake up the probe and make minor course corrections about 2 weeks before the one-time-only flyby. The probe was unresponsive. They determined that most of the code had been corrupted or lost.
They had to reproduce, compile and send the code again in about 10 days, a process that had originally taken them years to complete before the launch three years earlier.
They got it done just in time, with 36 hours to spare. The results were spectacular. One helluva good job.
Not really, since you would have no meaningful labels of any kind, and the optimized assembler produced by most compilers turns even very well written code into what appears to be spaghetti to humans.

-PJ
wrong.
If the code has not changed since then, and the change was simple enough, the binary produced could be nearly identical to the original.
like if you change a “+” sign in an equation to a “-” and then compile using the exact same build environment, you could get a binary that differed in just a couple of bytes.
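To make that concrete, here is a toy sketch. The instruction encodings are real x86 (ADD r/m32,r32 is opcode 0x01; SUB r/m32,r32 is opcode 0x29, with the same ModRM byte for the same operands), but treating two instructions as whole "binaries" is obviously a simplification:

```python
# Flipping "+" to "-" in the source swaps one arithmetic instruction,
# and on x86 the two encodings differ in a single opcode byte.
add_version = bytes([0x01, 0xD8])  # add eax, ebx
sub_version = bytes([0x29, 0xD8])  # sub eax, ebx

# Count the bytes that differ between the two "binaries".
diff = sum(a != b for a, b in zip(add_version, sub_version))
print(diff)  # -> 1
```

In a real rebuild from the same environment, the rest of the image would come out byte-for-byte identical, which is exactly why a one-character source change can show up as a one- or two-byte binary diff.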
I strongly suspect that they still have the entire build environment available, so they would not have to recreate it and could just make this simple change; otherwise you’d have to do a complete integration test.
I would not doubt that the computer to build this exists virtually, with all the source code.
From the article:
"There are six such length checks in two modified functions, and since they don't seem to be related to fixing CVE-2017-11882, we believe that Microsoft noticed some additional attack vectors that could also cause a buffer overflow and decided to proactively patch them," 0patch said.

Those aren't small changes that would cause "a couple of bytes" of difference. Adding length tests requires additional code not present in the original binary.

In addition, Microsoft optimized other functions, and when the code modifications resulted in smaller functions, Microsoft added padding bytes to avoid disturbing the arrangement of other nearby functions.
And padding out an optimized function so as to not cause relocation of a function after it -- that's a sure sign somebody was editing a binary. Been there, done that. If a function was shrunk, I used the "spare" space to hold a new piece of code that had to be added. But almost always, some amount of padding was required. I would use either 0xFF, or a repeating pattern, so I could quickly identify it should I need to use it later.
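The padding trick described above can be sketched in a few lines. This is a hypothetical illustration, not the actual patch: the function name, the 16-byte slot, and the instruction bytes are all made up, and 0xFF is used as the filler because that is what the poster says he used:

```python
# When a patched function body comes out shorter than the original,
# fill the leftover space with a recognizable filler byte so that the
# functions that follow keep their original addresses.
def pad_patched_function(patched: bytes, original_len: int,
                         filler: int = 0xFF) -> bytes:
    if len(patched) > original_len:
        raise ValueError("patched code must fit in the original slot")
    return patched + bytes([filler]) * (original_len - len(patched))

original_len = 16                     # size of the slot the old function occupied
patched = bytes([0x31, 0xC0, 0xC3])  # xor eax, eax; ret -- a smaller body
slot = pad_patched_function(patched, original_len)
assert len(slot) == original_len      # neighbors keep their addresses
assert slot.endswith(b"\xFF" * 13)    # filler pattern is easy to spot later
```

A distinctive filler (0xFF or a repeating pattern) matters for exactly the reason given: it marks the space as reclaimable if a later patch needs it.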
Although the article didn't detail this particular trick, another sure sign of a binary patch is the replacement of straight-line instructions with an unconditional jump to a spare area. There, the original instructions are copied, additional instructions (typically a conditional test) are added, and a final jump returns to the point just past where the unconditional jump was patched in. No compiler would produce that. Such tricks of the trade are unmistakable, and my guess is that such artifacts were what prompted the conclusion.
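Here is a hedged sketch of that detour technique using x86 conventions: the patch site gets a five-byte near jump (opcode 0xE9 plus a 32-bit offset relative to the end of the jump) into a spare "code cave", where the displaced instructions are replayed, the new test is inserted, and a jump returns to the instruction after the hook. The addresses, displaced bytes, and the particular test are all invented for illustration:

```python
import struct

def jmp_rel32(src: int, dst: int) -> bytes:
    # E9 <rel32>, where rel32 is relative to the end of the 5-byte jump.
    return b"\xE9" + struct.pack("<i", dst - (src + 5))

patch_site = 0x401000  # where the original straight-line code lived
cave       = 0x40F000  # spare area that will hold the relocated code

# Six bytes of original code displaced by the hook:
# mov eax, ecx; add eax, edx; inc eax; nop
displaced = b"\x89\xC8\x01\xD0\x40\x90"

# At the patch site: jump to the cave, NOP-pad the leftover byte.
hook = jmp_rel32(patch_site, cave)
hook += b"\x90" * (len(displaced) - len(hook))

# In the cave: replay the displaced code, add the new conditional
# test (test eax, eax here), then jump back past the hook.
new_check = b"\x85\xC0"
back_src = cave + len(displaced) + len(new_check)
cave_code = displaced + new_check + jmp_rel32(back_src, patch_site + len(hook))
```

The giveaway for an analyst is exactly what the poster describes: an out-of-place unconditional jump into otherwise-unused space, followed by a copy of the bytes it overwrote.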