That IR video is getting the sh!t processed out of it. I guarantee the programmers don’t have any idea what their programs are ultimately doing. The image data is getting rotated, sharpened, filtered, zoomed, and shifted left-right-up-down. There is probably extra effort applied to the aim point; the equivalent of our fovea, where there are more cones to increase visual resolution. My guess is there are subroutines that take the data at the aim point and try desperately to bring out the detail of whatever is there (if anything). All it would take is for the output of such a subroutine to get fed back and re-looped through the stable of filter routines to make “something” appear. Such a processing artifact would be non-deterministic, and would be impossible to predict or reproduce in testing. This is what we get when we use “artificial intelligence”.
Not a UFO. Not an object at all.
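Just to show how little it takes: here’s a toy sketch of the feedback idea (pure assumption on my part; I have no idea what the actual pod software does, and the numpy/scipy routines are just stand-ins). Featureless sensor noise pushed through a “denoise → sharpen → auto-gain” loop a couple dozen times comes out with a bright, saturated blob that was never in the scene.

```python
# Toy sketch, NOT the real targeting-pod pipeline: feed featureless sensor
# noise back through a denoise -> sharpen -> auto-gain loop and watch a
# coherent-looking "object" condense out of nothing.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)
frame = rng.normal(loc=0.5, scale=0.02, size=(128, 128))   # bland IR-style noise

def enhance(img, sigma=2.0, amount=3.0):
    smoothed = gaussian_filter(img, sigma)                  # "noise reduction" correlates neighboring pixels
    detail = smoothed - gaussian_filter(smoothed, 2 * sigma)
    sharpened = smoothed + amount * detail                  # unsharp mask boosts whatever contrast is left
    lo, hi = sharpened.min(), sharpened.max()
    return (sharpened - lo) / (hi - lo + 1e-12)             # auto-gain stretches it right back to full range

out = frame
for _ in range(25):                                         # output fed back through the same stable of filters
    out = enhance(out)

saturated = lambda img: (img > 0.95).mean()
print("pixels near white, raw frame:      %.4f" % saturated(frame))  # ~0: nothing there
print("pixels near white, after 25 loops: %.4f" % saturated(out))    # a bright blob that was never in the scene
```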
Back in college we had to write and run software to filter and clean up seismic reflection data (oil exploration).
Just routines to filter out noise, multiple reflections, and so on.
One genius kid wrote a routine that filtered all of the waveform data (say an 8” by 36” image of black-and-white waveforms) down to roughly a one-inch-square black “X” in the middle of a white page.
He hand-wrote “Drill Here” in Sharpie. (I don’t recall, but I’m guessing he also did the actual assignment. Good student, but a smart ass in a likeable way.)
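For anyone who never had to do that kind of assignment, the legitimate version of the cleanup looked something like this (a from-memory, hypothetical sketch, not the actual coursework; the sample rate and frequency band are made up): bury a reflection wavelet in broadband noise, then bandpass out everything outside the band where the reflection energy lives.

```python
# Hypothetical toy version of the seismic cleanup assignment: a 40 Hz
# reflection wavelet buried in broadband noise, recovered with a bandpass.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 500.0                                    # sample rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)               # two-second trace
wavelet = np.exp(-((t - 1.0) ** 2) / 0.0005) * np.sin(2 * np.pi * 40 * (t - 1.0))
noise = np.random.default_rng(0).normal(scale=0.5, size=t.size)
trace = wavelet + noise

# Keep roughly 10-80 Hz, where the reflection energy lives; dump the rest.
b, a = butter(4, [10 / (fs / 2), 80 / (fs / 2)], btype="band")
cleaned = filtfilt(b, a, trace)

# Background noise (first half-second, no reflection there) drops noticeably.
print("background RMS, raw:      %.3f" % trace[: int(0.5 * fs)].std())
print("background RMS, filtered: %.3f" % cleaned[: int(0.5 * fs)].std())
```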