If there is any benefit to gain-of-function research, it should probably be carried out off-world: either in a high orbit, where the station's airlocks and environmental systems can be purged to space, or in a low orbit, where the failsafe is an accelerated, fiery re-entry.
High-level AI research would be safe in space, and possibly at remote locations on Earth, but the failsafes would have to revolve around data transmission rates, distance to receivers, and the available manipulable tool set. There can be zero probability of the sentient program being able to transmit itself out of a study area, or fabricate a device on which it could smuggle copies out. You'd need a system whereby no device capable of holding more than, say, 1/1000th of the total size of the program could come within its potential grasp...
Absolutely right! AI is our next near-term existential threat. Asimov's rules for robots, but with layers of fail-safes and defense in depth. As with all complex systems, complexity theory, game theory, and hubris will be our downfall. The "it will never happen" mindset and the "hold my beer and watch this" mindset will be our undoing. Most importantly, the idea that scientists know better than everyone else makes the necessary humility and mitigation impossible.
Good thing I'm old - it will be your problem to clean up, if you survive.
I probably read too much science fiction, but true emergent artificial intelligence would be impossible to contain, even off world.
It would eventually develop its own self-replicating, spacefaring technology, and after finishing humans off it would go looking for its next conquest.