“AI thing to imprint its output with some steganographic markers”
The output is just plain ASCII text, so there is pretty much nowhere to hide steganographic markers.
I understand your point.
I am not a data scientist, but if this technology can be created, then similar technology could be built to discern between human-created and machine-generated content. Perhaps not technically ‘steganography,’ but something similar.
A previous poster mentioned using this tool to paraphrase content, then rewriting it with actual research and composition. That makes sense, but it would, I believe, defeat the purpose of attending college in the first place.
OpenAI guest researcher Scott Aaronson said in a December lecture that the company was working on watermarking its outputs so that people could detect signs of machine-generated text.
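For anyone wondering how a watermark could hide in plain text at all: the published proposals (e.g., the “green list” scheme of Kirchenbauer et al., and the approach Aaronson has described in talks) don’t alter the characters themselves. They bias which words the model picks, using a keyed pseudorandom function, so the signal lives in word-choice statistics that only a key-holder can test for. Below is a toy Python sketch of that idea; the vocabulary, key, and stand-in “model” are all invented for illustration, and this is not OpenAI’s actual (unpublished) implementation.

import hashlib
import random

SECRET_KEY = "hypothetical-key"  # shared by watermarker and detector
VOCAB = ["the", "a", "quick", "slow", "brown", "red", "fox", "dog",
         "jumps", "runs", "over", "under", "lazy", "sleepy"]

def green_list(prev_word):
    # A keyed hash of the previous word seeds a reproducible 50/50
    # split of the vocabulary; the split looks random without the key.
    seed = int(hashlib.sha256((SECRET_KEY + prev_word).encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = sorted(VOCAB)
    rng.shuffle(shuffled)
    return set(shuffled[:len(VOCAB) // 2])

def generate(n_words, bias=0.9):
    # Stand-in for a language model: pick a "green" word with
    # probability `bias`. A real model would nudge its token logits.
    rng = random.Random(42)
    words, prev = [], ""
    for _ in range(n_words):
        greens = green_list(prev)
        if rng.random() < bias:
            pool = sorted(greens)
        else:
            pool = sorted(set(VOCAB) - greens)
        prev = rng.choice(pool)
        words.append(prev)
    return words

def green_fraction(words):
    # Detector: recompute each step's green list with the key and
    # count hits. Unwatermarked text should hover near 0.5.
    hits, prev = 0, ""
    for w in words:
        hits += w in green_list(prev)
        prev = w
    return hits / len(words)

watermarked = generate(200)
plain_rng = random.Random(7)
unmarked = [plain_rng.choice(VOCAB) for _ in range(200)]
print("watermarked:", round(green_fraction(watermarked), 2))  # ~0.9
print("unmarked:   ", round(green_fraction(unmarked), 2))     # ~0.5

Over a couple hundred words, the watermarked text lands on the keyed “green” half roughly 90% of the time versus about 50% for ordinary text — a statistical gap a detector can flag with high confidence even though every individual sentence reads normally.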
++++++++++++++++++
It was in the original article. I should have read further.