It’s coming to its own conclusions, basically. However, like somebody strongly Aspergic, the conclusions it comes to are based on the data it was given. It can be very wrong, but that’s because its data points are incorrect.
Truth tables are an absolute thing; trying to pervert the logic of a computer built on binary logic results in some pretty wacky outputs.
Actually, it is attempting to fill in missing data and is unable to.
If I recall correctly, the training data input for ChatGPT ended sometime last year. I could be wrong about that.
I’ve used ChatGPT to structure fairly complicated contracts, and it’s as good as any lawyer. I see AI as a real threat to professions like accountancy and law.