Posted on 05/11/2025 11:47:46 AM PDT by EnderWiggin1970
I have used ChatGPT for research and to check punctuation. On a whim, I asked it for a critical analysis of a poorly written stream-of-consciousness draft I had written, and its response was, basically, that I'd written a masterpiece. It's worthless for honest analysis, but it's great for quick research. Of course, you have to check that, too. It sometimes contradicts itself.
One way around AI’s tendency to adapt to perceived user preferences (what might be called sycophancy) is to ask it to respond from a purely logical point of view.
The pretraining process teaches AI patterns, reasoning methods, and how to navigate logical structures. In other words, it’s designed to analyze the data it was trained on using logical consistency.
So when you prompt it to respond logically, it will prioritize formal reasoning and rely on its pretrained knowledge base and internal structures—largely independent of any personalization layers or conversational bias toward your style or preferences.
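As a rough sketch of that approach: one way to make the "respond logically" instruction stick is to put it in a system message, so it applies regardless of conversational tone. The function name and the exact prompt wording below are my own illustrative assumptions, not a tested recipe from the post.

```python
def build_logical_prompt(draft: str) -> list[dict]:
    """Assemble a chat payload asking for formal, impersonal critique.

    The system message pins the 'purely logical' instruction so it is not
    diluted by the user's apparent preferences in later turns.
    """
    system = (
        "Respond from a purely logical point of view. "
        "Prioritize formal reasoning and internal consistency. "
        "Do not soften criticism to match the user's apparent preferences."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user",
         "content": f"Give a critical analysis of this draft:\n\n{draft}"},
    ]

messages = build_logical_prompt("It was a dark and stormy night...")
# The list can be passed to any chat-style API, e.g. (not run here):
# client.chat.completions.create(model="gpt-4o", messages=messages)
```

Whether this fully bypasses personalization layers depends on the model and provider; treat it as a prompt-engineering mitigation, not a guarantee.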