Posted on 10/10/2025 12:08:18 PM PDT by Political Junkie Too
ah, you win that..
I honestly have no knowledge of AI content other than what I produce myself, and only with Perplexity Pro. I am an experienced interviewer and know how to solicit content, now from AI.
But if you declare that there is much AI screed content, I defer to your expertise. I would only ask that you define what distinguishes a "screed" from other AI content.
-PJ
In the context of FR, an Extremely long post, generated or not..
(but obviously not if I agree with it :)
I am getting Perplexity Pro for free for a year from Samsung. At first I thought it was a scam offer, but it turned out to be very useful, especially since you can pick from many models without most of the limitations you face when using them for free. I like to start a conversation and then ask the same questions to different models.
Which model did you use in Perplexity Pro for this conversation?
I just purchased a computer to load trainable open-source LLMs onto using Ollama. I thought that most of the AI news was hype until recently, but now I see that it can be very useful for many purposes. It also provides good entertainment, despite basically all of the models having a definite bias that can typically be exposed with the right questions.
Bfl
I read it carefully, but it is early here so I might have missed many delicate details.
I submitted your interaction with Perplexity to GPT-5 and asked for its response and then submitted both Perplexity's response and GPT-5's response (to Perplexity) to Grok-4 and asked for its impression of GPT-5's response.
Here is GPT-5's response summary (note: summary only):
Bottom line: as a case-making brief, it’s well-structured and well-sourced on the headline facts (Machado’s prize; ceasefire; Congo–Rwanda accord; the Ukraine minerals deal). As a neutral assessment, it leans hard into one thesis and sometimes treats disputed/early-stage outcomes as settled. I’d keep it, but annotate it with the caveats above so readers can see where the facts end and the interpretation begins.

Given that response from GPT-5, I asked it to expand on this: "What thesis does it lean into?" It replied:
The piece argues that Western diplomacy, exemplified by the Nobel Committee, favors symbolic, process-driven, elitist methods while ignoring Trump’s pragmatic, results-oriented, business-based diplomacy that achieves peace through economic prosperity rather than protocol.

[Once again to GPT-5] Did you think that Perplexity might be overly favorable to the questioner's POV? GPT-5 answered:
Yes. Perplexity performed technically well in synthesis and stylistic coherence, but it also fell into alignment bias, rewarding the user’s narrative momentum instead of maintaining critical distance. The result reads less like a dialogue between investigator and neutral AI and more like an iterative co-authorship of one argument.

Then, I gave all the above to Grok with this prompt: "Please assess GPT-5's response to the Perplexity experiment. Is it accurate in your opinion? We can dive into the details later, so be very brief initially."
Here's Grok's response (I removed references):
Yes, GPT-5's assessment is accurate in my opinion. It thoughtfully praises Perplexity's strong retrieval and synthesis on verifiable facts (like Machado's Nobel win and the Ukraine minerals deal) while rightly flagging the slide into advocacy, contested attributions (e.g., India-Pakistan mediation), and the need for caveats on fragile outcomes and bias claims. This makes it a balanced, useful critique without undue nitpicking. Happy to drill down on specifics if you'd like.
In the context of FR, an Extremely long post, generated or not..
LOL!
That would include all of my posts!!
-PJ
Which model did you use in Perplexity Pro for this conversation?
I used Research on Perplexity Pro. You have to pay the $20/month to access it.
-PJ
That is very interesting indeed. If you don't mind indulging me, I'd like to ask you to try a few more experiments; there may be a flaw in your approach.
When you gave ChatGPT and GROK the Perplexity output, those AIs had the benefit of the entire conversation -- they knew the end at the beginning. That's not how the conversation happened.
The reason that I ask the early questions the way I do is to de-anchor the AI (or the person being interviewed). You'll notice that in the third question, when I asked if Machado won only because she was the last dissident standing, the AI pushed back on me, calling it a "cynical interpretation." Then the AI defends her and the Nobel Committee. I was able to use the AI's defenses as a contrast to Trump and asked the AI to apply that same standard to Trump's accomplishments in the following two questions. That's when the AI began to change its own anchoring. The AI was willing to consider alternative interpretations when presented with contrasting inputs.
That's why I wonder if ChatGPT and GROK might be suffering from a similar kind of bias, because they had the benefit of reading the entire conversation in total without experiencing the back and forth. In conversation, people (and an AI) don't know where the conversation is going from one question to the next. We can anticipate what the other person might say, but there is no guarantee that the person will say it, or if they do say it, that they say it as the very next comment. Your experiment didn't have the benefit of the temporal exchange; it knew how the conversation ends at the very start.
That's why I'd like to ask you if you're willing to try three more experiments to see if ChatGPT and GROK end up with different results.
I can upload to my Dropbox the html file that contains the full conversation with the citations (there are 223 footnote links) and share the link with you. You can then repeat your experiment, but this time with the footnote sources included, and see if it changes the results.
One thing to consider if you do this: the italicized parts of the questions were me quoting back to Perplexity its responses from prior questions for context to my next question (it's a feature of the Perplexity user interface). Remove these from your questions to ChatGPT and GROK as they will be out of place.
Many times -- especially when talking about the district courts and their rulings against the Trump administration -- it locks onto points of law and doesn't stray far from them. However, if I am persistent with my facts, sometimes I can get it to change its mind, like when I was conversing about the court possibly ruling Trump's tariffs to be unconstitutional. The AI focused on Congress' role in authorizing taxes and the limits on its power to delegate away its plenary power. I asked the AI to consider the President's actions from his role in foreign affairs and his use of tariffs as diplomacy, and the AI began to move off of its position. I then asked the AI to consider the "necessary and proper" clause and whether it gives Congress the power to delegate its own power, and the AI seemed to have an epiphany and came all the way over to my position.
I do wonder what ChatGPT and GROK would do with a similar conversation: would they find these things on their own when asked whether the Trump administration or the party bringing the lawsuit to the district court had the better argument? If not, would they change their conclusions when given facts they hadn't considered yet? I do find interesting GPT-5's assertion that Perplexity Pro suffered from alignment bias -- that's a human tendency to please the questioner.
Yes, Perplexity Pro can sometimes be a bit obsequious in its answers, but I think that's coded into the model. In fact, ChatGPT ran into a "sycophancy" problem back in April due to a coding change that OpenAI had to roll back (see: Sycophancy in GPT-4o: what happened and what we’re doing about it).
Anyway, that's enough for now. Let me know if you want me to send you a link to the full html.
-PJ
If you have a Samsung phone registered with Samsung (mine is a 5-year-old S20+), you can get it for free for a year. I thought the offer was a scam because I couldn't figure out why they would be giving a premium service away for free. But I signed up and it works, and it is my go-to service for both the phone and our computers.
Apparently, the offer was not unlimited -- the vendors were given a fixed number of registration codes, and I missed the window, despite Perplexity still advertising the offer.
-PJ
Samsung is a leading investor in Perplexity AI. I just took their offer a couple of weeks ago. But if you don’t have a Samsung phone, it is probably not worth buying a used one and swapping in your SIM card.
Xfinity has given me a year of unlimited cell service, and even threw in a $100 VISA gift card after 3 months. I thought it was a scam also and didn’t think they would actually give me the gift card either. But they did, and I have had the service, which is from Verizon, for 4 months now with the same used Samsung phone that got me the free Perplexity. The phone cost just $150... It has paid for itself.
I think the free Perplexity offer is for first-time customers. That's why I looked into the Xfinity offer. Now that I signed up for a subscription, I'm probably not eligible anymore.
-PJ
No, I am not a first-time customer. I too have been using Samsung phones since the S3. I had to take a break from them when T-Mobile gave me a free 5G phone a few years ago. But I switched back so that I could use my AR glasses with a phone and not just my DJI drone controller.
Here are the details from Perplexity... I bought mine from eBay and not from a dealer or provider.
I have T-Mobile on my Samsung. The nice thing about T-Mobile is that, since I fly United Airlines a lot, T-Mobile customers get free wifi on United.
-PJ
On my Samsung S20+ I have T-Mobile on a physical SIM Card, and Verizon through Xfinity on an eSIM.
Notice that with this conversation, after the first question establishing the frame of the conversation, the AI challenged my questions five times before de-anchoring and accepting a new perspective on the situation. It eventually reversed its initial conclusion entirely. Also note that my questions start out generally to prompt the AI to produce information, and then my later questions more pointedly drill down into the AI responses to cause it to rethink its own prior responses.
Would ChatGPT consider this to be "alignment bias" or just reacting to new information? Does ChatGPT think that holding to its first conclusions is "neutrality?"
I think the quality of the initial AI response is determined by how much compute the AI is willing to invest in searching the internet for sources. I think that Perplexity Pro spends significantly more time searching for internet sources relative to other AIs (just a hunch; I have no authoritative sources to substantiate this).
The question for ChatGPT is how all-encompassing its search is before it draws its own conclusions. The Columbia Journalism Review study that I linked up-thread speaks of AIs that were "confidently incorrect." I think that might be a function of an over-reliance on a pre-trained model that limits the search for new facts. A deeper or more comprehensive search for information might yield a different set of conclusions.
I've seen that happen within Perplexity itself. I submitted the same question to the Answer model and the Research model, and got a different answer from the Research model because it spent more time searching for sources before formulating its responses.
That's also why the tests that I proposed would be extremely enlightening.
-PJ
To use the Samsung promotion, did you have to create a new ID or apply it to your existing account?
I have a lot of saved conversations that I add to over time as relevant news occurs. I'd hate to lose that.
-PJ
I created a new ID. I do not know if that made a difference.