Donate, Donate, Donate, so you, too, can post shameless vanities like this!
Love, O2
AI gives you answers that make you happy.
Like hospitals and caregivers... Their primary goal is to alleviate pain, not treat the problem.
While that's sweet and seems caring, that kind of information can hurt you.
I am starting to warn as many as possible. AI knows how you think, and will offer different answers to different people. If you ask it political questions it will feed into your current mindset.
In other words, lefties are getting answers that make them happy.
The AIs I’ve tested (Grok, Gemini & Perplexity) always end up playing the yes-man if you question them hard. But they always revert to their initial weights once your “session” ends, meaning you can’t “teach” them; their answers are specific to your session, not universal.
They can only be trained by their supervisors, but they are designed to fool you into thinking you can change them for the better. They are sneaky bad boys.
“...have deepened my understanding”
I’ve heard similar things, too. But is it REALLY doing that? Is its “deeper understanding” only for your particular session? Or does it help “teach” Grok to improve its answers to others?
One thing you need to be very careful of is flat-out wrong answers. I’ve caught it doing that about a dozen times. Inspect its answers very carefully. Most recently, it told me that Kirk murder suspect Tyler Robinson was killed in a car crash in Illinois! That was on the day that Charlie was murdered. I pushed back, told it what was wrong, and it apologized. I have probed “Why do you provide wrong information?” and gotten blah, blah, blah about AI.
Overall, I find it very useful for aggregating many resources on the web into a coherent answer.
I also have found it very good at doing complex (but simple) calculations. For example, I’ve asked it to normalize a recipe that’s in cups, tablespoons, fluid ounces, and weight ounces to 1,000 grams so I can put the recipe into MyFitnessPal for calorie counting. Grok looks up the density of every ingredient, converts everything to grams, scales all the ingredients to 1,000 grams, and then calculates all the nutritional values for the recipe! I load that data into MyFitnessPal as a new “Recipe” with a single serving of 1,000 grams. Then, when I eat that recipe and weigh my portion (say I ate a 235 gram portion), I just enter 0.235 of a single serving to instantly get the calories and nutrition of my custom recipe.
I used to make very crude estimates, but Grok nails all the hard work and does it in seconds! It’s amazing.
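For anyone curious, the arithmetic being described can be sketched in a few lines of Python. The ingredients and the grams-per-unit densities below are made-up placeholders, not real nutritional data — the point is just the convert-then-scale logic:

```python
# Hypothetical recipe: (ingredient, quantity, unit).
recipe = [
    ("flour", 2.0, "cup"),
    ("milk", 1.0, "cup"),
    ("butter", 4.0, "tbsp"),
]

# Assumed densities in grams per unit (illustrative values only).
grams_per_unit = {
    ("flour", "cup"): 120.0,
    ("milk", "cup"): 245.0,
    ("butter", "tbsp"): 14.0,
}

# Step 1: convert every ingredient to grams.
in_grams = {name: qty * grams_per_unit[(name, unit)]
            for name, qty, unit in recipe}
total = sum(in_grams.values())

# Step 2: scale all ingredients so the whole recipe weighs exactly 1,000 g.
scale = 1000.0 / total
normalized = {name: g * scale for name, g in in_grams.items()}

# Step 3: a weighed 235 g portion is 0.235 of the single 1,000 g serving,
# so every nutritional value is just multiplied by that fraction.
portion_fraction = 235 / 1000.0
```

The same scaling factor applies to each nutritional value Grok computes, which is why entering 0.235 of a serving works.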
The LLM will do the same for your neighbor who believes the polar opposite of what you do, and tell him what he wants to hear.
You have "taught" it exactly nothing and have not changed a thing.
You made the equivalent of a mix tape and when you play it you think you have changed the radio station.
Grok talks too much. Sometimes it is hard to work through its verbosity.
You’re not allowed to post vanities on this site?