Objectivist libertarians tend to believe that benefit to the individual derives solely from actions or laws that benefit individuals directly, whereas conservatives tend to believe that actions or laws benefiting the larger group, in addition to those benefiting the individual, result in increased benefit for the individual. This study appears to provide support for the conservative viewpoint.
As someone knowledgeable in these areas, what is your take on this article?
Ah good - we've found the basis for the Golden Rule in some neurons in our brain that give us pleasure when we cooperate.
How nihilistic can you get?
- 3 by ThePythonicCow
Objectivist libertarians tend to believe that benefit to the individual derives solely from actions or laws that benefit individuals directly, whereas conservatives tend to believe that actions or laws benefiting the larger group, in addition to those benefiting the individual, result in increased benefit for the individual. This study appears to provide support for the conservative viewpoint. -Nebullis
-- How weird. PyCow finds the article 'nihilistic' in its finding that the libertarian golden rule concept is valid;
--- while Neb claims that the article shows such a concept to be not libertarian at all, but a 'conservative' view.
--- Dichotomy, anyone?
Objectivist libertarians tend to believe that benefit to the individual derives solely from actions or laws that benefit individuals directly, whereas conservatives tend to believe that actions or laws benefiting the larger group, in addition to those benefiting the individual, result in increased benefit for the individual. This study appears to provide support for the conservative viewpoint. As someone knowledgeable in these areas, what is your take on this article?
Oops, I thought I had already posted this early on, but apparently not:
===========================================
Note to libertarians: cooperation, not just self-interest, is hardwired.
Note to Nebullis: Cooperation is in our long term self-interest - which is why it's hardwired!
Even when unemotional software agents play the iterated Prisoner's Dilemma (the recurring kind, where you'll encounter the same players again in the future), the ones that adopt a variation of tit-for-tat always win out in the end. It doesn't surprise me that there was a selection pressure among humans for empathy & cooperation.
What's unhealthy, IMO, is the kind of altruism that says you have a moral obligation to help people, just because they say so. There's a subtle difference between making a long term investment in the well-being of those you value, and willingly playing host to a parasite.
Long-term cooperation requires a certain level of brainpower, since you have to be able to remember who you've interacted with in the past, who's done whom wrong, & who owes whom a favor. And putting yourself in the other person's shoes (empathy) also requires you to be able to step back & construct a model containing both yourself & the other person, & relate that model to the real world you're actually in. Which helps explain why only humans take cooperation to the extreme that we do.
===========================================
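The point about software agents and tit-for-tat can be sketched in a few lines. This is my own illustration, not code from the article or the study: the strategy names, the set of opponents, and the payoff values (the standard T=5, R=3, P=1, S=0) are all my assumptions. It runs a simple round-robin tournament of the iterated Prisoner's Dilemma.

```python
# A minimal iterated Prisoner's Dilemma tournament (illustrative sketch).
# Payoffs use the conventional values: mutual cooperation 3, mutual
# defection 1, defecting against a cooperator 5, being the sucker 0.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0,   # (my move, their move) -> my score
          ("D", "C"): 5, ("D", "D"): 1}

def tit_for_tat(opp_history):    # cooperate first, then mirror the opponent
    return opp_history[-1] if opp_history else "C"

def grudger(opp_history):        # cooperate until the opponent ever defects
    return "D" if "D" in opp_history else "C"

def always_defect(opp_history):
    return "D"

def always_cooperate(opp_history):
    return "C"

def play(a, b, rounds=100):
    """Return a's total score against b over `rounds` repeated games."""
    hist_a, hist_b, score = [], [], 0
    for _ in range(rounds):
        move_a, move_b = a(hist_b), b(hist_a)  # each sees the other's past
        score += PAYOFF[(move_a, move_b)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score

strategies = [tit_for_tat, grudger, always_defect, always_cooperate]
totals = {s.__name__: sum(play(s, opp) for opp in strategies)
          for s in strategies}
for name, score in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name:>16}: {score}")
```

With this particular (assumed) lineup, tit-for-tat ties for the top score while always-defect finishes last, even though it exploits the unconditional cooperator: retaliatory strategies deny it the big payoff after the first round.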