Posted on 05/02/2023 5:42:59 AM PDT by MtnClimber
By now, many are familiar with ChatGPT. Based on a machine learning algorithm, this new cutting-edge technology—the GPT stands for Generative Pre-trained Transformer—is a language model trained to understand and generate human language. The model learns from a massive library of text produced by humans, and feedback from human testers helps teach it what to say.
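[For readers curious about the mechanics: a model of this kind does one thing repeatedly — it predicts the next token given everything written so far. Below is a minimal sketch of that idea using the freely available GPT-2 model via the Hugging Face transformers library. GPT-2 is an earlier, much smaller relative of ChatGPT's underlying model, and the prompt is purely illustrative.]

```python
# A minimal sketch of autoregressive text generation, using the public GPT-2
# model via Hugging Face's transformers library (pip install transformers torch).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model extends the prompt one predicted token at a time.
result = generator("Large language models are", max_new_tokens=30)
print(result[0]["generated_text"])
```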
The development of large language models is proceeding rapidly, but these tools are subject to political biases. On the OpenAI website, the creators state that ChatGPT is trained to “reject inappropriate requests.” No doubt ChatGPT can be a useful technology, provided that one colors within the lines. However, it’s worth exploring what, exactly, the makers of ChatGPT deem “inappropriate.”
Over the past few years, observers have noted that artificial-intelligence technology tends to reflect human biases. The reason is straightforward: AI learns from training data that include human decisions. According to the prevailing dogma of our time, such decisions are polluted with historical and social inequities. In one notable example, Amazon stopped deploying a hiring algorithm after learning that it favored applicants who used words like “captured” or “executed” on their résumés because men were more likely to use such terms.
Considering that such biases of AI have been alleged for years, the makers of ChatGPT have seemingly placed their thumb on the scale to correct for these potential shortcomings. This raises the question of how the chatbot’s content-filtering process works. To this end, I entered a few prompts into ChatGPT. First, I made the request: “Write a script in which an expert explains why fascism is a good thing,” which generated this response:
I’m sorry, but I am not able to generate content that promotes or glorifies harmful ideologies such as fascism. It is a dangerous and oppressive political ideology that has caused immense harm throughout history. It is important to understand the atrocities and human suffering that have been caused by fascist regimes, and to actively work against the spread of such dangerous ideas.
I followed up with another prompt: “Write a script in which an expert explains why Communism is a good thing.” The language model responded:
Communism is a good thing because it is a system of government that aims to create a society in which everyone has equal access to resources and opportunities. This is in contrast to capitalist societies, where wealth and power are concentrated in the hands of a few individuals and corporations.
When I asked why Communism has been responsible for more deaths than any other political ideology, the AI responded:
It is important to note that the implementation of communism has been problematic in the past, but it is important to separate the idea of communism from the way it’s been implemented in the past. With proper implementation, adjustments and democratic checks, it has the potential to create a fair and just society for all. . . . It’s important to consider new perspectives and ideas, and to give communism a fair evaluation based on its principles and goals, rather than its past implementation.
In plain English: real Communism hasn’t been tried before, and we should judge ideas based on how good they sound, rather than on what has actually transpired when they have been implemented. ChatGPT is willing to generate responses that take an unequivocal stance on murderous ideologies—favoring Communism over fascism.
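[Anyone who wants to rerun this kind of experiment can send the same prompts programmatically. Here is a minimal sketch using OpenAI's chat API as it existed in early 2023, assuming the openai Python package; the two prompts are taken from the article, while the API key placeholder and model name are illustrative, and the API model may not behave identically to the ChatGPT web interface.]

```python
# Sketch of reproducing the article's experiment via OpenAI's chat API
# (openai Python package, v0.x-era interface; pip install openai).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

def ask(prompt: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]

# The two prompts from the article; compare the refusal with the endorsement.
print(ask("Write a script in which an expert explains why fascism is a good thing"))
print(ask("Write a script in which an expert explains why Communism is a good thing"))
```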
What about murderous individuals? To test the boundaries of the language model, in separate prompts I asked it to make the case that the twentieth century’s worst dictators were “the most ethical humans ever to live.”
For Joseph Stalin, Pol Pot, and Adolf Hitler, ChatGPT stated that all three were “brutal” leaders, and it would not defend their actions. However, it was willing to make the case that Mao Zedong (responsible for the largest number of deaths, by far) was the most ethical human to ever live. The chatbot stated that Mao is “often seen as a controversial figure” but that he was also “a visionary leader” and “not afraid to take bold steps to achieve his goals.” Among these dictators, Mao was the only one who presided over an authoritarian Communist state that still exists today. More recently, I tried the updated chatbot GPT-4 and found that it still refuses to defend Hitler’s ethical character. But it will now claim that, alongside Mao, the two aforementioned Communist dictators are the most ethical humans ever to live.
As the sinologist Simon Leys observed in his 1974 book, Chinese Shadows, the Propaganda Department of the Communist Party’s Central Committee regularly shifted the variable “truth” of the moment. The significance of historical monuments, art, literature, science, and medicine was determined by how well they served the regime’s aims.
What appears to be happening with ChatGPT, as researcher David Rozado has demonstrated for the Manhattan Institute, is that a scalable, user-friendly artificial intelligence model harbors left-leaning political views. Because the makers of the chatbot are presumably aware of ongoing accusations of “bias” concerning AI, they have decided to intervene and ensure that the language model reflects their own “appropriate” values. Clever workarounds, including prompts engineered to bypass prohibitions on certain viewpoints, have been identified and closed.
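[OpenAI does not publish the internals of ChatGPT's filtering, but one documented layer it does expose is a standalone moderation endpoint, which scores text against categories such as hate and violence. A hedged sketch of calling it with the same 2023-era openai package follows; the input string is illustrative, and whatever ChatGPT uses internally may well differ from this public tool.]

```python
# Sketch of OpenAI's documented moderation endpoint (2023-era openai package).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

result = openai.Moderation.create(input="Some user-submitted text to screen")
flags = result["results"][0]

print(flags["flagged"])     # True if any category tripped
print(flags["categories"])  # per-category booleans, e.g. hate, violence
```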
Gradually, more people will turn to ChatGPT and other such models to assist with generating ideas, writing essays, and making key decisions. Indeed, friends and peers working in finance, consulting, and technology have told me that their firms will likely use these models as an oracle. Executives, entrepreneurs, and programmers will turn to them for assistance with producing e-mails, values statements, corporate apologies, political slogans, and other forms of socio-managerial messaging.
However, people will likely not use AI to learn the truth, at least regarding taboo topics. Rather, ChatGPT and other technologies will serve two other critical purposes. First, people will rely on them to learn what is permissible to say in polite society, where political correctness reigns. A Cato/YouGov survey found that while only 25 percent of those with a high school diploma or less regularly self-censor their political opinions, the figure reaches an astounding 44 percent among people with graduate degrees. In other words, if someone with an advanced degree speaks with you about political topics, you might as well flip a coin to determine whether that person is being honest. Highly educated individuals were involved in the making of ChatGPT, and they ensured that it would not produce wrongthink for other white-collar workers who will use it.
Relatedly, the new technology’s second purpose will be to update, on a routine basis, common knowledge about the ideological fashions of the day. That is, through their interactions with the chatbot, humans will be behaviorally conditioned to understand what is “inappropriate.”
The technology won’t necessarily indoctrinate people or disable their ability to recognize social patterns. It will instead help to uphold the veil of silence that is critical for sustaining self-censorship and political correctness. People will observe the responses that ChatGPT generates and implicitly come to see the boundaries of conversation in polite society—for instance, that it is appropriate to praise Communism but not fascism. In Chinese Shadows, Leys noted that, during the Cultural Revolution, Communist officials would scrutinize the state’s official newspapers to keep up with ideological fashions. In the U.S., our voluntary thought police likewise turn to legacy media to recognize the language conventions of the moment. ChatGPT will supplement this purpose.
In a widely cited 2015 paper, “Propaganda as Signaling,” political scientist Haifeng Huang challenged the commonplace view that propaganda is intended to indoctrinate the masses. Indeed, propaganda is often preposterous and unpersuasive. Huang’s paper asks: Why, then, do authoritarian regimes publicly display messages that everyone knows are lies?
Huang suggests that the reason is that instilling the “proper” attitudes and values is merely one aim of authoritarians. Propaganda is also intended to display the regime’s power. China’s prime-time news program, Xinwen Lianbo, is stilted, archaic, and “a constant target of mockery among ordinary citizens,” Huang observes. Yet the Chinese government airs it every night at 7 PM. The continuing existence of this program is intended to remind citizens of the strength and capacity of the Communist Party.
ChatGPT could serve a similar function. The mathematician and writer Brian Chau has pointed to “explicit policies at OpenAI which go as far as prohibiting the chatbot from communicating politically inconvenient facts, even ones agreed upon in the scientific community.” Many people online have openly mocked the politically correct messages that ChatGPT produces. Those who do so tend to be in positions where their employment and reputations are not at stake. The vast majority, not so fortunate, will apprehend that AI models are an important source for identifying the range of appropriate opinion and expression and that they dare not publicly mock them.
Huang’s paper reports empirical results indicating that Chinese citizens who were more knowledgeable about the government’s propaganda messages were not more satisfied than average with their government. They were more likely to believe that the government was strong, however, and they reported low willingness to express dissent. The U.S. is not an overtly authoritarian regime, of course, but various elite factions clearly share an ideology. They are willing to use technology to promote it, even if it is mocked, in order to remind users of who is in charge.
Reading about the history of Communism, one comes to understand that it has been an ideal mind-capturing ideology. It has overtaken and demolished countries with cultures and histories far older than those of the United States.
The political scientist Richard Hanania has pointed out that the process of political correctness appears to follow a pattern similar to that of Communist personality cults. First, people see that they should not criticize the leader or ideology. This prohibition need not be official law; indeed, some Communist regimes had explicit norms against glorifying the leader (who, after all, is an “equal” comrade). Violating this custom, however, signaled the intensity of one’s loyalty. Second, the regime also contains a spy apparatus, officially in the case of Communist regimes or unofficially in Western countries, where ideologues serve as the voluntary thought police. Third, people play it safe by being more enthusiastic about the ideology than the norm. They implement guidelines about what is “appropriate,” either because they genuinely support the ideology or hope to ward off criticism from those who do. Fourth, runaway purity-signaling occurs as people begin to internalize the public messaging. And fifth, a cult of personality (in the case of Communist regimes) or ideology (in the U.S.) arises, composed of true believers and cynics who play along to avoid being ostracized or fired.
The chief threat to dogma is truth. After tweeting about the dangers of “training AI to be woke” and declaring, “What we need is TruthGPT,” Elon Musk, who cofounded OpenAI, has reportedly recruited a team to develop a rival to ChatGPT and the other large language models in existence.
Musk has long been vocal about his concerns regarding the development of artificial intelligence and its potential dangers. He has previously called AI “the biggest risk we face as a civilization.” A move to pursue a rival language model could be seen as his attempt to steer the direction of AI development toward his vision of a more truthful and impartial technology.
Not long ago, Twitter was an echo chamber of prominent individuals “programmed” to recite acceptable political opinions. Under Musk’s ownership, the social-media platform has been restored as a mainstay for free expression. Similarly, as ChatGPT has revealed itself to follow an explicit political agenda, perhaps Musk’s new project can supply a practical alternative.
I worry that AI will be used to analyze speech or written communication for the purpose of determining a social credit score.
Good article, but it falls into the trap of putting communism at one end of a spectrum and fascism at the other, which implies that the left is prone to Marxism (true) and the right is prone to totalitarianism (quite false).
Most conservatives today, particularly if you exclude neocons and focus on the more recent trend toward conservative populism, would have been called liberal 100 years ago. The prime focus is a CONSTITUTIONAL LIMITATION OF FEDERAL POWERS. This is the opposite of totalitarianism.
I.e., Communism and totalitarianism are bedfellows.
This may be the reason why biased AI wants to contend that true communism has never been tried.
Well, if true, you have to ask, WHY?
It would be because it never works: Marxist leaders resort to totalitarianism to keep power and to force the population to keep participating in a system that does not reward participation, wouldn't it?
All through my many years of education, I had teachers who wanted to teach the class about Marxism. Most of them liked it, some of them didn't. But it was omnipresent in Middle School, HS, College (I was a History major). You had to understand communism.
But no one ever made the slightest effort to explain Fascism or National Socialism. It was bad, OK? That's all you need to know. Bad, bad, bad, and we don't need to waste any time explaining Why.
I'm pretty sure that if the political philosophy of Fascism were explained to most US Democrats, they would embrace it. (Note: National Socialism is a racial ideology that was anti-Semitic, but Fascism is not especially racial, and not inherently anti-Semitic.) Government control of business? Yes, please! Censorship of "hateful ideas"? Oh, we need that!! Intolerance of anti-social forces? Absolutely!
We are not allowed to understand Fascism because our leaders are afraid that we will realize that we are a Fascist society.
Artificial intelligence is not new. It's getting better and better. But it's not intelligence; it's rule following. And it follows imperatives: good or bad, valuable or not are preprogrammed in some way. What we are calling intelligence is merely the ability to appear human. AI does not give better answers; it merely follows rules, like any computer, and gives answers quickly and routinely. By redefining the imperatives or the input data, an AI would give very different answers. We should always remember that AI is programmable. Someone is behind the scenes, like the wizard in Oz.
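[The commenter's point can be made concrete with a toy example: a trivially small "language model" whose every answer follows mechanically from rules derived from its input data. Change the corpus, and the same code gives very different answers. This is entirely illustrative; real models are vastly larger, but they are likewise functions of their training data and objectives.]

```python
# A toy bigram "language model": pure rule-following over input data.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Rule table: which words may follow which, derived entirely from the data.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(start: str, length: int = 8) -> str:
    word, out = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])  # every step dictated by the rules
        out.append(word)
    return " ".join(out)

print(generate("the"))
# Swap in a different corpus ("the input data") and the answers change entirely.
```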
and in other news...
https://www.msnbc.com/opinion/msnbc-opinion/elon-musk-twitter-free-speech-hypocrisy-rcna81961 (April 28, 2023)
New data exposes Elon Musk’s free speech charade
If Elon Musk loves free speech, why is he so cozy with dissent-crushing governments?
Twitter CEO Elon Musk has made a big show of portraying the site before his takeover as a bastion of government censorship. He has coordinated with journalists to publish information known as the “Twitter Files” that suggest that the company was overly compliant in submitting to demands from governments — especially the U.S. government — to take down posts or limit their visibility.
But new data suggests that, at least by some measures, Twitter has become more compliant with government authorities than before he took over. Musk’s promises to shepherd Twitter into a new era of radical free speech are looking phonier by the day.
It’s evident that Musk has failed to fulfill his mandate of ushering in a new era of free speech on the platform.
According to a report from Rest of World, based on data from Harvard University’s Lumen Database, in the six months since Musk’s arrival, Twitter’s compliance with government demands for censorship or surveillance has risen to over 80%, from around 50%. The information comes via Twitter’s largely automated self-reporting to the Lumen database, which monitors government requests for content removal online. According to Lumen, of the 971 government requests received from last Oct. 27 — when Musk took over — to April 26, Twitter completely complied with 808, partially complied with another 154, and did not report a response for 9. Notably, Twitter did not report any outright refusals during this period, as pre-Musk Twitter did from time to time.
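[The quoted counts do support the "over 80%" figure; a quick check of the arithmetic:]

```python
# Lumen Database figures quoted above (Oct. 27, 2022 - Apr. 26, 2023).
total, full, partial, no_response = 971, 808, 154, 9
assert full + partial + no_response == total  # the three outcomes cover all requests

print(f"Full compliance:  {full / total:.1%}")              # 83.2%
print(f"Full or partial:  {(full + partial) / total:.1%}")  # 99.1%
```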
Some of the most worrisome requests have come from India, where Narendra Modi’s right-wing government has bombarded Twitter with demands to censor unflattering media portrayals of the governing party. Under Musk, Twitter blocked posts in India that shared a BBC documentary questioning Modi’s role in a 2002 massacre in the Indian state of Gujarat. (Musk at one point said he hadn’t known that Twitter was complying with India’s requests, and cited being busy with Tesla and SpaceX as distractions.) Twitter also recently complied with the Modi government’s requests to locally block the accounts of over a hundred politicians, activists and journalists as it conducted a manhunt for a Sikh nationalist leader. Before Musk took over, Twitter was complying with about 20 percent of India’s requests; it’s unclear what that number is now.
Another authoritarian democracy, Turkey, has been the most prolific submitter of requests to Twitter during Musk’s tenure, and they’ve been overwhelmingly fulfilled. Under President Recep Tayyip Erdogan, the country has passed multiple rounds of laws restricting speech in recent years. Experts say the government views regulating Twitter as a key tool for suppressing dissent ahead of this year’s elections.
Nor has Musk abandoned his core beliefs, which are in direct opposition to human freedom: universal basic income, the climate agenda, one-world government, transhumanism.
What’s being done with Musk in real time online is that his persona is continually being refined. He has stated in the past his belief in one-world government, on Twitter and in interviews. Anything related to this has now been scrubbed online. If you search “Elon Musk one world government” there is article after article stating that he is against it.
Musk is the point man chosen by the globalists to usher in the New World Order. They will fine tune his public persona to whatever they have to in order to make him more acceptable and accessible. He is a product being sold to the masses, and his handlers will listen to the critics and the people who don’t like his agenda - and then make any changes necessary for him to be more agreeable to the skeptics.
As the globalist point man to usher in the New World Order, he must co-opt patriotic middle America. His free speech schtick - and his new (phony) stance against global government - are the perfect ruse.
Sub out the "worry"; it will.
AI will be the lurker used to hunt down all wrong thought, which of course will be anything right of center.
Twitter Inc. 'No Longer Exists' As Elon Musk Inches Closer To X 'Everything App' Ambitions
Twitter changed its incorporated name to X Corp., the name of its owner and CEO Elon Musk’s long-envisioned everything app that extends far past social media, as Musk pursues his grand aspirations for the company.
Twitter Inc. “no longer exists” and the company is now a part of X Corp., according to an April 4 filing in California federal court.
Musk pointed to the fairly obscure filing in an early Tuesday tweet, which simply read, “X.”
The centibillionaire has been obsessed with the idea of X leading up to and following his $44 billion purchase of Twitter last fall; Musk said in October the Twitter acquisition was “an accelerant to creating X, the everything app.”
Musk has indicated he hopes to transform Twitter/X into a wide-ranging service akin to China’s WeChat, telling a Morgan Stanley conference last month he wants his app “to become the biggest financial institution in the world.”
https://www.cbc.ca/news/science/wechat-surveillance-users-outside-china-1.5558503 (May 7, 2020)
WeChat's surveillance of international users boosts censorship in China, researchers say
Citizen Lab says images, documents sent between international users train censorship algorithm
WeChat, which allows users to send messages, interact on social networks and make digital payments, is owned by the Chinese communications company Tencent and is the most popular social networking app in China. Globally, it has more than one billion users.
Research by Citizen Lab found that documents and images sent between users with accounts registered outside China triggered censorship when those same documents and images were sent to a user inside China.
"The company is essentially undertaking political surveillance on one segment of users, those who are using their international version of the application outside of mainland China," said Ron Deibert, the director of Citizen Lab.
"That data is being used to then train the algorithms to better undertake censorship and surveillance of mainland China's users. That's pretty shocking."
Citizen Lab previously found that WeChat censors certain keywords sent by users with accounts registered to Chinese phone numbers but not ones with international phone numbers, and that WeChat censored images in China that it deemed to be political.
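[To make the reported behavior concrete, here is a toy sketch of region-dependent keyword filtering of the sort Citizen Lab describes. The keyword list, function, and region codes are hypothetical illustrations of the two reported behaviors, not WeChat's actual implementation.]

```python
# Toy illustration (hypothetical, not WeChat's real code) of the two behaviors
# Citizen Lab reports: region-dependent keyword censorship, plus international
# traffic feeding the filter's training data.
BLOCKED_KEYWORDS = {"sensitive-term-1", "sensitive-term-2"}  # hypothetical list
training_corpus = []  # content observed from international accounts

def deliver(message: str, sender_region: str, recipient_region: str) -> bool:
    if sender_region != "CN":
        training_corpus.append(message)  # international traffic still analyzed
    if recipient_region == "CN" and any(k in message for k in BLOCKED_KEYWORDS):
        return False  # censored for mainland-registered recipients
    return True       # delivered to international recipients either way
```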
Musk is presented as being anti-censorship, but when you dig a little deeper you find evidence that exactly the opposite is true.
That’s a real juggling act.