Posted on 03/03/2022 12:38:48 PM PST by Red Badger
Dartmouth researchers have built an artificial intelligence model for detecting mental disorders using conversations on Reddit, part of an emerging wave of screening tools that use computers to analyze social media posts and gain an insight into people's mental states.
What sets the new model apart is a focus on the emotions rather than the specific content of the social media texts being analyzed. In a paper presented at the 20th International Conference on Web Intelligence and Intelligent Agent Technology, the researchers show that this approach performs better over time, irrespective of the topics discussed in the posts.
There are many reasons why people don't seek help for mental health disorders—stigma, high costs, and lack of access to services are some common barriers. There is also a tendency to minimize signs of mental disorders or conflate them with stress, says Xiaobo Guo, Guarini '24, a co-author of the paper. It's possible that they will seek help with some prompting, he says, and that's where digital screening tools can make a difference.
"Social media offers an easy way to tap into people's behaviors," says Guo. The data is voluntary and public, published for others to read, he says.
Reddit, which offers a massive network of user forums, was their platform of choice because it has nearly half a billion active users who discuss a wide range of topics. The posts and comments are publicly available, and the researchers could collect data dating back to 2011.
In their study, the researchers focused on what they call emotional disorders—major depressive, anxiety, and bipolar disorders—which are characterized by distinct emotional patterns. They looked at data from users who had self-reported as having one of these disorders and from users without any known mental disorders.
They trained their model to label the emotions expressed in users' posts and map the emotional transitions between different posts, so a post could be labeled "joy," "anger," "sadness," "fear," "no emotion," or a combination of these. The map is a matrix that would show how likely it was that a user went from any one state to another, such as from anger to a neutral state of no emotion.
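As a rough illustration (not the authors' code), a transition matrix of the kind described above can be estimated by counting how often one emotion label follows another in a user's chronologically ordered posts. The emotion categories below come from the article; the function and the example history are assumptions for the sketch.

from collections import defaultdict

EMOTIONS = ["joy", "anger", "sadness", "fear", "no emotion"]

def transition_matrix(labels):
    # Estimate T[a][b]: the probability that a post labeled with emotion a
    # is followed by a post labeled with emotion b.
    counts = {a: defaultdict(float) for a in EMOTIONS}
    for prev, curr in zip(labels, labels[1:]):
        counts[prev][curr] += 1.0
    matrix = {}
    for a in EMOTIONS:
        total = sum(counts[a].values())
        matrix[a] = {b: (counts[a][b] / total if total else 0.0) for b in EMOTIONS}
    return matrix

# Hypothetical user history, ordered oldest to newest.
posts = ["anger", "no emotion", "anger", "sadness", "no emotion"]
print(transition_matrix(posts)["anger"]["no emotion"])  # 0.5: half of the "anger" posts were followed by a neutral post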
Different emotional disorders have their own signature patterns of emotional transitions. By creating an emotional "fingerprint" for a user and comparing it to established signatures of emotional disorders, the model can detect them. To validate their results, the researchers tested the model on posts that were not used during training and showed that it accurately predicts which users may or may not have one of these disorders.
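A loose sketch of how such a comparison might work, assuming the detection step resembles nearest-prototype matching between transition matrices; the reference "signatures," the distance measure, and the toy numbers are illustrative, not taken from the paper.

def matrix_distance(m1, m2):
    # Sum of absolute differences over all transition probabilities.
    return sum(abs(m1[a][b] - m2[a][b]) for a in m1 for b in m1[a])

def classify(user_matrix, signatures):
    # signatures maps a group label (e.g. "control", "depression") to a
    # reference transition matrix; return the label of the closest one.
    return min(signatures, key=lambda label: matrix_distance(user_matrix, signatures[label]))

# Toy two-state fingerprints with made-up probabilities.
control = {"sadness": {"sadness": 0.2, "joy": 0.8}, "joy": {"sadness": 0.2, "joy": 0.8}}
depression = {"sadness": {"sadness": 0.7, "joy": 0.3}, "joy": {"sadness": 0.6, "joy": 0.4}}
user = {"sadness": {"sadness": 0.65, "joy": 0.35}, "joy": {"sadness": 0.5, "joy": 0.5}}
print(classify(user, {"control": control, "depression": depression}))  # "depression"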
This approach sidesteps an important problem called "information leakage" that typical screening tools run into, says Soroush Vosoughi, assistant professor of computer science and another co-author. Other models are built around scrutinizing and relying on the content of the text, he says, and while the models show high performance, they can also be misleading.
For instance, if a model learns to correlate "COVID" with "sadness" or "anxiety," Vosoughi explains, it will naturally assume that a scientist studying and posting (quite dispassionately) about COVID-19 is suffering from depression or anxiety. On the other hand, the new model only zeroes in on the emotion and learns nothing about the particular topic or event described in the posts.
While the researchers don't look at intervention strategies, they hope this work can point the way to prevention. In their paper, they make a strong case for more thoughtful scrutiny of models based on social media data. "It's very important to have models that perform well," says Vosoughi, "but also really understand their working, biases, and limitations."
More information:
Xiaobo Guo, Yaojia Sun, and Soroush Vosoughi, "Emotion-based Modeling of Mental Disorders on Social Media," arXiv:2201.09451v1 [cs.SI], arxiv.org/pdf/2201.09451.pdf
-Jpay
Send it over to DU....the AI will have a breakdown and need therapy....
Consciousness. Conscience is knowing the difference between right and wrong.
“It is not the software doing the evaluating, it is the bias of the code writer.”
That’s not how self-learning AI networks work. What you are claiming pertains to classic deterministic code.
Huh... I would have expected the AI servers to explode the second they accessed Reddit.
START TYPING IN ALL CAPS AND USE A LOT OF !!!! IN YOUR POSTS!!!
THEY WON’T BOTHER WITH YOU, THEN!!! :)
There is no such thing as Artificial Intelligence. It is entirely theoretical at this point. What we are told is Artificial Intelligence is actually a bunch of nested algorithms and relational databases. There may be someday, but we are a long, long way off from there.
Political abuse of psychiatry in the Soviet Union
There was systematic political abuse of psychiatry in the Soviet Union,[1] based on the interpretation of political opposition or dissent as a psychiatric problem.[2] It was called “psychopathological mechanisms” of dissent.[3]
During the leadership of General Secretary Leonid Brezhnev, psychiatry was used to disable and remove from society political opponents (“dissidents”) who openly expressed beliefs that contradicted the official dogma.[4][5] The term “philosophical intoxication”, for instance, was widely applied to the mental disorders diagnosed when people disagreed with the country's Communist leaders and, by referring to the writings of the Founding Fathers of Marxism–Leninism—Karl Marx, Friedrich Engels, and Vladimir Lenin—made them the target of criticism.[6]
Article 58-10 of the Stalin-era Criminal Code, “Anti-Soviet agitation”, was to a considerable degree preserved in the new 1958 RSFSR Criminal Code as Article 70 “Anti-Soviet agitation and propaganda”. In 1967, a weaker law, Article 190-1 “Dissemination of fabrications known to be false, which defame the Soviet political and social system”, was added to the RSFSR Criminal Code. These laws were frequently applied in conjunction with the system of diagnosis for mental illness, developed by academician Andrei Snezhnevsky. Together, they established a framework within which non-standard beliefs could easily be defined as a criminal offence and the basis, subsequently, for a psychiatric diagnosis.[7]
Source: https://en.m.wikipedia.org/wiki/Political_abuse_of_psychiatry_in_the_Soviet_Union
America is sliding into the pit of totalitarianism, and a lot of idiots on this forum want to go fight an overseas war against Russia, a nuclear power.
I'm starting to wonder if this isn't just part of the overall depopulation plan, because radiation works as well as the clot-shots to kill billions.
Instead of them going after real threats, such as tens of millions of illegals from God knows where pouring in, they once again are targeting legitimate citizens.
I’m rubbing my knuckles and sucking my teeth while posting.
Just thought AI should know.
They will have to give up their computers and laptops, not just to avoid a diagnosis but because of Google's new toy Soli, which uses radar electromagnetic waves to pick up exact movements and gestures. To make life simpler, they claim. It will know you're coming and fire up all your notifications and pages before you even sit in your chair.
Let’s save them the trouble. Just ask “Are you or have you ever been a FReeper or ever visited freerepublic.com?” and you’ll find all of us mental defectives in jig time.
Glop - goople - nuclear - candy - horse - milk - bars
I am NOT crazy!
So all of you SHUT UP!
From what little I’ve been on Reddit, I don’t see how you could train an AI from there. Everyone on there seems to have one or more mental disorders. Where would be the “normal” for the training set?
:)
Is this what a singularity is?
AI programmed by who detects mental illness on exactly what type of posts? Posts that don’t support or are critical of our would be socialist masters?
AI cannot possibly understand context, so don't believe it can detect mental illness. It has simply been programmed to look for phrases conservatives use.
We’re all in DEEP trouble!🥸
“Who tells AI what comments are mental?”
Lol... Someone who has no clue... :)