Posted on 04/10/2021 10:20:29 AM PDT by EBH
In collaboration with Deloitte
The COVID-19 crisis has exposed an enduring silent epidemic and greatly accelerated the need for a properly funded and regulated global mental health ecosystem. Disruptive technologies – such as artificial intelligence and machine learning (AI/ML), digital reality (DR), blockchain and the cloud – offer a tremendous opportunity to improve global mental health systems, making them more affordable and more easily scaled, particularly in countries where people lack adequate access to such services. Yet as well as offering breakthrough solutions, these technologies also have the potential for misuse and mistreatment.
The primary purpose of this toolkit is to provide governments, regulators and independent assurance bodies with the tools to develop, adopt and engage standards and policies that address major ethical concerns relating to the use of disruptive technology in mental health. It seeks to improve the accessibility, quality and safety of services that support all members of society to meet their desired emotional, social and psychological potential.
Oh my God, as if it cannot get any worse. Global ...mental health toolkit? I get more and more freaked out every day. So essentially an AI is going to be telling you right from wrong, normal from a mental health issue, or playing God. If one doesn't think like the preprogrammed AI you will be...??
Accessing mental health services through an electronic platform where you are directed to an Artificial Intelligence.
In determining how to create ethical AI in mental health, an ethical framework can be applied in order to interpret presented ethical dilemmas; involve stakeholders, consumers and community members to encourage debate and reach acceptable solutions; adapt that framework to AI/ML and disruptive technology; and operate ethically and sustainably in the long term. The possible detrimental effects of AI go beyond algorithmic bias and extend into the complicated relationships that AI systems have within human communities and the propagation of error through multiple layers in computer decision networks. When faced with an ethical dilemma, not making a decision is effectively still a decision to do nothing, and this can be especially detrimental when the associated outcomes could be harmful to individuals, communities and businesses, and have legal or reputational consequences. It is also important to note that seeing an ethical dilemma as a barrier may mean missing out on novel opportunities and breakthroughs that could improve our mental health and well-being.
In frameworks for making ethical decisions – such as those from Brown University[54] or the Markkula Center for Applied Ethics[55] – we find a methodology for solving ethical dilemmas but no consideration of the unique challenges presented by data and AI, or of how to involve the consumer, carer and clinical community in the process to develop greater social capital and a social licence to operate, a fundamental part of human-centred design for mental health. Many papers on the principles for ethical AI, such as those recommended in the World Economic Forum's Empowering 8 Billion Minds report[56] or the OECD's Principles on AI,[57] propose a principles approach, such as: AI should benefit people and the planet; AI should be designed with respect to the rule of law, human rights, democratic values and diversity; AI should have transparency, explainability, responsible disclosure, robustness, safety, contestability and accountability. These principles leave the difficult practical implementation up to the developer. In the Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device,[58] the US FDA proposes a risk-management, quality systems and good machine learning practices approach to ensure ongoing safety, which is largely covered by the requirements set out above. In the paper A Moral Licence for AI, Deloitte, CSIRO and Data61 argue that "fair" or "ethical" algorithms or development methodologies are not necessarily achievable in increasingly complicated systems, and that the right solution is an agreement to operate with a community through transparency and shared decisions on the best course of action, or a "social licence to operate".[59] Below we provide a practical approach for an organization to apply ethics to disruptive technology and AI in mental health, providing transparency, fairness, social capital and a way forward in an open universe of ethical situations.
You're crazy if you go to a Psychiatrist.
My Hal 9000 asked me this morning “What’s wrong, Dave?”
In today’s environment being depressed from a miscarriage is mental illness but chopping off one’s private parts is healthy.
Especially if the Woke, Virtuous, Wise and Mature individual is about eight years old.
If you don't have the "correct" views, you're mentally ill......
Very correct actually.
But even worse is that it will be an Artificial Intelligence that makes the determination, not even a discussion with a real person or doctor.
And it will all be done from your computer or device! Wrong answers and they come take you away. Almost seems like one needs to worship the AI to avoid going to the mental health gulag.
Freaked out yet?
Disclaimer: Opinions posted on Free Republic are those of the individual posters and do not necessarily represent the opinion of Free Republic or its management. All materials posted herein are protected by copyright law and the exemption for fair use of copyrighted works.