Free Republic
Browse · Search
General/Chat
Topics · Post Article

To: Sobieski at Kahlenberg Mtn.

Novo Nordisk to Discontinue Levemir:

What to Do If Your Insulin Is Being Discontinued

https://www.verywellhealth.com/levemir-discontinued-how-to-switch-insulin-8557836

Excerpt:

Novo Nordisk is discontinuing one of its discounted insulin brands, Levemir, because of manufacturing issues, decreasing patient coverage, and the availability of alternative treatments.

Levemir is a long-acting insulin that’s used to control high blood sugar levels in adults and children with type 1 and type 2 diabetes.

There are alternative insulins that could replace Levemir; however, endocrinologists and doctors advise patients to check in with their healthcare provider before making a switch.
**********

The article doesn’t go into detail, but Novo Nordisk is the maker of Ozempic, and one of the reasons they are discontinuing Levemir is Ozempic. They are betting on FDA approval for its use in obesity and a bigger market (in more ways than one).


4,591 posted on 02/18/2025 8:48:40 PM PST by Sobieski at Kahlenberg Mtn. (All along the watchtower fortune favors the bold.)


To: Sobieski at Kahlenberg Mtn.

AI making up cases can get lawyers fired, scandalized law firm warns

https://arstechnica.com/tech-policy/2025/02/ai-making-up-cases-can-get-lawyers-fired-scandalized-law-firm-warns/

Excerpt:

Morgan & Morgan—which bills itself as “America’s largest injury law firm” that fights “for the people”—learned the hard way this month that even one lawyer blindly citing AI-hallucinated case law can risk sullying the reputation of an entire nationwide firm.

In a letter shared in a court filing, Morgan & Morgan’s chief transformation officer, Yath Ithayakumar, warned the firm’s more than 1,000 attorneys that citing fake AI-generated cases in court filings could be cause for disciplinary action, including “termination.”

“This is a serious issue,” Ithayakumar wrote. “The integrity of your legal work and reputation depend on it.”

Morgan & Morgan’s AI troubles were sparked in a lawsuit claiming that Walmart was involved in designing a supposedly defective hoverboard toy that allegedly caused a family’s house fire. Despite being an experienced litigator, Rudwin Ayala, the firm’s lead attorney on the case, cited eight cases in a court filing that Walmart’s lawyers could not find anywhere except on ChatGPT.

These “cited cases seemingly do not exist anywhere other than in the world of Artificial Intelligence,” Walmart’s lawyers said, urging the court to consider sanctions.

So far, the court has not ruled on possible sanctions. But Ayala was immediately dropped from the case and was replaced by his direct supervisor, T. Michael Morgan, Esq. Expressing “great embarrassment” over Ayala’s fake citations that wasted the court’s time, Morgan struck a deal with Walmart’s attorneys to pay all fees and expenses associated with replying to the errant court filing, which Morgan told the court should serve as a “cautionary tale” for both his firm and “all firms.”

Reuters found that lawyers improperly citing AI-hallucinated cases have scrambled litigation in at least seven cases in the past two years. Some lawyers have been sanctioned, including an early case last June fining lawyers $5,000 for citing chatbot “gibberish” in filings. And in at least one case in Texas, Reuters reported, a lawyer was fined $2,000 and required to attend a course on responsible use of generative AI in legal applications. But in another high-profile incident, Michael Cohen, Donald Trump’s former lawyer, avoided sanctions after Cohen accidentally gave his own attorney three fake case citations to help his defense in his criminal tax and campaign finance litigation.

In a court filing, Morgan explained that Ayala was solely responsible for the AI citations in the Walmart case. No one else involved “had any knowledge or even notice” that the errant court filing “contained any AI-generated content, let alone hallucinated content,” Morgan said, insisting that had he known, he would have required Ayala to independently verify all citations.

... Andrew Perlman, dean of Suffolk University’s law school, advocates for responsible AI use in court and told Reuters that lawyers citing ChatGPT or other AI tools without verifying outputs is “incompetence, just pure and simple.”


5,037 posted on 02/20/2025 8:42:49 PM PST by Sobieski at Kahlenberg Mtn. (All along the watchtower fortune favors the bold.)



FreeRepublic, LLC, PO BOX 9771, FRESNO, CA 93794
FreeRepublic.com is powered by software copyright 2000-2008 John Robinson