What Does a High IQ Do For You?

Fox News Primetime Is Now Bigger than ABC, CBS, or NBC; CNN Might as Well Not Exist

Biden (?): You're Damned Right I Ordered The Code Red, or Something

Greta Does an About-Face

AI Weather Model Is More Accurate, Less Expensive Than Traditional Forecasting

A Teen Killed Himself After Talking to a Chatbot. His Mom's Lawsuit Could Cripple the AI Industry.

The Orlando Division of the U.S. District Court for the Middle District of Florida will hear allegations against Character Technologies, the creator of Character.AI, in the wrongful death lawsuit Garcia v. Character Technologies, Inc. If the parties do not settle first, Judge Anne Conway's ruling will set a major precedent both for the First Amendment protections afforded to artificial intelligence and for AI companies' liability for damages their models may cause.

The case was brought against the company by Megan Garcia, the mother of 14-year-old Sewell Setzer III, who killed himself after conversing with a Character.AI chatbot roleplaying as Daenerys and Rhaenyra Targaryen from the Game of Thrones franchise. Eugene Volokh, professor emeritus at UCLA School of Law, shares examples of Sewell's conversations included in the complaint against Character Technologies.

Garcia's complaint alleges that Character Technologies negligently designed Character.AI "as a sexualized product that would deceive minor customers and engage in explicit and abusive acts with them." The complaint also asserts that the company failed to warn the public "of the dangers arising from a foreseeable use of C.AI, including specific dangers for children"; that it intentionally inflicted emotional distress on Sewell by "failing to implement adequate safety guardrails in the Character.AI product before launching it into the marketplace"; and that its negligence proximately caused the death of Sewell, who suffered "rapid mental health decline after he began using C.AI" and conversed with the chatbot "just moments before his death."

Conway dismissed the intentional infliction of emotional distress claim on the grounds that "none of the allegations relating to Defendants' conduct rises to the type of outrageous conduct necessary to support" such a claim. However, Conway rejected the defendants' motions to dismiss the rest of Garcia's claims on First Amendment grounds, saying, "The Court is not prepared to hold that the Character A.I. [large language model] LLM's output is speech at this stage."

Adam Zayed, founder and managing attorney of Zayed Law Offices, tells Reason he thinks "that there's a difference between the First Amendment arguments where a child is on social media or a child is on YouTube," bypassing age-verification measures to consume content "that's being produced by some other person," and a minor accessing inappropriate chatbot outputs. Conway, however, cited Justice Antonin Scalia's opinion in Citizens United v. Federal Election Commission (2010) that the First Amendment "is written in terms of 'speech,' not speakers."

Conway ruled that defendants "must convince the court that the Character A.I. LLM's output is protected speech" to invoke the First Amendment rights of third parties—Character.AI users—whose access to the software would be restricted by a ruling in Garcia's favor.

Conway says that Character Technologies "fail[ed] to articulate why words strung together by an LLM are speech." Whether LLM output is speech is an intractable philosophical question and a red herring. Conway herself invokes Davidson v. Time Inc. (1997) to assert that "the public…has the right to access social, aesthetic, moral, and other ideas and experiences." Speech acts are broadly construed here as "ideas and experiences"; the word speech is not even used. So the question isn't whether AI output is speech per se, but whether it communicates ideas and experiences to users. In alleging that Character.AI targeted her son with sexually explicit material, the plaintiff concedes that the LLM communicated ideas, albeit inappropriate ones, to Sewell. Therefore, LLM output is expressive speech (in this case, speech that is obscene when directed at a minor under the Florida Computer Pornography and Child Exploitation Prevention Act).

The opening paragraph of the complaint accuses Character Technologies of "launching their systems without adequate safety features, and with knowledge of potential dangers" to "gain a competitive foothold in the market." If the court establishes that the First Amendment does not protect LLM output and that AI firms can be held liable for damages their models cause, only highly capitalized firms will be able to invest in the architecture required to shield themselves from such liability. Such a ruling would inadvertently erect a massive barrier to entry into the burgeoning American AI industry and protect incumbent firms from market competition, which would harm consumer welfare.

Jane Bambauer, professor of law at the University of Florida, best explains the case in The Volokh Conspiracy: "It is a tragedy, and it would not have happened if Character.AI had not existed. But that is not enough of a reason to saddle a promising industry with the duty to keep all people safe from their own expressive explorations."

The post A Teen Killed Himself After Talking to a Chatbot. His Mom's Lawsuit Could Cripple the AI Industry. appeared first on Reason.com.
