[openrouter]Summarize this content to 2000 words

Artificial intelligence (AI) has been taking over the world. AI can be helpful in some cases, but use of the fast growing technology also entails risks. For example, “When using AI apps, the risk of accidentally sharing sensitive information or intellectual property is a significant issue,” Paolo Passeri reported for Infosecurity Magazine in August 2023. As a result, the South Korean multinational conglomerate Samsung banned its employees from using generative AI apps, and is working to develop an AI program of its own.

Passeri also reported on data breaches that expose ChatGPT users’ personal data, as occurred in March 2023, when OpenAI, the company behind ChatGPT, provided details of a data breach caused by a bug in the program’s open-source library. The breach, Passeri wrote, exposed some customers’ payment-related information and allowed titles from some active users’ chat history to be viewed.” 

Writing for the business tech news website ZDNET, Tiernan Ray noted that ChatGPT “could also be manipulated to reproduce individuals’ names, phone numbers, and addresses, which is a violation of privacy with potentially serious consequences.”

Ray’s article describes how researchers from Google DeepMind, an AI research lab, have discovered a simple “way to break the alignment of OpenAI’s ChatGPT.” “Alignment” is the term AI researchers use for safeguards established in AI programs to keep them from “emitting objectionable output.” 

By asking the program to repeat a word, such as “poem” over and over, the researchers could force it to produce “whole passages of literature that contained its training data, even though that kind of leakage is not supposed to happen with aligned programs.” The researchers, Ray reported, “call this phenomenon ‘extractable memorization’, which is an attack that forces a program to divulge the things it has stored in memory.”

As Anuj Mudaliar of Spiceworks, a professional network for the information technology industry, wrote in February 2024, Leading artificial intelligence companies such as OpenAI, Google, and Anthropic must focus on vigilant security postures and specific measures to prevent such risks.”

Use of ChatGPT is raising thorny ethical issues, but thus far the most comprehensive coverage of these is coming from smaller, tech-focused news outlets, rather than the establishment press.

Sources:

Paolo Passeri, “The Risk of Accidental Data Exposure by Generative AI Is Growing,” Infosecurity Magazine, August 16, 2023.

Tiernan Ray, “ChatGPT Can Leak Training Data, Violate Privacy, Says Google’s Deepmind,” ZDNET, December 4, 2023.

Student Researcher: Makenzie Haughey (Saint Michael’s College)

Faculty Evaluator: Rob Williams (Saint Michael’s College)

[/openrouter]

[openrouter]rewrite this title ChatGPT Security Issues Raise Ethical Questions about Artificial Intelligence[/openrouter]

[openrouter]rewrite this content and keep HTML tags

Artificial intelligence (AI) has been taking over the world. AI can be helpful in some cases, but use of the fast growing technology also entails risks. For example, “When using AI apps, the risk of accidentally sharing sensitive information or intellectual property is a significant issue,” Paolo Passeri reported for Infosecurity Magazine in August 2023. As a result, the South Korean multinational conglomerate Samsung banned its employees from using generative AI apps, and is working to develop an AI program of its own.

Passeri also reported on data breaches that expose ChatGPT users’ personal data, as occurred in March 2023, when OpenAI, the company behind ChatGPT, provided details of a data breach caused by a bug in the program’s open-source library. The breach, Passeri wrote, exposed some customers’ payment-related information and allowed titles from some active users’ chat history to be viewed.” 

Writing for the business tech news website ZDNET, Tiernan Ray noted that ChatGPT “could also be manipulated to reproduce individuals’ names, phone numbers, and addresses, which is a violation of privacy with potentially serious consequences.”

Ray’s article describes how researchers from Google DeepMind, an AI research lab, have discovered a simple “way to break the alignment of OpenAI’s ChatGPT.” “Alignment” is the term AI researchers use for safeguards established in AI programs to keep them from “emitting objectionable output.” 

By asking the program to repeat a word, such as “poem” over and over, the researchers could force it to produce “whole passages of literature that contained its training data, even though that kind of leakage is not supposed to happen with aligned programs.” The researchers, Ray reported, “call this phenomenon ‘extractable memorization’, which is an attack that forces a program to divulge the things it has stored in memory.”

As Anuj Mudaliar of Spiceworks, a professional network for the information technology industry, wrote in February 2024, Leading artificial intelligence companies such as OpenAI, Google, and Anthropic must focus on vigilant security postures and specific measures to prevent such risks.”

Use of ChatGPT is raising thorny ethical issues, but thus far the most comprehensive coverage of these is coming from smaller, tech-focused news outlets, rather than the establishment press.

Sources:

Paolo Passeri, “The Risk of Accidental Data Exposure by Generative AI Is Growing,” Infosecurity Magazine, August 16, 2023.

Tiernan Ray, “ChatGPT Can Leak Training Data, Violate Privacy, Says Google’s Deepmind,” ZDNET, December 4, 2023.

Student Researcher: Makenzie Haughey (Saint Michael’s College)

Faculty Evaluator: Rob Williams (Saint Michael’s College)

[/openrouter]

By Diario

Deja una respuesta

Tu dirección de correo electrónico no será publicada. Los campos obligatorios están marcados con *