Students or Data Mines? Education Trains AI by Exploit
All around the nation, unions, campuses, and governing bodies are debating how to handle generative AI in education. Fears over AI were heightened by the popularity of ChatGPT, the chatbot OpenAI released in late 2022. Industry insiders were amazed by the technology, and Microsoft quickly moved to integrate OpenAI features into its products. Among other functions, ChatGPT can write well-formulated essays on a wide range of topics. Upon its release and near-instant popularity, most K-12 schools and higher education institutions banned any use of ChatGPT and equivalent technologies. Educators have confirmed the existence of cheating rings in which students use ChatGPT. The ubiquity and effectiveness of ChatGPT have “alarmed” universities and led many professors to alter their syllabi and pedagogical approaches. Conversely, at the start of this school year, many teachers and schools are championing AI in education, using it to generate outlines, bibliographies, and tutoring concepts.
Until now, most of our knowledge of AI has come from post-apocalyptic pop culture narratives in which programs become sentient and overtake humanity and free will. In reality, AI is far from how it is presented in such dystopian movies. In “Artificial Unintelligence,” computer scientist Meredith Broussard reminds us that the autonomous AI popularized by films was abandoned by serious researchers decades ago. Gary Smith calls the public’s continued faith in this cinematic version of AI “The AI Delusion.” It behooves us to remember that the machine learning possible today is dictated by human-created algorithms. It is humans, not autonomous machines, who set the parameters for what AI can and cannot do. The focus on AI sentience thus serves as an effective smokescreen for what is really happening.
Missing from the pop culture discourses and from policy is any consideration of how AI is trained by exploiting faculty and students. AI is lucrative, but the students and faculty whose data trains AI are not compensated. The policies emerging from the numerous think tanks, task forces, and educational meetings must address the economic exploitation that the use of AI in schools creates.
AI is fed by large sets of data, but gaining access to massive amounts of data, and determining which data are best for training, can be difficult. For example, in the early twenty-first century, some early large language models were trained on the Enron email corpus, a trove of messages made public during the federal investigation of Enron, simply because it offered an impressive amount of written text. Similarly, Google and Meta use Gmail and Instagram accounts, respectively, to train their AI.
The ed-tech industry comprises companies armed with a complex set of surveillance tools, and it seeks to integrate AI into the classroom. These tools, found in software such as Turnitin, ClassDojo, Illuminate Education, and G Suite for Education, along with hardware such as Chromebooks and Apple tablets, enable companies, law enforcement, government officials, schools, and others to track faculty and students and to collect their data (often without their knowledge).
Schools have largely responded to the din of concern over AI by focusing on the threat it poses to academic integrity. In so doing, they pay little attention to how ed-tech transforms schools, and specifically students, into lucrative data mines that train AI. For example, any work that students submit digitally, and any digital notetaking they may employ, is ultimately a route to AI training. The digital note-taking support system Glean records class lectures and can transform them into digital flashcards, notes, and study guides, all the while using those captured lectures and discussions to build AI. Students and faculty unknowingly train AI, yet they receive no proactive information about their work on behalf of AI and no remuneration for their labor. Furthermore, AI is trained on the backs of the most vulnerable students; presenting technology as a tool of equity masks the reality that the most data is gathered from those who need the greatest assistance.
Students, teachers, and administrators deserve substantive conversations about mitigating surveillance in schools because of the many threats it poses. Part of that conversation needs to center on the resulting economic exploitation of faculty and students. Attention to students’ use of AI to cheat is important, but when cheating is educators’ primary concern, the ways in which AI exploits them and their students go unnoticed. Policymakers must recognize that corporate AI management is a mechanism that transforms the classroom into a space of exploitation and faculty and students into dehumanized data mines. Without a closer, critical examination of AI in education, students and faculty become complicit in their own exploitation.
Project Censored National Judge Nolan Higdon, EdD, and Allison Butler, PhD, are two of the co-authors of The Media and Me: A Guide to Critical Media Literacy for Young People (The Censored Press and Seven Stories Press, 2022) and of the forthcoming Surveillance Education: Navigating the Conspicuous Absence of Privacy in Schools (Routledge).