
A.I. ‘Friend’ for Public School Students Falls Flat

An A.I. platform named Ed was supposed to be an “educational friend” to half a million students in Los Angeles public schools. In typed chats, Ed would direct students toward academic and mental health resources, tell parents whether their children had attended class that day, and provide their latest test scores. Ed would even be able to detect and respond to emotions such as hostility, happiness and sadness.

Alberto Carvalho, the district’s superintendent, spoke about Ed in bold terms. In an April speech promoting the software, he promised it would “democratize” and “transform education.” In response to skeptics of A.I., he asked, “Why not allow this edutainment approach to capture and captivate their attention, be the motivator?”

One seventh-grade girl who tested the chatbot — personified by a smiling, animated sun — reported, “I think Ed likes me,” Mr. Carvalho said.

Los Angeles agreed to pay a start-up company, AllHere, up to $6 million to develop Ed, a small part of the district’s $18 billion annual budget. But just two months after Mr. Carvalho’s April presentation at a glittery tech conference, AllHere’s founder and chief executive left her role, and the company furloughed most of its staff. AllHere posted on its website that the furloughs were because of “our current financial position.”

A.I. companies are heavily marketing themselves to schools, which spend tens of billions of dollars annually on technology. But AllHere’s sudden breakdown illustrates some of the risks of investing taxpayer dollars in artificial intelligence, a technology with enormous potential but little track record, especially when it comes to children. There are many complicated issues at play, including privacy of student data and the accuracy of any information offered via chatbots. And A.I. may also run counter to another growing interest for education leaders and parents — reducing children’s screen time.

Natalie Milman, a professor of educational technology at George Washington University, said she often advises schools to take a “wait and see” approach to purchasing new technology. While A.I. is worthy of use and testing, she said, she warned about schools “talking nebulously about this glorified tool. It has limitations, and we need to ensure we are being critical of what it can do, and its potential for harm and misinformation.”
