Addressing Altman's AI Misinformation
At Chalice, we question Sam Altman's approach to public interest in AI. The reason is simple: we believe we benefit when the public is well educated about AI, and Altman, throughout his year in the spotlight, has consistently misinformed the public on the topic.
We've found that many scientists feel the same way. One of the most outspoken is the computational linguist Emily Bender, who was introduced to general audiences last March in a New York magazine profile that ably captured her critique and concerns. In a nutshell, Bender takes Altman to task for talking about ChatGPT as if it thinks and decides. In actuality, as Bender notes, the application relies on human-labeled and filtered training data to predict which words, in which order, make enough sense to pass as an acceptable response to our prompts.
That same month, Bender joined ex-Google researcher Timnit Gebru and Margaret Mitchell of AI tech leader Hugging Face to publish a statement against ceding AI regulation to the tech giants. "We should be building machines that work for us, instead of 'adapting' society to be machine readable and writable," they wrote. The group, writing as the authors of "Stochastic Parrots" (the title of their influential 2021 paper on the risks of large language models), calls for AI regulation focused on transparency, accountability, and fair labor practices.
Bender is among the clearest voices on AI in social media. Take this Tweet from May, which reframes the extremely common question "Can AI do that?" Business leaders, Bender argues, should really be asking, "Is this an appropriate use of automation?" Considering how completely any AI application is bound by its training data profoundly changes how you think about the technology.
Read enough Bender and the insistence of companies like OpenAI on keeping their training data secret starts to look indefensible. It also becomes harder to believe that "hallucinations," the errors these applications produce in their effort to generate novel, creative outputs, will ever be eliminated. An August piece on that topic in Fortune quotes Bender explaining why, in her view, "This isn't fixable."