I Am Begging AI Companies to Stop Naming Features After Human Processes

by Bella Baker
Anthropic just announced a new feature called “Dreaming” at the company’s developer conference in San Francisco. It’s part of Anthropic’s recently launched AI agent infrastructure designed to help users manage and deploy tools that automate software processes. This “dreaming” aspect sorts through the transcript of what an agent recently completed and attempts to glean insights to improve the agent’s performance.

Folks using AI agents often send them on multi-step journeys, like visiting a few websites or reading multiple files, to complete online tasks. This new “dreaming” feature allows agents to look for patterns in their activity log and improve their abilities based on those insights.

The feature’s name immediately calls to mind Philip K. Dick’s seminal sci-fi novel, Do Androids Dream of Electric Sheep?, which explores the qualities that truly separate humans from powerful machines. While our current generative AI tools come nowhere close to the machines in the book, I’m ready to draw the line right here, right now: no more generative AI features with names that rip off human cognitive processes.

“Together, memory and dreaming form a robust memory system for self-improving agents,” reads Anthropic’s blog post about the launch of this research preview for developers. “Memory lets each agent capture what it learns as it works. Dreaming refines that memory between sessions, pulling shared learnings across agents and keeping it up-to-date.”

[Image courtesy of Claude]

Since the spark of the chatbot revolution in 2022, leaders at AI companies have gone full tilt into naming aspects of generative AI tools after what goes on in the human brain. OpenAI released its first “reasoning” model back in 2024, where the chatbot needed “thinking” time. The company described this release at the time as “a new series of AI models designed to spend more time thinking before they respond.” Numerous startups also refer to their chatbots as having “memories” about the user. Rather than the fast storage that’s typically called a computer’s “memory,” these are much more human-like nuggets of information: he lives in San Francisco, enjoys afternoon baseball games, and hates eating cantaloupe.

It’s a consistent marketing approach among AI leaders, who have continued to lean into branding that blurs the line between what humans do and what machines can do. Even the way these companies develop chatbots, like Claude, with distinct “personalities” can make users feel as if they’re talking with something that has the potential for a deep inner life, something that might have dreams even when my laptop is closed.

At Anthropic, this anthropomorphizing runs deeper than just marketing strategies. “We also discuss Claude in terms normally reserved for humans (e.g., ‘virtue,’ ‘wisdom’),” reads a portion of Anthropic’s constitution describing how it wants Claude to behave. “We do this because we expect Claude’s reasoning to draw on human concepts by default, given the role of human text in Claude’s training; and we think encouraging Claude to embrace certain human-like qualities may be actively desirable.” The company even employs a resident philosopher to try to make sense of the bot’s “values.”
