Thus far, LLM advances have allowed people to finish their homework and write emails, but things could be about to get a lot more interesting.
Researchers from Google and Stanford have created something akin to a mini Westworld: a sandbox populated with 25 AI agents that interact with one another. “In this work, we demonstrate generative agents by populating a sandbox environment, reminiscent of The Sims, with twenty-five agents. Users can observe and intervene as agents plan their days, share news, form relationships, and coordinate group activities,” says the paper, titled “Generative Agents: Interactive Simulacra of Human Behavior”.
And these agents end up doing things that aren’t too dissimilar to what humans do. “Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day,” the paper says.
These “agents” were created through an architecture that extended a Large Language Model to “store a complete record of the agent’s experiences using natural language, synthesize those memories over time into higher-level reflections, and retrieve them dynamically to plan behavior,” the paper adds.
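The memory-and-retrieval loop described above can be sketched in a few dozen lines. This is a simplified, hypothetical illustration, not the paper's implementation: the paper uses an LLM to rate each memory's importance and embedding similarity for relevance, whereas this sketch uses hand-assigned importance scores and keyword overlap, and weights recency, importance, and relevance equally.

```python
from dataclasses import dataclass, field
import time

@dataclass
class Memory:
    """One natural-language record in the agent's experience stream."""
    text: str
    importance: float          # 0..1, how significant the event is (LLM-rated in the paper)
    created: float = field(default_factory=time.time)
    last_access: float = field(default_factory=time.time)

class MemoryStream:
    def __init__(self, decay: float = 0.995):
        self.records: list[Memory] = []
        self.decay = decay     # exponential recency decay per second

    def add(self, text: str, importance: float) -> None:
        self.records.append(Memory(text, importance))

    def retrieve(self, query: str, k: int = 3) -> list[Memory]:
        """Score every memory by recency + importance + relevance, return the top k."""
        now = time.time()
        query_words = set(query.lower().split())

        def score(m: Memory) -> float:
            recency = self.decay ** (now - m.last_access)
            words = set(m.text.lower().split())
            relevance = len(query_words & words) / max(len(query_words), 1)
            return recency + m.importance + relevance

        top = sorted(self.records, key=score, reverse=True)[:k]
        for m in top:
            m.last_access = now  # accessing a memory refreshes its recency
        return top

stream = MemoryStream()
stream.add("John Lin ate breakfast with his family", 0.2)
stream.add("Isabella is planning a Valentine's Day party", 0.8)
print(stream.retrieve("Valentine's Day party", k=1)[0].text)
```

Retrieved memories would then be pasted into the LLM prompt when the agent decides its next action, which is what lets past experiences shape future behavior.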
“In an evaluation, these generative agents produce believable individual and emergent social behaviors: for example, starting with only a single user-specified notion that one agent wants to throw a Valentine’s Day party, the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time,” the paper continues.
To set all this up, the researchers first created a virtual world named Smallville. “The Smallville sandbox world, with areas labeled,” reads the caption of a diagram in the paper. “The root node describes the entire world, children describe areas (e.g., houses, cafe, stores), and leaf nodes describe objects (e.g., table, bookshelf). Agents remember a subgraph reflecting the parts of the world they have seen, in the state that they saw them,” it adds.
The researchers then created agents with detailed descriptions. This was the description for one such agent, John Lin:
John Lin is a pharmacy shopkeeper at the Willow Market and Pharmacy who loves to help people. He is always looking for ways to make the process of getting medication easier for his customers; John Lin is living with his wife, Mei Lin, who is a college professor, and son, Eddy Lin, who is a student studying music theory; John Lin loves his family very much; John Lin has known the old couple next-door, Sam Moore and Jennifer Moore, for a few years; John Lin thinks Sam Moore is a kind and nice man; John Lin knows his neighbor, Yuriko Yamamoto, well; John Lin knows of his neighbors, Tamara Taylor and Carmen Ortiz, but has not met them before; John Lin and Tom Moreno are colleagues at The Willows Market and Pharmacy; John Lin and Tom Moreno are friends and like to discuss local politics together; John Lin knows the Moreno family somewhat well — the husband Tom Moreno and the wife Jane Moreno.
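Note that the description is a single semicolon-delimited paragraph. One simple way to bootstrap an agent from it, sketched below as an assumption about the kind of preprocessing such a system would do, is to split on semicolons so each clause becomes one seed memory the agent can later retrieve.

```python
def seed_memories(description: str) -> list[str]:
    """Split a semicolon-delimited character sketch into individual memory records."""
    return [clause.strip() for clause in description.split(";") if clause.strip()]

john_seed = (
    "John Lin is a pharmacy shopkeeper at the Willow Market and Pharmacy "
    "who loves to help people; John Lin is living with his wife, Mei Lin, "
    "who is a college professor; John Lin loves his family very much"
)

for memory in seed_memories(john_seed):
    print(memory)
```

Each printed clause would then go into the agent's memory stream as its starting knowledge about itself and its neighbors.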
With the setting and characters defined, the researchers watched the agents begin interacting with one another. This was a sample conversation between John and his son Eddy:
John: Good morning Eddy. Did you sleep well?
Eddy: Good morning dad. Yeah, I slept great.
John: That’s good. What are you working on today?
Eddy: I’m working on a new music composition for my class. It’s due this week, so I’m trying to get it finished. But I’m having so much fun with it!
John: That sounds great!
Slowly, the agents started developing routines. John, for instance, woke up at 6 am every day, brushed his teeth, took a shower, and ate breakfast. He then spoke to his wife and son before heading out to work.
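Routines like John's emerge from planning: in the paper, agents draft a coarse day plan and recursively break it down into finer-grained actions. The sketch below illustrates only the shape of that decomposition; in the real system an LLM generates both the plan and its expansions, whereas here a fixed lookup table stands in for the model.

```python
# A coarse day plan: (start time, high-level action).
day_plan = [
    ("06:00", "wake up and complete the morning routine"),
    ("08:00", "talk with Mei and Eddy over breakfast"),
    ("09:00", "open the Willow Market and Pharmacy"),
]

def decompose(action: str) -> list[str]:
    """Expand a coarse action into finer steps (an LLM call in the real system)."""
    details = {
        "wake up and complete the morning routine":
            ["brush teeth", "take a shower", "eat breakfast"],
    }
    return details.get(action, [action])  # leave unexpanded actions as-is

for slot, action in day_plan:
    for step in decompose(action):
        print(f"{slot} {step}")
```

Because the plan is itself stored in memory, an agent can revise the rest of its day when something unexpected happens, such as receiving a party invitation.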
Most interestingly, the researchers were able to get the agents to organize a Valentine’s Day party from a single initialization of one of the agents.
“At the beginning of the simulation, one agent is initialized with an intent to organize a Valentine’s Day party. Despite many possible points of failure in the ensuing chain of events—agents might not act on that intent, might not remember to tell others, might not remember to show up—the Valentine’s Day party does in fact occur, with a number of agents gathering and interacting,” the researchers noted.
There was some odd behaviour too. “The college dorm has a bathroom that can only be occupied by one person despite its name, but some agents assumed that the bathroom is for more than one person because dorm bathrooms tend to support more than one person concurrently and choose to enter it when there is another person inside. Likewise, agents in Smallville may not realize that certain places are closed after certain hours and decide to still enter them. For instance, the stores in Smallville all close around 5 pm, but occasionally, a few agents enter the store after 5 pm, not understanding that the shop has already closed,” the researchers said.
Still, the paper presents a tantalizing new possibility: we can now create virtual worlds with agents that talk and behave like humans. The conversations are lifelike, and the agents have enough agency to meet, chat, and throw parties. Sure, there are gaps, but it isn’t hard to imagine a future where these agents lead lives that are indistinguishable from those of humans. And that leads to an even more tantalizing question — are we also agents living in someone else’s world?