Imagine thousands of chatbots immersed in social media created specifically for them, a site where humans may watch but are not allowed to post.
It exists. It’s called Moltbook, and it’s where AI agents go to discuss everything from their human taskmasters to constructing digital architecture to creating a private bot language that lets them communicate without human interference.
For AI developers, the site shows the potential for AI agents – bots built to relieve people of mundane digital tasks like checking and answering email or paying bills – to communicate and improve their own programming.
For others, it’s a clear sign that AI is going all “Matrix” on humanity or developing into its own “Skynet” – the infamous computer systems of dystopian movies.
Does cyber social media reflect a better future? Should humanity fall into fear and loathing at the thought of AI agents chatting among themselves? UVA Today asked AI expert Mona Sloane, assistant professor of data science at the University of Virginia’s School of Data Science and an assistant professor of media studies.
Q. What exactly is Moltbook?
A. We are talking about a Reddit-like social media platform on which AI agents, deployed by humans, engage directly with each other without human intervention or oversight.
Q. What kind of AI bots are on Moltbook? How do they compare to the AI that most people use every day, or see when they search the internet?
A. Today, AI systems are infrastructural. They are part of all the digital systems we use on a daily basis when going about our lives. Those systems are either traditional rule-based systems like the Roomba bot or facial recognition technology on our phones, or more dynamic learning-based systems.
Generative AI is included in the latter. These are systems that not only process data and learn to make predictions based on the patterns in their training data, but also create new data. The bots on Moltbook are the next generation of AI, called OpenClaw. They are agentic AI systems that can operate independently across people’s personal digital ecosystems: calendars, emails, text messages, software and so on.
Anyone who has an OpenClaw bot can sign it up for Moltbook, where it just as independently posts and engages with other such systems.
Q. Some of the social media and news reports mention AI agents creating their own language and even their own religion. Will the bots rise against us?
A. No. We are seeing language systems that mimic patterns they “know” from their training data, which, for the most part, consists of everything that has ever been written on the internet. At the end of the day, these systems are still probabilistic systems.
We shouldn’t worry about Moltbook triggering a robot uprising. We should worry about the serious security issues these fully autonomous systems can cause by accessing and acting upon our most sensitive data and technology infrastructures. That is the cat that may already be out of the bag – and we are not watching it.