
Does Anyone Need an AI Social Network?

Photo-Illustration: Intelligencer; Screenshot: SocialAI

Here’s the pitch: A social network where every other user is AI, you get to be the “main character,” and you have “infinite followers.” You post, and a bunch of bots, powered by generative AI, respond. You can choose what sort of followers you want — supporters, fans, trolls, skeptics, “curious cats” — and, if you’re interested in what they post, continue the conversation.

SocialAI isn’t a joke or an artistic critique of the AI era. It is, according to its founder and sole employee, Michael Sayman, a young entrepreneur who has worked for Facebook, Google, Roblox, and Twitter, “the culmination of everything I’ve been thinking about, obsessing over, and dreaming of for years,” made possible now that “tech has finally caught up to my vision.”

It’s not too hard to imagine what sorts of responses might be generated by the announcement of a self-generating social network, but here are a few posted by actual people:

• “Consider therapy.”
• “This is maybe the most embarrassing thing I’ve ever seen?”
• “Dystopian and anti-human.”
• “This genuinely makes me sad.”

Early reviewers weren’t especially impressed. The bots’ “responses lacked nutrients or human messiness,” wrote Lauren Goode at Wired, who understandably had a hard time “placing value or meaning” on the AI-generated responses.

Sayman may not have intended SocialAI as a work of barbed tech criticism, but it works as one. Nominally human social networks are already filled with bots and people who act like bots; just beneath the surface of their feeds, automated systems determine what users see, and human content is increasingly shaped with those AI recommendations in mind. How different is an app that takes the liberty of just going ahead and filling the algorithmic void? Isn’t this where we’re headed anyway?

If SocialAI’s inadvertent critiques don’t really bite, however, it’s because the app is too boring: If this is where Instagram and TikTok are trending, everyone will quit before they get there. Humans on social media might be systematically dehumanized, their interactions mediated and sanitized by systems designed to manipulate them into meaningless engagement, but sharing a feed with them is still better, or at least more stimulating, than what happens when an AI tries to reconstitute social-media content from statistical residue:

Photo: Screenshot, SocialAI

The app’s founder has taken early feedback in stride and gently suggested that most critics are missing the point. “The core premise of SocialAI to me is that there’s tons of use cases that a broadcast model of LLM interaction has to offer that a chat interface simply cannot,” he wrote after the app’s release. “I strongly believe that SocialAI is the future interface model that many people around the world will use to interact with LLMs moving forward.”

It’s an interesting argument! Popular chatbots have mostly worked by simulating interactions with one person, producing exchanges that feel direct, intimate, or transactional; chatbots are typically designed to play one-on-one roles, from confidante to intern to, most commonly but most poorly defined, “assistant.” Lots of people are finding these simulations compelling or useful, so it’s plausible that a wider range of simulated social-ish interactions might work for some people, too.

As is, SocialAI doesn’t convincingly reproduce the feeling of having an audience or the utility of crowdsourcing advice, and its automated followers produce content that’s too boring to read for long, much less to muster the motivation to play along with. (Most damning, to me, is that it isn’t even fun to use when you’re deliberately trying to mess with it.) Its founder suggests improvements are to come, and the product, like lots of apps built on top of OpenAI’s models, and on generative AI in general, exists in a sort of contingent, speculative state: If the underlying models get better in just the right ways, then maybe the product makes more sense.

If you’re a forward-looking AI founder, in other words, this might all appear less like a joke and more like a design or engineering challenge, a matter of improving the illusion with better answers and a subtler user experience, or like the sort of thing that people just aren’t yet acclimated to — in either case, just a matter of time. And you might be right!

In the meantime, though, the app is most valuable as a slightly different and more specific critique: not of social media or of overoptimistic AI boosters, but of existing AI tools that have already gained acceptance and are in widespread use.

A social network filled with fake followers trained on real people is plainly absurd, and harvesting automatic responses from characters generated on the fly, with names like @IdeaGoddess and @TrollMaster3000, is borderline insulting. In the context of a feed, it’s impossible not to notice that you’re interacting with a bunch of generated personas intended to create the illusion of social interaction with different sorts of people, and the performances aren’t good enough to convince you to play along.

But sort of to Sayman’s point, the difference between an AI interface that’s a single chatbot and a feed interface that’s effectively just filled with lots of chatbots isn’t as big as it might initially feel: One is designed to simulate a single character (eager, positive, helpful assistant) in a narrow social context and does it well enough not to break the illusion; the other is designed to simulate lots of characters in a slightly different and wider simulated social context and can’t quite pull it off. One could explain this as a case of a chat interface simply being better. It’s worth considering, though, whether the worse interface — the one that fails because it calls more attention to how central characters, fantasy, and social performance are to AI — is also the slightly more honest one.

What’s the fundamental difference between a simulated conversation with one synthetic character and a simulated conversation with a thousand synthetic characters? In other words, if performing for a machine that performs back, engaging socially with an interface for software, and allowing your expectations to be set by carefully constructed fictional characters are what make SocialAI feel so obviously silly, perhaps a more interesting and worthwhile question is: Why doesn’t ChatGPT?

