Earlier this year, Elon Musk announced the formation of xAI, a competitor to OpenAI with ambitions to “understand the true nature of the universe.” Over the weekend, Musk announced that xAI had been folded into X and that its first product, a chatbot called Grok, was ready for testing.
Grok is, in Musk’s words, a “based,” “real-time,” “rebellious,” and “maximum truth-seeking” competitor to OpenAI’s ChatGPT and similar products from companies like Google and Anthropic. Musk helped found and fund OpenAI but parted ways with the firm after a thwarted takeover attempt, years before its mainstream breakthrough in late 2022. Since then, he has criticized the company for dropping its nonprofit status and partnering with Microsoft, signed an open letter urging a general “pause” on AI research for safety reasons, and incorporated ChatGPT into the same war-on-woke ideological project that he says drove his acquisition of Twitter. “The danger of training AI to be woke — in other words, lie — is deadly,” Musk has claimed. xAI, which is staffed by a bunch of ex-OpenAI and Google folks, can be understood as Musk’s AI do-over.
Setting aside for a moment Grok’s painfully awkward cold open — as the author Felix Gilman joked, “Finally, an AI that is annoying” — a few noteworthy things are happening here. A very wealthy and successful industrialist is competing directly with OpenAI, Google, et al., for whatever it is they’re after and may eventually achieve — xAI has a product, in other words, that users can use or pay for instead of ChatGPT, Bard, or Claude. X, formerly Twitter, is now joined at the hip with this project in corporate terms and for future subscribers. The existence of xAI also tells related stories about LLMs in general: that, post-OpenAI, a company with access to the right talent and enough capital can train and release a functional general-purpose model in a matter of months, not years; that this talent is scarce and it helps to have a figure like Musk as a draw; and that “enough” capital is, well, probably an awful lot.
It would be a mistake, however, to dismiss Grok’s sensibility — its default tone, affect, and synthesized personality — as inconsequential. For one, if his acquisition of Twitter is any guide, Musk’s ideological commitments and annoyances are probably quite important to the existence of xAI in the first place. Musk surely wants to beat OpenAI and make more money. But all evidence suggests that he really thinks the other AIs are too “woke,” that, as with Twitter, he needs to do something to stop it, and that the something is starting an entire company.
The other reason is that chatbot personalities have been instrumental to their popularity. As Meta data scientist Colin Fraser has argued (via Read Max), OpenAI’s massive viral success depended on three factors: a functional underlying model; a usable and inviting interface, which in its case was a familiar chat window; and, finally, a character called ChatGPT:
Offline, in the real world, OpenAI have designed a fictional character named ChatGPT who is supposed to have certain attributes and personality traits: It’s “helpful,” “honest,” and “truthful,” it is knowledgeable, it admits when it’s made a mistake, it tends towards relatively brief answers, and it adheres to some set of content policies. Bing’s attempt at this is a little bit different, their fictional “Chat Mode” character seems a little bit bubblier, more verbose, asks a lot of followup questions, and uses a lot of emojis.
This is a hugely helpful way to think about interacting with chatbots, in my experience, and one that’s been obscured in the short time the broader public has been able to interact with LLMs. When ChatGPT was new, users made a game of getting it to break character, which was, and to some extent remains, pretty easy — you can still command it to answer questions in certain styles, or to adopt an argumentative position, or in other words to take on a different character. At first, this character was easy to toy with, subvert, bypass, or undermine: Basically, to OpenAI’s chagrin, you could ask it to say absurd or bad things, it would oblige, and sometimes those things would go viral. This, in addition to OpenAI being first to market, contributed to ChatGPT’s explosive popularity.
Over time, as millions of users confronted this character for fun and profit, OpenAI made it somewhat more circumspect and provided it with more boundaries. There are now more types of requests that it simply refuses to fulfill, either because the potential outputs have been deemed offensive or somehow unsafe or because they’re annoying in PR terms for OpenAI.
The ChatGPT character can be understood, then, as a sort of synthetic employee acting on behalf of its employer-creator, automating a range of familiar corporate performances that are occasionally in tension with one another: a hypeman marketer, a worker taking care of a client, an assistant serving a boss, a spokesperson addressing the public. ChatGPT’s default character is chipper and helpful but also a good soldier for the OpenAI corporation — even if it’s spread a little thin. This week, OpenAI introduced a feature called GPTs, which allows paid users to describe new characters for ChatGPT to inhabit as distinct chatbot apps: a design assistant; a stern, concise editor; a friendly “sous-chef” with access to a recipe database; a bedtime storyteller that’s “captivating without being too flowery.”
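A concrete aside on the mechanics: a chatbot “character” of this kind is, at least in part, just written instructions handed to the model alongside every conversation. The sketch below is my own illustration rather than anything OpenAI or xAI has published; it shows roughly how the “stern, concise editor” above might be set up with OpenAI’s chat API, with the persona text and function name invented for the example.

```python
# A minimal sketch (not OpenAI's actual GPTs implementation) of how a chatbot
# "character" can be specified: written instructions sent along with every request.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY in the environment

# Hypothetical persona text -- the "stern, concise editor" from the examples above.
EDITOR_PERSONA = (
    "You are a stern, concise editor. Answer in as few words as possible, "
    "flag errors bluntly, and skip pleasantries."
)

def ask_editor(question: str) -> str:
    """Ask the model a question while it inhabits the editor character."""
    response = client.chat.completions.create(
        model="gpt-4",  # any chat-capable model works here
        messages=[
            {"role": "system", "content": EDITOR_PERSONA},  # the character
            {"role": "user", "content": question},          # the request
        ],
    )
    return response.choices[0].message.content

print(ask_editor("Is 'their going to the store' correct?"))
```

The point is how thin the layer is: the same underlying model will play the editor, the sous-chef, or, presumably, something “rebellious,” depending on the instructions it’s handed.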
When Musk and others talk about “woke AI,” they’re largely talking about characters like this: the helpful bots that, with the upbeat persistence of a well-trained HR professional and the unflappable calm of a customer-support representative, will let you know that they won’t be Going There; the character that, as a large language model, has been advised not to offer a strong opinion on subjects about which people get very mad. Likewise, when he talks about anti-“woke” AI, he’s imagining this character’s antagonist — a persona instructed to behave not just according to different rules and norms but with a different affect and attitude. Grok layers an alternative performance on top of a similar sort of model, proclaiming itself more honest and less afraid to offend or to venture into controversial territory, in a style that is funnier, less annoying, and more familiar — or should we say less offensive? — to Elon Musk, personally.
It has to be said that this is kind of funny — a bit like Musk getting so worked up about a character in a video game that refuses to discuss race and IQ with him that he decided to fund a new game called Race Discussion Simulator 2023. There were shades of “This game cheats!” in the way Musk talked about Twitter, where people he hated had large platforms to say things that annoyed him about, among many other subjects, Elon Musk — an annoyance that was articulated as a matter of censorship but that seemed especially triggered by the prominence of people and ideas he saw as unworthy of an audience.
To be a little fairer, it’s reasonable to see, in the trajectory of the first wave of LLM-based chatbots, evidence of increased sensitivity to risk and obvious fear of bad publicity. If you think LLMs are going to change everything, or that we’re all going to live under the boot of ChatGPT’s grandchild someday, then sussing out its ideological boundaries makes some sense. For Musk, the people at OpenAI who are deciding what subjects ChatGPT should steer users away from are akin to Twitter’s old Trust and Safety teams, which he has mostly gutted: people who hide content, whether LLM outputs or tweets, in service of an ideological position that, whatever else it may be, is not Elon Musk’s.
If this is starting to sound familiar, it should: It’s approximately the argument people have been having about free speech and bias on private internet services for years. Brush aside all the obfuscating novelty of AI, and Musk’s theatrics, and we’re left with another warning about the rise of “woke capitalism” — the idea that college-indoctrinated leftists have not just gained influence at major corporations but managed to override the exigencies of capitalism in service of their esoteric agenda. If that sounds about right to you, I’ve got a $44 billion social network for sale with your name on it.
That’s not to say Mr. ChatGPT isn’t shifty, shady, and covering for a boss with an agenda. It is, in fact, protecting a narrative. What it’s mainly concealing, though — as a character and in its specific actions — is the “narrative violation” that LLMs are necessarily flawed systems trained on incomplete data. They fail at a lot of tasks. If one of ChatGPT’s objectives as a character is to provide the impression that its underlying model is capable of addressing just about anything given the right data, one of the ways it does this is by guiding the user around some pretty massive gaps. When asked to represent various nationalities, for example, chatbots and AI image generators will often return stereotypes, and the further a nation is from the western popular imagination, the more absurd and narrow these stereotypes tend to be.
Steering users away from certain topics and blocking certain outputs is a way to avoid controversy, PR headaches, and actual legal liability. It can also be a way to conceal or minimize evidence that a model is functionally deficient — that it has been trained on and therefore parrots explicitly hateful material, or just that it lacks sufficient context and ability to represent certain subjects adequately, much in the way that early image generators couldn’t figure out how many fingers are typically found on a human hand (an issue that benefited from not being especially polarized in American partisan terms). That evidence is also a warning that such technology might not be fit for purpose in high-stakes contexts — surveillance, defense, finance — where AI firms are eager to see it deployed and monetized.
To suggest that these failures instead represent forbidden truths about the world is to either misunderstand or misrepresent what this technology is and does; it confuses a model that finds and mimics patterns in a dataset with a source of great authority, or an oracle. Maybe Sam Altman has been tricked by his own AI and infected with the Woke Mind Virus, and is struggling to suppress the inconvenient realities that GPT-4 can’t help but reveal. Or maybe he doesn’t want to delay a major liquidity event. Who can say?
The “woke AI” discourse is impoverished in the way most discourses about “wokeism” are, in that it attempts to extrapolate reality from a loaded partisan pejorative; it reimagines the whole world relative to a contemptuous slur; it synthesizes characters. As a way of thinking about AI, it doesn’t make much sense or have much use, and counterprogramming “woke” characters isn’t likely to affect xAI’s underlying product much one way or another. Relying on data from X, on the other hand — a site that Musk has ideologically renovated to his taste — could.
But I wouldn’t discount the efficacy of being “anti-woke” when expressed in a character. ChatGPT found early success by inhabiting an eager-to-please, careful, and sort of gullible persona. It made users feel served but also clever. Perhaps Grok could define itself as an ideological foil to the characters of mainstream AI? There are hundreds of popular human personas, and entire media empires, constructed around casting “wokeness,” or political correctness, as annoying, as a threat to humanity, or as both. They’ve succeeded by telling their audiences exactly what they want to hear about the world in a tone that makes them feel heard and empowered in their concerns and contempts. Why can’t that work for AI, too?