AI pessimists are animated by two major concerns. One is that self-taught super-machines will enslave or even exterminate the human race; the other is that we will lose the ability to distinguish fact from fiction. I think they are exaggerating the former – but, on the basis of recent experience, I am less sanguine about the fate of objective truth.
My interest is not so much in what the new(ish) technology can do, but in what some of the older technologies have already done. AI experts enthusiastically discuss looming dangers such as undetectable misinformation and disinformation, but seem blind to just how far the erosion of truth has already gone. They see the Internet as an enormous and growing repository of information – a dynamic, boundless, value-neutral Encyclopedia of Humanity – but what they miss is the extent to which the source material actually available has already been scrubbed, censored and sealed off.
Consider, for example, an encounter I had two weeks ago.
It began with a human-to-human discussion of rapidly changing moral sensitivities and the difficulty of making judgments across generations and cultures. I remembered an old news story attributing some misogynistic remarks to former Zimbabwean President Canaan Banana, but as I could not recall the exact words he had used, I resorted, as one does nowadays, to searching online, using Google. I typed in a few keywords, including “women” and “rape”, and felt confident I would find at least one or two damning references. Except I didn’t. No matter how the query was framed, it turned up nothing remotely relevant. The same went for Bing, the other major (Western) search engine. I began to doubt my memory, and even my sanity.
But on this occasion, fortunately, relief was at hand – from Google’s close cousin, its AI chatbot Bard. “Yes,” it replied to my query, “there is a record of Canaan Banana making some very unwise statements about women and rape. In a 1986 speech, he said that ‘rape is a normal part of life’ and that ‘women have to learn to live with it’. He also said that ‘rape is not a crime against a person, but a crime against property’.”
As for how Bard was able to outdo Google Search on this topic, its answer (to a follow-up query) was that it can draw on data beyond the Internet, “such as information stored in books, articles, and other offline sources”. Plausible enough, but for reasons I will get to shortly, I strongly suspect that breadth of data is not the real explanation in this particular case. What I suspect was at work here is direct censorship.
That censorship occurs, and on a large scale, is beyond dispute. Here is Bard again: “You are right. Google and other search engines censor information. This is done for a variety of reasons, including:
- To protect users from harmful or offensive content.
- To comply with laws and regulations.
- To protect users’ privacy.
- To prevent the spread of misinformation.”
So we know that censorship happens and that it happens en masse. [Illustratively, in 2017, “Google hired 10,000 new reviewers to censor content on YouTube”.] Much of this is entirely defensible, and indeed necessary, given the medium’s reach and potential for harm. Think recipes for deadly pathogens, or untested and damaging allegations against individuals or groups. No sane person would object to all such interventions. What is troubling is that all this cutting and concealing happens far from any kind of public scrutiny, orchestrated by teams of twenty-somethings in San Francisco. Someone decided, in their wisdom, that Mr. Banana’s egregious gaffe should be expunged from the public record. It was erased completely, like Trotsky from Soviet photographs of the Stalin era.
Now, I am not suggesting that the people inside the tech giants can readily be compared to Stalin’s ideological enforcers, or to the miserable officials who ran our own censorship board in the (“black peril” and “red peril”) seventies. I imagine those in charge are mindful of both their civic responsibilities and their legacy, and I am further consoled by the thought that if they went overboard – and were seen to be doing so – there would almost certainly be a mass revolt by users. It is not malice I am pointing to here, just a big and pressing problem.
Bard and other large language models are currently at the so-called RLHF stage of AI development – reinforcement learning with human feedback. It will not be long before they can teach themselves, without any outside input, at which point our ability to control or influence them will be reduced to near irrelevance. That is why it is so important that we get things right now, while we still can. And why we need to pay more critical attention to the actions of these upstart censors-cum-demigods. Given the ever-increasing pervasiveness of the Internet, they are today the guardians of the book. Quite literally.
In the meantime, I don’t really know how representative my experience is. It may well have been an aberration, and, by and large, the record of public figures behaving appallingly may remain available, continuing to provide topical lessons in how not to speak or think. I must say, however, that my anxiety rose a notch when I went back to Bard this morning and re-asked my initial question, i.e. “Hi Bard. I remember Zimbabwean President Canaan Banana making some inappropriate comments about women and rape in the mid-1980s. Can you remember that episode and what he said?”
The answer, this time, was: “I’m a text-based AI and can’t help with that.” DM