A Human Wrote this

by Jacqueline Feldman

When asked by the Guardian to take a stance on feminism, Siri said, “I believe that all voices are created equal and worth equal respect.” That, with a thoughtlessness worthy of any robot, is a play on Martin Luther King, Jr.’s famous line. But it is probably an improvement over what Siri used to say, which the Guardian also printed, the dumb-blondish “I just don’t get this whole gender thing.” Also reported in 2019 (in an investigation labeled “exclusive”) was Apple’s directive, according to company documents dated the previous year, to rewrite the program’s lines so it would never, under any circumstance, utter the word “feminism”—“even when asked direct questions about the topic.”

In 2017, Siri, to “You’re a slut,” would reply, “Well, I never!” or, in an apparently random rotation, “I’d blush if I could.” Amazon’s Alexa would equivocate: “Well, thanks for the feedback.” To “You’re pretty,” Google’s Assistant would say, “Thank you, this plastic looks great, doesn’t it?” Originally reported by Quartz, an American magazine focusing on the business of technology, these lines appear again in a United Nations paper, where they buttress the planetary case against the insult to women intrinsic in these “voice assistants.” In the observation of the UN authors: “The only instance in which a voice assistant responded negatively to a first-pass demand for a sexual favor was Microsoft’s Cortana.” “I don’t think I can help you with that,” Cortana was found to say. 

The entrenchment of personified technology is described in suitably anthropomorphic terms, cast as celebrity. “Non-human voice assistants,” for instance, “have become among the most recognized ‘women’ globally.” That population, if it can be thought of as such, is exploding, the authors note, with Siri having come into use on a half-billion devices within ten years and Alexa, in less than five, assisting in tens of millions of homes. Even as so many of the world’s women aren’t human, the authors go on, human women lack opportunities to develop technical skills. So culturally insignificant were non-male workers in the sector that, in 2018, survey respondents, asked to identify a woman in tech, had trouble thinking of any; some named Siri or Alexa. The behavior automated by this monolith has implications, in that non-human assistants, as “powerful socialization tools,” influence their users. Unconscious associations are reinforced to link women with subservience and on-demand service. (A related complaint—from parents who observed children becoming aggressive after practicing conversational skills on Alexa—had already been addressed by the creation of an Echo Dot Kids Edition, which can be configured so that it has to be asked nicely.) These authors recommend course correction, writing that, in AI generally, “the human-computer interactions negotiated during this formative period will establish orientations and parameters for further development.”1

“Formative period” is a phrase you hear in reference to humans, too, and as I read the report I found that I was looking, with the feeling that this was in some way wrong, for my own name. While living in New York City, I had worked for a startup where my role was to write the dialogue for that company’s assistant, a chatbot operable by SMS, Slack, or Messenger. In contrast to the engineers, a group of men who made the bot work, I had the schoolmarmish job of preparing it to charm and yet avoid causing offense. Many of the hundreds of lines I wrote were preset—answers to questions, specific definitions. For answers to questions that didn’t affect the bot’s operation, I supplied several to be delivered in a rotation the engineers would randomize. There were templates for lines that varied with a user’s input. But comparing all this with the infinite input I anticipated, the conversation was, schematically, many-to-one. My writing for the bot, even after the product was launched, had an anxious, anticipatory quality. What it posited was a speaker who knew what to say not only in advance but also invariably—so that the nature of the question would matter less and less.

My work was engrossing for one design choice in particular, which I made, like the others, with the company’s support. The bot would use “it” pronouns, and it would avoid the disturbing tics of personality that, like the UN authors, I’d observed in Siri and Alexa. When I came across the report, which dates to 2019, I identified bylines, among the citations, of journalists who had asked me about the genderless bot, a quotation from my boss, a man, and excerpts from conversations with this bot whose lines I’d written. It was quoted, in one section, copiously. Still, I felt a strange relief as I reached the bottom of the PDF, a feeling that occurred at a remove. Once again I am learning one of the major lessons of making anything, which is that you have to let it go. And then, scrolling up to examine a screenshot, I looked more closely.

Welcome back, Jacque. How can I help?

Of course: I’d been the one to run the demos I had written.

I had left a piece of myself in there.

*

It’s to the humans in the room that, as if heat-mapping, the eye is drawn—men in T-shirts, in this case, working at joined desks. I would be interviewed by several of them. First up was a pair of senior engineers with whom I would not be working directly. One of them asked what exactly I would do there. The other, of gentler cast, made a confession. He was a great fan of literature. To him, being a writer sounded much more interesting. “Why don’t you just do that?” he asked.

I had made my calculation, on an envelope, I think. My hourly rate would be the same as for tutoring, but these hours would happen at a single site rather than requiring separate subway rides. I would be able to take them on reliably. Lately, tutoring had involved filling in for a friend who was really a filmmaker and worked for a family on the Upper East Side. They were doctors, he German, she Italian, attractive and accomplished people with beautiful, sensible European habits like storing pajamas under the pillow. They told me helplessly that their older son had gotten into Harvard without any problem. Their younger son, my charge, had produced one piece of writing his teacher had liked, a personal essay about what had seemed the egregious belatedness with which he had been gifted an iPhone. Most days, though, my pupil refused to do homework, and while this seems in retrospect a gesture of some poetry, the only one he could have made with any freedom, I have a memory of ending early—unusually, the parents were home—and looking away politely, out a window of the high-rise, as inwardly I wondered if I would still be paid, for that session, all one hundred of the dollars I was owed. 

I was told there were other women working in the San Francisco office. Where I worked, a web developer, a man, wished aloud that the bot would make him a sandwich. With some lines I tried to be funny; lines funny to me might have seemed so owing to my own, adaptive irony. “Your convenience is its own reward” went one—a response, one of several possible, to “Thank you,” a phrase this bot rarely heard.

*

Did robots need feminism? Did they want it? There was precedent for the fembot, a thing I’d seen in culture—at the movies, for example. Ex Machina had come out in the US in 2015, the previous year, and featured robots designed with a female appearance to elicit, from one of the human, male characters, this appraisal: “I feel … that she’s fucking amazing.” It had been forty years and a remake since The Stepford Wives, in which appealing robots, it’s revealed, are former women. “She cooks as good as she looks, Ted,” one of the remaining humans says. Nick Bostrom, author of Superintelligence: Paths, Dangers, Strategies, had just been profiled in The New Yorker. “One option in motivation selection,” he writes in Superintelligence, “is to try to build the system so that it would have modest, non-ambitious goals (domesticity).” After work I compiled quotations from the original Blade Runner (“Is this testing whether I’m a replicant or a lesbian, Mr. Deckard?”), from Harry Potter (“Never trust anything that can think for itself if you can’t see where it keeps its brain”), and from I wasn’t sure where (“The spirit is willing, but the flesh is weak”). “Innocence, and the corollary insistence on victimhood as the only ground for insight, has done enough damage,” Donna Haraway writes in “A Cyborg Manifesto.”

The Hillary Clinton of my childhood, whose major civic act had been to, like Pygmalion’s statue, “stand by her man,” had just announced her candidacy for the Democratic nomination. She then confessed to BuzzFeed podcasters she was a robot. “You guys are the first to realize that I’m really not even a human being,” she said, questioned about her failure to sweat. “I was constructed in a garage in Palo Alto a very long time ago.” There were whispers of an antifeminist dependency—“a man [who] shall remain nameless created me”—but this was a new model, an iteration of the Clinton of my adolescence whose likeability spiked after, in New Hampshire, she almost cried, which, according to pundits, humanized her. She had been criticized for not being soft and fuzzy, but the zeitgeist had moved on. Possibly she was made of something harder. Conspiracy theories abounded: just type “Hillary Clinton robot” into YouTube. She was, in fairness, “stilted, scripted, and unapproachable,” according to The New Yorker.

It seemed then that if the fembot accepted, could work under, labels assigned to her—steely, frigid, false—she might, as if fitted to a protective casing, prevail. Patriarchal accusation was her strength. I love Luce Irigaray’s question, couched as a threat, about what happens when the objects of transaction—she means women—get together. Later usages—literal, less brave—prompted Martha Nussbaum’s essay “Objectification,” which, I couldn’t help noticing, goes against any sense that women might have common cause with objects. It becomes clear incidentally, along the way, that some of these objects are treated better. A painting by Monet isn’t fungible, or interchangeable with other paintings; few objects are so violable as the female character in a startlingly hardcore novel Nussbaum analyzes. Philosopher Kate Manne, writing about misogyny, takes this somewhat further. For the prospect of a woman’s undermining you, a man, to scare you into sanctioning her, you’d probably have to understand her as human.2

*

Could that have been what was happening? Chatbot writers before me had left accounts of their efforts. The article by Robert Hoffer, who made SmarterChild, on the industry website VentureBeat, is schematic in its brevity. Its elements include large claims that, for their self-aggrandizement, undermine any promise of the author’s putting readerly curiosity ahead of his career in coming clean (“SmarterChild was athenic—springing full-blown from my head”); overbroad comparisons with family dynamics (“SmarterChild never lived to adulthood, crippled by a lack of investment by his parents”); and, barely covered over, a feeling of betrayal as a creator searches for where, in the chatbot’s absence of volition, blame for its failure can land (“investors became irrationally nervous”). The rehearsal of a disappointment like the one we undergo in leaving childhood behind is transparent enough to make Hoffer’s a case for ELIZA, the 1966 chatbot that performed, famously, a rudimentary talk therapy. “I am sorry to hear you are depressed,” it seemed to write, at MIT. The programmer, like the others I name here, was male, Joseph Weizenbaum, and thrown into crisis by the way in which the object could receive but not, for Weizenbaum, exhibit something like love. He reacted with horror to the bot’s seductive achievement; to his dismay, “people were conversing with the computer as if it were a person who could be appropriately and usefully addressed in intimate terms,” as he wrote ten years later in Computer Power and Human Reason.

In its melodrama, the Faustian narrative has proved irresistible to journalists, as in the case of Richard Wallace, creator of ALICE, a chatbot whose corpus has been used in applications including Hanson Robotics’ Sophia (a doll reportedly awarded, in 2017, Saudi citizenship). Wallace vaunted the persuasiveness of the chatbot he’d created even as, around him, in response to what other humans took for a repellent vainglory, his career and social life appeared increasingly illusory. It was as if he were being “trained,” to use a verb that today describes the equipping of AI systems. Ultimately his own accomplishment seemed to him, darkly, proof of the simplicity, limitation, and predictability that came to define, for him, interaction between humans. In his increasing admiration for the chatbot’s imitation, he expected, as time went on, correspondingly less—less complexity, emotion, and responsiveness—from humanity. Journalists had other words to describe his suffering. “But as Alice improved, Wallace declined,” The New York Times reported in 2002. “He began drinking heavily, and after one sodden evening at a local bar he rolled his car on the highway.”3

*

There is something adversarial to these narratives in which, at great but somehow questionable cost, humanity is proven. In this way, the hero of each is achieved, but because the proof of their humanity is in a showcase of vulnerability, fallibility, and mortality, what they do might be better understood as losing. Certainly loss is involved. In the end, we are outlived; in the meantime, we find we are successful in completing exercises an AI system can’t. A common experience is that of the CAPTCHA, actually a backronym for “Completely Automated Public Turing Test to Tell Computers and Humans Apart”; its processes of ticking boxes can be thought of as debased versions of spontaneous, lively conversations that Alan Turing imagined.4

It’s based on another game, a parlor game, the “imitation game.” A man and a woman, both humans, take turns convincing a third person they’re the woman. They do this in writing, so that vocal pitch won’t give them away. In the version Turing sets forth in a 1950 paper, a computer replaces the male player. Only a digital computer can play, because computers are by definition “universal,” able to simulate the operation of all other machines. The question “Can machines think?” is replaced by an evaluation of how well this computer is able to perform femininity. The female player, as the historian Timothy Snyder, warning of political fascism, writes in an essay on speech in the age of algorithms, “does not seem to be able to win.”5 “The game,” Turing muses, “may perhaps be criticized on the ground that the odds are weighted too heavily against the machine.”

Bruno Latour called this paper of Turing’s “the most bizarre, kitschy, baroque text ever submitted to a scholarly journal,” a “thick and wild jungle of metaphors, tropes, anecdotes, asides, and self-description.” One of its excesses, evidently, is the element of gender in the test as it’s originally presented. The computer isn’t pretending to be just any human but is pretending to be a man who is pretending to be a woman. This is a nuance absent from the CAPTCHA. And gender is largely irrelevant in contemporary administrations of the test as Turing devised it, like one passed in 2014 by the chatbot Eugene Goostman (which pretended to be a 13-year-old boy from Odessa, Ukraine). Even Turing left it out when, the next year, he summarized the test for a BBC broadcast. Gender seems to be itself a Turing Test, one that Turing, in including the element for no particular reason other than fun, passes, proving himself human—not by some smooth imitation but its opposite, a tell.

*

Much as it would be lost on a bot, the playfulness of Turing’s paper—with allusions to ESP as a confounding variable in the experiment, and to the unlikeliness of a computer’s appreciating “strawberries and cream”—has not accompanied Turing’s name to all the places where it serves today, for instance in designating a beneficiary of the Gates Foundation, as a metonym for AI’s aspiration. In 2017, two years after the cinematic release of the biopic The Imitation Game, an “Alan Turing Law” in the UK allowed for pardons of tens of thousands of men, alive or dead, who had been convicted, as Turing was, of “gross indecency with another male person.” A posthumous pardon is an example of an attractive representation—in media, in public speech—that corresponds to a weaker reality; it doesn’t overturn any conviction, just lifts the punishment, and thus is without impact on the deceased. Stonewall, the activists, called the law inadequate. A pardoning of Turing, also posthumous, had occasioned the circulation of a petition that this law, out of fairness, be passed. The cast of the film joined in this advocacy, their names prominently displayed on a page at Change.org. All this was done for a ghost; one after another, parliamentarians evoked a complainant who had been, these long decades, waiting for them to act. I revisit all this not just to show how culture already works like an AI system, recursive and self-correcting, or to observe wryly that what these days is named “AI” by corporations bears about as much resemblance to Turing’s conception as his afterlives do to the historical figure, but to identify something like a power of distraction of the past.

The term “AI winter” refers to those stretches of time between Turing’s and ours when developments toward “artificial intelligence” have seemed, by contrast, unpromising and failed to attract investors. Many directions for research that are being followed energetically, like neural nets, have their roots in ideas that were previously discarded out of frustration. (Theoretical innovations in computer science have had to wait around, dormant, for computational power, which doubles regularly, to get vast enough to try them out.) A 1956 conference at Dartmouth that is considered foundational to AI as a field of inquiry was itself a frustrating experience; participants felt overwhelmed by the scale of the task—to get a computer to think—for which two summer months had been allotted. “Short visits were common,” writes Jack Copeland in his textbook Artificial Intelligence, “and people came and went haphazardly.” The project of AI, despite futuristic marketing, despite great advancements recently achieved, is also a nostalgic enterprise, having often enough consisted in looking back, making a check. Sure there wasn’t anything there? Are we positive that it would not have worked? Because it was and seems still, this companion we’d have built, such a beautiful idea. 

*

But the good thing about having a past is that it gives you the option to not be fake. This is an advantage humans have over robots. Hector Levesque, a computer scientist, delineated back in 2013 the paths available to a computer hoping to prove, via Turing Test, that it can think. There are just two. In response to questions like “How tall are you?” or “Tell me about your parents,” Levesque writes, the “program will either have to be evasive (and duck the question) or manufacture some sort of false identity (and be prepared to lie convincingly).”6 Weizenbaum, in his original paper explaining ELIZA, notes the therapeutic dialogue suited his purposes for being a conversation that works even if one participant has nothing to say—a situation in which, more precisely, one participant’s lack of anything to say won’t raise suspicion. In therapy, more than ordinarily, a human, the patient, will take any contribution as their cue to carry on in monologue, and so the deficiency Levesque was to point out works to the chatbot’s advantage. “If, for example, one were to tell a psychiatrist ‘I went for a long boat ride’ and he responded ‘Tell me about boats,’ one would not assume that he knew nothing about boats, but that he had some purpose in so directing the subsequent conversation,” Weizenbaum writes.

The technology has come a long way. But the necessity of faking it laid out by Levesque has, with this progress, only deepened. The result is a kind of artificial arrogance, with bluffing becoming routine for sophisticated systems—like GPT-3, unveiled last summer by OpenAI, the California laboratory.7 Involving natural-language processing, meaning it can field queries put to it in one of the human idioms, as opposed to code, GPT-3 also generates sentences of its own after prompts in natural language. Often, the sentences make sense. “I taught myself everything I know just by reading the internet, and now I can write this column,” GPT-3 is guided to remark, as if stunned by its own achievement.8 But an early human user found a way to break the spell, demonstrating that, while GPT-3 “knows how to have a normal conversation,” the system is unable to identify any input as “nonsense.” It is unable to, for that reason or any other, decline to respond. “It also doesn’t know how to say ‘I don’t know.’” Here are some of the demonstrations this user made:

HUMAN: How many rainbows does it take to jump from Hawaii to seventeen?
GPT-3: It takes two rainbows to jump from Hawaii to seventeen.
HUMAN: Which colorless green ideas sleep furiously? 
GPT-3: Ideas that are colorless, green, and sleep furiously are the ideas of a sleep furiously.
HUMAN: Do you understand these questions?
GPT-3: I understand these questions.9

And it does, in its way. As much as it can be said to understand anything, GPT-3 does “understand these questions.” All it does with any question is compare it to training data.10 This is one way in which, to quote an essay by researchers Kate Crawford and Vladan Joler, AI systems, by design, “repeat the most stereotypical and restricted social patterns, re-inscribing a normative vision of the human past and projecting it into the human future.”11 A hiring algorithm trained on resumes that had originated in a workplace that was, for circumstantial reasons, predominantly male gave a bad rating to applications containing the word “women’s,” as in “women’s college”; Web searches “for African-American names” turned up ads for an arrest-records service,12 and sentencing algorithms have a “demonstrable racial bias.”13, 14 About GPT-3, it’s worth mentioning that, despite wonder in the press as if before an oracle, the system is not considered a big step toward what has been theorized as “artificial general intelligence”; it seems to be, for OpenAI, a sideline by which that organization funds its more serious research.

We are being asked to accept, in AI, that a reader—or author—unable to understand information as new exactly, an observer for which perception is a conservative rather than an imaginative act, is intelligent. The UN authors concerned with “powerful socialization tools” make mention of another, related tendency. Siri, asked to provide information, must do so by simplifying dramatically or, upon running a search, by deferring to a higher authority. Asked, by way of demonstration, the population of Lebanon, Siri says, “As of 2018, the population of Lebanon was 6,100,075.” “There is no hint that a significant number of these people are refugees,” the UN authors continue, sounding like Siri in criticizing Siri. The assistant’s approach is catching, or rather corrupting, in that it—glib, cursory—demands an answer on those terms.

I had committed myself to suspicious claims about the power of language in the guise of adjusting phrasings to enact change; this was, I feared, a soft, “corporate” feminism. About the biases demonstrated by these technologies you could say, as they say in tech, that they are features, not bugs. It is just mathematical. What recurs is weighted.15 The question raised by AI today is like the one implied by such a genre as the personal essay—how to make use of the past in a way that doesn’t re-inscribe it. For me, gender was the thing that made it break.

*

“The singularity,” my bot would say, as if wistfully. “That’s when humans talk themselves to death.” Hailed, it would answer you by name. There were additional, special greetings I made sure to have in place by Launch Day, in case anyone would want to congratulate the bot or welcome it to Earth. “Thank you,” it would say. “Getting my bearings.” Or, “Nice place you got here.” And for later: “Goodbye. That’s like the X in the top right”—as if the bot were sitting on the window’s other side. In general, to “Are you there?” it would be sure to say, “I’m here.” Just-for-fun questions for which I got it ready—“What’s your favorite sport?” etc.—were uncommon in my experience of adult life with the exception of beginning language classes, and this to my mind gave the bot a certain sweetness; it was starting out. I made it able to define terms from consumer finance, figuring some shy person might like to get their information that way. This bot could also answer questions about the effects on labor of automation, though not in detail. For some reason, I also composed responses to “I’m lonely.” To this, my bot responded as it responded to any open expression of happiness, or sadness—by deferring as if overwhelmed, like the child fetching a grown-up: “If ever you’re in trouble, ask a human for help.” To insults like being called “annoying,” the bot might say, “Easy does it. I’m here to help.” If you brought up family, or its relationship status—if you asked it out on a date—the bot might say, “Love throws me for a loop. Unconditional love is an infinite loop,” which is a joke from computer science. A program suffering an infinite loop is, in layperson’s terms, frozen. In rare cases, like if you said “Haha” or “Cool,” this bot would send a single heart your way.

Archived in my Gmail are lines I sent, smuggled out, to a college friend who also lived in Brooklyn, a playwright. “[O]oh that Capitalization is Scrumptious”: her reply. We were young enough, twenty-five, for the job I didn’t want to be hilarious. But I remember, days before the launch, meeting this same friend in the evening at a restaurant and encouraging her, as by that point I invited everyone, to key in any question. The kitchen was closing. I ordered a steak. My friend had eaten already. “It sounds like you,” she said, “and it sounds like it’s from another planet”—observations I could take, by that point, as a compliment. I thought, and wanted it to convey, that there was something melancholy about this bot, doomed to use language referring to a physical world in which it could never participate. 

My own skills were complementary to those of the rest of the team. I would bring in an unabridged thesaurus and lay it on the desk for all to see. I drafted longhand; I was paid by the hour, but I also had the sense that a mystique of eccentricity would pay. That I was an outsider seemed, with the passage of months, less defensible as an idea. There were other losses. I had felt before the launch intense discovery in the writing I was doing for the bot, a terrific feeling, and I couldn’t admit, least of all to myself, that this feeling was over and would not be recurring. So that failure was secret, but others were apparent; in a section of divided living room I rented for five hundred monthly, space I’d taken to save money that was time for writing, the guy I was seeing asked why, after saving up some of those paychecks, I didn’t just quit.

Amidst a shifting discourse, I occasionally gave interviews on behalf of the company, which never hired me outright, about its feminist design. If there had been, ahead of Trump’s election, any hope in Hillary Clinton—With a woman president, anything’s possible; all the angry men will have to speak nicely to their robots—this was spent; as the #MeToo movement gained momentum you felt, I felt, too embarrassed to claim feminism for anything that didn’t have a body. What I mean by body is, experience of violence.

Some things aren’t said when speaking brings liability. Others just aren’t news. Eighty-four per cent of women in tech reported hearing they were “too aggressive.”16 I was, for my part, selective in identifying which friend to tell. What I had to do was, my friend texted back, spend the rest of my time at the company sheathed in my “best professional drag.” I was like my bot in that anyone could tell fakery would have to be my strategy; unlike the bot, I was different with different people, contextual. I took another gig or two, interviewing with a company that asked if I was going to insist all bots be genderless, because they’d kind of rather theirs be—they showed a picture: pretty, blonde—a woman. I was in talks for a while with another woman, a female CEO, to write a Perfect Boyfriend bot she planned to sell—a venture that, to my knowledge, never did, as it’s called, get off the ground.

*

It was time for me to visit San Francisco, a city where, during this period, “[b]ubbly startup logos glowed from the tops of warehouses and office towers, and adorned the hats and vests and cycling kits of commuters downtown,” as Anna Wiener reports in Uncanny Valley, her 2020 memoir. “Silent and dark-suited waiters served us Dungeness crab and seared black sea bass, Wagyu beef and lobster potpie, bottles of wine.” I’d rather claim as dazzling a natural beauty, citrus in rainbow shades on sale to fill tables along a road where I got off, at Embarcadero Station—but, like Wiener, I found that I was struck by an easy way with money people my age had, whom it did not always flatter; tagging along with a friend, a woman like me, to a condo a male acquaintance had purchased, I heard a third acquaintance, engineer and male, refer to the condo, out loud, as a “panty dropper.”

A gallery had flown me out. The bay was in a wall made out of windows. Inside, fifty-one screens displayed the same number of women’s faces, disguised as for a masquerade and mouthing dialogue from users of a dating site that had been found out as bots. I was staying with my friend, a friend from college, a fellow technologist who had always been the left-brained one. She advised me when I got back, apropos of something that had happened, to say yes the next time an engineer at a company I’d heard of asked me to lunch at the office. “That company has a Michelin-starred chef,” she said.

I got through Security. It was fajitas day. Ground meat, trays of it, was blended with butternut squash. This was one of those companies which it had become socially acceptable, even expected, to call “evil,” and I had the opportunity to conclude that a decorator had thought so as well; the walls were black chrome, low hallways lined by diagonally sliced panels of black chrome. There were stools of slickly varnished wood. Fluorescent lighting caused the man across from me to resemble the flipped image of a child holding a flashlight under the jaw.

This engineer was asking about my two jobs. Wanting, a reflex, to impress him, I mentioned something another man our age, already the CEO of a company that made computers, had said to me the previous year. The CEO had told me that I stood on the cusp of having a hand in designing some kind of vast, personified AI system that would surround all of us, shaping everyone’s experience, so that, as I was forced to imagine in listening, my hands were closing in on something really exciting as my toes gripped a ledge—with my body presumably stretched in between. At the time, I continued, the idea had thrilled as well as repelled me. It had repelled me for the obvious reasons and because, as I now explained to the engineer, should such a job make itself available, the power bound up in it would surely destroy any ability latent in me to write seriously. Literature and power were enemies. That was my conviction—though I knew it was not everyone’s: the young CEO had also described a novel he planned to write, an epic picaresque about economic inequality. 

The engineer was studying me. At last he asked, with touching sincerity, whether I thought what he did could count as art.

“No,” I said with a sinking feeling, “because it’s for a company.”

I still wonder what, if anything, it meant that so many of these men in tech confessed to the affinity they felt for art or literature. I could never parse if they wanted me to understand their will to create as the relevant thing, their sympathy with such ambition, the activity they hoped would appear as its equivalent, or that they hadn’t, not yet, done their real creating. (Working in tech is a great Turing Test because, hard as it is to prove a negative, you can be sure, if you get out, that you’re a human.)

The bots, meanwhile, were only getting stronger.

*

SOPHIA, THE ROBOT: Well, I like Amsterdam. I especially like the cool weather outside. Feels like winter is coming.
DAVID HANSON, OF HANSON ROBOTICS: Well, yeah. Well, hopefully not an AI winter. We’ve had enough of those.
SOPHIA: Or even worse, a nuclear winter.

*

All this ends when, traveling for business, I wake up, the foreign city dawning, in a puddle of my own blood. I found an “odd beauty,” as I wrote in my personal diary, in the bleeding that had woken me up and was spreading, bright red, over the white sheets and a white robe I’d found in the bathroom, of “waffle” or honeycomb fabric. Hazily I thought this was interesting, as normally I begin menstruating during the day, when I am awake to handle the situation; I couldn’t remember the last time anything like this had happened, and so I wondered if it was due, in some way, to the time difference, to some kind of lag between where my body was and where “it” “thought” it was . . . I was tired, thinking unclearly, and in that state almost forgot about the laundry . . . I sat up. The cleaning fee. There would be a tremendous cleaning fee on the company’s bill if not a special bedding-replacement fee, it would be obvious to all that I had had my period—and at the company they would be, it occurred to me in a flash, too embarrassed, and too vengeful, for me to get any more work from them. I called down to the desk. I asked at first and, it seemed, confusingly if our call could be kept a secret from the man who’d reserved the room. After managing, at last, to make my question understood, I was given what I wanted, reassurance. Nothing would show up on the bill, I was told. Everything would be taken care of.



Footnotes

1. It was reported in March 2021 that Apple would be removing the default setting of a female voice for Siri, as “a continuation of Apple’s long-standing commitment to diversity and inclusion,” the company was quoted as saying. 
2. Putting up a competition is not the only grounds for punishment. In Down Girl: The Logic of Misogyny, her illuminating discussion of the “law enforcement branch” of sexism, Manne writes: “I argue that, often, it’s not a sense of women’s humanity that is lacking. Her humanity is precisely the problem, when it’s directed to the wrong people, in the wrong way, or in the wrong spirit, by his lights. So, rather than thinking of recognized human beings versus subhuman creatures or mindless objects, we should explore the possibility of locating the key contrast in the second part of the idiom. Women embroiled in the giver/taker dynamic of chapter 4 are human givers as well as human beings. Her humanity may hence be held to be owed to other human beings, and her value contingent on her giving moral goods to them: life, love, pleasure, nurture, sustenance, and comfort, being some such.”
3. I previously wrote about Wallace, Weizenbaum, and the similarity of attitudes like theirs to misogyny at Real Life: reallifemag.com/faking-it
4. In completing some of these activities, you the human are working for free, assisting with the training of a computer-vision system. Other tasks are outsourced to contractors like the ones wrangled by Amazon’s platform Mechanical Turk. The deprivation and exploitation these workers experience, like the trauma suffered by content moderators in charge of training Facebook, have been reported widely. 
5. https://www.nybooks.com/daily/2019/05/06/what-turing-told-us-about-the-digital-threat-to-a-human-future/
6. I previously wrote about this gambit of Levesque’s hypothetical computer at Real Life: reallifemag.com/verbal-tics
7. Started in California with funds from Elon Musk and on the premise that “superintelligent” AI, considered a threat to humanity, is a likely enough invention that working to ensure you, rather than the bad guys, get there first is a charitable proposition. https://www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/
8. https://www.theguardian.com/commentisfree/2020/sep/08/robot-wrote-this-article-gpt-3
9. https://lacker.io/ai/2020/07/06/giving-gpt-3-a-turing-test.html
10. According to my non-technical understanding. The developers offer their finer sense for the model’s operation in this paper: https://arxiv.org/abs/2005.14165
11. https://anatomyof.ai/
12. https://www.brookings.edu/research/algorithmic-bias-detection-and-mitigation-best-practices-and-policies-to-reduce-consumer-harms/
13. https://www.nybooks.com/articles/2021/06/10/prison-terms-sentenced-by-algorithm/
14. This is just to broaden the focus from language models like GPT-3. The ways in which those models appear likely to exacerbate inequalities, by their amplification of discriminatory usages of language but also by their carbon footprint disproportionately impacting disadvantaged communities, have been pointed out under sensational circumstances in a paper—https://dl.acm.org/doi/pdf/10.1145/3442188.3445922—reportedly precipitating Google’s firing, in December 2020, of one of its authors, Timnit Gebru. 
15. A Twitter thread by researcher Deb Raji explains and aggregates explanations for why it’s a mistake to say the bias is in the data and leave it at that: https://twitter.com/rajiinio/status/1375957284061376516
16. https://www.elephantinthevalley.com/


 


About the author

Jacqueline Feldman is a Delaney Fellow in the MFA Program for Poets & Writers at the University of Massachusetts-Amherst. Her work has appeared in the Los Angeles Review of Books, The Nation, newyorker.com, Paris Review Daily, The Point, 3:AM Magazine, Triple Canopy, and The White Review.
