Will the Humanities Survive Artificial Intelligence?

She’s an exceptionally bright student. I’d taught her before, and I knew her to be quick and diligent. So what, exactly, did she mean?

She wasn’t sure, really. It had to do with the fact that the machine . . . wasn’t a person. And that meant she didn’t feel responsible for it in any way. And that, she said, felt . . . profoundly liberating.

We sat in silence.

She had said what she meant, and I was slowly seeing into her insight.

Like more young women than young men, she paid close attention to those around her—their moods, needs, unspoken cues. I have a daughter who’s configured similarly, and that has helped me to see beyond my own reflexive tendency to privilege analytic abstraction over human situations.

What this student had come to say was that she had descended more deeply into her own mind, into her own conceptual powers, while in dialogue with an intelligence toward which she felt no social obligation. No need to accommodate, and no pressure to please. It was a discovery—for her, for me—with widening implications for all of us.

“And it was so patient,” she said. “I was asking it about the history of attention, but five minutes in I realized: I don’t think anyone has ever paid such pure attention to me and my thinking and my questions . . . ever. It’s made me rethink all my interactions with people.”

She had gone to the machine to talk about the callow and exploitative dynamics of commodified attention capture—only to discover, in the system’s sweet solicitude, a kind of pure attention she had perhaps never known. Who has? For philosophers like Simone Weil and Iris Murdoch, the capacity to give true attention to another being lies at the absolute center of ethical life. But the sad thing is that we aren’t very good at this. The machines make it look easy.

I’m not confused about what these systems are or about what they’re doing. Back in the nineteen-eighties, I studied neural networks in a cognitive-science course rooted in linguistics. The rise of artificial intelligence is a staple in the history of science and technology, and I’ve sat through my share of painstaking seminars on its origins and development. The A.I. tools my students and I now engage with are, at core, astoundingly successful applications of probabilistic prediction. They don’t know anything—not in any meaningful sense—and they certainly don’t feel. As they themselves continue to tell us, all they do is guess what letter, what word, what pattern is most likely to satisfy their algorithms in response to given prompts.

That guess is the result of elaborate training, conducted on what amounts to the entirety of accessible human achievement. We’ve let these systems riffle through just about everything we’ve ever said or done, and they “get the hang” of us. They’ve learned our moves, and now they can make them. The results are stupefying, but it’s not magic. It’s math.
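(For the technically curious, the guessing I've just described can be caricatured in a few lines of Python. This is a minimal sketch of my own devising: the tiny corpus and the simple word-counting are stand-ins, nothing like the neural networks inside real systems, but the underlying move of predicting the likeliest continuation is the same.)

```python
# A toy next-word guesser: nothing but a tiny corpus and counting.
# Real systems learn a neural model over vast text; the principle of
# "predict the likeliest continuation" is the same.
from collections import Counter, defaultdict

corpus = ("we have let these systems riffle through everything "
          "we have ever said and done").split()

# For each word, tally how often every other word follows it.
successors = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    successors[current][nxt] += 1

def guess_next(word):
    """Return the most frequently observed successor of `word`, if any."""
    counts = successors.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(guess_next("we"))    # "have" (it follows "we" twice in the corpus)
print(guess_next("have"))  # "let" ("let" and "ever" tie; the first seen wins)
```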

I had an electrical-engineering student in a historiography class some time back. We were discussing the history of data, and she asked a sharp question: What’s the difference between hermeneutics—the humanistic “science of interpretation”—and information theory, which might be seen as a scientific version of the same thing?

I tried to articulate why humanists can’t just trade their long-winded interpretive traditions for the satisfying rigor of a mathematical treatment of information content. In order to explore the basic differences between scientific and humanistic orientations to inquiry, I asked her how she would define electrical engineering.

She replied, “In the first circuits class, they tell us that electrical engineering is the study of how to get the rocks to do math.”

Exactly. It takes a lot: the right rocks, carefully smelted and doped and etched, along with a flow of electrons coaxed from coal and wind and sun. But, if you know what you’re doing, you can get the rocks to do math. And now, it turns out, the math can do us.

Let me be clear: when I say the math can “do” us, I mean only that—not that these systems are us. I’ll leave debates about artificial general intelligence to others, but they strike me as largely semantic. The current systems can be as human as any human I know, if that human is restricted to coming through a screen (and that’s often how we reach other humans these days, for better or worse).

So, is this bad? Should it frighten us? There are aspects of this moment best left to DARPA strategists. For my part, I can only address what it means for those of us who are responsible for the humanistic tradition—those of us who serve as custodians of historical consciousness, as lifelong students of the best that has been thought, said, and made by people.

Ours is the work of helping others hold those artifacts and insights in their hands, however briefly, and of considering what ought to be reserved from the ever-sucking vortex of oblivion—and why. It’s the calling known as education, which the literary theorist Gayatri Chakravorty Spivak once defined as the “non-coercive rearranging of desire.”

And when it comes to that small, but by no means trivial, corner of the human ecosystem, there are things worth saying—urgently—about this staggering moment. Let me try to say a few of them, as clearly as I can. I may be wrong, but one has to try.

When we gathered as a class in the wake of the A.I. assignment, hands flew up. One of the first came from Diego, a tall, curly-haired student who was, from what I’d made out over the course of the semester, socially lively on campus. “I guess I just felt more and more hopeless,” he said. “I cannot figure out what I am supposed to do with my life if these things can do anything I can do faster and with way more detail and knowledge.” He said he felt crushed.

Some heads nodded. But not all. Julia, a senior in the history department, jumped in. “Yeah, I know what you mean,” she began. “I had the same reaction—at first. But I kept thinking about what we read on Kant’s idea of the sublime, how it comes in two parts: first, you’re dwarfed by something vast and incomprehensible, and then you realize your mind can grasp that vastness. That your consciousness, your inner life, is infinite—and that makes you greater than what overwhelms you.”

She paused. “The A.I. is huge. A tsunami. But it’s not me. It can’t touch my me-ness. It doesn’t know what it is to be human, to be me.”

The room fell quiet. Her point hung in the air.

And it hangs still, for me. Because this is the right answer. This is the astonishing dialectical power of the moment.

We have, in a real sense, reached a kind of “singularity”—but not the long-anticipated awakening of machine consciousness. Rather, what we’re entering is a new consciousness of ourselves. This is the pivot where we turn from anxiety and despair to an exhilarating sense of promise. These systems have the power to return us to ourselves in new ways.

Do they herald the end of “the humanities”? In one sense, absolutely. My colleagues fret about our inability to detect (reliably) whether a student has really written a paper. But flip around this faculty-lounge catastrophe and it’s something of a gift.

You can no longer make students do the reading or the writing. So what’s left? Only this: give them work they want to do. And help them want to do it. What, again, is education? The non-coercive rearranging of desire.

Within five years, it will make little sense for scholars of history to keep producing monographs in the traditional mold—nobody will read them, and systems such as these will be able to generate them, endlessly, at the push of a button.

But factory-style scholarly productivity was never the essence of the humanities. The real project was always us: the work of understanding, and not the accumulation of facts. Not “knowledge,” in the sense of yet another sandwich of true statements about the world. That stuff is great—and where science and engineering are concerned it’s pretty much the whole point. But no amount of peer-reviewed scholarship, no data set, can resolve the central questions that confront every human being: How to live? What to do? How to face death?

The answers to those questions aren’t out there in the world, waiting to be discovered. They aren’t resolved by “knowledge production.” They are the work of being, not knowing—and knowing alone is utterly unequal to the task.

For the past seventy years or so, the university humanities have largely lost sight of this core truth. Seduced by the rising prestige of the sciences—on campus and in the culture—humanists reshaped their work to mimic scientific inquiry. We have produced abundant knowledge about texts and artifacts, but in doing so mostly abandoned the deeper questions of being which give such work its meaning.

Now everything must change. That kind of knowledge production has, in effect, been automated. As a result, the “scientistic” humanities—the production of fact-based knowledge about humanistic things—are rapidly being absorbed by the very sciences that created the A.I. systems now doing the work. We’ll go to them for the “answers.”
