#70 | What the Talmud can teach us about AI
Or what, and I cannot stress this enough, is wrong with me?!
I should start by saying: I’m no Judaism expert. I dropped out of Hebrew school to focus on ballet. I don’t keep kosher. I had to Google “where exactly does the Talmud come from” before writing this. I am also not an AI expert (does such a thing exist?). I am, however, a Jew who is working more and more with AI every day.
As it turns out, one of the most Jewish things about me, aside from my affection for a good pickle (well any pickle, really), is my love of questions. Not just asking them, but swimming around in them as if they were an algae-filled summer camp (also Jew coded) lake full of generations of secrets. If you’ve ever read anything I’ve written, you know how much I love doing the backstroke through questions. In fact, in this world where ONE MUST HAVE A NICHE in writing, I might say my niche is questions. I write in and around and through questions. I love questions. Turning them over like stones, poking at the little slimy bugs underneath, asking the bug why it’s there, singing the bug little songs, asking the bug how it feels about being observed. That sort of thing.
My professional life has become more and more entangled in the world of AI, and when people ask me what I think about AI (or more likely when I ask myself what I think about AI), I find myself a little tongue-tied. Not because I don’t have opinions, I have too many opinions, but because the whole conversation feels less like one singular perspective and more like an ongoing Talmudic debate.
Talmudic. I said it. The Talmud - the thousands-of-pages-long ancient Jewish text full of rabbis arguing with other rabbis across generations about everything from property law to whether a cucumber is still a cucumber if it gets pickled - that’s where I find a parallel to AI. What is wrong with me?
The Talmud isn’t a rulebook. It’s a record of interpretation. A masterclass in never arriving at an answer. And lately, that’s exactly what conversations about AI feel like to me.
Every time someone asks a question about AI - Is it conscious? Is it stealing? Is it art? - we don’t get answers. We get more questions. Those questions lead to more questions. And those questions lead to entire conference panels on the ethics of digital souls and no one remembering what the original question even was.
This is not a failure of the field. It is the field.
Some institutions, of course, are trying to get ahead of this. The Vatican, which has seen its fair share of philosophical spirals, published a formal document calling for an “ethics of algorithms,” warning against bias, dehumanization, and what Pope Francis called the “technocratic paradigm.” A very elegant way of saying: please don’t turn the world into a giant predictive text engine with no soul. I’d, of course, be interested in hearing what the Pope from Philadelphia (it’s a joke, friends) has to say on the matter.
And yet, even in that statement, as well-meaning and morally clear as it is, you can feel the tension. Can we actually guide something we don’t fully understand? Are we building the next great tool of progress, or the world’s most well-dressed Trojan horse?
The rabbis of the Talmud wouldn’t try to answer that. They’d likely ask: what does the question itself reveal?
In one passage of the Talmud, I’m pretty sure the rabbis ask which is greater: study or action? One rabbi says action. One rabbi says study, because study leads to action. This goes back and forth for a while, and eventually they just…keep both. The point isn’t to pick a side. The point is to keep circling the question.
That’s how AI feels. Do we need more philosophy or more policy? More engineers or more ethicists? Yes. Also yes. What a very Jewish answer.
The questions of what AI is - who it benefits, what it threatens, whether it’s a tool or a mirror or a very sophisticated parrot - don’t land anywhere neatly. And maybe they’re not supposed to. Maybe these are questions we’ll always be circling, the way the Talmud circles law and ethics and the meaning of a cucumber. (Okay, I may have taken some creative liberty with the cucumber part. I don’t believe the Talmud literally asks about pickles. But it does debate what category a food belongs to depending on how it’s prepared, so we’re not that far off. And anyway, if a vegetable goes through fermentation, is it not transformed? Welcome to the Talmud.)
The other day, someone asked me if I thought AI is “good or bad,” and I wanted to say: compared to what? Electricity? Capitalism? The last two seasons of RHONY?
We keep trying to nail AI down like it’s a math problem. Unfortunately, it’s not a math problem and Will Hunting is not solving it on a whiteboard. It’s a conversation. A debate. A long, digressive, possibly infinite one. One that Will Hunting would definitely have an opinion on. And the sooner we accept that, the saner we’ll all be.
To be clear: I’m not suggesting we throw up our hands and stop trying to regulate or explore or define AI. We should absolutely keep trying. Exploration, experimentation, failure and reward - we should seek it all out. But maybe we should also get more comfortable with the idea that it won’t resolve. Not fully. Not ever. It’s Talmudic: which is to say, it’s alive, it’s breathing, it’s exciting.
The rabbis didn’t fear contradiction. They made room for it. And maybe that’s a useful position for us right now: not just asking what AI is, but asking what our questions about it say about us.
I don’t know how AI will evolve. But I do know that if your brain feels like it’s doing somersaults every time you try to form an opinion about it: Congratulations. You’re asking the right questions.
Chic Schmaltz La Vie,
LCF