On What AI Can (and Can't) Do For Writers with Tony Stubblebine
"I think there will be use cases in AI. But I also think it's being wildly oversold right now as something that can do [everything]."
Welcome to our interview series, On Something with Somebody! Today we are talking with Tony Stubblebine, CEO of Medium, about human vs. “AI” editorial curation. If you like these interviews and want to tag along with this newsletter for more, you can sign up here.
Tony showed up to our video interview five minutes early to find me doing weird stretches and breathing exercises. Ugh, humans, right?
It went like this:
TONY S: Benjamin. Oh, hi. If I don't show up early, I end up getting distracted, which always ends with me 15 minutes late. So, I logged in early so I didn't forget.
BENJAMIN D: Good. And then you find me there just stretching my neck, just like mentally preparing.
TONY S: I don’t want to get in the way of your mental preparation.
BENJAMIN D: It's more my, you know, computer life. I'm sure you understand. Computer all day. The body, you know, trying to get everything…uh, aging, aging better.
TONY S: Mhm.
You know who doesn’t get aches and pains? Algorithms. And these days, those algorithms often come in the form of “Generative AI” (or just “AI” if you’re trying to sell me something).
Algorithms and AI have been on my mind because I spent ten years as a copywriter, seven years writing on and off on Medium, and most recently have been participating in a new program where editors on Medium are paid to recommend writers’ works. I thought that was pretty cool. And Tony is the one in charge of it all.
So below, we’re sharing my chat with Tony Stubblebine, CEO of Medium, on using algorithms for good, the benefits of human-centric curation, and whether AI is coming for all of our jobs or not (the answer? Some of our writing jobs, but probably not all of our writing jobs).
Benjamin Davis: A lot of the rhetoric around AI and algorithms focuses on its impact on writers. But there isn't enough of a conversation about the problem with curation algorithms and AI as the curator of content. At Medium, you’ve moved away from algorithmic curation in favor of editors and human curators. How does that work?
Tony Stubblebine: We redesigned our distribution system on Medium to move away from what's standard on all social media platforms. The standard is attention-based recommendations. And attention is really easy for algorithms to measure and optimize. If you’re running an ad business, that’s good business: The more attention you get, the more ads you get to serve.
We, as a matter of purpose, chose a different business model because we wanted to be more aligned with our readers. The business model is a subscription, and that means that we have to answer a different question: What will a person be happy to have paid to read? And that’s not an attention question, that’s a quality, substance, and value question.
So, we redid our recommendations to put humans back in the loop. Specifically, we put together a three-layer system. The first layer is editors around the community. Medium is mostly structured around publications that bloggers submit stories to, and the editors who run those publications pick and choose [what to publish]. And they now submit the best of the best for a program on Medium called the Boost, where we share what's been written beyond just who follows that publication.
The editors share those with our internal team, and we call [that second layer] the curators. The idea is that the editors are subject matter experts, and the curators are maintaining a quality bar. There’s not a black-and-white definition of quality; it’s more a question of what writing we think is worth showing to someone who is a stranger to you, or has never heard of you.
And then we've deprecated the algorithm. There's still an algorithm, but we're not asking it to decide what's high quality or low quality. We're just asking it to play matchmaker for our third layer: the readers. So if you like to read memoir, and we have a high-quality piece of memoir, we can play matchmaker, and that's done algorithmically. And if you're not a Python programmer—and we have a lot of software development writing—the algorithm knows, Even though I have a really high-quality software development article I'm not going to show it to you.
That was just kind of an acceptance of the limitations of recommendation algorithms. They’re great for playing matchmaker, but really terrible for deciding what is high quality or low quality.
BD: When an algorithm looks at a piece of my work, and it’s trying to judge its quality, it can’t do it in the same way that a human can. And a lot of people seem to be heading in the direction of, Hey, we need to design a better algorithm that can judge the quality of a piece of work. And Medium was just like, No! Let’s use humans.
I believe this transition—moving toward human-centric curation—started to happen around when you came in as CEO. What made you see the solution as more humans need to be involved?
TS: Those people are misguided, and I'll sometimes soften my anti-algorithm take for them to say, we did design a better algorithm. Algorithms are always built around human signals, even an attention-based algorithm. And we were able to design a better algorithm by building a better signal. We built editors and curators as the signal that indicates quality, and then we added it into a recommendation algorithm.
The algorithm is still there for people who love algorithms for whatever reason. I don't love them. I love humans. I love being human. I love being around humans. I love hearing from humans. But some people are just really excited by technology. And so for them, you can allow that there's still an algorithm here, and the algorithm is doing the work. But it was always doing the work off of these clues that it can optimize. And the clues are, for example, a subject matter expert looked at this and went, Oh yeah, this is really good. This captures the subject well. I think that's the start of it.
The piece that I really feel strongly about is if you're a writer, and you're writing online, and you're working in a place where the success or failure of your writing is based on an algorithm, it ends up corrupting your writing. You end up changing the writing in order to match the algorithm. And when that happens, it immediately means you're no longer writing for the reader—you're writing for two audiences at best. The obvious example online is those cooking recipes that have been padded in order to please Google’s search engine algorithms. But the end reader doesn't need any of that padding and is actually kind of annoyed by it.
And so what I've liked viscerally about this change for Medium is now when you're writing, you're always writing for a human. You're writing first for a human editor, then for a human curator, then for a human reader. There's a lot more similarity between those three types of people, and it feels different because you're not contorting yourself to write for the algorithm at the same time.
BD: Yeah, I agree. I remember those days of writing for the algorithm and writing for SEO, and it felt awful. It’s always something I’m worried about: we built the internet off of SEO, writing in a specific way to please the algorithm. And now we have an algorithm that’s going to come in and write it for you. But nobody wants to read that kind of thing in the first place, right?
TS: Ultimately, we're in control. The future that we're building for ourselves is a future that we're in control of. And we feel that agency pretty strongly.
Medium really set out to be an alternative to what we didn’t like on the Internet. We were seeing the same Internet you were seeing, and we didn’t like it either. And we just felt like, If we don’t like it, we should build something different. Fortunately, we're in a unique position to have deep content backgrounds and deep social media platform backgrounds, and the ability to build software, and to have access to capital so that we could go do just that.
BD: All these companies are popping up and trying to feed this broken attention system with more GenAI. But you stopped. Are you concerned at all that they're right? Suddenly, there will be this ultimate curation algorithm, and it will be the end of what you're trying to build?
TS: I mean, they're not right. Their way is not going to beat us at what we're trying to do. It makes me sad. You know, the more success they have, the more sad I am for the world. But at some level, I just have to be responsible for the job I have, which is to help Medium be successful. The more trash, the more toxic, the more unreliable and untrustworthy the rest of the internet becomes, the more we succeed and stand out. So it doesn't worry me in any sort of competitive way. It's actually, unfortunately, probably good for Medium the worse the rest of the internet gets.
BD: One of the biggest problems, especially in the creative sphere, is this idea of just kind of throwing an algorithm or a robot at something to curate it, because it’s faster and easier. But you've incentivized editors with the Boost program—if, for example, an editor nominates a piece and it gets accepted, they get $45 per piece, and then the writer also makes money. Is the hope to create this place where everybody is incentivized to read and edit and curate? What is the ultimate vision?
TS: We got to an important milestone over the summer, which is that we're profitable. And I think, you know, if we’re talking money in the world we live in, it is very practically beneficial to be profitable. It means we now have as much time as we want to do the things that we want to do. My belief coming into Medium was that there's a lot of untapped knowledge and wisdom in the people of the world, and blogging is one of the best ways to unlock that.
Our job is to draw it out of people, deliver it to readers, and help organize it for the long term. Eventually, you should come to Medium and feel this sense of endless possibility. That whatever you want to know about, whatever your curiosity is, whoever you want to be, you can rely on the knowledge and wisdom of someone who's done it before you. That would be a major change for the world, right?
That's exactly what I thought blogging was going to be in 2005. When I started getting into blogging around that time, it was all of these experiences of reading amateur writers with professional experiences that interested me, right? Blogging just unlocks everyone on every topic, in every life experience. And that was really exciting for me. At the time, I was working for a nonfiction book publisher—we wrote programming manuals, how-tos, and tutorials for software engineers—and we could see that this user-generated content was going to get into all the nooks and crannies that we would just never bother to write a book about. And it looked like it was going to be this massive net boon to the industry, and to all human knowledge and wisdom.
And then our view is that the business models corrupted it, right? You started having people who made their business on top of ads, which drew them to care more about attention. Now, to even have your voice heard, you have to be good at playing the attention game. So a lot of people that were vocal early on, they didn't want to play that game, and so now you don't hear from them.
That's what's exciting to me about Medium now. That we are going to be able to hear from voices that we wouldn't hear from on the standard internet. We can be more about substance and quality than attention. But we are also really, literally going to unlock stories from people across the world that you otherwise would never hear from.
BD: I love that. And I think that this addition of the curator and the editor—to have put that barrier in there—has been really useful. It needed to be there. For a while, it was kind of the Wild West of writers, and without that curation funnel, that filter system, you just wound up with a lot of bad writing and chaos.
It’s a big responsibility that I think not a lot of people were talking about, even just a year ago. And one of the biggest fears I hear a lot from editors is trying to decipher whether something has actually been written by a human being or generated by AI. Especially in the creative sphere, people are growing more concerned about that. Are you concerned about that at all?
TS: Something massive happened to us earlier this year, where the number of posts on Medium went up 10x. And it was almost entirely spam—we think spam enabled by AI generators. So the volume went up, but the volume of what we ended up recommending to our readers didn't change at all, because that stuff is pretty easy to filter out. Because the problem isn’t Is it AI-generated or not, it's Is this worth reading or not? And we're just not seeing pure AI-generated [writing] that ever is.
If you've ever been an editor, or you're just a heavy reader of other people's work, isn't it kind of amazing how you're reading a piece and then you get to a paragraph, and your plagiarism antenna starts going off? You almost can't even look at the paragraph without [trying to] understand, Why do I think the author didn't write this paragraph?
There's something really intuitive going on there. I have already seen that happen, I've experienced it. I've seen really good editors experience it. I've trained [editors] who never thought they'd be editors and they would just catch it immediately—Wait a second. What happened here? The voice is off.
So I don't think AI is going to be fooling people at the highest level of, like, a literary magazine, people who are trying to write the best and publish something fantastic. And who cares about writing that's not done at that level? Maybe at some other levels, it'll slip by. Maybe people will be using it for corporate communications or marketing. But I think you're in the business of people who care at a much deeper level, and I just don't think AI is going to get anywhere near it.
There’s sort of the gray area of AI assist, which I still think is not generating very high quality, but it sometimes has a lot of validity in that it helps someone express themselves who otherwise wouldn't be able to express themselves.
BD: Yeah. I mean, I think that is something that is becoming more and more of a debate. I think the initial reaction was either just like, No! Or you know, I've seen old educational programs and Substacks dedicated to teaching people how to just use AI to write. And I wonder—do you think we're headed into a phase where it doesn't matter? Or it doesn't matter as long as it's not fully AI? Or do you think that there's always going to be this kind of pure form of writing as a human, versus writing with AI assistance? That’s the gray area debate I see happening, and personally, I don’t like AI, but then that’s also maybe short-sighted.
TS: I think what's going on now is actually just we're exposing that there is a lot more bullshit communication going on in the world than we were acknowledging. What you and I are talking about are things that we're reading intentionally. The writer is trying to bring us on a journey and the details of it matter.
But there's also all this other writing. It's like the writing on a billboard, the writing on search results. Google put out a promo video about their new AI tools—without any self-awareness—with two different use cases. One was, oh, let's say you're writing an email to your team. You can just write down an outline of the email and AI will expand it for you, and then you can send that. Then there's use case number two. Let's say you get a really long email. AI will summarize it down to a set of bullet points. And it's like, why? The bullshit is, why the length? If the person writing wants to write in bullet points, and the person receiving wants to read in bullet points, why all the filler?
That said, I think a lot of corporate communication is filler. The details of it don't matter. There's all of these opportunities for people to create more filler content. And that's just not threatening to what we do, right? I just don’t find it threatening to real writers. Real writing is not about filling words. You write to think. I don't even like to have an AI assist when I'm writing, because it's like having a collaborator who doesn’t know anything. And I find it distracting. I'm writing to work out an idea, right? And if I don't do the work, then I don't work out the idea. On the flip side, as a reader, I'm not trying to understand how a computer lived its life. I’m trying to understand how you lived your life. So I need it to be real.
You know, sometimes people in tech say that what they're doing is new. But I actually don't think AI-generated content is a new pattern. I think there's this idea: people wish they could optimize writing to be something different than it is, to be more efficient. But we've already tried that—that's what Cliff Notes are, right? You read The Odyssey, you didn’t read TLDR: Odysseus Had a Hard Time Getting Home. And the summary doesn't do shit for you because the way the human brain works, it requires so much story. You're not going to deepen your understanding of the world on just a set of facts. You need to know what those facts mean, when they apply, how other human beings have used those facts, because you're motivated by the experience of people like you.
A human story is already the most efficient delivery vehicle for knowledge and wisdom. And all the people that have ever tried to optimize it have just found out that they're wrong.
BD: When I talk to you, when I've talked to other writers, editors, the people in our community, nobody seems to think it’s a good idea. Is it just this flash in the pan? The people with the money who are starting these, are they just going to fade into the background? Is the whole world just tired of writing emails, and we’re kind of dealing with the aftermath of that? Because it doesn't seem like the folks who are into it even want it, but everybody's trying to push us in that direction. Except, you know, Medium, which is why I was just so fascinated with everything you're doing.
TS: Well as a representative of the tech industry—I’ve worked in this industry for 25 years—I apologize for the misguided, well-funded hype cycles that sometimes break through to the mainstream. I think there will be use cases in AI. But I also think it's being wildly oversold right now as something that can do [everything]. And so I would encourage you and your community to trust your own analysis on this. If you think it's a bad idea for the things that matter to you, you're right. You're absolutely right.