Whoa, serious pushback against the future of AI

Anybody else happen across this? It’s a post by a guy named Ed Zitron, whom I hadn’t heard of previously, as far as I can recall.

The post starts this way:

A week and a half ago, Goldman Sachs put out a 31-page report (titled “Gen AI: Too Much Spend, Too Little Benefit?”) that includes some of the most damning literature on generative AI I’ve ever seen. And yes, that sound you hear is the slow deflation of the bubble I’ve been warning you about since March.

The report covers AI’s productivity benefits (which Goldman remarks are likely limited), AI’s returns (which are likely to be significantly more limited than anticipated), and AI’s power demands (which are likely so significant that utility companies will have to spend nearly 40% more in the next three years to keep up with the demand from hyperscalers like Google and Microsoft).

This report is so significant because Goldman Sachs, like any investment bank, does not care about anyone’s feelings unless doing so is profitable. It will gladly hype anything if it thinks it’ll make a buck. Back in May, it was claimed that AI (not just generative AI) was “showing very positive signs of eventually boosting GDP and productivity,” even though said report buried within it constant reminders that AI had yet to impact productivity growth, and stated that only about 5% of companies report using generative AI in regular production.

For Goldman to suddenly turn on the AI movement suggests that it’s extremely anxious about the future of generative AI, with almost everybody agreeing on one core point: that the longer this tech takes to make people money, the more money it’s going to need to make.

Zitron goes on to take apart the whole idea that AI is the wave of the future, at least the near future. He’s certainly vehement. I don’t know enough about this subject to have an opinion, but vehemence and good writing will get you a long way with me, so I found this post persuasive.

What makes this interview – and really, this paper — so remarkable is how thoroughly and aggressively it attacks every bit of marketing collateral the AI movement has. [Economist Daron Acemoglu of MIT] specifically questions the belief that AI models will simply get more powerful as we throw more data and GPU capacity at them, and asks a question: what does it mean to “double AI’s capabilities”? How does that actually make something like, say, a customer service rep better?

And this is a specific problem with the AI fantasists’ spiel. They heavily rely on the idea that not only will these large language models (LLMs) get more powerful, but that getting more powerful will somehow grant them the power to do…something. As Acemoglu says, “what does it mean to double AI’s capabilities?”

It’s a long post. Here’s the conclusion:

The reason I so agonizingly picked apart this report is that if Goldman Sachs is saying this, things are very, very bad. It also directly attacks the specific hype-tactics of AI fanatics — the sense that generative AI will create new jobs (it hasn’t in 18 months), the sense that costs will come down (they haven’t, and there doesn’t seem to be a path to them doing so in a way that matters), and that there’s incredible demand for these products (there isn’t, and there’s no path to it existing).

Even Goldman Sachs, when describing the efficiency benefits of AI, added that while it was able to create an AI that updated historical data in its company models more quickly than doing so manually, it cost six times as much to do so.

The remaining defense is also one of the most annoying — that OpenAI has something we don’t know about. A big, sexy, secret technology that will eternally break the bones of every hater. 

Yet, I have a counterpoint: no it doesn’t. 

… That’s my answer to all of this. There is no magic trick. There is no secret thing that Sam Altman is going to reveal to us in a few months that makes me eat crow, or some magical tool that Microsoft or Google pops out that makes all of this worth it.

There isn’t. I’m telling you there isn’t. 

This is the first time I’ve seen a post like this, though for all I know people have been pushing back for months, or all year.

It sounds to me like there’s a real chance that this time next year, we’re going to be basically in the same place we are right now: a whole lot of ridiculously bad pablum “content” will have replaced fairly bad, generic “content” currently written by people, a whole lot of students will be trying to cheat on their English papers, a whole lot of so-called professionals will be trying to cheat when writing up their junk so-called research, and AI will be useful for crunching huge data sets and probably not a lot else.

That’s an interesting prospect. Getting rid of fake content on the internet may be a real problem, but a lot of what is called “content” is already useless and awful, so … is that going to make a big difference? If people in general are less enamored with using generative AI to diagnose patients (where it may very well hallucinate) and in general are better at ignoring fake content, then that sounds like probably a good thing?

Beats me, but the linked article is perhaps worth reading.

2 thoughts on “Whoa, serious pushback against the future of AI”

  1. I am not familiar with the author, but I agree with the sentiment – as a software developer, I’ve had similar reservations about “generative AI.” The software takes in a bunch of labeled data and performs, effectively, a sort of complex pattern-matching: the “prompt” is basically a search term to see what aggregated result you get back (there’s a toy sketch of what I mean at the end of this comment).

    That can be useful in some cases, such as in medicine, putting in an MRI image to find similar images classified as this or that disease, although there are deep qualifications about the usefulness of that as well. Because you don’t have fine-grained control over _how_ the pattern-matching is done, you don’t always know exactly what it’s matching on – I recall one medical imaging machine learning study that was generating false positives because the model keyed on the name of the hospital burned into the image, simply because that hospital’s data had a slightly higher rate of the condition being studied! (The second sketch at the end of this comment shows the kind of shortcut I mean.)

    There are three main problems for “generative” AI. First, it inherently can’t do things like math problems, because it is basically free-associating on the structure or syntax of a math problem instead of doing any sort of calculation (the first sketch at the end of this comment bears this out). Second, there’s the massive energy usage – Google admitted this month that its greenhouse gas emissions are soaring because of AI and that it is missing its previous emissions-reduction goals as a result, I suspect because it runs AI with every search instead of only for users who opt in!

    The third and biggest problem, though – and the one that’s led to the most pushback – is the source of the massive amount of input data used to “train” AI. Most of the products on the market use datasets from non-profit groups that scraped webpages, including copyrighted artwork and writing, for academic use. And because the data is labeled with the creator, you get picture-generating AI where you can put in a prompt of, say, “a woman with a mask in the style of Michael Whelan” and get an image very clearly based on Whelan’s cover for Joan D. Vinge’s The Snow Queen, to name one specific example I saw recently – and artists and writers have pretty much no recourse against it. :(

    The same has happened with the new fad of AI audiobook narration, which was often trained on narrators’ recordings without their consent, and with AI music generation as well (a big problem polluting streaming sites like Spotify), although I think music has been somewhat more protected due to the specific copyright laws around it. The argument is that the AI is “inspired” by artwork the way a human would be “inspired” by artwork, but functionally, from a programming perspective, it’s more akin to saving a picture of someone’s artwork from their website and manipulating it with any other software, like Photoshop – there are just several layers of indirection in between, plus decades of science fiction “AI” tropes that these companies lean on to mystify the process.

    It feels inevitable that the limitations are catching up with the technology, and potential users are less inclined to adopt it the more they know about the questionable ethics behind it (not to mention the potential fallout from several pending lawsuits over copyrighted training data), so here’s hoping the bubble bursts sooner rather than later!
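
    Here’s the first toy sketch I promised – deliberately crude, with made-up snippets and a made-up similarity score, and nothing like a real model’s internals. It’s just the “prompt as a search term” idea in miniature, and it also shows why the arithmetic comes out wrong: nothing is ever calculated, the closest memorized pattern simply gets echoed back.

        # Toy sketch only: "answer by nearest memorized pattern," not how
        # any real LLM is actually implemented.
        MEMORIZED = [
            "the capital of france is paris",
            "two plus two equals four",
            "twelve plus seven equals nineteen",
            "a sonnet has fourteen lines",
        ]

        def overlap(a, b):
            # Shared-word count stands in for a fancier similarity score.
            return len(set(a.split()) & set(b.split()))

        def answer(prompt):
            # Return whichever memorized snippet best matches the prompt.
            return max(MEMORIZED, key=lambda snippet: overlap(prompt, snippet))

        print(answer("what is the capital of france"))
        # -> "the capital of france is paris"  (looks impressive)
        print(answer("what is twelve plus nine"))
        # -> "twelve plus seven equals nineteen"  (no arithmetic happened,
        #    just the nearest memorized pattern)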
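
    And a second sketch for the hospital-name story – again with entirely made-up data and a deliberately dumb one-feature “classifier,” just to show how a burned-in tag that happens to correlate with the diagnosis can fit the training data as well as the real signal does, and then produce false positives on new scans.

        # Toy sketch only: made-up (features, label) pairs where images from
        # hospital "b" happen to carry the diagnosis more often.
        TRAIN = [
            ({"hospital": "a", "lesion": 0}, 0),
            ({"hospital": "a", "lesion": 0}, 0),
            ({"hospital": "a", "lesion": 1}, 1),
            ({"hospital": "b", "lesion": 0}, 1),   # noisy label
            ({"hospital": "b", "lesion": 1}, 1),
            ({"hospital": "b", "lesion": 1}, 1),
        ]

        def one_rule(feature):
            # Predict the majority training label seen for each value of
            # this single feature -- the crudest possible "model."
            seen = {}
            for x, y in TRAIN:
                seen.setdefault(x[feature], []).append(y)
            rule = {v: max(set(ys), key=ys.count) for v, ys in seen.items()}
            return lambda x: rule[x[feature]]

        shortcut = one_rule("hospital")   # keys on the burned-in tag
        honest = one_rule("lesion")       # keys on the actual finding

        for name, clf in [("hospital tag", shortcut), ("lesion", honest)]:
            acc = sum(clf(x) == y for x, y in TRAIN) / len(TRAIN)
            print(name, acc)   # both ~0.83 here, so nothing pushes the
                               # learner toward the medical feature

        new_scan = {"hospital": "b", "lesion": 0}   # healthy scan, "wrong" hospital
        print(shortcut(new_scan))   # 1 -- false positive from the tag alone
        print(honest(new_scan))     # 0 -- the medical feature gets it right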

  2. I’m glad you weighed in, Sandstone, as I know you’re a lot more aware of this stuff than I am.

    To me — speaking as someone paying a lot of money to audiobook narrators this year — AI narration seems deeply inferior FOR FICTION. That’s because it’s all very well to have a pretty voice reading the words and putting in the pauses for commas and so forth — that’s fine for nonfiction — but better narrators are far, far more capable of voicing a character as a distinct person than AI. And I don’t see how that can change, because AI narration obviously doesn’t understand what it’s reading and can’t decide to voice a character a certain distinctive way.

    I realize I said in a previous post that I thought AI narration was going to completely replace human narration. I no longer really think that’s likely. Instead, I think a lot of less-skilled voice actors are going to need to find something else to do, while more-skilled voice actors will have plenty of work.

    I guess we’ll almost certainly know which prediction about the foreseeable future proves true within, say, five years.
