In the way
This sums up my experience of companies and products trying to inject AI into the products I use to communicate with other people. It’s always just in the way, making stupid suggestions.
Anyone at an AI company who stops to think for half a second should be able to recognize they have a vampiric relationship with the commons. While they rely on these repositories for their sustenance, their adversarial and disrespectful relationships with creators reduce the incentives for anyone to make their work publicly available going forward (freely licensed or otherwise). They drain resources from maintainers of those common repositories often without any compensation.
Even if AI companies don’t care about the benefit to the common good, it shouldn’t be hard for them to understand that by bleeding these projects dry, they are destroying their own food supply.
And yet many AI companies seem to give very little thought to this, seemingly looking only at the months in front of them rather than operating on years-long timescales. (Though perhaps anyone who has observed AI companies’ activities more generally will be unsurprised to see that they do not act as though they believe their businesses will be sustainable on the order of years.)
It would be very wise for these companies to immediately begin prioritizing the ongoing health of the commons, so that they do not wind up strangling their golden goose. It would also be very wise for the rest of us to not rely on AI companies to suddenly, miraculously come to their senses or develop a conscience en masse.
Instead, we must ensure that mechanisms are in place to force AI companies to engage with these repositories on their creators’ terms.
This is a great new musical project from Brad:
Brad Frost plays drums to the albums he knows intimately, but has never drummed to before. Cover to cover. No warm-up. No prep. Totally cold. What could possibly go wrong?
I really enjoyed watching all of The Crane Wife and In Rainbows.
The web is open, apps are closed. The majority of web users have installed an ad blocker (which is also a privacy blocker). But no one installs an ad blocker for an app, because it’s a felony to distribute that tool, because you have to reverse-engineer the app to make it. An app is just a website wrapped in enough IP so that the company that made it can send you to prison if you dare to modify it so that it serves your interests rather than theirs.
The moment you run LLM generated code, any hallucinated methods will be instantly obvious: you’ll get an error. You can fix that yourself or you can feed the error back into the LLM and watch it correct itself.
Compare this to hallucinations in regular prose, where you need a critical eye, strong intuitions and well developed fact checking skills to avoid sharing information that’s incorrect and directly harmful to your reputation.
With code you get a powerful form of fact checking for free. Run the code, see if it works.
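To make that concrete, here’s a minimal sketch of the dynamic in Python. The hallucinated call, json.parse(), is invented for illustration – it’s a plausible slip because that’s JavaScript’s API, while Python’s actual function is json.loads():

```python
import json

# A plausible hallucination: json.parse(), borrowed from JavaScript's
# JSON.parse(). Python's json module has no function by that name.
try:
    data = json.parse('{"name": "example"}')
except AttributeError as error:
    # Running the code surfaces the hallucination instantly:
    # AttributeError: module 'json' has no attribute 'parse'
    print(error)
    # And the error message is exactly what you (or the LLM) need to fix it:
    data = json.loads('{"name": "example"}')

print(data)  # {'name': 'example'}
```

No critical eye or fact-checking skill required: the interpreter does the fact checking the moment you run it.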
The tech bros advocating for generative AI to take over art are at the same level of cultural refinement as the characters in Severance. They’re creating apps to summarize books for people, tweeting from accounts with Greek statue profile pictures.
GenAI would automate Lumon’s cultural mission, allowing humans to sever themselves from the production of art and culture.
Good news for the fediverse, the indie web, and community sites like The Session:
People are abandoning massive platforms in favor of tight-knit groups where trust and shared values flourish and content is at the core. The future of community building is in going back to the basics.
This is a great little helper for understanding anchor positioning in CSS.
Rich suggests another reason why the UX of websites on mobile is so shit these days:
The path to installing a native app is well trodden. We search the App Store (or ironically follow a link from a website), hit ‘Get’ and the app is downloaded to our phone’s home screen, ready to use any time with a simple tap.
A PWA can also live on your home screen, nicely indistinguishable from a native app. But the journey to getting a PWA – or indeed any web app – onto your home screen remains convoluted to say the least. This is the lack of equivalence I’m driving at. I wonder if the mobile web experience would suck as badly if web apps could be installed just as easily as native apps?
You can think of flying to Mars like one of those art films where the director has to shoot the movie in a single take. Even if no scene is especially challenging, the requirement that everything go right sequentially, with no way to pause or reshoot, means that even small risks become unacceptable in the aggregate.
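The arithmetic behind that intuition is simple compounding. As a rough sketch – the numbers here are made up for illustration, not taken from the article – a mission of a hundred sequential steps that each succeed 99% of the time still fails nearly two thirds of the time:

```python
# Illustrative numbers only: the probability that a "single take"
# of n sequential steps all go right, each with success probability p.
p_step = 0.99
n_steps = 100

p_mission = p_step ** n_steps
print(f"{p_mission:.1%}")  # ~36.6%. Small per-step risks compound fast.
```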
You do not have to use generative AI.
AI itself cannot be held to account.
If you use AI, you are the one who is accountable for whatever you produce with it.
There are contexts in which it is immoral to use generative AI.
Correcting or fact checking generative AI may take longer than just doing a task yourself, or with conventional AI tools.
You do not have to use generative AI.
My main problem with AI is not that it creates ugly, immoral, boring slop (which it does). Nor even that it disenfranchises artists and impoverishes workers (though it does that too).
No, my main problem with AI is that its current pitch to the public is suffused with so much unsubstantiated bullshit that I cannot banish from my thoughts the sight of a well-dressed man peddling a miraculous talking dog.
Also, trust:
They’ve also managed to muddy the waters of online information gathering to the point that even if we scrubbed every trace of those hallucinations from the internet – a likely impossible task – the resulting lack of trust could never quite be purged. Imagine, if you will, the release of a car which was not only dangerous and unusable in and of itself, but which made people think twice before ever entering any car again, by any manufacturer, so long as they lived. How certain were you, five years ago, that an odd ingredient in an online recipe was merely an idiosyncratic choice by a quirky, or incompetent, chef, rather than a fatal addition by a robot? How certain are you now?
Some good – if overlong – writing advice.
- Focus on what matters to readers
- Be welcoming to everyone
- Swap formal words for normal ones
- When we have to say sorry, say it sincerely
- Watch out for jargon
- Avoid ambiguity: write in the active voice
- Use vivid words & delightful wordplay
- Make references most people would understand
- Avoid empty adjectives & marketing clichés
- Make people feel they’re in on the joke – don’t punch down
- Add a pinch of humour, not a dollop
- Smart asides, not cheap puns and clichés
- Be self-assured, but never arrogant
I Feel Like I’m Going Insane
Everywhere you look, the media is telling you that OpenAI and their ilk are the future, that they’re building “advanced artificial intelligence” that can take “human-like actions,” but when you look at any of this shit for more than two seconds it’s abundantly clear that it absolutely isn’t and absolutely can’t.
Despite the hype, the marketing, the tens of thousands of media articles, the trillions of dollars in market capitalization, none of this feels real, or at least real enough to sustain this miserable, specious bubble.
We are in the midst of a group delusion — a consequence of an economy ruled by people that do not participate in labor of any kind outside of sending and receiving emails and going to lunches that last several hours — where the people with the money do not understand or care about human beings.
Their narrative is built on a mixture of hysteria, hype, and deeply cynical hope in the hearts of men that dream of automating away jobs that they would never, ever do themselves.
Generative AI is a financial, ecological and social time bomb, and I believe that it’s fundamentally damaging the relationship between the tech industry and society, while also shining a glaring, blinding light on the disconnection between the powerful and regular people. The fact that Sam Altman can ship such mediocre software and get more coverage and attention than every meaningful scientific breakthrough of the last five years combined is a sign that our society is sick, our media is broken, and that the tech industry thinks we’re all fucking morons.
This is absolutely wonderful!
There’s deep dives and then there’s Marcin’s deeeeeeep dives. Sit back and enjoy this wholesome detective work, all beautifully presented with lovely interactive elements.
This is what the web is for!
Want to use all those great features that have been landing in browsers over the past year or two? View transitions! Scroll-driven animations! So much more!
Well, your coding co-pilot is not going to be of any help.
Large language models, especially those on the scale of many of the most accessible, popular hosted options, take humongous datasets and long periods to train. By the time everything has been scraped and a dataset has been built, the set is on some level already obsolete. Then, before a model can reach the hands of consumers, time must be taken to train and evaluate it, and then even more to finally deploy it.
Once it has finally been released, it usually remains stagnant in terms of having its knowledge updated. This creates an AI knowledge gap: a period between the present and the AI’s training cutoff. This gap covers the time between when a new technology emerges and when AI systems can effectively support user needs regarding its adoption, meaning that models will not be able to help users requesting assistance with new technologies, thus disincentivising their use.
So we get this instead:
I’ve anecdotally noticed that many AI tools have a ‘preference’ for React and Tailwind when asked to tackle a web-based task, or even to create any app involving an interface at all.
Being “in tech” in 2025 is depressing, and if I’m going to stick around, I need to remember why I’m here.
This. A million times, this.
I urge you to read what Miriam has written here. She has articulated everything I’ve been feeling.
I don’t know how to participate in a community that so eagerly brushes aside the active and intentional/foundational harms of a technology. In return for what? Faster copypasta? Automation tools being rebranded as an “agentic” web? Assurance that we won’t be left behind?
AI has the same problem that I saw ten years ago at IBM. And remember that IBM has been at this AI game for a very long time. Much longer than OpenAI or any of the new kids on the block. All of the shit we’re seeing today? Anyone who worked on or near Watson saw or experienced the same problems long ago.
Heydon’s latest video is particularly good:
All of my videos are black and white, but especially this one.
We wonder often if what is created by AI has any value, and at what cost to artists and creators. These are important considerations. But we need to also wonder what AI is taking from what has already been created.