In the way
This sums up my experience of companies and products trying to inject AI into the products I use to communicate with other people. It’s always just in the way, making stupid suggestions.
We’re at a point in most ecosystems where pulling in libraries is not just the default action, it’s seen positively: “Look how modular and composable my code is!” Actually, it might just be a symptom of never wanting to type out more than a few lines.
It always amazes me when people don’t view dependencies as liabilities. To me it feels like the coding equivalent of going to a loan shark. You are asking for technical debt.
There are entire companies who are making a living off supplying you with the tools needed to deal with your dependency mess. In the name of security, we’re pushed to having dependencies and keeping them up to date, despite most of those dependencies being the primary source of security problems.
But there is a simpler path. You write code yourself. Sure, it’s more work up front, but once it’s written, it’s done.
Anyone at an AI company who stops to think for half a second should be able to recognize they have a vampiric relationship with the commons. While they rely on these repositories for their sustenance, their adversarial and disrespectful relationships with creators reduce the incentives for anyone to make their work publicly available going forward (freely licensed or otherwise). They drain resources from maintainers of those common repositories often without any compensation.
Even if AI companies don’t care about the benefit to the common good, it shouldn’t be hard for them to understand that by bleeding these projects dry, they are destroying their own food supply.
And yet many AI companies seem to give very little thought to this, seemingly looking only at the months in front of them rather than operating on years-long timescales. (Though perhaps anyone who has observed AI companies’ activities more generally will be unsurprised to see that they do not act as though they believe their businesses will be sustainable on the order of years.)
It would be very wise for these companies to immediately begin prioritizing the ongoing health of the commons, so that they do not wind up strangling their golden goose. It would also be very wise for the rest of us to not rely on AI companies to suddenly, miraculously come to their senses or develop a conscience en masse.
Instead, we must ensure that mechanisms are in place to force AI companies to engage with these repositories on their creators’ terms.
I like the look of this proposal that would allow authors to have more control over network priorities for third-party iframes—I’ve already documented how I had to use a third-party library to fix this problem on the Salter Cane site.
And by LLMS I mean: (L)ots of (L)ittle ht(M)l page(S).
I really like this approach: using separate pages instead of in-page interactions. I remember Simon talking about how well this works, and that was a few years back, before we had view transitions.
I build separate, small HTML pages for each “interaction” I want, then I let CSS transitions take over and I get something that feels better than its JS counterpart for way less work.
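Here’s a minimal sketch of how that can work today with cross-document view transitions (my own illustration, not code from the linked post; the filenames are made up, and both pages need to be same-origin):

```html
<!-- index.html: one of the little pages; detail.html carries the same rule -->
<!doctype html>
<html lang="en">
<head>
<style>
  /* Opt this page in to cross-document view transitions, so ordinary
     link navigations get animated in browsers that support them. */
  @view-transition {
    navigation: auto;
  }
</style>
</head>
<body>
  <h1>Home</h1>
  <!-- A plain link, no JavaScript; the browser animates the swap. -->
  <a href="detail.html">Read more</a>
</body>
</html>
```

Because the whole thing is declared in CSS, each page stays a regular HTML document: browsers without support simply navigate without the animation.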
Many of us got excited about technology because of the web, and are discovering, latterly, that it was always the web itself — rather than technology as a whole — that we were excited about. The web is a movement: more than a set of protocols, languages, and software, it was always about bringing about a social and cultural shift that removed traditional gatekeepers to publishing and being heard.
The web is open, apps are closed. The majority of web users have installed an ad blocker (which is also a privacy blocker). But no one installs an ad blocker for an app, because it’s a felony to distribute that tool, because you have to reverse-engineer the app to make it. An app is just a website wrapped in enough IP so that the company that made it can send you to prison if you dare to modify it so that it serves your interests rather than theirs.
The moment you run LLM-generated code, any hallucinated methods will be instantly obvious: you’ll get an error. You can fix that yourself or you can feed the error back into the LLM and watch it correct itself.
Compare this to hallucinations in regular prose, where you need a critical eye, strong intuitions and well-developed fact-checking skills to avoid sharing information that’s incorrect and directly harmful to your reputation.
With code you get a powerful form of fact checking for free. Run the code, see if it works.
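To make that concrete, here’s a trivial sketch (my illustration, not from the quoted post; the method name is deliberately made up):

```html
<script>
  const text = "hello world";
  try {
    // Plausible-sounding, but String has no such method:
    text.capitaliseWords();
  } catch (err) {
    // TypeError: text.capitaliseWords is not a function
    console.error(err.message);
    // From here you either fix the call yourself, or paste the
    // error straight back into the LLM and let it try again.
  }
</script>
```

The error message is the fact-check: it arrives the moment the code runs, no critical eye required.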
The tech bros advocating for generative AI to take over art are at the same level of cultural refinement as the characters in Severance. They’re creating apps to summarize books for people, tweeting from accounts with Greek statue profile pictures.
GenAI would automate Lumon’s cultural mission, allowing humans to sever themselves from the production of art and culture.
I see the personal website as being an antidote to the corporate, centralised web. Yeah, sure, it’s probably hosted on someone else’s computer – but it’s a piece of the web that belongs to you. If your host goes down, you can just move it somewhere else, because it’s just HTML.
Sure, it’s not going to fix democracy, or topple the online pillars of capitalism; but it’s making a political statement nonetheless. It says “I want to carve my own space on the web, away from the corporations”. I think this is a radical act. It was when I originally said this in 2022, and I mean it even more today.
You do not have to use generative AI.
AI itself cannot be held to account.
If you use AI, you are the one who is accountable for whatever you produce with it.
There are contexts in which it is immoral to use generative AI.
Correcting or fact-checking generative AI may take longer than just doing a task yourself, or with conventional AI tools.
You do not have to use generative AI.
My main problem with AI is not that it creates ugly, immoral, boring slop (which it does). Nor even that it disenfranchises artists and impoverishes workers (though it does that too).
No, my main problem with AI is that its current pitch to the public is suffused with so much unsubstantiated bullshit that I cannot banish from my thoughts the sight of a well-dressed man peddling a miraculous talking dog.
Also, trust:
They’ve also managed to muddy the waters of online information gathering to the point that even if we scrubbed every trace of those hallucinations from the internet – a likely impossible task – the resulting lack of trust could never quite be purged. Imagine, if you will, the release of a car which was not only dangerous and unusable in and of itself, but which made people think twice before ever entering any car again, by any manufacturer, so long as they lived. How certain were you, five years ago, that an odd ingredient in an online recipe was merely an idiosyncratic choice by a quirky, or incompetent, chef, rather than a fatal addition by a robot? How certain are you now?
Ah, this is wonderful! Matt takes us on the quarter-century journey of his brilliant blog (which chimes a lot with my own experience—my journal turns 25 next year)…
Slowly, slowly, the web was taken over by platforms. Your feeling of success is based on your platform’s algorithm, which may not have your interests at heart. Feeding your words to a platform is a vote for its values, whether you like it or not. And they roach-motel you by owning your audience, making you feel that it’s a good trade because you get “discovery.” (Though I know that chasing popularity is a fool’s dream.)
Writing a blog on your own site is a way to escape all of that. Plus your words build up over time. That’s unique. Nobody else values your words like you do.
Blogs are a backwater (the web itself is a backwater) but keeping one is a statement of how being online can work. Blogging as a kind of Amish performance of a better life.
I Feel Like I’m Going Insane
Everywhere you look, the media is telling you that OpenAI and their ilk are the future, that they’re building “advanced artificial intelligence” that can take “human-like actions,” but when you look at any of this shit for more than two seconds it’s abundantly clear that it absolutely isn’t and absolutely can’t.
Despite the hype, the marketing, the tens of thousands of media articles, the trillions of dollars in market capitalization, none of this feels real, or at least real enough to sustain this miserable, specious bubble.
We are in the midst of a group delusion — a consequence of an economy ruled by people that do not participate in labor of any kind outside of sending and receiving emails and going to lunches that last several hours — where the people with the money do not understand or care about human beings.
Their narrative is built on a mixture of hysteria, hype, and deeply cynical hope in the hearts of men that dream of automating away jobs that they would never, ever do themselves.
Generative AI is a financial, ecological and social time bomb, and I believe that it’s fundamentally damaging the relationship between the tech industry and society, while also shining a glaring, blinding light on the disconnection between the powerful and regular people. The fact that Sam Altman can ship such mediocre software and get more coverage and attention than every meaningful scientific breakthrough of the last five years combined is a sign that our society is sick, our media is broken, and that the tech industry thinks we’re all fucking morons.
You can still have a home. A place to hang up your jacket, or park your shoes. A place where you can breathe out. A place where you can hear yourself think critically. A place you might share with loved ones who you can give to, and receive from.
Now, more than ever, it’s critical to own your data. Really own it. Like, on your hard drive and hosted on your website.
Is taking control of your content less convenient? Yeah–of course. That’s how we got in this mess to begin with. It can be a downright pain in the ass. But it’s your pain in the ass. And that’s the point.
If you’re roughly 70% happy with a piece of writing you’ve produced, you should publish it.
Works for me!
You’re also expanding your ability to act in the presence of feelings of displeasure, worry and uncertainty, so that you can take more actions, and more ambitious actions, later on.
Crucially, you’ll also be creating a body of evidence to prove to yourself that when you move forward at 70%, the sky stubbornly fails to fall in. People don’t heap scorn on you or punish you.
This is absolutely wonderful!
There’s deep dives and then there’s Marcin’s deeeeeeep dives. Sit back and enjoy this wholesome detective work, all beautifully presented with lovely interactive elements.
This is what the web is for!
I’m not a fan of Nicholas Carr and his moral panics, but this is an excellent dive into some historical media theory.
What Innis saw is that some media are particularly good at transporting information across space, while others are particularly good at transporting it through time. Some are space-biased while others are time-biased. Each medium’s temporal or spatial emphasis stems from its material qualities. Time-biased media tend to be heavy and durable. They last a long time, but they are not easy to move around. Think of a gravestone carved out of granite or marble. Its message can remain legible for centuries, but only those who visit the cemetery are able to read it. Space-biased media tend to be lightweight and portable. They’re easy to carry, but they decay or degrade quickly. Think of a newspaper printed on cheap, thin stock. It can be distributed in the morning to a large, widely dispersed readership, but by evening it’s in the trash.
Want to use all those great features that have been landing in browsers over the past year or two? View transitions! Scroll-driven animations! So much more!
Well, your coding co-pilot is not going to be of any help.
Large language models, especially those on the scale of many of the most accessible, popular hosted options, take humongous datasets and long periods to train. By the time everything has been scraped and a dataset has been built, the set is on some level already obsolete. Then, before a model can reach the hands of consumers, time must be taken to train and evaluate it, and then even more to finally deploy it.
Once it has finally been released, it usually remains stagnant, its knowledge rarely updated. This creates an AI knowledge gap: a period between the present and the model’s training cutoff. The gap means there’s a lag between when a new technology emerges and when AI systems can effectively support its adoption; models won’t be able to help users asking about new technologies, which disincentivises their use.
So we get this instead:
I’ve anecdotally noticed that many AI tools have a ‘preference’ for React and Tailwind when asked to tackle a web-based task, or even to create any app involving an interface at all.