&udm=14 | the disenshittification Konami code

Another way to get Google results without the slop.
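As a rough illustration of what the title refers to: `udm=14` is a query-string parameter that restricts Google to the plain "Web" results tab, skipping AI Overviews and other modules. A minimal sketch of building such a URL (the search term here is made up):

```python
from urllib.parse import urlencode

# udm=14 selects Google's plain "Web" results view,
# bypassing AI Overviews and similar result modules.
params = urlencode({"q": "disenshittification", "udm": "14"})
url = f"https://www.google.com/search?{params}"
print(url)
```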


In the way

This sums up my experience of companies injecting AI into the products I use to communicate with other people. It’s always just in the way, making stupid suggestions.

“Wait, not like that”: Free and open access in the age of generative AI

Anyone at an AI company who stops to think for half a second should be able to recognize they have a vampiric relationship with the commons. While they rely on these repositories for their sustenance, their adversarial and disrespectful relationships with creators reduce the incentives for anyone to make their work publicly available going forward (freely licensed or otherwise). They drain resources from maintainers of those common repositories often without any compensation.

Even if AI companies don’t care about the benefit to the common good, it shouldn’t be hard for them to understand that by bleeding these projects dry, they are destroying their own food supply.

And yet many AI companies seem to give very little thought to this, seemingly looking only at the months in front of them rather than operating on years-long timescales. (Though perhaps anyone who has observed AI companies’ activities more generally will be unsurprised to see that they do not act as though they believe their businesses will be sustainable on the order of years.)

It would be very wise for these companies to immediately begin prioritizing the ongoing health of the commons, so that they do not wind up strangling their golden goose. It would also be very wise for the rest of us to not rely on AI companies to suddenly, miraculously come to their senses or develop a conscience en masse.

Instead, we must ensure that mechanisms are in place to force AI companies to engage with these repositories on their creators’ terms.

Hallucinations in code are the least dangerous form of LLM mistakes

The moment you run LLM-generated code, any hallucinated methods will be instantly obvious: you’ll get an error. You can fix that yourself, or you can feed the error back into the LLM and watch it correct itself.

Compare this to hallucinations in regular prose, where you need a critical eye, strong intuitions, and well-developed fact-checking skills to avoid sharing information that’s incorrect and directly harmful to your reputation.

With code you get a powerful form of fact checking for free. Run the code, see if it works.
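A toy sketch of the point above (the method name is deliberately invented, the kind of thing an LLM might hallucinate):

```python
# A hallucinated method fails the moment you run it -- unlike a
# factual error in prose, which can slip past a casual reading.
text = "hello world"
try:
    text.capitalize_words()  # hypothetical method; str has no such attribute
except AttributeError as err:
    print(f"caught immediately: {err}")
```

The error message itself is then something you can fix by hand or feed straight back to the model.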

Severance Is the Future Tech Bros Want - Reactor

The tech bros advocating for generative AI to take over art are at the same level of cultural refinement as the characters in Severance. They’re creating apps to summarize books for people, tweeting from accounts with Greek statue profile pictures.

GenAI would automate Lumon’s cultural mission, allowing humans to sever themselves from the production of art and culture.

Generative AI use and human agency

You do not have to use generative AI.

AI itself cannot be held to account.

If you use AI, you are the one who is accountable for whatever you produce with it.

There are contexts in which it is immoral to use generative AI.

Correcting or fact checking generative AI may take longer than just doing a task yourself, or with conventional AI tools.

You do not have to use generative AI.

Related posts

Reason

Please read Miriam’s latest blog post.

Changing

I’m trying to be open to changing my mind when presented with new evidence.

The meaning of “AI”

Naming things is hard, and sometimes harmful.

Unsaid

I listened to a day of talks on AI at UX Brighton, and I came away disappointed by what wasn’t mentioned.

Mismatch

It’s almost as though humans prefer to use post-hoc justifications rather than being rational actors.