Andrew Marder

Steroids and Large Language Models


I was mesmerized by Barry Bonds. I remember him stepping up to the plate, wagging his bat, and then absolutely crushing the ball. Bonds was so dominant that intentionally walking him seemed like the only prudent decision. But here's the thing: he was cheating. He broke the rules of the game, using performance-enhancing drugs (PEDs) to gain an unfair advantage over the competition. Bonds was an exceptional athlete, and I wonder how well he would have done without PEDs. Bonds is certainly responsible for the choices he made, but I think we also have to hold the league accountable. If Major League Baseball doesn't enforce the rules and many players are using PEDs, do you really have a choice if you want to be the best? Large Language Models (LLMs) are steroids for knowledge workers (except there are no rules against them). In this post, I'll discuss some places where I've used LLMs, why I choose to use them, how that makes me feel, and what we can do about it.

Examples

Here are some things I’ve made that wouldn’t have happened without AI assistance:

The speed with which we can access knowledge through LLMs is amazing. They have helped me learn about technologies like Observable Framework, Vue.js, Firefox, and Kubernetes, and they've sped up the process enough that I've built things I otherwise wouldn't have made.

Why?

Although I am not the Barry Bonds of knowledge work, I care deeply about my craft and my livelihood. LLMs can automate a lot of knowledge work, and workers who use them can be more productive. We're seeing companies lay off workers because, with AI tools, they can accomplish their objectives with fewer humans. When I look out at the labor market, it looks like every knowledge worker is taking steroids, and if I don't take them too, I risk losing my job.

A mix of feelings

Building prototypes with AI is super fast. It’s been fun using AI to build things that I wouldn’t have built without AI. When I have a question now, I can ask an LLM and get a pretty comprehensive answer. It’s pretty cool. But, and this is a big butt, LLMs are giant plagiarism / bullshit machines. They’ve been trained on content without the consent of the content makers. They don’t cite their sources. They make shit up and blow smoke up your ass. They’re the creation of capitalists looking to reduce costs. Knowledge workers are expensive, cutting that cost could prove very profitable for corporations. By paying for and using AI tools, I worry that I am part of the problem.

What can we do?

I’m not sure. As an individual, I don’t feel like I have much of a choice. It feels like I have to use AI to stay competitive in the job market and continue to make a living. If there’s a tool that’s going to make me more productive, then I’m going to use it. Collectively, I think workers need to revisit the need for unions: humans are stronger together.
