Warning: This is written for the layperson when it comes to the tech industry. If you just want to see the car metaphor, you can hop down to the “What We Can Learn from Car Culture for AI” section.

Duolingo’s Duo character is going through some stuff. For a while, he was pronounced dead, which coincided with an art change: Duo went from a one-dimensional, friendly-if-pestering owl to a kind of surreal, sometimes monstrous, eldritch horror bird. The app features a multiverse of Duos in reward animations: sometimes a weird horse that turns coquettishly toward you, a bird that bursts through its skin suit to reveal a smaller bird with anime eyes, or a screaming bird with an exploding brain that is also screaming. Duo wants to reward me for knowing common elision in Italian1 by animating an array of pre-bedtime pocket nightmares.

These are uncanny versions of the previous Duo, and they fit the direction of the company as proselytized by Duolingo's CEO in a letter detailing how Duolingo is going to lay off its contractors and lean into lessons created by generative AI2. These are the same lessons that already seem like a robot’s probabilistic understanding of what’s educational, given their relevance to the casual language learner. Like, why am I learning how to say, “My cat wants to come” in Italian or “Everyone has to die” in Spanish?

The letter also represents a sort of existential crunch software engineers might feel nowadays. Not because they feel like AI is going to take their jobs; that’s still maybe years off. The current crisis is that they have no choice about whether to incorporate AI at all.

I know I mention car infrastructure in the headline here and, I promise, I’m getting to it. What I want to do first is outline this forced de-principling.

Vocal Dissent is Loud and Isn’t Wrong

If your social media feed is anything like mine, it is rife with people who have seen what AI has to offer and have been left wanting. “AI slop” and “hallucinations” quickly entered the popular vernacular due to the prevalence of AI-produced products and their something’s-off quality. Uncanny videos, gross images, and completely fabricated answers pervade basic search queries and mindless internet toys. So while people fiddle around with a poor product, they also learn about the consequences of AI, which boil down to three big gripes:

  1. AI takes advantage of anyone who has ever produced anything. The passion and livelihoods of artists, scientists, content creators, and anyone who has ever been on the internet were fed, and continually are fed, to the endless void of AI, and they must be stopped.
  2. AI is super bad for the environment. The computation needed to produce results from a person’s plain-language question requires an astronomical amount of power to generate responses and to cool the computers accepting the requests, using energy that is still predominantly planet-killing, so they must be stopped.
  3. AI benefits the current super-rich tech ruling class. Call it oligarchy, fascism, or rampant capitalist authoritarianism, the point is that the shift to using AI technology globally, even if it’s not directly monetized yet, mostly benefits the rich tech bros at the top and keeps money flowing into their bank accounts (you use AI for things, people sell the fact that you’re using AI for things, rich people keep giving them money to make more AI things, loop). They must be stopped.

These issues are generally problems of scale.

Content Licensing. The datasets for LLMs are so large that they encompass nearly all content, modern and historical, and that creates lots of problems. If these ethical dilemmas had been solved iteratively, as companies ran into them, things might have gone better during AI’s hypergrowth. Instead, because AI offered a passable product, the datasets grew enormous before anyone considered the implications. Also, many of the companies planning to roll out these technologies got rid of their ethics departments pretty early on. Ppsh. Road blocks.

Energy Consumption. Even though AI has improved into something more than a machine randomly showing answers until you think one is right, it’s still not really “thinking.” Using an LLM means tokenizing your sentence into pieces and matching those pieces against patterns in its training data, so there’s still an element of brute force: the model scores all the available options before choosing the next word that is most likely to make sense and/or to be correct. That amount of calculation, even on a small dataset or with a small number of users, would take a decent chunk of energy. But sliding it into every search query, opening tools for everyone to use, and creating cottage industries based on AI meant that it grew massively in a short time with no consideration for how to sustain it. Sorry, I meant to say that we're spinning up nuclear power plants and Microsoft, I guess, is getting into the “hide the spent uranium rods” business.
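To make the “score everything, then choose” idea concrete, here’s a toy sketch. This is purely illustrative: real LLMs compute scores with a neural network over vocabularies of tens of thousands of tokens (which is where the compute and energy cost lives), and the words and probabilities below are invented for the example.

```python
# Hypothetical scores for the word that follows "My favorite pet is my ..."
# (made-up numbers -- a real model produces these with a neural network).
next_token_scores = {
    "cat": 0.62,
    "dog": 0.31,
    "toaster": 0.07,
}

def pick_next_token(scores):
    # Greedy decoding: every candidate gets scored, then we take the
    # single most probable one. The "score every candidate" step is
    # the brute-force part described above.
    return max(scores, key=scores.get)

print(pick_next_token(next_token_scores))  # prints "cat"
```

Multiply that lookup across every token of every response for hundreds of millions of users and the scale problem becomes visible even from a toy.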

Consolidated Power. The economy is information-based at the moment, and AI is a product of that information-based economy, which means American capitalism most benefits the people leading the information-based industry. The scale at which the big AI products launched required the kinds of resources and capital only the Silicon Valley ecosystem could muster. So they own it and can muscle you out of it.

This is to say the technology itself isn’t inherently bad, but the way it is being wielded causes a lot of concern. This is especially true as its unproven possibilities get sold to the tech-building industry as a way to expedite the delivery of software.

The Programmers’ Dilemma

I started using Cursor and my eyes opened to the agony and the ecstasy3 of what generative AI meant for my job and career. An agent was not only able to answer my questions, with the context of my code, for problems I was bored of researching (“Which dependency is causing these errors and how do I fix it?”) but could write passable code for them (“passable” is key), then insert its solution into my code for me. In fact, I could wipe out hours of my time writing boilerplate code with a well-crafted sentence. This is the part where people assume AI is going to take over engineers’ jobs.

Non-engineers might see “vibe coding” (coding by prompts and well-crafted sentences rather than knowing code) as how applications will be built, so they ask, “What’s the point of knowing code?” The thing is that, with every innovation, there are still people who need to know the underlying infrastructure – for instance, people learn to use React, a handy and feature-rich framework, but should still know how JavaScript works for when things go wrong. Also, to no one’s surprise, AI code is not perfect. By a long shot. It produces the code it thinks is most likely based on its training, but it stumbles on your specific context, on anything genuinely new, on flaws in its dataset, and so on. Engineers who have used AI extensively will know the pain of an AI trying to solve 100 problems that aren’t your problem, then addressing your actual problem with hacks and brittle code.

Think about the many questions you’ve seen asked on Reddit, for example, and all the responses to those questions: right, wrong, and sarcastic. You know which answers are real and which are not because you have lived experience. A machine, which is not alive (yet?), has to decide what is right. That machine, built by flawed humans, is going to make mistakes.

This is why agent management is probably the way forward, with engineers overseeing the AI output and fixing the inevitable errors, even as AI improves or we find new ways to give it the right context (MCP, et al).

This is where the dilemma comes in, that dissonance between what is possible and what is reality. AI and its possibilities introduced a new era of execution to an industry looking for a reason to sidle its way out of the post-pandemic worker focus. 2020–2021 saw a boom in engineering salaries and benefits and, though big tech companies have been self-correcting over the past couple of years (eliminating engineering management layers, adjusting salary bands, layoffs), AI creates an opportunity to do more with less and new ways to shape an organization. Identify low performers, drop them along with 25% of your staff, give your top engineers AI agents, and demand more production. Instead of letting engineers use the agents to have a better quality of life, the agents are being used to decrease headcount and increase demand for output4.

If you are an engineer that has ethical problems with AI, you are living in a bad time to be an engineer.

At the moment, AI is an expectation for engineers in the higher tiers of technology companies. If you can’t at least converse about how to use AI, or build an MCP server, or automate your tasks using LLMs, you might be left behind. And if your principles give you pause about adopting AI into your workstream – you know, if you don’t want to kill the planet or line the pockets of existing billionaires with the hard work of poorer artists – letters like those from the Duolingo CEO are inviting you to train for another vocation.

They are saying to you, “Either you buy into this technology or you can get out of the way.”

Infrastructure of the Automobile

I am carless. I have always been carless5. My perspective on cars and car infrastructure is informed by the fact that I view cars on a spectrum from fun leisure tools (love a road trip) to hulking, speeding metal death machines. So I will try to be sober about this metaphor, but know that I’m starting from a place where I think the car as a quotidian device, even if the option of a car seems banal in its obviousness, is at times reckless, risky, and/or a waste of our resources.

I say this not only because cars themselves (even electric ones) are future blights6 but because the infrastructure built to accommodate cars carves up the earth haphazardly, inefficiently, and dangerously. I have lived in Atlanta and Los Angeles, two cities known for their intractable traffic, both places where the population throws up its hands (with, positively, a growing number of exceptions) about how inevitable being stuck on the highway is. But it wasn’t always this way. Our cities weren’t always composed of high-capacity interstates and surface lots.

We made a lot of concessions so that people could use cars more readily. That was partly by capitalistic machinations and partly by public choice (I’m using a 2021 column by Patt Morrison from the Los Angeles Times to cite these claims, but there are also dozens of books on the subject). People in Los Angeles were buying up cars and riding the street cars less, which made the street cars bad places to ride, which made people want to ride them even less (the current state of affairs for the LA Metro system). Money in the 1940s and 1950s started to pour into big roads for cars. And then National City Lines, backed by car and tire interests – this is the part that stokes the conspiracy of how GM killed public transportation across the country – eventually ran the street car company into the ground while tearing the tracks out of the street. We decided collectively that cars were the future and we were going to embrace it. So we tore up and split apart the neighborhoods of people who didn’t have representation while also literally sidelining the pedestrian. And then we created cultural capital around the car so that people saw cars as a rite of passage and a responsibility.

But we failed to really foresee the consequences of cars being the future. We forgot that we are all also, once the car stops at least, pedestrians. We built massive structures to hold cars, and city councils created development laws that insisted on dedicating large parcels of expensive land to storing people’s private property. We made large roads that span 18 lanes and are legally and physically uncrossable on foot (with plans to expand because, somehow, the last expansion didn’t solve the traffic problem 🤷). We had to build all this infrastructure and reshape society in order to adapt to people spending hundreds to thousands of dollars per month on a car just so they can get around the place they live.

Buses hold more people than cars. So do trains. We already had infrastructure to transport the masses publicly. But we surrendered it because of convenience, demands by people who could afford that convenience, and probably status a little bit.

Now, I am a public transit supporter and evangelist. I know how to get around town using buses, trains, and microtransit. LA’s transit is not nearly as bad as you’ve heard. Some might even be surprised that it exists at all. But there are days when I look at a route from my neighborhood near Venice to some restaurant I want to try in, say, Echo Park7, and see that the travel time is 1 hour, 21 minutes by transit and 24 minutes by car (and that’s in the evening – it’s 40–75 minutes during rush hour according to Google Maps), and I feel it.

No one realized how much land we would surrender. No one realized how much cars could contribute to pollution, both actively and after they are retired. No one realized how hot everything would become. No one predicted the havoc that cars would cause. People suspected the old way had no wisdom. Either that or no one cared.

I feel the pressure of a city’s bet on cars decades ago bidding me to get out of the way.

What We Can Learn from Car Culture for AI

So what does all this have to do with AI? I’m not here to stand athwart progress and yell stop. To me, the parallels are these:

  • As new technologies, they were fundamentally revolutionary even if, in their nascent stages, they had the capacity to be crude and dangerous (check out Detroit's driving culture in the early 20th century and the people who ruin their lives thinking they are talking to supernatural beings through ChatGPT). The technology isn’t bad. But being skeptical during its implementation is good.
  • We are bullish on incorporating them into our lives without necessarily understanding the complications. Understand scale and future implications.
  • Capitalists that hopped on early are forcing people to make a change and we’re letting them dictate what is the common good. Listening to the private sector as experts is fine but understand that moderation isn’t usually in the interest of making money.
  • Pandora’s box has been opened and there’s no shoving the technology back in the box (or making it smaller to test). Understand that things are growing and changing fast and make sure there is room to navigate even if this technology may not be the future.
  • There are vast ecological consequences that will only worsen as the technology scales. See the part about scaling.
  • Giant corporations have a vested interest in everyone collectively buying into their play. We are letting them tell us that the old ways have no wisdom. This is Barney Stinson logic and that should be a red flag.

Most of these are prescriptive and currently impossible. Like I said, you can’t put it back in the box. Instead, some engineers are being forced into a decision: either tamp down those principles or find something else to do. I offer no solution here as I try to navigate the same environment. But I think it’s worth pointing out that as technology evolves, we need to understand that the people who came before the technology were not idiots. And capitalists, typically, are not long-term-thinking geniuses. Let’s be careful. Let’s be skeptical. And I hope we enter our “seat belts” era for AI.

Also: please include a “non-messiah” prompt in your Cursor rules. It seems important.


  1. In case you want a reminder of elision: lots of Italian words end in vowels and sometimes, if the next word begins with a vowel, Italians may elide the word to make it more phonetically pleasing (“there was” = “ci era (chee eh-rah)” = “c’era (cherah)”). We do the same thing in English with contractions, but that’s not why Duolingo rewards me with Duo’s more ghoulish looks. ↩︎

  2. I’m going to use the colloquial sense of AI here and not the scientific one. As I’ll talk about later, these LLMs and chat bots aren’t really “intelligent” so much as they “guess real good.” Also note that I sound really harsh on AI’s vibe, but I’m (mostly) not sour on the technology itself. See “The Programmers' Dilemma.” ↩︎

  3. I will find a way to make a White Zombie reference whenever I can. ↩︎

  4. Do I have evidence for this? Nothing concrete. This is why people like Gergely Orosz have popular, well-respected newsletters that pay their bills and I have been invited to guest on zero (0) engineering podcasts. But, come on. This doesn’t seem like an outlandish fantasy. ↩︎

  5. I’ve never had a driver’s license and I have never owned a car. I have dated people with cars, so I don’t know if you want to catch me on that technicality. Also, in my late teens I did drive a car that belonged to my girlfriend at the time, for occasional trips across short distances. Without a license. Without insurance. Don’t do this. ↩︎

  6. What happens to every new car when no one wants it anymore? How much energy is used to recycle car parts that didn’t have to exist? As urban environments sprawl with population growth and weak stewardship in civil engineering, what are the consequences of car culture? ↩︎

  7. Quarter Sheets , I will get to you some day. ↩︎