ena.rocks


Users don't want AI, they want software that works

    2025-03-10 18:51

I wrote this almost 2 years ago, angry after a conference in a hotel room. I’ve cleaned it up a bit and added some recent events, but it remains largely unchanged.

 

When I was in university studying computer science, all anyone wanted to talk about was ML. I wasn’t that interested, but I noticed it more and more from students, recruiters, and job listings. ML was going to change and improve everything about the world, drastically. I don’t think I ever heard anyone propose an actual plan for that; the technology was still progressing. But of course it was going to be the next big thing, and every company and university had to hop on it!

Except ML is hard, finding good training data is hard, and more than that, avoiding bias is hard. Google once released photo recognition software in Google Photos that tagged Black people as gorillas. The only way they “fixed” it was by banning the gorilla label entirely [x] [x] [x]. That sounds like it really improved our lives (sarcasm emphasized).

I once told a manager during an interview that I wasn’t interested in ML, and they replied, “wow, that’s refreshing.” All anyone wanted to do was ML, but ML can’t really exist by itself. ML is a means of helping achieve a goal; it can’t be the entire solution.

 

It’s been 5 years since I graduated, and I fail to see any way that ML has resulted in something that meaningfully improves my life. What aspect of the technology I actively use is improved by ML?

My searches? No, not really. I now append “Reddit” to most of the things I search, to find discussions from real people instead of content farms, or the long life stories stuffed before every recipe to please the algorithm.

My media? Spotify won’t stop playing the same song I hate, no matter how many times I skip it.

Connections? Facebook and Instagram now only show me posts by celebrities or famous pages I don’t care about. When I want to see a notification, clicking on it takes me to a random point in the page. I no longer see posts from the people I care about; instead I’m constantly advertised to, by both the platform and its users.

Where to eat? I can still never decide on the best nearby restaurant, because the results change with every zoom level and pan, making it impossible to track what is where. And it doesn’t matter to me that two restaurants are both 5km away when one is in a convenient transit location and the other is on the far side of a highway overpass that requires a car.

There’s no regard for correctness, or for user input, in deciding what we see. Instead, the algorithm decides. There’s no way to filter for concise results, and most of the internet doesn’t support “not” searches well. There’s no way to ignore or dislike songs in Spotify, no way to see what comment my friend made on a post when they tagged me, no way to tell Instagram I want to see posts from my family instead of what it thinks I want, no way to tell Google Maps I only care about restaurants on the one street with good transit near me.

 

And now we’re in the era of “AI”. How convenient that we have a new two-letter acronym, basically just ML in a fancy hat, into which we can throw tons of money and resources for very little actual gain. None of these companies has even found a way to make their AIs as profitable as the resources they’re sinking into them; the only return is the brand recognition they’d lose by not keeping up.

 

To fix the “reddit” search problem, Google attempted to train its search response AI on Reddit, and it produced tons of dangerous nonsense, because it fundamentally failed to tell which kinds of searches should be “enhanced” with Reddit and why. Users append “reddit” to queries when they’re looking for opinions or reviews from genuine people, not when looking for recommendations like “should I eat rocks”.

The real solution would be for Google to invest in improving their search algorithm so it stops favouring content farms and returns better, more diverse results. Imagine if a recipe didn’t need multiple paragraphs about the poster’s entire life story to be ranked highly. Google could at least attempt to fix this; instead, they chose to create even more nonsense.

AI is always risky. It’s a randomness simulator that will always risk sharing inappropriate or incorrect information, especially with the wrong audience. We’re a long way from being able to actually trust the image or text returned by an AI. Should we ever, when the data used will always be biased, and the engineers will always carry bias too? Humans are biased as well, but their bias isn’t hidden behind an aggregation of billions of data points. It’s much easier to analyze one person’s conflict of interest than an amalgamation of all the data on the internet.

Sure, an AI can be fun to chat with, and a lot of people I know enjoy using AI-generated images to make memes with friends. Not that any of them would ever pay for that, or turn off their ad blockers enough to make it worth it… Especially not when AI is intellectual property theft, and no one would ever pay enough to properly compensate the people whose art is used in these generators. Instead, AI copies and reproduces their art with no credit or compensation. (Sarah Andersen wrote a great article on this.)

 

What users actually want is for technology to work for them when they use it. They want good, seamless, easy-to-use software that improves their lives. Very little of the work we do serves this goal… or at the least, very little of the work I do serves this goal.

When I go to the doctor, I want to be listened to and not have my concerns dismissed. I don’t want an AI to predict my condition, or an AI nurse to perform my patient intake, because averages aren’t what matters in individualized patient care. Instead, it’ll probably just give the doctor one more reason to send me home with a temporary fix or a useless prescription, as usual.

When I contact my internet provider, I don’t want an AI to respond with things I’ve already tried, requiring me to type nonsense into the chat window before I finally get connected with a real person.

When I want a new recipe, I don’t want to read pages and pages of generated preamble written to please the content algorithms. Or to use yet more algorithms to parse the generated preamble for me.

When I look at an image, I want the people whose work contributed to it to receive proper credit and compensation.

When I wake up in the morning, I want to be excited to do my job. I want to feel like I am actually doing something meaningful.

 

The user wants software that works, and the service and support they deserve for the money and attention they spend. If this means paying people to do the work we’re trying to replace with an AI, it’s worth it. If this means executives’ pockets are lined with less money and regular people can actually afford to live, it’s worth it.

Investing in delivering quality to real people, and respecting them, will always be worth it.