ChatGPT as a search engine: why it's appealing (and ultimately useless)
With Google becoming increasingly difficult to use, many are turning to AI, and I'm wondering if there's merit to that idea.
I don’t know if you’ve heard it before, but I have a million times: Google just doesn’t fucking work anymore.
Google used to be pretty useful for getting answers to various questions. It used to be good for research, for finding recipes, etc. It used to be a good search engine.
But now? It doesn’t fucking work. Unless you put “Reddit” at the end of all of your queries, you get a bunch of SEO-covered garbage that takes ages to get to any actual point. When researching, it can be extremely difficult to find reliable sources that actually pertain to the subject you’re trying to learn about.
When you consider all of this, it’s no wonder that so many people have turned to selling their souls to the devil: using ChatGPT as a search engine.
Personally, I’d yet to try this. It’s bad for the environment, it hallucinates, and it’s only able to give surface-level information.
But recently I tried it as part of a different idea I had that involved testing ChatGPT’s capabilities, and I hate to say it, but it worked that time. Admittedly, it was a super easy question (how to tell if hot dogs are bad), but it still got me thinking.
Could AI be used for this?
ChatGPT as a search engine
I decided to investigate by simply testing it. At first, I was impressed.
The first thing I did was ask it where to buy oatmeal bath in store, since I’d googled that while trying to find it for my grandma without getting very good results. The text it generated wasn’t very useful, but it did something that wasn’t there the last time I used it: it actually searched the web and gave me links to websites to look at.
This intrigued me. Having it find sources more easily (the exact problem I currently have with Google) seemed like an idea worth exploring, so I decided to look into it more.
I asked it to find me resources about queer German cinema, the topic of a research paper I did last semester¹, and it gave me some good results. Some of the articles it recommended were ones I’d cited in the paper, and it also recommended a book I hadn’t found when I did my research.
I tried to think of problems with this specific use of AI — is there a problem with using it to find sources for information? Obviously people should not be using the text it generates when researching a topic, but what about that source list?
Well, here’s a specific problem with ChatGPT that I ran into.
After that I asked about the recent NYC mayoral primary, and it gave decent information, but I caught myself doing something: I was reading the text it generated. Now, it didn’t give me any inaccurate info, but what if it had? Those two initial searches had coaxed me into relying on the generated text, and that’s exactly where things like hallucinations, bias, and overly surface-level info live.
On other searches I did after that, I also ran into the problem of not being able to figure out where certain parts of the generated text were coming from. Even if you look at the linked sources, it’s seemingly summarizing A LOT of them at once: for that mayoral election search, there was one sentence that cited 24 different websites. How are you supposed to know where that sentence came from? In a research paper, for example, I’m usually citing a maximum of 2 or 3 sources for one point, and that’s only when I’m talking about a fact I saw in multiple sources. There are also a lot of instances of me specifically quoting someone else and then crediting them; ChatGPT just confidently says whatever and then puts a link at the end.
In terms of accuracy, ChatGPT is still problematic, it turns out, according to an article from The Verge that ChatGPT itself gave me when I asked it about this problem. Per that article, studies have found that the kind of summaries I’ve been describing are frequently inaccurate, despite some very confident language from the LLM.
Another HUGE problem I have with ChatGPT is that, even if you’re purely using it to find sources to read further, or only using it for simple questions, OpenAI fully supports and encourages other uses of AI that are completely unethical: image generation, creative writing, doing your research for you, etc. All things I’m heavily against, and it’s all right there in the sidebar.
But what if it wasn’t ChatGPT?
The problem with Google still stands, and even when Google was good, there were always limitations when searching for something niche or specific. Search engines simply match your keywords against various web pages.
But an AI could, hypothetically, understand what you’re saying and give you links to results that match your intentions fully. Maybe it could filter the ads out for you, maybe it could find images that match a specific description (already existing ones, not AI-generated), maybe it could automatically organize a bunch of opinion pieces into different sides of a certain debate.
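To make that distinction concrete, here’s a toy sketch (in Python) of the difference between the two approaches. To be clear, this is not how Google or ChatGPT actually work under the hood: the pages and query are made up, and the semantic half uses the open-source sentence-transformers library purely as a stand-in for whatever embedding model a real AI search engine would use.

```python
# Toy contrast between keyword search and semantic ("intent") search.
# Pages and query are made up; sentence-transformers is a stand-in for
# whatever embedding model a real AI search engine would actually use.
from sentence_transformers import SentenceTransformer, util

pages = [
    "Oatmeal bath bombs: fun weekend crafts for kids",
    "Colloidal soak products that soothe eczema and irritated skin",
]
query = "oatmeal bath for itchy skin"

# 1) Keyword matching: score a page by how many query words it shares.
#    The crafts page scores high just because it literally contains
#    "oatmeal bath", regardless of what you actually meant.
def keyword_score(query: str, page: str) -> int:
    return len(set(query.lower().split()) & set(page.lower().split()))

for page in pages:
    print(keyword_score(query, page), page)

# 2) Semantic matching: embed the query and pages as vectors, then rank
#    by cosine similarity. Because "itchy skin" and "eczema / irritated
#    skin" land near each other in embedding space, the relevant page
#    can rank first even without sharing exact words.
model = SentenceTransformer("all-MiniLM-L6-v2")
query_emb = model.encode(query, convert_to_tensor=True)
page_embs = model.encode(pages, convert_to_tensor=True)
similarities = util.cos_sim(query_emb, page_embs)[0]

for score, page in zip(similarities.tolist(), pages):
    print(round(score, 3), page)
```

Again, this is just to illustrate the mechanism; real systems layer ranking signals, ad auctions, and who knows what else on top of both approaches.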
A search engine that uses AI technology in an ethical and reliable way could have some potential. And I really can’t think of any concerns with this hypothetical other than the environmental ones, and maybe with better technology we could avoid those someday.
But I also know that that beautiful search engine will probably never exist, on account of it actually being good and helpful.
Enshittification is inevitable.
When I was looking up stuff about that ChatGPT web search feature, I saw one Reddit comment warning an excited ChatGPT user that Google also used to be good (or, in ChatGPT’s case, “good”): enshittification is inevitable.
For those unaware, “enshittification” is a recent term (coined by the writer Cory Doctorow) for the process by which helpful or fun technology, such as apps, social media sites, and search engines, slowly gets worse and worse due to profit incentives. The little indie app or website grows and grows, getting better at first, but then it becomes a huge company (or gets bought out by one) that wants to eliminate competition. Once it succeeds, it has no reason to balance profitability with user experience, because it now has a monopoly over the market.
This is exactly what happened to Google. It started out as the best search engine out there, but now that it has an insane monopoly over tech, all Google cares about is profit: making sure you spend as much time looking at ads as possible, which means making the search engine fucking unusable and filled with SEO crap.
OpenAI is never going to make anything like what I described, because they don’t want to be helpful. They want to make sure you’re as reliant on the stupid robot as possible.
Currently, there is seemingly no way for any interesting tech to escape this fate. As long as capitalism works the way it does, companies are doomed to become monopolistic capitalist villains.
Personally, I think the answer is stricter regulation of corporations that benefits the public: antitrust enforcement, encouragement of unions, and rules that prevent consumers from getting fucked over; overall, Democratic Socialist policies. Other people say the only solution is a revolution followed by Anarcho-Communism. I don’t feel like arguing about that right now, so I’ll just say that something needs to be done about our current system.
We need to exist in a world where people benefit from helping others, in this case through creating good and useful products that aren’t designed purely to generate as much money as possible.
¹ To clarify: I ALREADY DID THIS RESEARCH PAPER. I did not use AI at any point in the process of originally writing the paper.