The Threat and Limits of LLMs

In the last few years, or even months, it feels like AI is suddenly everywhere. These large language models (LLMs) are being incorporated into our lives and the platforms and applications we use so quickly that most of us don't know how to use them. It feels like we are in the hyper-growth phase of AI, led by OpenAI and other organizations that are scraping the data we share and post, from words to images, art, audio, and video, to create these generative AI systems.

That's fucking scary as hell. How many people are going to be at risk, from office workers to increasingly complex roles, over the coming years as these LLMs improve? The goal of corporations will be to replace their people with these systems, lowering their head count and putting that money into the pockets of their investors. Is that a world we want to live in?

That is not happening anytime soon, but it hovers like a black cloud for many people, because the pace of improvement in these tools feels like months, if not weeks, instead of years. While these are simply tools, they are powerful ones that can use the data online as a basis to create images, articles, and who knows what else for the user.

LLMs scour the internet and can recall the information you ask for in a natural way, in moments. Even the scientists developing these models don't have a complete understanding of how they work. That is concerning when you consider how these LLMs are already being used by the public, and even more concerning when you consider how much more we could rely on them in the future.

As LLMs use our own content to train, what happens when they start training on LLM-generated data, or when they are asked for facts and are fed disinformation instead? How often are these LLMs updated, and what constraints are placed on these systems? People aren't known for sticking to the rules, and they are very self-interested. I worry about how these systems will be abused. Right now it is a wild west with little oversight, and that will harm the masses, while the elites, who already control way too much, seek to control more and more, and the masses of humanity are the ones who suffer. That is something we all need to be concerned about.

If machines become what we rely upon, then what hope is there for us as a whole? People don't live online; that isn't the real world, and while the digital is important, I don't want to live in a world where it is the primary means of communication. That feels like a sad place. I hope most people feel similarly, but who knows what will happen in the future. That's my own fear and worry. I am not writing this for SEO, just for myself; even if no one listens, it is important to me to put this out there for however long I have.

If we don't know how these LLMs work, how can we rely upon them objectively? Can these systems even be objective? What are their biases, given that they use our own words and content to define what they are, and what does that even mean? There are too many questions and not enough answers or transparency to trust these systems without understanding how they work, let alone the power they consume to function and what that means for our environment. I don't trust OpenAI, and neither should anyone else.

References:

"A jargon-free explanation of how AI large language models work." Ars Technica.