cross-posted from: https://lemmy.zip/post/64009035
cross-posted from: https://lemmy.zip/post/64007684
Introduction
The current socio-political discourse is dominated by a new divisive issue: "AI", so-called Artificial Intelligence. While some are vehemently opposed to the idea of AI infiltrating ever more aspects of life, others are convinced of its revolutionary, transformative power. The use of AI in our project, The Brotherhood, has also come into question, and this essay will attempt to lay out my^[not everyone working on the project, just my own] perspective on it.
What Even Is "AI"
What is typically referred to as "AI" is, in the more technical corners, known as LLMs, or Large Language Models. They are a new innovation^[still pretty old, from around 2017-18] in a long line of automation technology going back to the mid-20th century, not long after the computer itself started to become a thing of utmost usefulness.
The long journey of automation
Actually, the computer itself can be seen as the first innovation in this line of automation. After all, the computer is a literal automatic computation^[and much more, of course!] machine, one that uses some carefully arranged silicon and phosphorus to manipulate electron flows and deterministically execute rigorously defined steps.
Taking this further and further was always an ambition of early computer scientists, and as speed and size became accessible, effort went into closer integration with humans. This was no trivial task: the computer and the human spoke two different languages that might as well have been from different universes. From punch cards, where programmers painstakingly "wrote" binary onto literal cards, to Fortran, to higher-level programming languages, to operating systems, to GUIs and applications, we have made tools for our tools, for our tools, in a seemingly endless recursion.
One of the biggest areas programmers got interested in, in the very late 20th century, was natural language processing, to further bridge the "language gap". This is what enabled the early internet, through search engines. Now, these tools fundamentally differ in structure from previous ones: they are not deterministic, because language itself is not deterministic. So they relied on various statistical techniques like N-grams, Markov models, Bayesian inference, etc.

The Parallel Research on Neural Networks
Around the same time, with the advent of neuroscience^[which replaced the earlier psychological models of Freud, Jung and Lacan, which were indeed not suited for STEM fields], another curious line of research began with the perceptron.
Very much influenced by early neuroscience, it slowly split from its initial inspiration and drifted towards statistical science, rather than trying to follow the exact structure of brains. This field, too, went through its own series of innovations: multi-layer neural networks, backpropagation, Hopfield networks, CNNs, LSTMs, etc.
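To see just how simple the starting point of that whole lineage was, here is a toy sketch (my own, purely illustrative) of the original perceptron: a weighted sum, a threshold, and an error-driven update rule, here learning the logical AND function.

```python
# A minimal perceptron: a weighted sum, a hard threshold, and
# Rosenblatt's simple error-driven weight update.

def predict(weights, bias, inputs):
    # Fire (1) if the weighted sum crosses the threshold, else 0.
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if s > 0 else 0

def train(samples, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            error = target - predict(weights, bias, inputs)
            # Nudge each weight in the direction that reduces the error.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Learn logical AND, a linearly separable function.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train(data)
print([predict(w, b, x) for x, _ in data])  # -> [0, 0, 0, 1]
```

Notably, there is no biology left in this at all: it is pure arithmetic and statistics, which is exactly the drift described above.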
But two innovations were critical for the explosion of interest in this very niche field:
- Deep neural networks, which made use of the newly popular GPUs, back in the early 2010s
- Transformers, the topic of a now-legendary 2017 paper titled "Attention Is All You Need"
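For the curious, the core of the transformer, scaled dot-product attention, is surprisingly compact. A plain-Python sketch of the formula from that paper (softmax(QK^T / sqrt(d)) V), written by me for illustration and not taken from any particular library:

```python
import math

def softmax(xs):
    # Subtract the max before exponentiating, for numerical stability.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of plain vectors.

    Each query is scored against every key; the softmaxed scores
    then mix the value vectors into one output per query.
    """
    d = len(keys[0])
    outputs = []
    for q in queries:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in keys]
        weights = softmax(scores)
        # Weighted average of the value vectors.
        out = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
        outputs.append(out)
    return outputs

# One query matching the first of two keys far more strongly:
# its output is dominated by the first value vector.
Q = [[1.0, 0.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[10.0, 0.0], [0.0, 10.0]]
print(attention(Q, K, V))
```

The entire "magic" is that every token gets to look at every other token and decide, statistically, how much each one matters.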
In the early 2020s, it was realised that these two could be combined and scaled up massively^[and I mean massively] to gain a general understanding of natural language. This is where the two paths collided. What started as experimental cognitive research at the intersection of neuroscience and computation turned into a statistical method for giving computers an understanding of language semantics! Thus began the era of LLMs.
An LLM is simply a statistical model trained to have a general understanding of semantics!
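To make that "statistical model" framing concrete, here is a toy illustration of my own, vastly simplified: a bigram model that predicts the next word purely from counted co-occurrences. An LLM is, at heart, this same idea scaled up by many orders of magnitude, with transformers supplying the long-range context that bigrams lack.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    # Count, for each word, how often each other word follows it.
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for a, b in zip(words, words[1:]):
        counts[a][b] += 1
    return counts

def next_word(counts, word):
    # Sample the next word in proportion to how often it followed `word`.
    followers = counts[word]
    choices = list(followers)
    weights = [followers[w] for w in choices]
    return random.choices(choices, weights=weights)[0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(next_word(model, "cat"))  # "sat" or "slept", sampled 1:1
```

No understanding, no intent: just counting and sampling, done at an unimaginable scale.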
So What's All the Hype
What is True
The innovations, especially GPUs and transformers, are genuinely groundbreaking and have broken a very long stall in their respective fields. And their combination to create LLMs is indeed a great engineering feat, even if not that innovative from a purely academic standpoint^[the massive scaling needed is another level of brute-forcing. Think of the pyramids of Egypt - not so much clever as awe-inspiring, simply due to scale].
And it is also true that this has opened up pathways to commercial usage in a way that was just not possible earlier. In a certain sense, it is an upgrade of the search engine with a powerful fuzzy semantic translator.
It is indeed a great addition to the coding landscape. Programming used to be 80% manual intellectual labour, where you had to go hunt for that one silly bug, or implement a very simple system for the 100th time. Now, a lot of this can be automated. However, to think that this makes programming itself obsolete is very naive. For most serious projects, you still need great knowledge of computer science, but the barrier to entry has indeed been lowered^[which is either very good news, or very disappointing, depending on how much you like to gatekeep your nerdy interests!]. Most serious programmers have simply become senior software developers, delegating the manual repetitive tasks to the "AI", which can understand natural language and turn it into code it has seen before^[if it has been trained on it].

What is the "Bubble"
What remains in heavy doubt is the "efficiency" problem. It is still very unclear whether Moore's Law will come into play here and decrease costs as time passes, or whether the architecture itself, despite its genuine innovations, is fundamentally limited. The big corporations are betting on the former.
Meanwhile, some "tech-enthusiasts" have become a little too enthusiastic about the range of its applicability. LLMs, like any sophisticated statistical model, require massive amounts of structured data. In certain areas like day-to-day coding, or summaries, this is not that hard. However, in areas like robotics, it is still far from a "done" job^[even just getting the structured data itself].
The more laughable matter is that some have dived into esoteric questions of consciousness^[philosophical exploration of consciousness is indeed possible, but requires a level of rigour and seriousness that is missing from most such discussions] in this new light. This is partly due to the field's specific ancestry, and mostly just due to the human habit of jumping on the bandwagon.
What About the Political Issues
Now we come to the most important point of this discourse. I will break it down into the specific points that are most frequently called into question.
The Environmental Hazards
As it now stands, the development and deployment of LLMs remain highly inefficient. But technology and development have always come at the expense of natural resources and equilibrium. The question is not whether it is ethical, but who controls/decides how much is sustainable^[moreover, the current climate crisis has already put considerable strain on these resources in a lot of places].
At this point, however, it stops being an environmental concern and starts being a political one. The neoliberals would argue that the market will balance itself once resource scarcity becomes critical, whereas opponents might argue that state intervention is needed to prevent a calamity at all. But whatever the arguments about their respective ideal states, the truth is that the real world is none of those "ideal world" situations.
The neoliberal free market does not exist in its full glory, as most of the technological market is monopolised by a few corporations. The current global climate crisis is a failure of the free markets of the industrial and post-industrial eras. State intervention, meanwhile, remains at best ineffectual, and at worst prone to lobbying by the same monopolised corporations. The conclusion is that control of such critical decisions remains concentrated in the hands of a few oligarchs who are prone to taking risky decisions and making mistakes.
The Data "Theft"
It is no secret that the data LLMs are trained on is public data. However, access to these LLMs remains out of the hands of the very people whose data made them come to fruition. It is also clear that current copyright laws are not built to handle such cases.
Closed-source LLMs represent a new kind of injustice with no easy solutions. On one hand, making LLMs accessible to all would exacerbate the "hype-train" and worsen the environmental impact. On the other, stopping research on such lucrative frontiers would be catastrophically conservative. And again, this comes down to control - control of how and where to gather source data, and of how to commercialise it. But as long as the monopolies exist, especially on the production of cutting-edge LLMs, control remains firmly in the hands of the select few.

The Unemployment Issues
The layoffs have been quite eye-catching, since they have hit highly educated, white-collar employees. But this is a constant byproduct of changing times and advancing technology, especially automation. It cannot be avoided without an aversion to technology itself^[which is hard to sell in the modern world!].
However, this never leads to humans having "no work left to do" at all. No, jobs come and jobs go! But as the current landscape stands, it is indeed the case that many millions of people will get trampled under the changing times - people who have long pursued a high-profile career, only to lose their long-expected market value or high-end salary.
This represents an utter failure of our social contract. The fact that technological progress comes at the cost of social cohesion is a reflection of our embarrassing societal technology in comparison to our other feats^[such as engineering, or research, or industrialisation]. Automation, in theory, should be a boon to the labour force, taking away manual labour in favour of far more interesting jobs and more time for recreation! But alas, instead it represents an existential threat to a substantial section of the population!

No society can last with a structural opposition to technological progress. The societal technology needs to keep up!
So Where Is The Brotherhood's Position on This
Now, The Brotherhood is NOT a monolithic entity. The different people here have significantly different positions on this^[the division is one of the reasons for this long essay]. However, I have been a significant part of this project from the start, and I can say what my own position is.
My philosophy is one of pragmatism. One must keep the danger very, very close. He who lives by the sword dies by the sword - but he who forsakes the sword lives under the sword! Currently, as it stands, "AI" is the brand new weapon in this long warfare of control, of ideology, of dominance, as it always has been. But if the disenfranchised are to win, they cannot afford to forsake the game. They can only win by playing the same game.
I have used AI IDEs very substantially to build the project - because I am not that good a programmer, and even if I were, I could not have done the entire project alone in such a short time. Now, I know that this is not a replacement for actual skilled people, and in the best-case scenario, I would never have needed to use it this much. But unfortunately, reality is never perfect, and we had to get by on what we could!
And that is my philosophy on AI usage. The rules of the game are no different, only the goals of the players, and as long as we are working towards a noble goal^[in fact, we directly respond to that political problem of unemployment], we cannot compromise by not taking our best shot at victory!

The End Justifies The Means