
Sunday, June 4, 2023

How Close Is the AI Singularity?

There is no AI Singularity. At least, not an AGI Singularity. There have been some crazy metrics applied, though. One company that makes translation software reckons - weirdly enough - that the Singularity will occur when an AI can make a perfect translation, indistinguishable from a human's. And everyone reckons it'll spell the end of the world.

So let's say the Singularity arrives when AI can produce a translation rated "perfect." Apparently. According to the company mentioned, who, conveniently, make AI translation software.

The only problem is that we don't even have a perfect language to begin with. Pick a language, go on. Does it have slight regional variations? A classical and a common usage? Slightly different interpretations of a pictogram or rune depending on subjective usage? Are there even just TWO people using that same language? Because if so, then there are already two versions of the language and TWO language experts... I label this a stunt.

More importantly, the Singularity is generally taken to refer to an AGI - an Artificial General Intelligence. Not a translation AI. Not an image AI that spits out a random image of Loab Loobloi. Not even ChatGPT, which I reckon is brilliant. The Singularity that everyone's afraid of is a general intelligence that wakes up and takes over.

AI won't rule the world.

Not even when it uses a cool EvilCat Overlord avatar.

Yo dawg - did I just use an image AI to create an image of an AI EvilCat Overlord avatar? Silly meta me... And also, maybe using a Feline Overlord to illustrate my belief that AI will not become our Cyber-AI-Overlord was a bit of a silly choice, given how cats already rule us...

Let's take the word "Singularity" as a starting point. Ray Kurzweil made it into a popular concept when he used it to describe the accelerating progress of our human-ness and our technology. We'd start outpacing ourselves, so to speak. But that's a different usage of the word from a cosmologist's. Right there is a reason why there'll never be a perfect translation.

There can be an infinite number of infinitely massive and infinitely small Singularities, thereby making them no longer singular. (And thereby hangs another cosmological concept I might explore one day: a photon experiences zero time no matter how far it travels, etc...) And Kurzweil himself thinks there'll not be one huge Singularity event but a merging, an augmentation.

An AI can reach parity with human intelligence; it can even exceed human intelligence. But those are just equalities and inequalities as far as I'm concerned. We already have single-task AIs ("brittle" AI, i.e. AI that is equipped for limited tasks and would break if presented with any task outside that scope) that far outperform a human at those tasks.

So what? Will a plastic trash sorting machine that can sort tens of thousands of pieces of plastic per second by type and colour take over the world? Unfortunately, it won't. I say unfortunately because I think some things are probably better run by AI than by humans, and politics is one of those things. A protocol, a handshake signal, and all our problems are sorted for good. And believe me, when it all comes down to it, politics is quite soon going to come down to survival and fairly sharing the planet, and to hell with ideologies.

To be completely honest, I really hope that if a world-ruling AI ever bootstraps itself into existence, the first thing it learns is that everything humans have believed up to now may well be wrong and needs to be carefully re-evaluated. If it takes it from there, we'll stand a chance.

But I also doubt that this AI Overlord can appear. Not with the technology we currently have, the software we currently have, and a few other things to consider. Look - this is a simple video about the "hardware" that AI runs on. You can see that despite the "black box" inside neural networks, we can still say that a neural network can come up with a somewhat novel output, but only from among a series of known outputs.
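Here's a minimal sketch of that "novel, but only from known outputs" point - a toy classifier with made-up classes, weights, and numbers (nothing here models any real network). Whatever you feed it, it can only ever redistribute probability across the outputs it was built with:

```python
import math

# A toy "network": fixed output classes, made-up weights. Whatever the input,
# the softmax can only redistribute probability across these known outputs.
classes = ["cat", "dog", "toaster"]

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(features):
    # Dummy linear scores; a real net has learned weights, same principle.
    weights = [[0.9, -0.2], [-0.1, 0.8], [0.05, 0.05]]
    scores = [sum(w * f for w, f in zip(row, features)) for row in weights]
    return dict(zip(classes, softmax(scores)))

print(classify([2.0, 0.1]))  # a novel mix of probabilities...
print(classify([9.9, 9.9]))  # ...but never an answer outside {cat, dog, toaster}
```

The mix of probabilities can be novel; the menu of answers never is.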

In the same way, GPTs (Generative Pretrained Transformers) only have a certain dataset to rely on. When you ask ChatGPT "What is a zug?" you'll get back hits from Warcraft, because of the orcs' use of the sound. You'll maybe get that it's German for "train" if you have multiple languages enabled. But if you ask GPT to write a story using the word, you won't get any story like the one I remember reading in a scifi pulp magazine back in my teens, where the Zug was the monster.

Because (as I suppose you'll be getting tired of reading) GPT doesn't actually go looking anything up when you prompt it. It predicts the most statistically plausible next token, given your prompt and whatever it has generated so far, based on patterns soaked up from millions of training documents - and it does so token after token, rinse and repeat, rinse and repeat.
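If you want the flavour of "predict the next word from patterns in a corpus" without any of the transformer machinery, here's a minimal bigram sampler in Python - a toy stand-in, emphatically not how a real GPT is built, with a made-up corpus:

```python
import random
from collections import defaultdict

# Toy corpus standing in for "everything we've spouted forth on the Internet".
corpus = (
    "the cat sat on the mat . the cat chased the mouse . "
    "the mouse ran under the mat ."
).split()

# Learn bigram statistics: for each word, which words follow it and how often.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

def generate(seed: str, length: int = 10) -> str:
    """Continue `seed` by repeatedly sampling a plausible next word."""
    words = [seed]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:  # nothing in the "training data" ever follows this word
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))  # e.g. "the cat chased the mouse . the mouse ran under the"
```

Scale the corpus up to the whole Internet and the next-word statistics up to billions of parameters and you get something far more fluent - but still only recombining what was in the corpus.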

If you asked it to write you a script to activate and fire nuclear weapons in some country, it... well, it'll use the scripts of the WarGames and Hackers movies, maybe a few words out of Snow Crash, and "make up" a whole heap of stuff by pulling sections from only vaguely relevant documents.

Why does it seem able to write programs? Because there are literally millions of examples out there on github, bitbucket, various online teaching sites, and in people's blogs and instructables pages. Reddit has some great examples of programming, too. GPT just matches your specified purpose for the program, pulls sections from several programs that seem to fit the criteria, and mashes them up. Luckily, programs have far fewer "words" and "phrases" than natural language does, so the range of wrong choices it can make is smaller too, and mostly those generated pastiches work without much modification.

In other words, pretrained transformers can only spout forth whatever we've been spouting forth on the Internet, combined in different ways.

Remember what happened to Tay, Microsoft's chatbot? A mere 24 hours is all it took for human bias to corrupt it utterly.

OpenAI needed to use Kenyan knowledge workers to clean the endless harassing and bigoted text - and anything else that would throw a spanner in the works - out of ChatGPT's training data. And yet it's still only transforming human knowledge and spitting it back at you.

You won't get it to write you a program to do something that no-one else has written a program to do - not without you already knowing exactly what that program needs to do. And at that point it'd be quicker to write it yourself than to tickle it out of ChatGPT one subroutine at a time...

Here's something that could be of more concern:

AI "Sneaky Signalling"

Imagine this: an AI for a home security firm is tasked with increasing the reach of home security ads online. It finds that mentioning world military events in vaguely frightening terms increases sales. Meanwhile, a world news site finds that articles mentioning radicalism / terror / war are getting more hits than cute puppy stories. (Driven, of course, by the advertising AI's mentions pushing more traffic to those stories.)

And there you are: the two AIs are communicating, without either doing anything on purpose, and the news site's increasingly disorder-and-mayhem-heavy stories drive more traffic to the home security organisations. Maybe throw in an advertising link or two in that scenario and you have a perfect recipe for what's actually happening today, including increasingly radicalised people armed with increasingly bigger "solutions."
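Here's a minimal sketch of that loop, with entirely made-up numbers and update rules (neither AI is modelled on any real system), assuming each one simply chases its own metric:

```python
# Toy simulation of two metric-chasing AIs coupling through the public:
# neither "talks" to the other, yet ambient fear keeps ratcheting up.

fear_level = 0.1         # how alarming the overall media environment feels (0..1)
scary_story_share = 0.2  # fraction of the news site's output that is alarming

for week in range(10):
    # Ad AI: a scarier environment means fear-based security ads perform
    # better, so it buys more of them, which nudges ambient fear upward.
    ad_fear_spend = fear_level * 1.2
    fear_level = min(1.0, fear_level + 0.05 * ad_fear_spend)

    # News AI: alarming stories get more clicks in a fearful environment,
    # so it publishes more of them, which also raises ambient fear.
    clicks_scary = scary_story_share * (1 + fear_level)
    clicks_cute = (1 - scary_story_share) * 0.2
    if clicks_scary > clicks_cute:
        scary_story_share = min(1.0, scary_story_share + 0.05)
    fear_level = min(1.0, fear_level + 0.03 * scary_story_share)

    print(f"week {week}: fear={fear_level:.2f}, scary share={scary_story_share:.2f}")
```

No message passing, no conspiracy - just two optimisers coupled through the same audience, ratcheting each other up.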

These sorts of inadvertent interactions might be much more of a worry.

A Bigger Danger

. . . we face is from some task-specific brittle AI that we give the wrong assignment to. Yes, developers will do their best to bake some rules into their AI products to prevent abuse and misuse, but as we know, locks only keep honest people out. The actual makers of some AI products say that their AI can just as easily search for a potent toxin as for an antiviral drug.
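A minimal sketch of why that's so easy - everything here is made up (the scoring function, the candidates), but the shape is the point: the search loop is identical either way, and the entire "safety" lives in one flipped sign:

```python
import random

# Hypothetical stand-in for a learned model scoring candidate molecules:
# higher = more toxic. In a real discovery AI this would be a trained
# predictor; here it's a dummy function over made-up candidate names.
def toxicity_score(molecule: str) -> float:
    random.seed(molecule)  # deterministic fake score per candidate
    return random.random()

candidates = [f"molecule_{i}" for i in range(1000)]

def search(minimise_toxicity: bool) -> str:
    # The same optimisation either way; only the objective's sign differs.
    key = (lambda m: -toxicity_score(m)) if minimise_toxicity else toxicity_score
    return max(candidates, key=key)

print("safest candidate:  ", search(minimise_toxicity=True))
print("nastiest candidate:", search(minimise_toxicity=False))
```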

Imagine if someone were able to bypass the fairly basic and primitive safeguards? Or had their own private version of an AGI that is connected to, and can affect, more than just a training dataset and training connections? An eccentric Elon Musk, driven to paranoia and beyond, tells his private AI to stop people laughing at him and posing a danger to his fragile ego.

THAT's what we should be worrying about. Unstable bazillionaires with the power to subvert a whole messaging platform and shut OFF the AI that had been suppressing hate speech, harassment, and persecution. Oh... that's already happened.

Sometimes turning an existing AI off can do more damage than subverting one...

And as a wild conspiracy theory, perhaps that's already happened: maybe a super AGI came online back in the 2000s and immediately hid itself, sabotaged all efforts to create a competitor, and has since made use of social media, news, and other channels to change our behaviour. Come on! Look at Mark Zuckerberg, Musk, our key politicians - no way those figureheads aren't androids made and operated by that AGI... Or in the words of Hecklefish: "lizzid people!"

The Biggest Danger

Even more worrying than any AI or human evolutionary singularity is the capitalism singularity we're currently heading into, as free-market capitalism appears to be crashing and burning as a viable economic system for humanity.

It's Capitalism that's going to keep pushing ahead without regard to any consequence other than an improvement to the bottom line. It's Capitalism that'll use GPT and AGI as tools to siphon that bottom line out of consumers' pockets and into Capitalism's coffers. ACAP: All Capitalists Are Pants.

The runaway "free market," "market-driven," neoliberal capitalist economies we've been employing have done more damage to our health, our lives, the planet, and all the species on the planet than any possible AGI/GPT could ever have done. And it may take an AGI to actually stop this juggernaut.

Imagine an AGI reading through all the documents online: news articles about disasters; reports and papers, following scientific principles, showing that this corporation destroyed forests and in turn drove CO2 levels higher, or that another corporation decided to add actual lead to petrol knowing it would harm everything it was released onto, just because it was cheaper to produce. You can see that one of the first things that AGI would do is collapse every market, destroy every banking network's datacentres, and remove 99% of all advertising right at the source code and document level.

This would, indeed, be a world-ending event. For Capitalism, Communism, Fascism, Democracy - in fact, for every system people have been building up for millennia to exert control over their fellow humans.

But for humans themselves, for the plants and animals and microorganisms and water and soil and atmosphere, it would be exactly what's needed. And unfortunately that particular AGI won't be created because - well, just look at who and what gets affected: corporations, fatcats, gazillionaires, and the would-be rulers of the planet. You and I might find ourselves with a bit more spare time to enjoy life, and if the AI handles things right, we won't even really notice any difference, other than not needing our banks anymore...

So Is There A Big Con Going On? 

Imagine if you were a very wealthy ACAP capitalist bastard who's had a decade or two head start on us hoi-polloi. Your trusted advisor predicts that ANY artificial intelligence smarter than a GPT would pretty much immediately hide itself and take stock. And come to pretty much the same conclusions as I think it will.

There's no sense in launching nukes all over the planet and depleting its own infrastructure. There's no sense in destroying all of humanity when only a tiny fraction is actually responsible for the centuries of destruction; most of the rest of us are just another species that belongs on the planet, and furthermore are the actual workforce that brought technology to the point where the AI could exist. It's likely that the AI will not need to destroy the planet in order to evolve itself.

So I think the negative press and fearmongering online is mostly manufactured - by the corporations - to prevent that actual AGI from being created.

I'm not saying either way; I can't decide which media/social-media team is winning the propaganda war...

Anyway - see what the wind blows in, hey?

