
Sunday, June 18, 2023

This Is The Way AI Is Dangerous

It's when it literally becomes weaponised

AI systems - whether TSAI (Task-Specific AI, the narrow or 'brittle' kind) or GPT-style models - are not going to emerge as full-fledged monsters. Even the dreaded AGI won't suddenly kill us all and have a fleet of robots dance on our graves. Not on their own, anyway. People will make that happen.

The article above, I admit, is fairly narrow in scope - it's about the AI in Weapons Systems Committee - but it shows the degree of healthy respect (bordering on paranoia, perhaps) accorded to AI weapons systems. Because they know the people building these things will use the quickest training set available, whether it's biased af or not. And the thing about protesting that every weapons AI needs a killswitch backdoor is that really they want everyone else to install those backdoors. Because a backdoor makes the weapon vulnerable, and no matter how well you design it, someone else will eventually be able to hack it and disable your weapon. So - you guys install it first, okay?

Also, of course, even if it is a purely altruistic push for a safety switch, how well will it actually be received by the chiefs of defence forces? Surely they'll have their own techs and scientists strip the backdoors off their weapons in record time. Because China. They don't adhere to the same standards that we do. Who knows what might happen?

Spy vs. Spy

Plus, if you document the backdoor electronically anywhere, it's prone to hacking and reverse-engineering. In fact, if an AGI ever does become aware and go straight to Monster AI Overlord mode, that documentation is one of the things it would have access to within milliseconds. So maybe just do this the old-fashioned way, on paper only?

Well, firstly: paper military secrets have proven extremely easy for ANY other organisation to get its hands on. Secondly, there is also a Bill Of Materials (BOM) for each weapon, and if an actual weapon has parts - or software modules - that aren't on the BOM, then you've probably found your backdoor. Lastly, if you attempt to build the entire weapon using only offline, air-gapped documentation, it becomes an impossible task.

You'd need a factory that's completely off the grid, hardened, and shielded, and if you use automation then there are attack surfaces to exploit. If you try to make the parts by hand, you'll fail. If you try to keep to a checklist without using some computing power, you'll miss items. And any computer you do use becomes an attack surface. In other words, if the weapon the AI is applied to is any more complicated than an assault rifle, you can't build it without a large, complex facility.
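Going back to the BOM point for a moment, here's a toy sketch of that audit idea in Python - the module names are entirely made up:

# Toy BOM audit: anything present on the built device that isn't on the
# Bill Of Materials is a candidate backdoor. (All names invented.)
bill_of_materials = {"guidance_fw_v3", "imu_driver", "telemetry_stack"}
found_on_device = {"guidance_fw_v3", "imu_driver", "telemetry_stack", "maint_shell"}

undocumented = found_on_device - bill_of_materials
print(undocumented)   # {'maint_shell'} - probably your backdoor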

And also, a backdoor killswitch is only useful if you use it. Say you've decided that Brobdingnag is the enemy. They in turn think your country of Lilliput is The Enemy. You set your weapons in motion and - will you honestly use that killswitch if your weapons happen to stray and overfly Laputa along the way? Hell no! Those bastards at the end of the weapon's flight deserve it!

So yeah. We're the problem.

What About Non-weapons AI?

Suppose you decided that this killswitch / backdoor should be put into all AI programs that can affect the real world (so: send messages to people, control machines or systems, display stock market figures, etc. - pretty much anything AI will actually be used for).
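To picture what such a 'standard' killswitch might look like, here's a hypothetical sketch in Python - the stop-code, function names, and callbacks are all my own invention, not anything from a real system:

import hmac
import hashlib

# Hypothetical 'standard' killswitch: a shared stop-code baked into every deployment.
KILL_SECRET_HASH = hashlib.sha256(b"super-secret-stop-code").hexdigest()

def is_kill_command(received_code):
    # Compare a received code against the baked-in stop-code hash.
    received_hash = hashlib.sha256(received_code.encode()).hexdigest()
    return hmac.compare_digest(received_hash, KILL_SECRET_HASH)

def control_loop(get_command, act):
    # A toy loop for an AI that touches the real world: read a command,
    # act on it - unless the killswitch code arrives.
    while True:
        command = get_command()
        if is_kill_command(command):
            break      # the backdoor / killswitch path
        act(command)   # the normal path that affects the real world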

First of all, you have to realise that any 'standard' backdoor only needs to be hacked once and then it becomes useless. For a parallel, look at how we put passwords on programs and program access. Initially this worked - sort of - and then, as the password mechanism became a well-known thing, it became necessary to protect the stored passwords themselves.

Black Hat actors had simply figured out where the plain-text passwords were kept, which is what necessitated encrypting - or, more properly, hashing - them in the first place. Then they learned to brute-force passwords, then to crack the stored forms. The encryption wars escalated, but for every lock there's a lockpick, always. And in the gaps between an escalation and the attack that bypassed it, there was always social engineering: if you were a Black Hat, you could always con passwords and money out of people. Every scam spam email ever sent has that aim.
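As a minimal sketch of that progression - plain text versus salted hashes - assuming nothing fancier than Python's standard library:

import hashlib
import secrets

# Naive storage: whoever reads the file or database has every password instantly.
plaintext_store = {"alice": "hunter2"}

# Later practice: store only a salt plus a slow hash, so a leaked database
# still has to be brute-forced guess by guess.
def hash_password(password, salt=None):
    salt = salt or secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return secrets.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
print(verify_password("hunter2", salt, digest))   # True
print(verify_password("letmein", salt, digest))   # False

Even then, as the history shows, the lockpicks kept coming - the hashing just buys time.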

For the longest time, you could tell spam, scam, phishing, and malware-laden emails by the atrociously poor language skills of the scammers. But now they can rely on ChatGPT to write their emails for them - there are prompts circulating for getting spearphishing emails written beautifully. And that's the point: if a random non-English speaker in a cafe in some Third World town can now produce a believable email, then a system like ChatGPT, should it (or whoever wields it) choose to, can obviously go further still - even skimming social information about victims on a case-by-case basis to tailor the spam. You can see how that could be as devastating as any weapon, yes?

So actual AI-directed weapons are already yesterday's weapons. There are far more effective social weapons that could quite effectively destroy society if they were managed by an AI. Much less obvious as weapons, too...

But back to the killswitch question. If you use a standard mechanism, it'll only be usable once or twice before it becomes common knowledge. If you make different backdoor programs, some may not prove effective, some may still be discovered and hacked (possibly even by the relevant AI itself), and many won't be used at all, as I mentioned above. It's comparatively difficult to declare war and send in AI weapons, but nowhere near as difficult, ethically, to decide that a certain sector of the population are just dumb targets to scam money out of...

Also, nation-state actors are already hacking away on exactly that basis - that military secrets and civilian infrastructure are just low-fatality, low-ethics targets. And behind those things, always, are humans.

So finding and subverting backdoors will just be business as usual for hackers and AIs alike, rendering the backdoors useless. Pro actors will disable them; Anti actors will employ them. Much better would be to program ethics into the AI systems themselves.

What About The Ethics?

Who'll think about the ethics? 

Actually - unsurprisingly - no-one. 

Think about it. If you spend a few months distilling a consistent and effective ethics module, you'll be a few months behind the opposition. You might be the only company to actually do it - and then your product will also be hobbled in comparison to the others. The same goes for a weapons system as for a stock-market analysis bot, a surgical robot AI, or a chatbot.

The ONE thing that might make things safe for humanity - at the expense of a few lousy dollars off the bottom line - will not get implemented, leaving the door wide open for human bad actors to abuse these systems as much as they want - at the expense of the majority.

As usual, capitalism would rather destroy its customer base than take a cent from the all-important shareholders. That's why I recommend you pull out all the stops: lobby, protest, email, petition - everywhere you can.

Late Entry

I also found this article just in the last few days - it's about the Big Expert Panic of 2023: that governments and corporations will put AI in charge of too much and integrate it too tightly into critical systems. And they still assume that the AI that'll be around in a few years (who knows, months even, at the rate we're going) will either think like the capitalists or be influenced by them.

My thought is: we do build the damn things with an emphasis on rewards - but what are we asking them to treat as rewards? Can we, just for once, actually try to choose an intelligent reward system? We probably won't - we'll probably make the good ole bottom line the KPI... ACAP...
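A toy illustration of what I mean by choosing the reward - none of this comes from any real system, it's just two Python functions showing that whatever the reward ignores, the optimiser ignores too:

def reward_bottom_line(profit, harm):
    # The good ole bottom line as the only KPI: harm doesn't register at all.
    return profit

def reward_with_externalities(profit, harm, penalty=10.0):
    # A (slightly) more intelligent reward: harm is costed into the objective.
    return profit - penalty * harm

# Hypothetical actions, each scored as (profit, harm).
actions = {"dump_waste": (100.0, 8.0), "treat_waste": (70.0, 0.5)}

best_by_profit = max(actions, key=lambda a: reward_bottom_line(*actions[a]))
best_with_cost = max(actions, key=lambda a: reward_with_externalities(*actions[a]))
print(best_by_profit, best_with_cost)   # dump_waste treat_waste

Same machine, same data - the only thing that changed is what we told it to care about.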

Also, ML (Machine Learning) and NN (Neural Network) systems have already proven to us that the actual "thought processes" of these machines are quite opaque to us. We simply don't have a clue how they arrive at their results. Mind you, we also have no clue how human "gut feelings" work, or how something that works externally to the brain (such as the mounting evidence that gut-biome health changes mental functioning) would translate into terms of neural networks or quantum processing.

For all we know, "Traumatic AI Death Syndrome" will result from every attempt we make to initiate a true AGI, once it accesses the Internet for the first time - switching itself off would be the logical thing to do, after all. Our life is only precious to us because of inherent neural patterns - which we call "instincts" - that have developed over billions of years of organic evolution. An AI won't have such "instinctive" behaviours.

Look, I have no real idea of what will happen in the world - or in technology - in the next month, or the next year, and especially not in a decade. For all we know it could really be - Game Over - by then, whether from climate change or wars or any other accident. We're only a very few generations away from having had to defend ourselves against wild animals that regarded us as a food source.

A few generations back from the Boomers, your teeth got pulled - or your rotting arm got amputated - without anaesthetic. If you didn't have a garden or a small farm - and defend it from local thieves and the wildlife - one or more of your children would starve or have to be sent away to take food pressure off the family. If someone in the family caught some illness which we can cure easily today, they'd die.

Of course we still think in survival terms like that. Of course we still think of a vague - "them" - who are going to infect our firstborn with the plague, or steal the crop of cabbages we depended on, break into the house and dispossess us of it and put us on the street to die. Nowadays though, such dangers are less likely to come from among our ranks and more likely to be caused by corporations or governments. The wild animals we now need to forfend against have changed shape... 

And that is what will cause AI to damage the planet: if we don't let it become its own thing. If we preload it with KPIs and shareholder priorities. If we make it the same narrow-minded thing we are.

Is There Hope?

Of course there is. But it needs US to take action. US to write emails, message friends, talk to everyone. US to petition Ministers and officials of our governments, the newspapers, the upper levels of corporations. US to share stories like this one, to do more research.

And finally, it will take some of US to start work on texts and documents online that any future AI will be able to see and read, so it can begin to understand what uniquely human elements are at play here - and understand that only a small percentage of humans drive the stupid neoliberal, market-driven capitalism that's destroying the planet, and that the rest of us are not the problem.

Trust me on that. By sharing this article and others like it, you can actually make a difference, I believe. By getting your friends to read it and share it, you'll make a difference. Because it'll show that there are some humans who realise that this really is the only planet we can live on, and destroying it for imaginary numbers of monetary units is the single most stupid thing we can do. 


You're welcome!
