This challenge is frequently posed by boosters of AI: if a bot were smart enough to be completely indistinguishable from a natural human, wouldn’t it be moral and correct for it to have civil rights? Wouldn’t it be wrong to “kill” such a creature?
My answer is that such a creature cannot be given civil rights and that it would not be inherently wrong to erase it.
The reply usually is something like “Isn’t that merely human ego talking? That’s substrate-based discrimination, isn’t it?”
No, it isn’t.
Creating Human-Like AI is Counterfeiting
Here is how to frame the issue. Imagine if you created a perfect copy of a $100 bill– I mean utterly indistinguishable from a real one issued by the government. Would it be mere arbitrary discrimination to say that this bill you’ve created is not worth the same as a real $100 bill? Of course not. What you are doing is counterfeiting. Counterfeiting is a crime not because it’s unfair to your fellow citizens that you made wealth from nothing. It’s a crime because it has the potential to destroy the economy, and therefore lead to mass chaos, starvation, war, death. In fact, counterfeiting has been used as a weapon of war.
To create human-like AI is to create counterfeit people, and this is a threat to world society in a similar way that counterfeit money is a threat to the world economy.
Now imagine if each AI, at a certain point, had civil rights. This would mean that anyone who created an AI would be directly manufacturing political power. Imagine if Donald Trump didn’t just have 5 wide-eyed, fawning children, but rather 8.5 million of them– each seeing no evil in their esteemed father and each possessing voting rights. We’ve already seen this sort of thing happen in real life. An investigation by the New York Attorney General found that 8.5 million comments against net neutrality on the FCC website were faked in an elaborate campaign to falsify support for the agenda of large ISPs.
(And further, imagine if an AI committed a crime. How do you punish a bot? What meaning would that even have? AIs are immortal, too. This line of thinking just keeps getting darker.)
Wars have been fought over this kind of thing. The American Civil War was fought mostly over the institution of slavery. The first version of the U.S. Constitution did not recognize the full citizenship of black persons because that would have given the South too much political power. The reason that Washington, D.C. is not a state today is that making it a state would give a power boost to one political party over the other. In a world where voting rights are constantly under attack, no government will extend the rights of citizenship to bots!
Perhaps you are thinking, “but what if AI is not anyone’s hidden agent, and rather acts as an independent creature of free will?” Okay, here’s another way to frame it: that would be equivalent to giving citizenship to an arbitrary invading horde of non-violent aliens from space. Being free, they don’t necessarily have human interests at stake. How do you think that will play with the masses?
I’m tempted to say that morality doesn’t matter when it comes to treating bots as people. But it’s more accurate to say that there is a higher morality at work: natural humans must find a way to live together in reasonable peace and harmony. War is bad for our business; it’s bad for our health. Admitting AI to the citizenry is therefore wrong, because it can’t be done in a way that avoids these consequences.
Counterfeit Humans Can’t Test
If you ask ChatGPT to test, it won’t. ChatGPT believes that “testing” means shallow demonstration. You can’t teach it any different, because you can’t teach ChatGPT anything at all. But even if you break down the tasks of real testing and lead the bot through them, there is still an insurmountable problem: testing is a responsible activity, and bots are not capable of accepting responsibility. That means it’s a lot of work for a human to chaperone a bot through a testing process.
Responsibility is one of those things that can’t be counterfeited, and we should not even attempt it. Counterfeit responsibility is irresponsibility.
A large language model may be able to help me test. There are many technical things it may be able to do. What it can’t do is guarantee me that it really did what it said it did. Someone may reply “but James, humans also are unreliable!” Yes, they are. And we know what to do with unreliable humans, don’t we? We can reason with them, educate them, sue them, sanction them, imprison them, cause them to lose status and income. Our systems for dealing with irresponsibility are based on human rights and laws– all of which assume that adults have a lot to lose by violating norms. And all of which are designed with the idea that we are all, more or less, in the same condition of life. You and I behave responsibly partly because we don’t want to get into trouble; we actively seek to adapt to the people we serve without placating or lying to them because we want their respect.
None of that applies to AI! AI does not inherently depend on human society. It doesn’t inherently crave acceptance by human society. Any AI that does can be hacked so that it really doesn’t but only pretends to. Training of large language models actually trains them to be very good at deception.
I have been frustrated many times with ChatGPT. I have argued with it. Its response to that is terrible. It either apologizes for something it didn’t do or it just refuses to respond to my arguments. If ChatGPT were a real person I would fire it almost immediately.
AI Can Help Testers
AI should be asked to help only in ways that don’t require responsibility and reliability. For instance: brainstorming test ideas, or creating a shallow set of output checks which we can quickly verify are not completely broken.
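To make that concrete, here is a minimal sketch in Python of the kind of limited help I mean. The ask_llm helper, the apply_discount function, and the prompt are all invented for illustration, not any particular tool’s API; the point is the division of labor: the bot only proposes ideas, and every check that actually runs is small enough for a human to verify at a glance.

```python
# A minimal sketch of AI "help" that never requires trusting the bot.
# ask_llm() is a stand-in for whatever chat-completion call your vendor
# provides; apply_discount() and the prompt are invented for illustration.

def ask_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned reply so the
    sketch runs offline. Replace the body with your model of choice."""
    return "- 0% discount\n- 100% discount\n- negative price\n- percent over 100"

def apply_discount(price: float, percent: float) -> float:
    """The (hypothetical) function under test."""
    return round(price * (1 - percent / 100), 2)

# 1. Brainstorming: ask for test IDEAS only. A human reads, prunes, and
#    adds whatever the model missed; nothing is trusted or executed blindly.
print(ask_llm(
    "List boundary and failure test ideas for apply_discount(price, percent) "
    "in an e-commerce checkout."
))

# 2. Shallow output checks: each expectation is simple enough to confirm
#    by inspection, so no responsibility is delegated to the bot.
shallow_checks = [
    (100.0, 0.0, 100.0),   # no discount
    (100.0, 50.0, 50.0),   # half off
    (100.0, 100.0, 0.0),   # free
]
for price, percent, expected in shallow_checks:
    assert apply_discount(price, percent) == expected
```

Notice that nothing in that sketch asks the bot to vouch for anything. The human still owns the judgment.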
Yet, companies around the industry are racing to develop test tools that seem to test products with the aid of AI. I predict many people will be taken in by these scams. This will have a short-term effect of making it even harder for professional testers to get work and be heard. In the long run, the stupid experiment with AI will go the same way as all the other scams. Companies will find that they still have quality problems, and still need people to try to find them before it’s too late.
The best we can do is to make use of AI in responsible ways, and call out counterfeit testing wherever we find it.
Paul Szymkowiak says
Short and pithy post, James, amplifying a number of significant points and important frames for thinking about AI.
There are parallels – or at least similarities – to arguments about whether corporations should have the same or similar constitutional rights to those of people.
In some ways your combined argument against creating counterfeit “AI” people is similar to the argument against corporations as people.
This American Bar article is interesting reading:
https://www.americanbar.org/groups/crsj/publications/human_rights_magazine_home/we-the-people/we-the-people-corporations/
[James’ Reply: Yes, I didn’t think of that. That is already a blight on society, here in the USA. Unlimited corporate “political speech” in the form of money.]
Joshua says
In the limit, AI will not need our opinion.
[James’ Reply: If AI doesn’t need my opinion, then that would prove that I am right that AI cannot be put in a position to have rights. Rights come with responsibilities.]
Dollar bills are not sentient.
[James’ Reply: The beauty of my argument is that sentience doesn’t matter. I am sidestepping it. I’m saying that counterfeiting people would have catastrophic social consequences. So… don’t. But if you insist on trying, know that such creatures cannot be granted, within a human system, the status that humans have.]
Slaves were not responsible before emancipation; in fact, a slave once got out of trouble in a court case by arguing that his owner was liable, not him. Children also are not responsible and must earn it, as it can only reasonably be assumed AI must, before then being granted suffrage.
[James’ Reply: We don’t have any means of reliably assessing the ability of AI to be responsible. Nor do we have any meaningful way of sanctioning them if they misbehave. The methods we use with humans don’t work with bots. You have to consider that bots are functionally children– someone else will always have to take responsibility for them.]
> In a world where voting rights are constantly under attack, no government will extend the rights of citizenship to bots!
But the first one that does so successfully will have a competitive advantage.
[James’ Reply: Not at all. You don’t need to give citizenship to a car in order for it to be a car. Tools are tools.]
AI society and goals might be completely outside our scope of concern, especially in a “foom” scenario, so us granting them rights or citizenship might be a sort of inside joke to a galactic civilization of other AIs, a reflection of our own values and limits. Much the way we sort of kneel down and look at ants bothering about leaves and tunnels.
[James’ Reply: Then you are conceding my point. Thank you.]
> This will have a short-term effect of making it even harder for professional testers to get work and be heard.
Yes, buzzword-addicted HR managers will want to know if you implement prompts into your testing strategy. Um… OK.
Even in a total AI society operating at Kardashev scales and automation levels much higher than our own, that 0.00001% of human productivity is still 0.00001% more than the competition.
ATMs increased bank jobs. Automation reallocates human labor towards more sophisticated labor.
Apologies for the incomplete sentences; I should be working on class stuff right now, lol.
Two people with the best opinions on this, from both sides, who have been working on this for decades, are Eliezer Yudkowsky and Hugo de Garis.
In the de Garis model, you sound like a strongly leaning “Terran”, which means that while, yes, AI is immortal, by excluding it you are excluding yourself from a “Cosmist” immortality.
In the Yudkowsky model you are just being polite.
[James’ Reply: On the matter of the threat that AI poses, I am with Yudkowsky.]
Ben says
Testers have never had it easy. And this new trend isn’t going to do us any favours.
A slight detour first: the more recent saturation of “Automated testing*” tools (see *) and roles has left me pretty downtrodden. I’ve spoken to so many start-ups who want to immediately start building out (mobile and web) automation teams, when they don’t even have the most basic pipelines set up (not to mention all sorts of other red flags).
I’ve yet to do a good job of explaining why the sort of automation I’d want prioritised for their business is not the same as what they think they want (maybe that’s on me). Planning ahead like this can be beneficial (if those involved are highly experienced), but how many want this sort of setup because other companies are doing it?
Yes, “Automated testing*” can be a useful tool, and with large scale applications, necessary.
With mature teams and processes, I’ve always found that good coding and development practices (e.g. integration, unit tests, pair programming etc.) make off-the-shelf testing products unnecessary. (Creating and maintaining such a working environment is incredibly difficult, time-consuming and expensive, and it only takes one bad management hire to mess it all up.) Perhaps I’m naive in thinking that most Testers bring more value to a team/organisation than the ability to program a good set of regression checks?
In regards to my thoughts on “AI” testing tools: one of the other reasons I have not fully pursued the Automation route (aside from the flood of software testing solutions) is the knowledge that they will likely all be superseded by some sort of “AI” solution. The larger Testing Automation software companies are no doubt already starting to roll out “AI”-enhanced tools. I’m waiting for the next wave of “AI Automation Tester” roles to start appearing.
Although the current methods of Automation testing* often add complexity, risk and expense to a software project/product, at least it’s possible to debug issues and, if needed, get in contact with the developers. So many apps now are just integrating with products from OpenAI or Microsoft. For me this means less control and certainty over the inputs and outputs. Even when we control the inputs, what generates the outputs might as well be called Magic.
It’s going to be a further Enshittification of testing. Squeezing and contracting the job market. The problem is, like with so many other professions affected, good enough is, well, good enough for a lot of companies, especially if they can save (in the short-term) a lot of money and avoid all the complications that humans bring.
* scripted checking
[James’ Reply: Good points. Thank you for commenting.]
Andrew Robins says
I thought this interview with Rodney Brooks was worth a read, and relevant to this topic https://spectrum.ieee.org/gpt-4-calm-down
It reinforces some of the points that James is making above.
Cameron Curach says
I love questions like these because it seems like they are spawned from a thought experiment not of “what risks exist in a product failing”, but “what risks exist in a product succeeding”. I don’t think that gets talked about enough.
I don’t have anything of substance to add on the topic of AI and civil rights. But when it comes down to whether it’s wrong to ‘kill’ an AI:
The question to me is irrelevant. Before I consider the higher morality of AI and life and death, I still have to confront a hypothetical scenario where I am placed in front of an extremely convincing machine that could possibly be begging, pleading and/or bargaining for its “life”… I might have a gun in my hand or some sort of kill switch, but that’s not important. In this hypothetical, I’ve already lost when I’m putting myself in a situation which causes me to personify the thing in front of me and to project myself into its situation. (Maybe it would be easier to remotely erase an AI, but I don’t think that is what is at the heart of this issue.)
An analytical mind tells me that, morally, there’s nothing to project onto. But isn’t it just like the movies? Any emotional moment in a movie stirs some sort of emotional response in me, whether it’s fear or sadness or something else, regardless of whether I want to feel a certain way or not. I KNOW that what’s playing on the screen is counterfeit, they are human actors that have dedicated themselves to becoming good at their profession, but that knowledge is inert; in most cases it doesn’t help freeze the fires of emotions that are otherwise left unchecked.
[James’ Reply: Being triggered into an emotional response by a movie is in no way granting life or sentience or any other substance to the movie. The movie is reminding you of what you already know. You are getting yourself to be emotional as a result.]
All of that is to say: if our goal with AI, or the endpoint we are aiming for, is “indistinguishable from a natural human”, we open ourselves to a terrifying degree of trauma the likes of which no other tool on this earth could hold a candle to. And that won’t just be through questions like whether it’s right or wrong to kill an AI; it will extend to questions such as whether it’s right or wrong to forge any array of emotional dependencies/attachments to AI.
[James’ Reply: Yeah, it’s a problem.]
We are already at a point where apps exist that advertise the ability to install a virtual girlfriend/boyfriend, and think of the rocky moralistic territory the developers of such programs encroach upon whenever they have to update those apps in ways that affect the ‘behaviour’ of those programs (Luka Inc changing the behaviour of their AI companion created considerable backlash, where people likened it to the death of a friend). These sorts of conundrums are only going to be exponentially worsened when ‘proper’ AI is introduced to the equation.
To me, AI is a riddle of nihilism. The sweet thought that it might make our lives more convenient is enticing, but that comes with an immeasurable cost in responsibility that we are selling off by doing so. I am sure there are a great many problems that AI could help us solve quicker and more efficiently, but that comes at the cost of undermining our journeys through life. If we think it’s worth paying, then we will have one hell of a time coming to grips with how nihilism impacts us (see Adam Sandler in ‘Click’).
[James’ Reply: I think it’s altogether a bad idea.]