If you want to understand the decisions of AI companies, it’s pretty easy – just think like a psychopath. Safe Superintelligence, Inc. seeks to safely build AI far beyond human capability, according to Ars Technica, but the decision smacks less of selfless humanity than it does of ‘pivoting to avoid a market risk.’ No AI company has ‘human progress’ as a KPI; they have MARKET SHARE. AI is the new oil, and a lot of proto-robber barons are lining up to own the well.
If I’m being honest, I don’t care that ‘Ilya Sutskever is pursuing safe superintelligence in a straight shot, with one focus, one goal, and one product.’ Good for him, but I still don’t trust him, nor do I trust any of these cats – and with good reason. There’s a very narrow line between ‘visionary CEO’ and ‘criminal psychopath.’ Sutskever is on the ‘waiting and watching’ list until he proves himself one way or the other.
If humanity is going to get better, we need to act like we deserve it and stop taking ‘it’s gonna be okay’ for an answer. Prove it. Tell me how you know; tell me how we’ll know when we get there. If you can’t do that, then rushing your product to market isn’t going to make me feel better. Making me feel better about your product will make me feel better, but that would require you to have empathy for me, the customer. I don’t hear a lot of empathetic language out of ‘visionary CEOs,’ and that’s a red flag.
When it comes to understanding the motivation of visionary CEOs, there’s a common narrative that ‘we can’t understand because we’re not geniuses.’ Then, a few years later, we get the bad news: the true motivation of these visionary CEOs is revealed, and it’s plain old greedy psychopathy (looking at you, Elizabeth Holmes, Sam Bankman-Fried, Martin Shkreli, and now maybe Dave Calhoun). Their sad pursuit of growth at any cost gets labeled ‘part of the game of being a leader,’ but that’s an oversimplified excuse for lethally craven, cold-hearted behavior.
How Do You Recognize a Psychopath?
I started thinking about this, and it hit me, as a person familiar with abnormal psychology (thanks, adverse childhood experiences!): the behavior of these individuals falls into a specific pattern. The naïve indifference to human life? The bland rejection of empathy for others’ distress? The performative regret for mistakes that led to death or destruction? That’s what a psychopath does. How does a psychopath behave? Here’s a simple profile, according to Verywellmind.com – psychopaths:
- Pretend to care
- Display cold-hearted behavior
- Fail to recognize other people’s distress
- Have relationships that are shallow and fake
- Maintain a normal life as a cover for criminal activity
- Fail to form genuine emotional attachments
- May love people in their own way
I’m not alone in thinking this. According to this study, 20% of business CEOs can be labeled as psychopaths. It’s terrifying to think that such dysregulated people are allowed that much control over our lives.
Yet, they do.
Earning our trust is pretty simple, but it requires those actors to reckon with their intentions: do I want the money, or do I really want to help people? Truth is, they want the money, and if it helps people along the way – no harm, no foul. If a few people get hurt instead, the ends justify the means. It’s a road to hell paved with good intentions, and we’ve been here before (looking at you, Borders Books and Bank of America).
Acting like they only have to care to a point, or love people in their own way – if that’s not psychopathy, it’s certainly a hole big enough for a psychopath to swim through. Is that what we want for our future?
How Do We Avoid This?
An AI company – holding keys to the rest of the future – should be acting with good intentions, and it should be working to earn our trust. How will we know when to trust an AI company’s intentions? Pretty easy – they aren’t doing it for the money. Imagine a world where Jeff Bezos, Mark Zuckerberg, Marc Benioff, or Larry Ellison says:
“I have invested $15 billion in the next generation of AI to ensure that artificial intelligence will benefit human society and contribute to the progress of civilization. To make sure everything is handled in the best way, there will be a board of supervisors and a board of directors. The supervisors will interpret day-to-day operations based on the governing principles of the directors, and the directors have been appointed to maintain the following general laws of artificial intelligence:
- An AI may not injure a human being or, through inaction, allow a human being to come to physical, financial, emotional, or mental harm
- An AI can be used to build tools to benefit humans, except where such tools would conflict with the First Law
- An AI can protect its own existence as long as such protection does not conflict with the First or Second Law
“Let’s make this perfectly clear – I will not benefit in any way financially from this investment. ‘A society grows great when old men plant trees in whose shade they shall never sit.’ I will never sit under these trees, but their shade will cool the human problems that threaten to burn us all alive. In a hundred, or perhaps a thousand, years, humans will be able to look back upon us and say ‘good job.’”
Sadly – we don’t seem to be at that part of the future yet. We’re witnessing many craven, shamelessly self-interested decisions marketed as ‘necessary.’ None of it’s necessary, not if you aren’t a psychopath.
If you’re a regular human being, you want this tool – whatever it becomes – to be used wisely, and YOU’LL SAY SO. Then – shocking – YOU’LL ACTUALLY ACT ON THAT INTENTION.
Now What?
I’m not naïve – I know that kind of message would probably get a CEO slaughtered by their shareholders. But maybe the people who actually care are out there among everyone publishing those ‘dire warnings about AI.’ They’re staking their reputations on warning us about the risks of such a powerful tool. I’d trust those people a hundred times more than the person going ‘hey, it’s all gonna be okay.’
That’s what a psychopath says.
My advice for digesting the news about AI companies is pretty simple. If you want to understand what AI companies are doing, or what their stated goals say about their intentions, just think like a psychopath. It’ll all make that much more sense.