Today, some simple ways to fix AI. I want to share a modest proposal to resolve complaints / concerns about AI using your data – I call it the ‘AI Permanent Fund.’ This is going to take a minute, but it’s some seriously cool / fun futurology research. Settle in and learn about a simple way for us to keep our future fun and exploitation-free. Ready? Here goes:
One of the things a proper sci-fi author is going to do is think about future tech and its impact on people. For example, cyberpunk was born out of 1970s/1980s anxieties about social isolation and class divides. Similarly, we’re going down a path of AI technology and products. How will our current themes of inequality, distrust of tech-broillionaires, and social disorder play into the future we’re making for ourselves?
People Are Scared of What AI is Becoming
You don’t have to be a futurologist to understand that people are terrified of AI being used as a tool to make the rich richer and the poor poorer. No joke – the average citizen of 2024 is aware that their ability to know the facts, choose their own destiny, and secure their own life and liberty is under threat from a variety of sources. You feel that fear, that awareness, in the news about AI. For example:
- Tons of people freaking out about their data being used to train AIs
- LinkedIn is training AI on you
- Unless you’re in the EU, there’s no way to opt out of Meta using your public Facebook or Instagram posts for AI training.
Long story boring: people aren’t sure what AI is going to do to us, and these exploitative strategies by large companies do nothing to address those fears and insecurities. It won’t be long until somebody comes up with a whackadoo conspiracy theory surrounding AI, and then all bets are off. Don’t believe me? Look at the conspiracies surrounding 5G towers and FEMA. I’m not here to re-hash those, just to make the point.
How Do We Pivot?
Humanity will need a system to address concerns about exploitation and inequality when it comes to artificial intelligence. Last week, I was reminded of a historical system that I think – with some tweaks and tuning – could address most concerns for most people. Before I describe it, here’s some back story:
In 1969, Alaska took in a windfall of oil money from the Prudhoe Bay lease sale – the discovery that would justify the Trans-Alaska Pipeline System. It was felt at the time that all that incoming money was being spent inefficiently and should be put into a fund outside of direct political control. The fund idea started in 1969 but didn’t see daylight until 1976, when voters wrote the Permanent Fund into the state constitution. Since 1982, Alaska has been giving every woman, man, and child an annual chunk of its nest egg. Now the APF ain’t perfect, but it’s been doing some good for people. Permanent residents of Alaska get roughly $1,700–$1,900 per year, and benefits from the fund include increased short-term employment and reduced poverty rates.
AI Permanent Fund – One of the Simple Ways to Fix AI
So you take that idea and then you look at AI data. Fun fact – as far back as 2017, people have been calling data and metadata the ‘gold’ and ‘oil’ of the 21st century. Data is an exploitable resource, like oil. So … what if we managed AI data the way we manage oil and natural gas? That suggests – as an interesting but tangential sidebar – that the future United States might have a Federal Data Regulatory Commission, just as it has a Federal Energy Regulatory Commission.
But before we get there, what if we looked at data the way Jay Hammond looked at the oil of Alaska – a means to transform metadata pumping stock-market returns for a finite period into money wells that pump money for infinity? Just as the Alaska Permanent Fund ‘gave to everybody, from the poorest to the richest, a fair share of the money that they actually own,’ the AI Permanent Fund could give everyone a fair share of the value of their personal data.
This isn’t that far-fetched – Cloudflare’s already got a solution in place to make AI companies pay for the data they scrape. That works to a point, but we could start working on a universal solution that doesn’t require individual participation and buy-in. We could treat AI data scraping – metadata – like oil and compensate people for the value of their data. We could stop treating people as an exploitable resource and start treating them like investors who receive value for their contribution.
That also suggests that a future Federal Data Regulatory Commission would establish a royalty on data and metadata. So far, the GDPR treats personal data protection as a universal human right, but nobody’s established a monetary value, yet.
Yet. It’s probably coming sooner than we know.
How the Fund Would Work
So, problem one – we’re not regulating how companies use data; that’s probably coming in the near future. Problem two – we haven’t established the value of that data, but clearly there is value, if the valuations of Google, Facebook, and OpenAI are any indication.
Once those two issues are resolved, the idea of compensating the public for the value of their data becomes obvious. The Alaska Permanent Fund shows how that might work:
- The fund would receive a % of the metadata royalties each year
- The AiPF (AI Permanent Fund) would invest those royalties in a diversified portfolio of assets
- A Board of Trustees would oversee the AiPF
- The AiPF would invest across asset classes – with the goal of achieving high returns while maintaining a well-diversified portfolio
- A portion of the fund’s earnings gets distributed annually to eligible residents as an AI Permanent Fund Dividend (AI PFD) – individual shares weighted by who’s using your data, where, and why (see the sketch below)
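To make the mechanics concrete, here’s a minimal sketch, in Python, of how the annual dividend math might work. The payout rate, the 50/50 split between a flat per-person share and a usage-weighted share, and the usage_count input are all hypothetical choices of mine for illustration, not part of any real proposal.

```python
# Hypothetical sketch of an AiPF annual dividend calculation.
# All figures and weighting rules are invented for illustration.
from dataclasses import dataclass

@dataclass
class Resident:
    name: str
    usage_count: int  # how many AI companies used this person's data this year

def annual_dividend_pool(royalties: float, fund_return: float,
                         payout_rate: float = 0.05) -> float:
    """A percentage of metadata royalties plus fund earnings feeds the payout pool."""
    return (royalties + fund_return) * payout_rate

def distribute(pool: float, residents: list[Resident]) -> dict[str, float]:
    """Half the pool is split evenly; half is weighted by data usage."""
    base = (pool * 0.5) / len(residents)
    total_usage = sum(r.usage_count for r in residents) or 1
    return {
        r.name: base + (pool * 0.5) * (r.usage_count / total_usage)
        for r in residents
    }

residents = [Resident("Ana", 12), Resident("Bo", 3), Resident("Cy", 0)]
pool = annual_dividend_pool(royalties=9e9, fund_return=3e9)
for name, amount in distribute(pool, residents).items():
    print(f"{name}: ${amount:,.2f}")
```

The interesting design question is that split: a pure per-head dividend is simpler (that’s the Alaska model), while usage weighting rewards the people whose data actually feeds the models.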
The AiPF’s aim would be to provide a renewable source of revenue for all citizens, benefitting current and future generations of Internet users. It wouldn’t be a perfect answer, but it would get us off the psychopathic path AI companies are on now.
Potential Issues / Mitigation Strategies
An AiPF as ‘one of the simple ways to fix AI’ is actually pretty complicated. We know that. An AiPF would create some problems on its way to solving others. Accurate tracking of data scraping, the scope and scale of royalties, privacy concerns, and fair compensation all come to mind. Plus, if you’re talking about a fund – who manages it, and how do we hold them accountable and protect them from bad influence?
I don’t have all the answers, but here are some ideas:
- Resolve data attribution issues with blockchain tech and standardized metadata – creating records of data provenance and usage with a metadata standard for tracking (see the sketch after this list)
- Opt-in / opt-out – let people choose whether their data is available for AI training, educate people about their rights, and audit the AI companies for compliance. You’d also create flexible policies and principles that change over time while still maintaining a standard of data privacy and autonomy.
- AI-powered attribution – AI systems that identify and attribute content
- Fund management – manage the AiPF with transparency, integrity, and accountability – the way we’ve been managing public trusts and pension funds for over a century
- International cooperation – the AiPF would have to co-exist with global systems of cooperation between nation-states on metadata use, privacy, and royalties
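For the first bullet above, here’s a toy sketch of what a hash-chained provenance log could look like. The record fields, the chaining scheme, and the company names are assumptions for illustration – not an existing metadata standard or product.

```python
# Toy hash-chained provenance log for data-scraping events.
# Fields and chaining scheme are illustrative assumptions, not a real standard.
import hashlib
import json
import time

def record_hash(record: dict) -> str:
    """Stable SHA-256 over the record's canonical JSON form."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class ProvenanceLog:
    def __init__(self):
        self.chain: list[dict] = []

    def append(self, data_owner: str, scraper: str, purpose: str) -> dict:
        """Log one scraping event, chained to the previous record so
        past entries can't be silently altered."""
        record = {
            "owner": data_owner,    # whose data was used
            "scraper": scraper,     # which company took it
            "purpose": purpose,     # why (training, fine-tuning, ...)
            "timestamp": time.time(),
            "prev": record_hash(self.chain[-1]) if self.chain else None,
        }
        self.chain.append(record)
        return record

    def verify(self) -> bool:
        """Recompute the chain; tampering with any record breaks a prev-hash link."""
        return all(
            rec["prev"] == record_hash(self.chain[i - 1])
            for i, rec in enumerate(self.chain) if i > 0
        )

log = ProvenanceLog()
log.append("ana.example", "BigModelCo", "model training")
log.append("ana.example", "SearchCorp", "search index + retrieval")
print(log.verify())  # True; edit any past record and this flips to False
```

A royalty audit then becomes a matter of replaying the log and counting events per owner – exactly the usage_count that feeds the dividend sketch above.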
These are all big challenges – but since the alternative is a violent, hopeless dystopia, maybe it’s worth a shot.
Our Future – Our Hands – Partnership Over Prevention
I hope this makes some sense – we’re not going to get rid of AI anytime soon. It’s better for us to contemplate a future where we’ve created partnership, not prevention. Our future can and should still be in our hands. This may be a simple way to keep power in balance. I don’t want to contemplate a future where 21st-century Luddites are burning down the OpenAI offices, and frankly, neither do you.
Thanks for letting me share this simple way to fix AI. If we’re going to build the future we want, become the people we know we can be, it has to come from inside. Science fiction is a weapon in the war against our dystopian reality.