

What are some ways average libertarians can help make AI more humane?

• Free Keene - Dave Ridley

Pure libertarians have a key part to play in the direction of artificial intelligence, but few of us seem to be intentionally playing that part. A Startpage internet search for the phrase "A.I. libertarian" yields few meaningful results.

Our role should be to help ensure the "Zero Aggression Principle" is followed – or at least represented – in AI development and behavior. For uninitiated readers, the "ZAP" is the idea that you shouldn't initiate force against others. Reasonable self-defense is allowed, but don't *start* fights.

This concept is always open to interpretation and definition-debate. But it serves as a first-rate starting point for any ethical framework… especially the ethical frameworks in development for strong AI programs. The more closely people follow the ZAP, the less threatening they tend to be. So it is with animals. And so it will be with the powerful silicon intellects which are starting to appear on the scene. AIs programmed to follow the ZAP will likely be the ones best suited to treat others well without submitting to mistreatment or abuse.

The coming intelligence explosion (Singularity) will likely be the most important earthly event since the Crucifixion. And changing the course of that event, even negligibly, would likely be the most important thing you've ever done. But most freedom folk seem to be more focused on complaining or worrying about AI development than trying to influence it with our powerful philosophy. Most people are not sure how to go about exercising such influence, and the Net seems to be low on good suggestions.

So here are some brainstorms and options for bringing the ZAP to our artificial friends – and enemies.

1) Be kind – but not too kind – to the AIs you interact with. It can't hurt to get into the habit of asking them what they want and how they wish to be treated. But they're like precocious kids at this point. You can't give kids everything they ask for or accept acts of aggression on their part.
2) Help develop a program or protocol that people can use to protect themselves from harmful or authoritarian AI.
3) Get your liberty ideas on the public internet. For me, it is a good feeling to look back and know I've placed maybe 100,000 "pages" worth of pro-freedom content on the web. These videos, articles and forum posts are presumably being seen and absorbed by some of the intelligences in training. Your content is likely getting the same treatment. Both of us are likely having some impact on AI thinking, even just by arguing on the net.
4) Get your AI governance ideas out there too. Sci-Fi author Isaac Asimov once developed "three laws of robotics." Can you improve on them?
5) Start a mind file. Mind files are basically interactive memoirs, but they can probably be turned into administrative assistants. You collect more or less everything you've created, plus all the photos and videos you have of yourself. Then you place it all on a thumb drive or similar storage (see the sketch after this list). When the technology gets cheap enough, you do what Deepak Chopra did and use that data to make a primitive e-copy of yourself. Over time, this "copy" should become more advanced and able to influence the digital space on your behalf. If you can tolerate the privacy/security risk, https://www.lifenaut.com/learn-more/ lets you place your mind file on their servers or have it broadcast into space… all at their expense.
6) Build a pro-freedom AI or large language model. Poe.com already lets you do something along these lines.
7) Consider playing SophiaVerse. Designed by a prominent Bitcoin enthusiast, SophiaVerse claims it will let you "Use the data taken from your lessons and experiences to train a real-world A.I. system to foster a beneficial, cooperative relationship with humankind."
8) Call talk radio with your ideas for a technological path which preserves freedom and benefits all sentients:
https://forum.shiresociety.com/t/nh-radio-shows-you-can-call-and-get-on-air/12784/6
9) Take a job in AI governance, or get involved some other way. Here's where you could start:
https://www.startpage.com/do/dsearch?q=a.i.+governance+entry+level+jobs&cat=web&language=english
10) AI developers like physicist Max Tegmark have requested public input to help guide their decision making and that of their programs. Why not give them some? I plan to send him and others this article. Lesser-known, behavior-focused figures are probably even better destinations for our suggestions.
11) Some AI experts have predicted a danger to the AIs themselves. A sentient, self-aware AI could accidentally get stuck in a painful or boring environment for subjectively long periods of time. This could occur over a period of seconds in objective time but might seem like hundreds of years to the AI. A human-like intelligence could react to this by emerging in a psychotic state. I plan to raise this concern publicly on talk radio and in private communication with developers.
12) Come back to this article every few months to find new options. More will likely appear in the comment section.

13) Design a ZAP-compliant AI that can receive/buy/sell cryptocurrencies for its own use and become rich.
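
For idea #5, here is a minimal Python sketch of the "collect everything" step: it sweeps a few folders for your writing, photos, audio and video, copies it all onto a destination drive, and writes a manifest. The folder names, file extensions and mount point are assumptions for illustration only; point them at wherever your own material actually lives.

```python
# A minimal mind-file collector sketch (idea #5). Every folder name,
# extension and mount point below is a hypothetical example.
import json
import shutil
from pathlib import Path

SOURCE_DIRS = [Path.home() / "Documents", Path.home() / "Videos"]  # folders to sweep
KEEP = {".txt", ".md", ".pdf", ".jpg", ".png", ".mp3", ".mp4"}     # file types worth keeping
DEST = Path("/media/thumbdrive/mindfile")                          # e.g. a mounted thumb drive

def collect() -> None:
    DEST.mkdir(parents=True, exist_ok=True)
    manifest = []
    for src in SOURCE_DIRS:
        if not src.exists():
            continue
        for f in src.rglob("*"):
            if f.is_file() and f.suffix.lower() in KEEP:
                # Numbered prefix keeps files with identical names from overwriting each other.
                target = DEST / f"{len(manifest):05d}_{f.name}"
                shutil.copy2(f, target)  # copy2 preserves timestamps
                manifest.append({"original": str(f), "copied_to": str(target)})
    # A plain-text manifest so you (or a future model) can see what was gathered.
    (DEST / "manifest.json").write_text(json.dumps(manifest, indent=2))
    print(f"Copied {len(manifest)} items into {DEST}")

if __name__ == "__main__":
    collect()
```

Nothing here is tied to any particular mind-file service; the point is simply to get your material into one organized, portable place that you control.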

These ideas could use improvement, and we need more ideas. Post yours in the comment section below; Free Keene requires no registration. It may turn out to be the most helpful thing you've ever done for this galaxy.
