March 28, 2024 1:53 AM

Opinion: Concrete Steps for Our Brave New AI-Infested World – Inside Sources

A few days ago, I tried ChatGPT and DALL-E 2 for the first time. My mind immediately fell into an abyss. I was consumed by visions of a dystopian future in which human creativity ceased to be, a future in which Artificial Intelligence programs such as these were the only artists and composers and writers left working.

And then, right when I thought I hit bottom, the abyss laughed and said, “Caleb, you ignorant fool,” and it opened up a trap door, and I fell again, deeper and deeper until — finally — I was in a wasteland the likes of which neither Isaac Asimov nor Aldous Huxley could have imagined. I lived there for two days. It was bleak. I’d probably be there still if it weren’t for my wife. She’s the one who pulled me from my doom by doing what she does best. She called me on my crap.

“You loathe defeatist sad-sacks,” she pointed out, “so why are you being one?”

It was a simple point and a good one. Clearly, she was right. So, naturally, I argued with her for a very long time. Then, after an hour or five, I finally acknowledged that she was right in this case. Then I began my ascent back into the light, and now, with the sun once more on my face and the fight back in my belly, I write to offer up some concrete steps that we can take to deal with this brave new AI-infested world.

First, we need to acknowledge that neither the government nor the free market should be given too much power over tools of this nature. The former is far too likely to use it in authoritarian ways, and the latter is far too likely to sacrifice long-term goals such as individual growth and human flourishing for the short-term goal of quick profits. This stance will undoubtedly ruffle the feathers of some extremists on both the big-government left and the free-market absolutist right. That’s fine.

They should voice their disagreement, and we should listen. But we should not get bogged down by them, because we can’t afford to. We need to find practical solutions as quickly as we responsibly can, and to do that we need to sketch out some loose boundaries and create a reasonably sized playing field in which to bat around plausible policy solutions. So, that’s what I’ve done. And now that we have that field marked off, here are some initial ideas to start the batting.

One, all products (books, paintings, cartoons, etc.) that are made in whole or in part by AI should be clearly labeled as such, and the label should be big. I’m not talking about a made-in-XYZ-country label; I’m talking about a label that’s impossible to ignore. Such a conspicuous marker will obviously not stop everyone from purchasing a song or book written by AI, but it will stop some, and that will be enough to keep people employed in creative fields, which will, in turn, be enough to keep subjects like writing and music alive in our schools.

Two, copyrights should not be granted to anything created by AI. There are a few reasons for this, but the only one that needs to be stated is the most obvious. Namely, it is patently absurd to think that someone should be able to copyright something that they “created” by typing a few words into an AI program. It took Chuck Close four months of painting and a lifetime of practice to produce his breakthrough work, “Big Self Portrait.” Obviously, I should not get the same legal protections for an image that I “created” by typing “realistic painting of a scruffy white guy in glasses smoking a cigarette” into a text box.

Three, AI that is trained on a person’s compendium of work and then uses that data to “create” something “new” should either pay royalties to the original copyright holder or be held accountable for plagiarism and copyright violation. The first thing I entered into ChatGPT was, “Tell me about dogs in the style of Charles Bukowski,” and what ChatGPT produced blew me away — at least at first. But then I realized that the program wasn’t creating anything. It was just paraphrasing Bukowski’s work. This sort of thing might not be a huge problem when it is a handful of individual writers doing it, but when it is an AI program being used by billions of people, that’s a different problem altogether, and it must be addressed.

Four, OpenAI and other similar companies (including search engines) should be required to display, very clearly and very publicly, the specific values and goals embedded in their algorithms. This will help people realize that the information they are receiving is not the capital-T truth but rather a curated set of information rooted in specific values and goals that the individual user may or may not share.

That’s it. That’s all I’ve got. Is it enough? No. But is it a step in the right direction? Possibly, also no. I’m a writer and political scientist, not a computer programmer or IP attorney or AI developer. Perhaps what I proposed is entirely unworkable. If so, fine. Tell me. But then help out. Come up with some concrete solutions of your own. If enough of us stop handwringing and pontificating and start offering viable solutions, I’m confident we can all avoid the abyss and stay in the sun.
