Lawful, not Awful: The Case for Royalties, Regulation, and Taxation of Artificial Intelligence
Plus: the one warning you should ALWAYS remember when trying to solve problems with technology.
We’ve seen a recent explosion of excitement and concern about artificial intelligence, or AI. Its own creators have warned that it could be wildly dangerous and needs to be regulated. It’s worth understanding a bit more about AI and how it works - and why the concerns about it are legitimate and real.
There has been turmoil in the corporate world of AI. Sam Altman, the CEO of a company called OpenAI, was suddenly fired by his board. In a shocking turn, he was immediately hired by Microsoft. Hundreds of employees at OpenAI threatened to quit and join Altman at Microsoft. Days later, Altman was reinstated at OpenAI, and the board that fired him was rejigged.
The reason Altman was fired, as this article in The Guardian reports, was that OpenAI ‘was working on advanced model so powerful it alarmed staff’. The concern with these developments is that really powerful AI will develop into something that humans can’t control - and that can exercise control of its own.
Altman and other tech executives have been urging that AI be regulated, saying it could cause “significant harm to the world.”
There are undoubtedly jobs that are going to be eliminated. Eliezer Yudkowsky wrote on Nov 23, 2023, “Most graphic artists and translators should switch to saving money and figuring out which career to enter next, on maybe a 6 to 24 month time horizon. Don't be misled or consoled by flaws of current AI systems. They're improving.”
First, while the breakthroughs in Artificial Intelligence are new, the idea behind the technology is not. Artificial intelligence isn’t really a good term - it’s more accurate to say “Machine Learning”. You feed a large amount of information into a computer program, and it learns to pick out certain patterns. It can then recombine those patterns to produce new material in a given style.
One of the most famous recent uses was by film director Peter Jackson and the Beatles, who used machine learning to isolate John Lennon’s singing on a tape that originally had other noise, including piano playing.
The program, dubbed “MAL”, was developed by Jackson’s company when he was putting together the film “Get Back”, which covered the Beatles’ recording sessions for what became the Let It Be album. The name MAL is a cheeky reference to HAL, the self-aware computer in 2001: A Space Odyssey, and to Mal Evans, who worked with the Beatles throughout their career.
The software was trained to recognize and pick out individual sounds - people’s voices, guitars, drums. Background noise could essentially be erased, so that voices could be heard in isolation. It’s possible to take a song with multiple parts, all recorded together, and separate the parts out and remix them - which is what has been done with the new Beatles remixes and releases. Even if a song was recorded with multiple voices and instruments into a single microphone, each voice and each instrument can be separated onto its own track. Machine learning was also used to improve the image quality of the original film footage: it does more than just blow up an image - it actually fills in areas and cleans them up.
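MAL itself is proprietary, but open-source tools now do the same general kind of source separation. As a minimal sketch - assuming the freely available Spleeter library from Deezer is installed, and assuming a local recording called song.mp3 - splitting a mixed recording into separate “stems” looks roughly like this:

```python
# A minimal sketch of audio source separation, assuming the open-source
# Spleeter library (pip install spleeter) and a local file "song.mp3".
# This is not Peter Jackson's MAL software - just the same general idea.
from spleeter.separator import Separator

# Load a pretrained model that splits a mix into five "stems":
# vocals, drums, bass, piano, and everything else.
separator = Separator('spleeter:5stems')

# Write each separated stem to its own audio file under ./output/
separator.separate_to_file('song.mp3', 'output/')
```

Jackson’s team went much further, training their software to recognize specific voices and instruments, but the basic idea of pulling a mixed recording apart into its components is the same.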
In the case of Jackson and the Beatles, Jackson could scan in all the Beatles’ recordings and images with their permission to use their intellectual property, and Jackson, the Beatles, and the other rights holders were all compensated.
That’s not always the case, and that is where the problems begin, for two reasons.
First, Machine Learning needs to learn from something, and what it learns is often based on other people’s intellectual property that hasn’t been paid for.
There’s also a creative aspect to this machine learning: it can combine what it has gathered and produce new material from it. It can create music in the style of Bach, paintings in the style of Van Gogh, writing in the style of Shakespeare.
Machine learning can even take someone’s appearance or voice, and generate an artificial performance.
These all create challenges, and the problem is that companies are able to take intellectual property - including people’s voices, appearances, and work - and then generate new facsimiles in their style, all without compensation.
That’s because Machine Learning is based on predicting what’s likely. The “GPT” in ChatGPT stands for “Generative Pre-trained Transformer” - a model trained to generate text by predicting what is likely to come next.
Given a prompt, the program assembles output according to what is likely to go together. It’s possible to analyze a book, or a whole language, and measure the probability that certain letters will follow other letters, and that certain words will follow other words. As a simple example, the letter “Q” in English is almost always followed by a “U”. The same approach can apply to any kind of information - text, software code, audio, or images. Different artists have different styles, and those styles are imitable and predictable.
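To make that concrete, here is a deliberately tiny sketch (an illustration only, not how ChatGPT actually works internally) that counts which letter tends to follow which in a piece of text; the sample sentence is invented for the example:

```python
# Count how often each letter is immediately followed by each other letter,
# and treat those counts as probabilities. A toy illustration only.
from collections import Counter, defaultdict

def letter_follow_counts(text):
    """Return, for each letter, a tally of the letters that follow it."""
    follows = defaultdict(Counter)
    letters = [c.lower() for c in text if c.isalpha()]
    for a, b in zip(letters, letters[1:]):
        follows[a][b] += 1
    return follows

counts = letter_follow_counts("Quiet quails question quick quartz aqueducts frequently.")
print(counts['q'].most_common(1))   # -> [('u', 7)]: "q" is almost always followed by "u"
```

Scale that idea up from letters to words, sentences, pixels, or sounds, train it on enormous amounts of data, and you have the core of generative AI.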
One of the problems with Machine Learning programs, and why AI isn’t intelligent, is that while it is very creative, it lacks judgment. (Perhaps, just perhaps, like its creators and owners).
When it comes to generating images, fingers and limbs tend to be difficult for AI. Extra fingers, or hands and fingers sprouting from feet, are common.
The basic ideas of machine learning - and generative AI - have been around for a long time. Fifty years ago, the process was described in a collection of wonderful short stories, The Cyberiad: Fables for the Cybernetic Age, by the brilliant Polish science fiction author Stanislaw Lem. Originally published in 1965 and translated into English in 1974, it describes two rival inventors, Trurl and Klapaucius; one of them decides to create an “Electronic Bard” that will write poetry.
Trurl does exactly what researchers do today - he feeds the machine all the poetry he can find. The computer then extracts an algorithm of the style - basically, a database of the probabilities with which letters, words, and subjects follow one another. Then, when offered a prompt, it generates text.
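In the same simplified spirit as the letter-counting sketch above, here is a toy “Electronic Bard” that does what the story describes: it is fed some “poetry” (a few invented lines, purely for illustration), records which word tends to follow which, and then extends a prompt by sampling from those probabilities:

```python
# A toy "Electronic Bard": learn word-follow counts from a tiny invented
# "corpus", then generate text from a prompt by sampling likely next words.
import random
from collections import Counter, defaultdict

poetry = (
    "the moon sails over the silent sea "
    "the sea sings under the silent moon "
    "the moon and the sea keep silent watch"
)

# "Feed the machine all the poetry": record which word follows which.
follows = defaultdict(Counter)
words = poetry.split()
for a, b in zip(words, words[1:]):
    follows[a][b] += 1

# "When offered a prompt, it generates text": repeatedly sample a likely
# next word from the recorded counts.
def electronic_bard(prompt, length=12):
    out = prompt.split()
    for _ in range(length):
        counter = follows.get(out[-1])
        if not counter:
            break
        choices, weights = zip(*counter.items())
        out.append(random.choices(choices, weights=weights)[0])
    return " ".join(out)

print(electronic_bard("the moon"))   # babbles plausibly in the "style" of its training lines
```

Real systems use neural networks trained on vastly more data, but the underlying principle - predict what is likely to come next - is the same.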
In Lem’s story, Klapaucius, green with envy, tries to stump the machine with this challenge:
“Compose a poem – a poem about a haircut! But lofty, noble, tragic, timeless, full of love, treachery, retribution, quiet heroism in the face of certain doom! Six lines, cleverly rhymed, and every word beginning with the letter S!”
He thinks he has won, when the machine recites the following verse:
“Seduced, Shaggy Samson snored
She scissored short, sorely shorn
Soon shackled slave, Samson sighed
Silently scheming
Sightlessly seeking
Some savage, spectacular suicide”
Eventually, the Bard becomes a real threat. It can write any poem at all in any style - so poets are all out of work. People try to shut the Bard down, but when they approach it, it dissuades them with an emotional plea for clemency that leaves them in tears. In the end, it’s unplugged and shipped into space, where it sends signals through the stars.
Lem had a wonderful knack for crafting what are basically fairy tales and folk tales, peopled by robots, that wrestle with the moral and philosophical implications of technological advances.
This is a new technology with enormous disruptive potential - and we need to be smart about the technology, regulation, and legislation of all of it.
There are five problems that are evident - and many more we may not be aware of.
1. Artificial Intelligence lacks Judgment
Here’s the problem: as mentioned above, generative machine learning lacks what we could call judgment and responsibility, and it breaks from reality. It has been asked to come up with something based on the information in its databases, so it settles on one of the many things it could come up with, within the framework provided.
In short, it makes stuff up, and says things without checking to see if it’s real or not. It may seem a radical question to ask but - is that something we really need more of in this world? A machine that makes things up? Just like a human being except it can make mistakes even faster?
This is not a small problem. It also has interesting legal implications.
If “AI” generates new content - like a photo, or text - who owns the rights to that creation? The answer, courts have determined, is no one. This echoes the ruling on a quite famous photo that a monkey took of itself, after photographer David Slater left his camera with a troop of macaques he was photographing in Indonesia in 2011.
That photo went viral.
Slater claimed he should have been paid royalties, but was denied because of the dispute over who held the rights: Slater had not taken the photograph - the monkey had. And monkeys don’t have intellectual property rights.
In December 2014, the United States Copyright Office stated that works created by a non-human, such as a photograph taken by a monkey, are not copyrightable.
A similar ruling has arisen with AI - but it’s relevant here because AI and machine learning have been made possible by other people’s intellectual property. And unlike Peter Jackson and the Beatles, those creators are not being compensated or making money from it.
2. Someone else worked to create the information in the database. They should be compensated for the commercial use of their intellectual property.
All the information used to “teach” machine learning comes from somewhere. Machines are trained on webpages, or on scans of art, music, video, photos and film. The machine learning may very well have captured an author’s or artist’s unique style, and will now be used to replace that creator’s work - and take their job - with no compensation.
The issue is discussed here, in a piece about venture capitalists complaining that “potential copyright liability for AI developers could harm the interest of their investors”: “Imposing the cost of actual or potential copyright liability on the creators of AI models will either kill or significantly hamper their development.”
In other words, these businesses can’t succeed if they pay the people responsible for their success.
As David Newhoff writes:
First the AI developer “trains” its model by feeding it millions of creative works, all used without permission from the rightsholders. Next, the AI developer hopes to sell its system to enterprise users—businesses that will, in theory, no longer need to hire the same professional creators whose works were rustled to develop the AI. And finally, the AI developer will protect said business user against potential infringement claims by that same class of professional creators (at least until there are no more creators left). Maybe this isn’t quite how things will go, but in principle, it looks a lot like looting a neighborhood and then erecting legal barriers to prevent the residents from remedying the theft.
It has to be said that this is exactly the business model of many internet companies, including the largest ones: social media platforms such as Twitter, Facebook, Reddit, Instagram, TikTok, and YouTube, as well as search engines like Google and Bing, and streaming services. They take others’ intellectual property either without compensation, or with compensation that amounts to a fraction of a cent in almost any currency you can imagine.
If we are going to talk about having an information economy, it has to be an economy that will respect creators’ intellectual property rights, respect copyright and ensure that royalty regimes are modern, adequate, and enforced.
If AI really is the next big thing - then let’s get it right, right at the beginning. The current copyright and royalty regimes for the internet are virtually non-existent, and that is making it impossible for creators - whether they are artists, professional journalists, or media organizations - to make a living. The work they do is important - important enough for these corporations to take and use. If it’s so essential to their success, how can they argue it has no monetary value?
And to be clear - this is not just an issue of compensation - it is an issue of restoring order and balance to a market that is dysfunctional because intellectual property rights aren’t being enforced.
3. How is this going to be weaponized?
This is a question that many in the tech world don’t seem to ask. In the 1990s, when internet pioneers and evangelists rhapsodized about the internet, they talked about their utopian ideals of a network that would be free and democratic. In retrospect, given how much information technology was originally developed for military purposes - almost all of it - the idea that it could be weaponized shouldn’t be a surprise.
But generative AI is already being used to produce fraudulent videos and AI-generated reproductions of people’s voices. It can be used to impersonate people - in fraud, extortion, or blackmail schemes - and to create and manage multiple fake online identities for criminal schemes or political manipulation.
That is just one facet of what generative AI could do, because it represents a new level of fulfillment of what the original computer was conceived as being: a universal machine, that can create and become a virtual version of anything. That makes it incredibly powerful, and anything that is powerful can be powerful for good and for bad. It can be used as a tool, as a shield, and as a sword. And we need to be absolutely clear about this at the ground floor. If there is a crime that exists now, how is AI going to make it worse? And maybe how can AI be used to make it better?
These are questions that need to be answered. We need legislation and regulation around AI, as a matter of the rule of law.
4. Keeping people safe will add a whole new burden for government and the justice system: to pay for it, AI should be taxed.
We need action to deal with the crimes and scams that will absolutely, certainly be committed using AI, as well as to improve cybercrime investigations. That requires investment to support new investigations and pursue prosecutions.
It’s extremely easy for these technologies to be reproduced and “weaponized” for malicious purposes - especially in ways that steal people’s information, violate their privacy or their intellectual property rights, or manipulate them.
Really what is required is just to enforce the same kinds of rules, regulations and laws that already apply to every single other industry. It’s about bringing these companies under the rule of law. That’s it.
Saying that people and corporations should follow the law and pay their taxes, and that people should be paid for their work, should be considered just about the squarest, least radical thing you can imagine. Because it is.
5. A warning: catastrophic risks and the corruption of quality
The warning comes from a paper called “Normalized Deviance in Health Care Delivery”, a chilling and eye-opening read about how bad and substandard practices become the norm, including repeated, dangerous rule breaking.
Whether because it is hard to raise an issue for fear of retaliation, or because people routinely break the rules, the basic attitude becomes: “if it hasn’t happened yet, it never will.” Investigations into major disasters show a pattern of repeated incidents and rule breaking before tragedy strikes. That happened before the space shuttle Challenger exploded in flight, and it happened before the massive chemical plant disaster in Bhopal, India.
The fundamental problem of bad practices becoming routine happens in health care, as well as in many other settings. Addressing a broken system requires making it safe for people to speak up and point out problems so they can be fixed, because many of these system problems are human organization and communication problems.
The great temptation - especially for everyone working in technology - is to believe that if only the human factor can be replaced, the machine will be perfect: the driverless car, the workerless factory. The assumption is that human beings are the weak link in the chain - when in fact they have real advantages.
But John Banja offers a warning about using machines to solve human problems that I have always found chilling:
“When new technologies are used to eliminate well-understood system failures or to gain high precision performance, they often introduce new pathways to large scale, catastrophic failures. Not uncommonly, these new, rare catastrophes have even greater impact than those eliminated by the new technology.”
This is not about being a luddite, or a techno-skeptic. A human being with judgment can be the person who stops a deadly risk from spreading.
This is not an abstract question. Forty years ago, on 26 September 1983, Stanislav Petrov, a Soviet duty officer, may well have prevented a nuclear war when he overrode an early-warning system that had detected what it classified as a nuclear missile launch.
In the early hours of the morning, the Soviet Union's early-warning systems detected an incoming missile strike from the United States. Computer readouts suggested several missiles had been launched. The protocol for the Soviet military would have been to retaliate with a nuclear attack of its own.
But duty officer Stanislav Petrov - whose job it was to register apparent enemy missile launches - decided not to report them to his superiors, and instead dismissed them as a false alarm.
… If he was wrong, the first nuclear explosions would have happened minutes later.
"Twenty-three minutes later I realised that nothing had happened. If there had been a real strike, then I would already know about it. It was such a relief," he says with a smile.
Given the absolute obsession that people have with replacing human beings, with the belief that machines can do things better, more cheaply, or more profitably, the new risks of new technology have to be taken seriously.
This risk cannot be emphasized enough.
One of the basic insights into human catastrophes, and into how “bad practices” become standard, is that “Murphy’s Law” - the idea that “whatever can go wrong, will” - is wrong.
“Murphy's Law is wrong—what can go wrong usually goes right. But then one day a few of the bad little choices come together, and circumstances take an airplane down. Who, then, is really to blame?” That was the argument of a professor looking at the circumstances that led to the horrific crash of ValuJet flight 592 into a swamp in May 1996, “killing two pilots, three flight attendants, and 105 passengers.”
An investigation later found that the crash was caused by a fire: boxes full of expired oxygen generators ignited in the forward cargo hold shortly after takeoff. The canisters were the kind usually used for the oxygen masks that drop down during a flight.
There are other kinds of technology and software that can also put people at risk.
In this article from 2016, programmers and coders discussed the unethical and illegal code they had been asked to write. A post written by teacher and programmer Bill Sourour, “Code I am Still Ashamed of”, went viral.
Sourour described how, while writing code for a pharmaceutical company’s website, he “was duped into helping the company skirt drug advertising laws in order to persuade young women to take a particular drug… He later found out the drug was known to worsen depression and at least one young woman committed suicide while taking it.”
Sourour wrote his article after watching this talk:
Robert Martin said that in today's world, everything we do - buying things, making a phone call, driving cars, flying in planes - involves software. And dozens of people have already been killed by faulty software in cars, while hundreds have been killed by faulty software during air travel.
"We are killing people," Martin says. "We did not get into this business to kill people. And this is only getting worse."
This is all before the current furore over AI, where the actual content of the code is a black box.
We need to remember that these are tools - and that new tools shape us and change our relationship to one another and to work. There are benefits, and there are real risks and real damage.
We always have to continue to work on the human side of the equation, and remember that all the value in the technology is its value to us.
There’s a wonderful book for lay readers about structures and engineering, called “Structures: or, Why Things Don’t Fall Down.” It’s a classic, and the author, J.E. Gordon, developed a lot of new technologies himself, including working on the creation of fibreglass for construction purposes during the Second World War. He was a professor, an engineer and a naval architect, and he also served as an expert witness in inquiries following accidents - plane crashes, boats sinking, bridges failing, and so on.
In it, he has some incredibly important insights:
“In the course of a long professional life spent, or misspent in the study of the strength of materials and structures, I have had cause to examine a lot of accidents, many of them fatal. I have been forced to the conclusion that very few accidents just ‘happen’ in a morally neutral way. Nine out of ten accidents are caused, not by the more abstruse technical effects, but by old-fashioned human sin, often verging on wickedness.
Of course I do not mean the more gilded and juicy sins like deliberate murder, large-scale fraud or sex. It is squalid sins like carelessness, idleness, won’t-learn-and-don’t-need-to-ask, you-can’t-tell-me-anything-about-my-job, pride, jealousy and greed that kill people.”
For many years, the philosophy of many Silicon Valley companies was to “move fast and break things.” We all know there are real benefits and convenience to technology, but we can’t pretend there are not real costs - sometimes terrible human costs - associated with harms from that technology.
That is why there is a compelling need for us to ensure that we have an effective regulatory framework and taxation to deal with some of the inevitable harms that will come from the malicious use, and unintended negative consequences of AI - and to do it before it causes a crisis that is too big to recover from.