Ethical Concerns Indicate Need For AI Regulation

July 3, 2023

The latest AI software to draw attention is ChatGPT. Hundreds of articles, webinars and podcasts have tackled the subject since the release of its free version in November. However, ChatGPT is just one example of how artificial intelligence is shaping lives, industries and social understanding.

Experts and news outlets have celebrated its potential advantages, as AI’s capabilities allow it to operate far more efficiently than human beings can, said Rashi Maheshwari of Forbes, adding that AI has “pushed the boundaries of the way computer machines used to operate and functions to make human lives easier.”

There are several reasons to celebrate this technological development and its potential. For example, AI can reduce human errors, automate repetitive tasks and processes, smoothly handle big data, and facilitate quick decisions, according to Maheshwari.

However, as with many new technologies, there is a lot that could go wrong with AI. While there are drawbacks to AI’s use in business, there also are ethical concerns. The developing technology poses “challenges to our freedoms and standard of living,” wrote Alice Norga of the Civil Liberties Union for Europe.

Egypt ranked second in Africa, after Mauritius, in a 2022 research paper by Draya on world governments’ readiness to implement AI technology. As Egypt tackles its own AI implementation and challenges, it is essential to look at what other countries are doing.

Robot apocalypse?

AI poses several risks for businesses, such as expensive implementation, damage to infrastructure, reputation-damaging mistakes, and increased vulnerability to cyberattacks, according to Tyler Weitzman of Forbes.

However, most individuals, institutions, and policymakers seem to be most concerned about ethics and the potential “loss of control of our civilization,” according to an article in The Economist.

While a “robot apocalypse” is an extreme view, it is not entirely without merit. The “nightmare” scenario, according to The Economist, is one in which “advanced AI causes harm on a massive scale by making poisons or viruses, or persuading humans to commit terrorist acts.” Ultimately, researchers worry “that future AI may have goals that do not align with those of their human creators.”

However, these scenarios require “a leap from today’s technology,” according to The Economist. Additionally, they would need advanced AI to have unlimited access to energy, money, and computing power, which are scarce resources in today’s world and far from guaranteed in the future.

Most computer engineers, philosophers and policymakers agree AI will not take over the world; in a 2022 survey of AI researchers, only 10% believed AI’s effects would be “extremely bad.”

However, experts are concerned about the more minor effects the technology might have on people’s lives and livelihoods.

AI risks assessed today are more closely related to bias, privacy, and intellectual property rights. One example found in a report by the U.S. Department of Commerce is that some facial recognition AI software tends to misidentify people of color. If used by law enforcement, these biases could potentially lead to wrongful arrests. According to Bernard Marr of Forbes, that is mainly because human programmers choose the datasets and information used to train AI software and algorithms. He adds that without “extensive testing and diverse teams, it is easy for unconscious biases to enter machine learning models.”

The risks AI could pose to privacy are closely related to those it poses to business: mistakes and vulnerability to cyberattacks. These risks stem from “the potential for data breaches and unauthorized access to personal information,” which grows as companies collect ever-larger volumes of data through AI, according to an article by the Economic Times of India. Information falling into the wrong hands is a risk for a spectrum of reasons, from unwanted spam calls and emails all the way to identity theft.

For generative AI software such as ChatGPT and DALL-E, copyright infringement is a significant issue. At the very least, it “poses legal questions,” said Gil Appel, Juliana Neelbauer and David A. Schweidel for Harvard Business Review.

Generative AI can blur the lines of ownership of art, they explain, as it is unclear who owns the poems or artwork the software produces. Is it the artist whose work was used (without consent, which is an additional ethical concern) to train the software? Is it the company that trained the software? Is it the user who entered the prompt?

Until these issues are resolved, Appel, Neelbauer and Schweidel suggest companies and individuals “need to take new steps to protect themselves.”

Governing AI

While not a sci-fi movie’s version of a robot apocalypse, existing risks of AI still show “regulation is needed,” according to The Economist.

Business leaders agree. DataRobot’s State of AI Bias report shows 81% of business leaders want government regulation to define and prevent AI bias. Legislation could clear up a lot of ambiguity and allow companies to move forward and step into the enormous potential of AI, explained Ted Kwartler, vice president of trusted AI at DataRobot.

The United Kingdom, United States, Europe and China are currently approaching regulation at the governmental and legislative levels. However, those countries are divided on how to regulate, The Economist article said.

For example, the U.S. and UK have taken “light touch” measures, “with no new rules or regulatory bodies, but applying existing regulations to AI systems,” said the article. The UK hopes the steps will “boost investment and turn Britain into an ‘AI superpower.’” However, the U.S. might already be backing away from its light-touch approach, as the “Biden administration is now seeking public views on what a rulebook might look like,” according to The Economist.

The EU is investigating a stricter approach. It categorizes the different types of AI according to their degree of risk and applies “increasingly stringent monitoring and disclosure as the degree of risk rises,” The Economist article said.

For example, under the EU’s AI Act, self-driving cars would face more scrutiny than in other countries. Subliminal advertising and remote biometrics are banned under the EU’s measures. However, some criticize these measures as too “stifling,” while others say they are not stringent enough, especially regarding AI’s use in medicine.

The toughest of all governmental approaches might be China’s requirement that AI products be registered and undergo a security review before release, The Economist article said. However, that approach also is criticized for having less to do with safety than with politics, as a condition of approval is that products reflect the “core value of socialism.”

Other alternatives fall more on the company level. For example, Stability AI, which developed Stable Diffusion, a text-to-image model, said it would allow artists to “opt out” of having their work used as part of the training dataset.

However, this “puts the onus on content creators to actively protect their intellectual property rather than requiring AI developers to secure the IP,” wrote Appel, Neelbauer and Schweidel. Other companies are applying measures to ensure “data is collected and processed transparently and securely and that individuals have control over it,” according to Vipin Vindal, CEO of Quarks Technosoft.

The Associated Press has developed its own strategy to “leverage artificial intelligence and automation to bolster its core news report.” However, it is not clear how that might address ethical issues in journalism. 

While there is a near consensus on the need to regulate AI, what regulations should look like is going to be “an important and complicated problem,” wrote Harvard Professor Noah Feldman in an editorial for Bloomberg. He anticipated that complexity would make this a topic of debate for months, if not years, to come.

The main complication with regulation is vagueness. “Imagine if someone asked, ‘When should we start putting regulations on computer software?’” was how Intelligent Artifacts, a software developer, attempted to explain it in an editorial for Medium, a digital magazine. “That’s not a very precise question. What kind of software? Video games? Spreadsheets? Malware?” While we now know enough about software to distinguish its types and regulate them accordingly, there is still a lot we do not know about AI.

That can cause significant issues when drafting regulations, as subtle differences can shift the meaning entirely. For example, researchers often refer to software that infers patterns from large data sets as “machine learning,” explained Matt O’Shaughnessy, a visiting fellow in the technology and international affairs program at Carnegie Endowment for International Peace.

However, in policy, it is generally referred to as “AI,” “conjuring the specter of systems with superhuman capabilities rather than narrow and fallible algorithms,” he added.

Defining AI and its types delves into highly technical territory involving complex and classical algorithms, which suggests Feldman is likely right that the debate will take a long time.

However, while regulating AI might span years, The Economist believes implementing a “measured approach today can provide the foundations on which further rules can be added.” Especially with new AI categories and software entering the mainstream daily, “the time to start building those foundations is now.”