The widespread use of generative AI threatens trust and security, as bad actors can use it to create fake reviews, personalities, and information. AI regulation is needed to prevent such malicious activities.
- Over-regulation can also have adverse effects: it may hinder innovation and enable monopolization of the market.
- The goal is to ensure that generative AI is used in ways that benefit society while minimizing harm.
Imagine you receive a video call from a loved one in distress. In an age of generative AI, the content, voice and lip-syncing of the video could be completely fake and nearly impossible to detect. In this scenario, how would you respond? Would you attend to their needs, or would you first ask, “How do I know you are real?”
If used properly, generative AI can revolutionize human welfare. For example, there will be no shortage of expert customer service agents that are incredibly responsive. There will be plenty of specialist medical doctors willing to listen to you and explain your condition in great detail. Thoughtful and caring teachers will abound in underdeveloped parts of the world. Generative AI can turn you into a software engineer: instead of writing code, you simply describe what you want and check whether the result satisfies you. Similarly, generative AI can turn you into a poet, designer and artist all at once.
This will create enormous value if AI is in the hands of good actors. But what happens if it falls into the wrong hands? Until recently, generating natural-looking content (text, images, voice, video, and interaction) was a distinctively human ability. Our instinct to trust is built on the assumption that if content looks natural, it is real. That instinct will soon stop working and instead become a vulnerability, one that bad actors will learn to exploit. Generative AI will become a money-making machine for scammers, and it will be more difficult than ever to identify fake reviews, audiences, personalities and information. How are we going to protect ourselves from such nefarious actors?
The first solution is to pass regulations on the use of artificial intelligence. Regulation nevertheless risks adversely affecting companies that already comply with national laws, while scammers and criminals get away with non-compliance. We don’t want a world where good actors are prevented from innovating while bad actors reap all of the benefits of AI.
Over-regulation can also send the wrong signal to market players, especially if a reliable verification regime is not in place. Unfortunately, generative AI is a black box, so verification is difficult. As a result, while authorities are busy regulating law-abiding companies, shadowy AI-powered criminals can operate unchecked and grow into a serious threat.
The threat of bad regulation is not limited to emboldening criminals. Regulations can also stifle competition and create inefficiencies. For example, if every AI app were required to obtain some type of clearance, the result could be delays, discrimination, monopolies, and wasted resources.
Sometimes AI regulations defeat their purpose. For example, in the medical industry, strict regulations were originally introduced with the aim of protecting patients. However, it turns out that in some countries, the same regulations have made healthcare unaffordable for the average citizen and helped Big Pharma instead. Some pharmaceutical patents drive prices unreasonably high for end users. Lengthy review processes delay patients’ access to new drugs – recall the urgency of approving COVID-19 vaccines in 2020. Legal barriers scare off innovators and prevent healthy competition in the market. We don’t want this to happen with AI regulations that restrict the access of ordinary citizens while helping big tech companies monopolize the market.
Does this mean that good actors should be able to work with AI with no strings attached? After all, things can go wrong even with the best of intentions. Self-driving cars are not made to crash, but they do, endangering people. ChatGPT is not designed to give a wrong medical diagnosis, but it sometimes does. Of course, banning or heavily regulating self-driving cars and chatbots may be convenient, but it will not solve the problem.
We are entering a world with extraordinary technologies that need extraordinary measures to contain their most adverse effects. Here are a few ways to promote the use of generative AI while mitigating adverse effects:
- Governments should raise public awareness with campaigns highlighting the opportunities and threats posed by generative AI. The general public should be given opportunities to carefully assess technologies and determine for themselves whether they are of use or just a salad shooter aimlessly slicing and spraying synthetic content.
- Researchers should work to understand and characterize protection techniques. One key area of research is verification, because it is essential to successful AI regulation. We need reliable techniques and protocols to verify that generative models are compliant, fair, and safe. One promising direction is designing better algorithms that both reveal human preferences and integrate them into the system.
- Tech companies and creators of commercial generative AI tools need to prioritize the public’s interest, while governments should incentivize honesty and transparency. Tech companies tend to advocate for regulations that serve their own benefit, but governments should put citizens’ interests first and foremost. It is welcome that tech companies have expressed concern over generative AI, and they should make the policy discourse public. Governments, however, should make sure that the regulatory response does not get hijacked to serve Big Tech rather than ordinary citizens: monopolization of the technology would be harmful.
- Judicial systems should act faster and more efficiently to counter anonymous actors, and there should be more cross-border cooperation. AI-based criminal activities such as phishing, disinformation and impersonation attacks unfold so fast that current law enforcement frameworks cannot keep up. We will need more efficient ways to authenticate “real humans” and act aggressively against identity theft and data privacy leaks. Without regulatory restraint, every piece of our digital footprint will get consumed by an LLM!
- The scientific community should revise its goals. The current race to improve systems with known built-in biases and little reliability is convenient but not helpful. We should be addressing fundamental problems that have real scientific and practical merit. The average person hears and reads no more than a billion words throughout their life, yet Large Language Models (LLMs) need a trillion words to learn. This sample inefficiency is a disadvantage, both in terms of resources and in terms of factual reliability. Let us also not forget that general intelligence requires perception of the non-verbal world, including craft and emotion. Finally, to improve generative models, we need to develop better explanation techniques. A new science is needed to explain an engineering marvel, make it safe and robust, and not fragment human civilization.
Like all major technological innovations, generative AI will be put to good and malicious use. Governments should not be hapless bystanders and let market forces decide where this technology will settle. There should be a strategy of intervention that balances technological progress with overall social good and harmony. Lessons should be learnt from previous experience; a prominent example is the uninhibited deployment of large-scale social media platforms, the resulting “algorithmic amplification” and their impact on society at large.
Dr. Amin Sadeghi, Dr. Safa Messaoud, Dr. Enes Altinisk, Dr. Sanjay Chawla