Who should regulate AI?

Ali Jan
3 min read · Feb 28, 2021

Just as laws give order to human society and lawmaking is an explicit function of government, the government must take a proactive role in regulating and monitoring technology, especially AI.

The field of machine learning is growing at breakneck speed. Each year, hundreds if not thousands of research papers on AI are presented at conferences worldwide. Some are mildly upsetting, some are fallacious, and others, though seemingly sound algorithms, are outright dangerous because of their potential for misuse. At one such conference last year, an AI researcher demonstrated an algorithm that takes a recording of a human voice and generates a likeness of the speaker's face (Speech2Face).

As is apparent, proponents argued that this algorithm could be deployed to catch criminals. However, a great many AI ethicists were critical of the algorithm and its potential misuse for profiling individuals based on voice and face. The most vociferous criticism came from transgender people, whose voices and faces may not match the model's assumptions, singling them out as outliers. Is this a problem of unethical AI, or a problem of how AI is applied? That is up for debate. Problems like these, however, raise a more fundamental question: who is responsible for regulating AI?

Research in biology, and genetic study in particular, may face stringent ethical checkpoints both inside and outside academia, but no such restrictions exist for computer science or AI. In AI's defense, it may be true that it is a young field and that many peer reviewers are not well versed in the ethical dimensions of technology, but this does not absolve journals, research institutions, and conferences from establishing and empowering ethics review boards trained to identify potentially unethical uses of AI.

There is already ample research on how AI can be harnessed and weaponized against a population: facial recognition and geotagging for surveillance, or autonomous weapons that use machine learning to identify and neutralize a target, whether that target is a good actor or a bad one. Amid all this, there is a missing link: what should the government's role be in regulating AI?

It is often argued that platforms, computer scientists, or individual researchers should bear the responsibility for producing their products, in this case algorithms, under a strict ethical framework. This idea, however, is somewhat misplaced. Corporations, academic institutions, research firms, and even ethics review boards need a set of rules to abide by, and supplying those rules is precisely what governments should do. Just as laws give order to human society and lawmaking is an explicit function of government, here too the government must take a proactive role in regulating and monitoring technology, especially AI. This does not mean the government should micromanage innovation or dictate the direction of AI to suit its own objectives. Rather, creating basic rules and evolving them over time will reduce the downstream harms of these technologies.

There is no way AI development can be stopped, nor should it be. It can, however, be given the right direction by an ethical framework, one that, I argue, needs to be built from the ground up, because humanity's collective well-being depends on it.

I write about the big picture of digital society. I tweet @alijanburdi.
