To regulate, or not to regulate, that is the question. When it comes to AI, there are passionate opinions on both sides of the debate. In fact, AI regulation has rapidly escalated to become one of the burning issues of our time. 

A decade ago, it was social media networks struggling to balance free speech with protecting users. Today, society is navigating similarly uncertain waters as we try to figure out what we want our future to look like. 

In the news: AI has everyone’s attention

This week, Sam Altman, chief executive officer and co-founder of OpenAI, testified before Congress in a hearing entitled “Oversight of AI: Rules for Artificial Intelligence.” Recently, he and other tech CEOs also met Vice President Kamala Harris at the White House to discuss “Advancing Responsible Artificial Intelligence Innovation.” 

In Europe, the Artificial Intelligence Act seeks to regulate AI in much the same way the EU regulated data and privacy through the GDPR. 

These are further indicators of how seriously this issue is being taken by lawmakers – and the public they represent. Though Sam Altman laid out a plan for regulating AI, many at the hearing were still uneasy, with some Senators voicing concerns that AI development is concentrated in too few hands.

Sam Altman’s three-point plan broadly aims to restrict AI’s ability to self-replicate, require independent audits of AI models, and empower a new government agency to oversee compliance with these standards. 

The Senate Judiciary subcommittee also heard testimony from IBM executive Christina Montgomery, who advocated for risk-based, rather than development-based, assessment and regulation of AI, much as Europe has approached this challenge. 

AI regulation: what’s at stake

AI has the potential to do so much good. In fact, it’s been doing an enormous amount of good for quite some time. 

For example, in healthcare, AI is drastically improving diagnosis, treatment, and patient care. It enables more accurate medical imaging analysis, the early detection of diseases, personalized treatment plans, and drug discovery. 

Or in education, AI is enabling personalized learning experiences and adaptive tutoring systems at scale. It can analyze student data, identify individual strengths and weaknesses, and tailor educational content accordingly, as well as facilitate interactive learning, provide real-time feedback, and support educators in creating more effective teaching methods.

Similar advances are being made, thanks to AI, in fields such as environmental conservation, agriculture, cybersecurity, energy management, and social services. 

There is also consensus that we are only scratching the surface of what AI is capable of. But, and it’s a big “but”, there are also potential risks to unregulated AI, given that we don’t yet know what we don’t know about what might lie beneath the surface of the technology. 

If used prematurely, incorrectly, or unethically, AI can cause havoc. Imagine a self-driving car that has to make difficult moral decisions, such as its own answer to the trolley problem. Or sensitive AI systems released without proper cybersecurity precautions and left vulnerable to hacking. Of course, we’ve made huge progress in solving some of these challenges already, but many more lie ahead. 

Similar issues pertaining to bias and discrimination, privacy, job displacement, and economic inequality also need to be dealt with. There is also an inherent lack of transparency in many AI models and their applications, which, coupled with a lack of accountability, could produce a perfect storm of unintended, and generally negative, consequences. 

These fears, while not unfounded, are tempered by our very awareness of them. Progress usually provokes fears out of proportion to the reality that comes to pass. Perhaps our fears are the very catalyst that drives us to impose guardrails on ourselves. 

Let’s dig into some of the core concerns that invite regulation, as well as the arguments that regulation risks stifling innovation. 

The argument to regulate

Elon Musk recently said: “I think we need to regulate AI safety…it is, I think, actually a bigger risk to society than cars or planes or medicine.” He went on to call it “one of the biggest risks to the future of civilization.”

Musk may have a point. There are several compelling reasons why AI should be regulated, including:

Safety and Ethical Concerns

AI systems can have far-reaching impacts on society, and regulation is crucial to ensure their safety and ethical use. Without appropriate safeguards, AI systems may pose risks to human life, privacy, and fundamental rights. Regulations can establish guidelines for responsible development, testing, and deployment of AI, minimizing the potential for harm and ensuring AI operates within ethical boundaries.

Bias and Discrimination

AI algorithms can inadvertently perpetuate biases and discrimination present in their training data. Unregulated AI systems can amplify societal biases, leading to unfair outcomes in areas such as hiring, lending, and law enforcement. By implementing regulations that require transparency and fairness in AI algorithms, guardrails can be established (to some degree at least), promoting equal treatment and reducing the risk of discrimination. A sketch of what such a fairness check could look like follows.
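
To make “fairness in AI algorithms” concrete, here is a minimal, hypothetical sketch in Python of the kind of check an auditor might run: it measures the demographic parity gap, the difference in positive-decision rates between groups. The function name, toy data, and 0.1 tolerance are all illustrative assumptions, not part of any proposed regulation.

# Hypothetical fairness-audit sketch. All names, data, and the
# 0.1 tolerance below are illustrative assumptions.

def demographic_parity_gap(decisions, groups):
    """Difference in positive-decision rates across groups.

    decisions: 0/1 model outcomes (e.g., 1 = loan approved)
    groups:    group label for each decision (e.g., "A" or "B")
    """
    rates = {}
    for label in set(groups):
        outcomes = [d for d, g in zip(decisions, groups) if g == label]
        rates[label] = sum(outcomes) / len(outcomes)
    return max(rates.values()) - min(rates.values())

# Toy example: group "B" is approved far less often than group "A".
decisions = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

gap = demographic_parity_gap(decisions, groups)
print(f"Demographic parity gap: {gap:.2f}")  # 0.60 for this toy data
if gap > 0.1:  # tolerance an audit might set; purely illustrative
    print("Gap exceeds tolerance; the model warrants closer review.")

A regulator would of course look at far more than one statistic, but even a simple measure like this shows how “fairness” can be made measurable and auditable.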

Accountability and Transparency

As AI systems become more complex and autonomous, it becomes essential to establish mechanisms for accountability and transparency. Regulation can require developers and organizations to document and disclose the underlying algorithms, data sources, and decision-making processes of AI systems. Though Sam Altman didn’t advocate for model transparency in his three-point plan, such disclosure has become a central facet of the EU’s approach to responsible AI. A sketch of what this kind of documentation might look like follows. 
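
As a purely illustrative sketch, here is a hypothetical “model card” structure, loosely inspired by common model-documentation practice, recording the algorithm, data sources, and decision process a disclosure rule might ask for. Every field name and value is an assumption for illustration, not a statutory requirement.

# Hypothetical disclosure record. Field names and values are
# illustrative assumptions, not a real regulatory schema.

from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    algorithm: str                        # model family, not weights
    training_data_sources: list = field(default_factory=list)
    intended_use: str = ""
    known_limitations: list = field(default_factory=list)
    decision_process: str = ""            # how outputs drive decisions

card = ModelCard(
    name="loan-screening-v1",             # hypothetical system
    algorithm="gradient-boosted decision trees",
    training_data_sources=["internal loan applications, 2015-2022"],
    intended_use="Flag applications for human review, never auto-deny",
    known_limitations=["Applicants under 25 are under-represented"],
    decision_process="Scores above 0.8 go to a senior underwriter",
)
print(card)

The point is not these specific fields but the principle: when documentation like this must be produced and disclosed, regulators and the public can ask informed questions about how a system actually makes decisions.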

Economic Impacts and Job Displacement

AI and automation have the potential to significantly impact the job market, leading to unemployment and economic inequality. Regulation can help mitigate these challenges by encouraging the responsible deployment of AI, supporting the retraining and upskilling of workers, and fostering new job opportunities in emerging AI-related fields. We’ve explored this aspect more deeply in ‘The Future of Accounting and AI’, reflecting on how most industrial developments that at first threatened jobs ultimately empowered workers to shift from manual to more strategic work, raising productivity and sustaining long-term employment. 

National Security and Privacy

Unregulated AI can raise concerns about national security and privacy. Malicious actors can exploit AI technologies for cyberattacks, misinformation campaigns, or surveillance. Regulations can ensure the development of AI systems adheres to stringent security and privacy standards, safeguarding critical infrastructure, sensitive data, and individual privacy rights.

As Musk puts it, regulation “may slow down AI a little bit, but I think that that might also be a good thing.” 

All this makes perfect sense. But what about the arguments against regulation?

The argument not to regulate

There are several arguments against excessive AI regulation – and for some, against any regulation at all. 

Innovation and Progress

AI is a rapidly evolving field with immense potential for innovation and progress. Overregulation can stifle innovation by creating barriers and bureaucratic hurdles for developers and organizations. By allowing a more flexible environment, AI can continue to advance and offer transformative solutions to societal challenges.

Adaptability and Agility

As AI technology evolves at a rapid pace, imposing rigid regulations may hinder its adaptability. AI systems need the flexibility to learn from new data and adapt their algorithms accordingly. Stricter regulations may impede the ability to respond swiftly to emerging needs or make timely improvements, slowing the development and deployment of beneficial AI applications.

Regulatory Burden

Introducing comprehensive AI regulations can place a significant burden on businesses, especially small and medium-sized enterprises (SMEs) that may lack the necessary resources and expertise to comply. Strict regulations can lead to increased costs, bureaucratic processes, and barriers to entry, limiting the participation of smaller players in the AI ecosystem.

International Competitiveness

Excessive regulation can have severe consequences for a country’s international competitiveness. If a particular country imposes stringent regulations while others adopt a more flexible approach, it may disadvantage domestic companies and discourage foreign investment. 

Unintended Consequences

Many economists are wary of overregulation and its unintended consequences. Regulation can not only stifle creativity but also inadvertently favor certain approaches or companies. Overregulation may also discourage risk-taking and experimentation, which is how we got this far in the first place.

Finding the middle ground

For many, there is space for a balanced approach to regulating AI: one that enables us to enjoy most of the benefits of AI while responsibly acting to mitigate the most serious risks (so far as we can anticipate them). This seems to be the balance advocated, in various forms, by Elon Musk, Sam Altman, and Christina Montgomery, among many others. 

For example, regulation might focus on high-risk foundational AI systems with the potential for negative impact at scale, while allowing more flexible rules for low-risk applications and SME-deployed systems. 

Moreover, regulation should be designed in a way that encourages innovation and competition, while ensuring that businesses are accountable for the risks associated with their AI systems. Once again, the EU has made some strides in this respect, placing justification of AI use and audit-trail transparency of source data at the center of its approach. 

A balanced approach can also involve collaboration between policymakers, businesses, and civil society. Policymakers can work with businesses to develop ethical guidelines and best practices for AI development and use. Businesses can provide input on the potential benefits and risks of AI, as well as help to develop and implement the regulations.

Civil society can help to ensure that the regulations are aligned with the values and needs of society, and provide feedback on the impact of AI on individuals and communities. In many respects, self-adopted controls have the potential to augment imposed regulation, much as businesses have adopted environmentally favorable policies in response to social and cultural expectations. 

Towards responsible AI

Ultimately, regulation and an increased focus by lawmakers represent an important step towards responsible AI development and use. 

While there are risks associated with over-regulating AI, a balanced approach can help to promote trust, transparency, and accountability while also fostering innovation and competition. 

Policymakers, businesses, and civil society must work together to ensure that the regulations are designed in a way that benefits everyone. Innovation is difficult (often impossible) to contain, but through agreement and a shared sense of purpose it can be carefully nurtured, for the benefit of all. 

 
