Social scientists argue that public input is vital to the development of artificial intelligence (AI). If AI systems are to benefit society, public opinions and values must inform how they are built. This article explores the significance of public input in AI development and emphasizes the need for collaboration between developers and the public to ensure ethical and responsible AI technologies.
As AI technology continues to advance, democratic societies must grapple with the challenges it presents. From algorithmically allocating scarce supplies during pandemics to fueling an arms race between disinformation creation and detection, AI has the potential to shape and influence many aspects of society. Yet research shows that democratic societies struggle to hold nuanced debates about new technologies, AI included. These debates should be informed by science, but they must also weigh ethical, regulatory, and social concerns.
One of the key obstacles to integrating emerging technologies is the lack of broad public engagement. Without participation from a diverse range of stakeholders, societies are limited in their ability to anticipate and mitigate the unintended consequences of rapidly advancing technologies. Consider the Asilomar Conference, which shaped the future of recombinant DNA research: its minimal public input left blind spots. Had wider input been sought, issues of cost and access could have been addressed alongside the science and ethics, and the unaffordability of recent CRISPR-based sickle cell treatments might have been avoided.
AI experts themselves worry that society is unprepared to implement AI responsibly. In a study conducted by the University of Wisconsin-Madison, 90.3% of researchers predicted unintended consequences of AI applications, and 75.9% said society is not prepared for their potential effects.
Who gets a say in the development and regulation of AI is another crucial question. While industry leaders, policymakers, and academics have been slow to adapt, the public increasingly wants to shape the future: a 2020 survey found that two-thirds of Americans believe the public should have a say in how scientific research and technology are applied in society. At the same time, trust in government and industry on AI development is low, underscoring the need for transparent and inclusive decision-making processes.
A healthy dose of skepticism surrounds the role of key industry and regulatory players in shaping AI rules, since efforts to develop effective regulatory systems often face conflicts of interest. Tech leaders can provide technical input, but determining the appropriate applications and uses of AI requires public debates that engage a broad set of stakeholders.
As AI disrupts ever more aspects of life, societies have a narrowing window in which to engage in meaningful debate and work collaboratively toward effective AI regulation. By involving a diverse range of voices and weighing societal values, ethics, and fairness, democratic societies can navigate the challenges posed by AI and ensure that the technology serves the best interests of all.