Ozzie Paez

The Fallacy of Making AI Technologies Safe

The notion that AI technologies, particularly Large Language Models (LLMs), can be made inherently safe rests on a flawed premise that misunderstands the nature of AI and the challenges of ensuring its safety. The demand for inherently safe AI mirrors historical regulatory responses to emerging technologies like the Internet, but the complexity of AI, particularly LLMs, makes this goal infeasible.


Technical Complexity and Opacity of LLMs

LLMs, such as OpenAI's ChatGPT, Microsoft's Copilot, Meta's Llama, and Google's Gemini, are built on well-documented and widely available theoretical foundations, architectures, and implementations. Hundreds of LLMs are accessible on platforms like GitHub, ranging from open-source models to curated, pre-trained versions. Despite this openness, the functional operations of LLMs remain largely inscrutable because they rely on large neural networks (LNNs).


Large Neural Networks are like black boxes due to their complexity. Even those who create them cannot fully understand or explain how these remarkable technologies generate specific outputs.


LNNs exhibit high complexity characterized by, in geek-speak, high dimensionality, non-linear transformations, complex training dynamics, and emergent properties that limit interpretability and explainability. In common-speak, even teams of technical experts at companies like OpenAI cannot fully trace or explain how LNNs and LLMs generate their specific outputs. This intrinsic opacity raises critical questions about the feasibility of evaluating and certifying LLMs as inherently “safe.” If the inner workings of these models are not fully comprehensible, it is technically impossible to guarantee their safety in all contexts.
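
The scale of the problem is easy to see in code. Below is a minimal, illustrative sketch in Python (assumptions: NumPy, a toy 24-layer network with made-up sizes, nothing resembling a real LLM). Even this small stack of non-linear layers entangles every input with millions of weights, so no single weight or simple rule "explains" a given output; production LLMs compound this across hundreds of layers and billions of parameters.

```python
import numpy as np

# Toy illustration only (not any production model): a small stack of
# non-linear layers. Real LLMs chain hundreds of such layers over
# billions of parameters, which is why tracing any single output back
# to specific weights quickly becomes intractable.

rng = np.random.default_rng(0)

def layer(x, w, b):
    # Every output of a layer mixes every input with every weight,
    # then passes through a non-linearity (tanh), so the influence of
    # individual weights blends together and cannot be cleanly
    # separated downstream.
    return np.tanh(w @ x + b)

x = rng.normal(size=512)                      # a toy "embedding" of size 512
weights = [(rng.normal(size=(512, 512)) / 512 ** 0.5, np.zeros(512))
           for _ in range(24)]                # 24 layers, roughly 6.3 million parameters

h = x
for w, b in weights:
    h = layer(h, w, b)                        # each step depends on all preceding weights

print(f"parameters: {sum(w.size + b.size for w, b in weights):,}")
print("first few outputs:", np.round(h[:4], 3))
```

Interpretability research tries to probe such networks after the fact, but it cannot yet reduce them to rules that an evaluator could certify as inherently safe.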


The Multi-Use Nature of AI Technologies

LLMs, like all advanced tools, have many uses and can be employed for both beneficial and harmful purposes. For instance, AI technologies that enhance healthcare, education, and public safety can also be exploited for fraud, manipulation, and violent crime. Criminals and terrorists can likewise weaponize the same AI-controlled autonomous vehicles that serve families and emergency responders. These multi-use capabilities underscore that AI technologies are neither inherently safe nor unsafe.


Regulatory Focus: Use Over Technology

Given these realities, legislators and regulators should recognize that "inherent safety" in AI technologies is an ill-defined and impractical goal. A more practical strategy may be to shift the focus of laws and regulations to the purposes and manners of use. This approach frees innovators to continue developing these remarkable technologies without being hindered by unattainable safety demands. The history of the Internet illustrates the impracticality of regulating technologies with broad and open-ended uses: the Internet empowered users to pursue their aims and objectives in ways previously unavailable, but it could not ensure that those aims and objectives were safe, noble, and legal.


In conclusion, no one should expect, much less demand, that AI technologies, including LLMs, be inherently safe. Their complexity and multi-use nature make such expectations unrealistic. Instead, the focus should be on creating regulations that address how these technologies are used.


References

  1. Interpretability and Explainability of Neural Networks: See “Interpretable Machine Learning” by Christoph Molnar for an in-depth discussion on the challenges of making neural networks interpretable.

  2. Dual-Use Technology and AI: Melanie Mitchell's discussion of AI as a dual-use technology in “Artificial Intelligence: A Guide for Thinking Humans” explores the implications of dual-use AI technologies.

  3. Regulatory Frameworks for Emerging Technologies: “The Age of Surveillance Capitalism” by Shoshana Zuboff uses the Internet as an example where regulations focus on use cases instead of the underlying technologies.



