A Chronology of AI Failures in Safety and Cybersecurity

Authors

Ashish Gautam, Lincoln University College, Malaysia
Suman Thapaliya, Lincoln University College, Malaysia

DOI:

https://doi.org/10.3126/nprcjmr.v1i6.71734

Keywords:

Artificial Intelligence, Cybersecurity, Safety Engineering, Breaching Networks

Abstract

Background: Artificial intelligence (AI) has evolved rapidly, driving significant technological advancement while raising important questions about its implications for society. This study examines the progress and potential risks associated with AI development, drawing on historical milestones and expert predictions to outline both achievements and failures. Ray Kurzweil's forecasts point to a future in which AI could surpass human intelligence, a prospect that brings safety and security challenges without precedent.

Aim: This research aims to explore the safety and security challenges posed by AI, particularly the risks associated with malicious AI and AI failures, and to propose strategies for developing safe and reliable AI systems.

Methodology: The study reviews key milestones in AI development, analyzes documented AI failures, and compares AI safety approaches to cybersecurity principles. It also examines Roman Yampolskiy's contributions to AI safety engineering and the broader implications of AI's integration into various sectors.

Results: Historical analysis reveals numerous AI failures, from misidentifying objects to disrupting financial markets, and highlights the difficulty of ensuring AI safety. The study distinguishes between intentional and unintentional failures and emphasizes the potential dangers of malevolent AI. A comparison with human safety efforts underscores the complexity of creating inherently safe AI systems.

Findings: Ensuring AI safety requires a multidisciplinary approach that incorporates techniques from cybersecurity, software engineering, and ethics. The study stresses the importance of proactive measures and adversarial testing to mitigate risks. It concludes that while current AI systems already pose significant challenges, developing comprehensive safety mechanisms is crucial to preventing catastrophic outcomes as AI capabilities continue to advance.

Author Biographies

Ashish Gautam, Lincoln University College, Malaysia

PhD Scholar

Suman Thapaliya, Lincoln University College, Malaysia

IT Department

Published

2024-11-21

How to Cite

Gautam, A., & Thapaliya, S. (2024). A Chronology of AI Failures in Safety and Cybersecurity. NPRC Journal of Multidisciplinary Research, 1(6), 1–12. https://doi.org/10.3126/nprcjmr.v1i6.71734
