
The UN and Autonomous Weapons Systems: A Missed Opportunity?

INSS Insight No. 707, June 9, 2015

Liran Antebi
After years of focusing on the legality of targeted killings carried out with armed, remotely operated UAVs, UN circles came to the realization that the development of lethal autonomous weapons systems (LAWS), which select their targets without human involvement, is already well underway and may prove to be a profound global challenge. Meanwhile, the human rights-based initiative against lethal autonomous weapons systems is gathering momentum because the systems are armed and present the danger that innocent bystanders will be harmed at any time and in any place, not because these systems are autonomous. While the protection of human rights in areas of conflict is a very important but relatively limited interest, the threat emanating from the uncontrolled development of autonomous systems in general is liable to affect far greater numbers of people, if not humanity as a whole. Therefore, continued UN action informed only by the agenda of human rights organizations may well lead to unnecessary complications, if not lasting tragedy.

Last year the United Nations addressed the development of autonomous weapons systems. After years of focusing on the legality of targeted killings carried out with armed, remotely operated UAVs, UN circles came to the realization that the development of lethal autonomous weapons systems (LAWS), which select their targets without human involvement, is already well underway and may prove to be a profound global challenge.

On April 13-17, 2015, an international forum at the United Nations Office in Geneva dealt with the development of autonomous weapons systems and raised the possibility of adding a protocol to the Convention on Certain Conventional Weapons (CCW, 1980) that would ban the use of advanced autonomous systems altogether. The convention already limits the use of cluster bombs and other weapons, and through it the UN has likewise restricted not only the use but also the development of blinding laser weapons. The claim essentially posed before the UN committee discussion on autonomous weapons, an outgrowth of the public discourse and of the activity of human rights organizations opposed to such weapons, was that autonomous weapons systems are not a fait accompli: it is both possible and desirable to limit their use before intensive development begins and the systems come to play a leading role on the battlefield.

In advance of the UN committee deliberation, Human Rights Watch published a detailed research report, its second on lethal autonomous weapons systems. While the organization’s previous paper was meant to raise general awareness of the problematic nature of these weapons and to draft a uniform set of concepts for debating the topic, the second paper focused on the legal difficulty of attributing accountability to such systems. The report, along with the key points of the UN committee discussion that followed its publication, stressed the difficulty these weapons pose for applying the concept of accountability. The primary claim is that fighters, commanders, and even decision makers at the political echelon bear legal responsibility for committing war crimes, a fact that is supposed to deter them from doing so. By contrast, it is impossible to ascribe the same type of accountability to autonomous systems: trying a robot in a court of law and punishing it are meaningless acts, and it is difficult, and makes little sense, to put the engineer or the company that developed an autonomous weapon system on trial for harm to innocent bystanders that occurs years after the system’s development.

This sensitive question is not unique to autonomous weapons systems; it challenges legislatures and regulatory bodies in different countries not only when armed systems operate on the battlefield but also in the context of autonomous vehicles, which have become increasingly common in recent years. Thus, for example, the United States recently approved the use of autonomous trucks on its highways, though regulators have so far conditioned their use on the presence of a human driver who takes part in some of the operation, partly because no solution has yet been found to the question of legal liability.

The Human Rights Watch report and the international Campaign to Stop Killer Robots are concerned less with autonomous weapons systems as such than with how violations of the laws of war will affect human rights during warfare, and with the absence of a person who can answer for such violations at the International Criminal Court. However, the report also indicates that in some areas, those opposed to the development and use of these robots have softened their stance. For example, it draws a clearer distinction than in the past between aerial defense systems with autonomous capabilities, such as the Iron Dome and Patriot systems, and systems more likely to harm humans, e.g., armed ground systems and autonomous UAVs such as the US X-47B, currently in testing, which can already take off from and land on an aircraft carrier and refuel in the air without any human involvement. The report’s bottom line calls for a total ban on the development and manufacture of lethal autonomous weapons systems (as was done with blinding laser weapons), to be achieved through international legal prohibitions along with prohibitions enacted and enforced by individual states.

The report and the Campaign to Stop Killer Robots are significant, and together they have largely dictated the tone of the debates on the topic in the UN committee. Human Rights Watch held five events with an identical agenda on the very days the committee convened in Geneva; these events even appeared on the official website of the UN committee.

However, in the public debate preceding the UN committee meetings, in the Human Rights Watch report, and at the events held concurrently with the discussions, no arguments in favor of the development of autonomous weapons systems were presented. Apparently such arguments were not debated at the UN committee itself either. Could it be that the UN is missing an opportunity to challenge the inclination to reject out of hand the development and use of autonomous weapons systems? The answer appears to be an unequivocal “yes,” although the issue is far more complicated than a one-word answer suggests.

Lethal autonomous weapons systems that reduce the need for human soldiers lower the risk to the lives of fighters, especially those engaged in warfare against terrorist and guerrilla organizations. In addition, such systems might allow the UN’s own peacekeeping forces to grow without resorting to national forces. Moreover, the improved precision of autonomous weapons systems, which stems from sensors and computational abilities superior to human capabilities, makes it possible to reduce the risk to innocents that is a byproduct of warfare, the very issue of concern to human rights organizations. Furthermore, the UN seems to be missing an opportunity to reduce violations of the laws of warfare: such a reduction would be possible, in principle, if autonomous systems undertook only the tasks they were programmed to carry out, on the basis of the information with which they are equipped, and if that programming conformed to international law.

Similarly, an opportunity is being missed to develop the discourse and activity around limiting artificial intelligence in general, the same AI that lies at the core of any autonomous system, not just armed ones. The topic has made headlines over the past year because of worrisome pronouncements made separately by scientists and technologists, including Elon Musk, Stephen Hawking, and Bill Gates, about the dangers to humanity inherent in the uncontrolled development of AI now proceeding at an accelerated pace all over the world.

The initiative against lethal autonomous weapons systems is gathering momentum because the systems are armed and present the danger that innocent bystanders will be harmed on the battlefield or elsewhere, at any time and in any place, not because these systems are autonomous. While the protection of human rights in areas of conflict is a very important but relatively limited interest, the threat emanating from the uncontrolled development of autonomous systems in general is liable to affect far greater numbers of people, if not humanity as a whole. Therefore, continued UN action informed only by the agenda of human rights organizations may well lead to unnecessary complications, if not lasting tragedy.

The opinions expressed in INSS publications are the authors’ alone.
Publication Series: INSS Insight
Topics: Advanced Technologies and National Security