Confronting complexity and new technologies: a need to return to first principles of international law.

Author: Doswald-Beck, Louise


New weapons are constantly being invented. The basic principles of international humanitarian law (IHL) regulating them are not controversial. All states agree that weapons that are inherently indiscriminate, or which cause superfluous injury or unnecessary suffering, may not be used. The difficulty arises with the application of these principles to actual weapons systems. With the exception of the prohibition of exploding bullets in 1868, when the government whose scientists had invented the new bullet called a conference to ban it, (1) all governments defend their new developments almost as a matter of principle. Unless a new system clearly falls within an existing treaty ban (e.g., an obviously lethal chemical weapon), enormous efforts have to be made to get governments even to consider seriously whether a weapon should be banned. This author has had personal experience of this in the effort to ban blinding laser weapons, (2) and everyone is aware of the massive public media campaigns that preceded the adoption of the treaties banning anti-personnel landmines (3) and cluster munitions. (4) So far, however, the weapons under consideration could be evaluated in the context of IHL principles, even if for the most part those principles were not overtly the basis of the treaty negotiations.

The emergence of cyber warfare and robots (including drones), however, poses a different type of challenge. This presentation will argue that such systems might be capable of respecting the basic principles of IHL and yet seriously undermine international law. In this regard the basic principles of international law, in particular the post-Charter Grundnorm of international peace and security, need to be revisited in any analysis of the future of these new technologies.


Cyber Warfare

There is no obvious reason why cyber warfare should of itself violate the basic principles of IHL. Everything will depend on whether attacks are directed at military objectives, whether precautions have been taken to avoid collateral effects that are disproportionate or whose extent is unpredictable, and whether such attacks are non-perfidious. A particular program that can only attack civilian or protected sites, or which is by nature indiscriminate, would fall foul of these rules, but it is difficult to imagine a conference to ban specific computer programs, although this might not be completely impossible. A more complex question is the status of those undertaking the attacks if they are not members of the military. However, this question is not qualitatively different from the basic issue of how to interpret "taking a direct part in hostilities." The most difficult issue could well be identifying the source of an attack. This could easily lead to flawed counter-attacks, but it would not violate IHL if the analysis of the source of the attack were undertaken in good faith. Rather, the more serious problem is the effect of this on the prohibition of inter-state conflict (which will be analyzed further in sections 3-5 below).


Robots

For the purpose of my remarks, robots are systems programmed to undertake missions with the user at a distance. To some degree, earlier technologies such as cruise missiles and even mines fall into this category; however, robots are usually associated with a greater degree of autonomy and sophistication. Drones are the robots already in extensive use, and dozens of states now have armed drones in their inventories. The argument is commonly made that a drone operator can take more care in accurately choosing military objectives and avoiding collateral casualties because he or she is not subject to the stress that fighter pilots face. (5) However, such attacks are not immune from the usual problems associated with conflicts conducted entirely or primarily by air warfare: faulty intelligence leading to mistakes, a problem exacerbated by the covert nature of many drone strikes, and the inability to accept surrender or to search for and care for casualties. The extent of collateral casualties is contested, so claims that these are avoided or minimized cannot simply be accepted on faith. Furthermore, suspected collaborators are murdered by the local population, a totally foreseeable result. (6) I should note that these problems stem from the lack of personnel on the ground rather than being problems specific to drones.

Autonomous robots, however, might present additional problems. If these are ground-based, the question is whether they would be able to respect IHL rules better than air and missile warfare does. In particular, questions will be raised as to whether they would be able to distinguish between combatants and protected persons (including but not limited to civilians); whether they could evaluate which objects amount to military objectives and whether likely civilian losses would be excessive; and whether they would be able to accept surrender and undertake the tasks required to care for the wounded. For the time being we remain a long way from deploying a fully autonomous robot, although preprogrammed systems with set tasks are already with us, and humans are still involved in their operation. The major question with any autonomous robot is whether it could in reality function at least as well as a human. If the answer is yes, then in principle there is no problem under IHL. The reality, however, is likely to be that manufacturers' claims and users' expectations will exceed what robots are actually capable of. The other problem is psychological: there is likely to be abhorrence at the idea of humans fighting fully autonomous robots. In other words, the Martens Clause may become relevant in this context. A proper international review and discussion of principles should take place before any such systems are developed further.


The Threat to International Peace and Security

The most serious problem relating to these systems concerns not IHL but the fact that they decrease resistance to using force, since they eliminate the prospect of battlefield casualties on the side of the user. This is not a theoretical hypothesis; it is already happening. As described, for example, by Peter Singer in his article "Drone Strikes on Democracy," hundreds of drone strikes were carried out by the United States in 2011 in six countries. As he puts it:

[W]e now possess a technology that removes the last political barriers to war. The strongest appeal of unmanned systems is that we don't have to send someone's son or daughter into harm's way. [T]echnologies that remove humans from the battlefield, from unmanned systems like the Predator to cyberweapons like the Stuxnet computer worm, are becoming the new normal in war. (7)

The likelihood of this behavior being followed by others is not lost on anyone. Peter Singer, in the same article, makes the following point:

C.I.A. strikes outside of declared war zones are setting a troubling precedent that we might not want to see followed by close to 50 other nations that now possess the same unmanned technology--including China, Russia, Pakistan and Iran.

Some other authors similarly note the likelihood of such systems encouraging attacks on other states' territory, but do so only in passing, concentrating rather on extolling the virtues of such systems. (8) It is this author's opinion, however, that this issue is not peripheral, but needs to be treated as the central question. In other words, it is a mistake to evaluate new weapons technologies exclusively within the confines of IHL. Rather, they need to be considered...
