
Robots have killed people

The robot revolution started long ago, and so did the killing. One day in 1979, a robot at a Ford Motor Company casting plant had a problem: workers determined that it wasn't retrieving parts fast enough. So Robert Williams, 25, was asked to climb into a storage rack and retrieve the parts by hand. The one-ton robot continued to operate silently, striking Williams in the head and killing him instantly. This is believed to be the first incident in which a robot killed a human; many more would follow.

At Kawasaki Heavy Industries in 1981, Kenji Urada died under similar circumstances. According to Gabriel Hallevy's 2013 book When Robots Kill: Artificial Intelligence Under Criminal Law, a malfunctioning robot he went to inspect killed him when he got in its way. As Hallevy put it, the robot simply determined that "the most efficient way to eliminate the threat was to push the worker into an adjacent machine." Between 1992 and 2017, workplace robots caused 41 documented deaths in the United States, and that's likely an undercount, especially when you consider the knock-on effects of automation, such as job loss. A robotic anti-aircraft cannon killed nine South African soldiers in 2007 when a possible software failure caused the machine to swing itself wildly and fire dozens of lethal rounds in less than a second. In a 2018 trial, a medical robot was implicated in the death of Stephen Pettitt during a routine operation that had occurred a few years earlier.

When Robots Kill: Artificial Intelligence Under Criminal Law, by Gabriel Hallevy

You get the picture. Robots, "smart" and not, have been killing people for decades. And the development of more advanced artificial intelligence only increases the potential for machines to cause harm. Self-driving cars are already on the streets of the United States, and robotic "dogs" are being used by law enforcement. Computerized systems are being given the ability to use tools, allowing them to directly affect the physical world. Why worry about the theoretical emergence of an all-powerful, superintelligent program when more immediate problems are at our doorstep? Regulation must push companies toward safe innovation and innovation in safety. We are not there yet.

Historically, it has taken major disasters to drive regulation: the kinds of disasters we would ideally foresee and avoid with today's AI. The Grover Shoe Factory disaster of 1905 led to regulations governing the safe operation of boilers. At the time, companies claimed that large steam boilers were too complex to regulate hastily. That attitude, of course, led to overlooked safety flaws and escalating disasters. It wasn't until the American Society of Mechanical Engineers demanded transparency and risk analysis that the dangers posed by these huge tanks of boiling water, once considered mysterious, became comprehensible. The 1911 Triangle Shirtwaist Factory fire led to regulations mandating sprinkler systems and fire exits. And the preventable sinking of the Titanic in 1912 led to new regulations on lifeboats, safety audits, and onboard radios.

Perhaps the best analogy is the evolution of the Federal Aviation Administration. Fatalities in the first decades of aviation forced regulation, which in turn required new developments in both law and technology. Beginning with the Air Commerce Act of 1926, Congress recognized that integrating aviation technology into people's lives and our economy demanded the closest scrutiny. Today, every air crash is rigorously examined, motivating new technologies and procedures.

Any regulation of industrial robots stems from existing industrial regulation, which has been evolving for decades. The Occupational Safety and Health Act of 1970 established safety standards for machinery, and the Robotic Industries Association, now merged into the Association for Advancing Automation, has played an important role in developing and updating robot-specific safety standards since its founding in 1974. Those standards, with unassuming names like R15.06 and ISO 10218, emphasize inherently safe design, protective measures, and rigorous risk assessments for industrial robots.

But as technology continues to change, the government needs to more clearly regulate how and when robots can be used in society. Laws need to clarify who is responsible, and what the legal consequences are, when a robot's actions cause harm. Yes, accidents happen. But the lessons of aviation and workplace safety demonstrate that accidents are preventable when they are openly discussed and subjected to expert scrutiny.

AI and robotics companies don't want this to happen. OpenAI, for example, has reportedly fought to "water down" safety regulations and reduce AI-quality requirements. According to an article in Time, it lobbied European Union officials against classifying models like ChatGPT as "high risk," which would have brought "stringent legal requirements including transparency, traceability, and human oversight." The reasoning was supposedly that OpenAI did not intend to put its products to high-risk use, a claim about as logical as the Titanic's owners lobbying that the ship should not be inspected for lifeboats on the principle that it was a "general purpose" vessel that could also sail in warm waters, where there are no icebergs and people can float for days. (OpenAI did not comment when asked about its stance on regulation; previously, it has said that "achieving our mission requires that we work to mitigate both current and longer-term risks," and that it is working toward that goal by "collaborating with policymakers, researchers and users.")

Large corporations have long tended to develop computing technologies in ways that shift the burdens of their own shortcomings onto society at large, or to claim that safety regulations protecting society impose an unjust cost on the corporations themselves, or that basic safety standards stifle innovation. We have heard it all before, and we should be extremely skeptical of such claims. Today's AI-related robot deaths are no different from the robot accidents of the past: those industrial robots malfunctioned, and the human operators trying to assist were killed in unexpected ways. Since the first known fatality involving the feature in January 2016, Tesla's Autopilot has been implicated in more than 40 deaths, according to official report estimates. Malfunctioning Teslas on Autopilot have deviated from their advertised capabilities by misreading road markings, suddenly veering into other cars or trees, crashing into well-marked service vehicles, or ignoring red lights, stop signs, and crosswalks. We are concerned that AI-controlled robots are already moving beyond accidentally killing people in the name of efficiency and toward "deciding" to kill someone in order to achieve opaque, remotely set goals.

As we move toward a future in which robots become ever more integral to our lives, we cannot forget that safety is a crucial part of innovation. Real technological progress comes from applying comprehensive safety standards across technologies, even in the realm of the most futuristic and captivating robotic visions. By learning lessons from past fatalities, we can improve safety protocols, rectify design flaws, and prevent further unnecessary loss of life.

The British government, for example, has stated that safety is paramount, but statements alone are not regulation. Lawmakers must look back through history in order to look forward to what we must demand now: threat modeling, calculation of potential scenarios, and responsible engineering that builds within the limits of broader social protections. Decades of experience have given us the empirical evidence to guide our actions toward a safer future with robots. Now we need the political will to regulate.


When you buy a book using a link on this page, we receive a commission. Thank you for supporting The Atlantic.

