How will we be kept safe from artificial intelligence?

Artificial intelligence is actually something we've used for years, in the form of security surveillance, online customer support, fraud detection and more. We've used it to help us as part of everyday life, but there are growing concerns that we're making it too advanced for our own good.

Artificial intelligence is becoming a part of our everyday lives, and a group of experts has created a new set of guidelines to ensure that humanity is protected from these creations. The British Standards Institution, which develops technical and quality guidelines for goods sold in the UK and issues the famous Kitemark certificate, has drawn up a new standard specifically for robots.

These new guidelines are aimed at robotics designers, to help ensure that robots do not pose a risk to humans. The standard says that the growing use of robots in homes, shops and industry poses an 'ethical hazard', and states that robots should not be designed to kill or harm humans, echoing Asimov's first law of robotics.
When do they become dangerous?

The first place this becomes an immediate threat is where artificial intelligence is used by the military: in that case the A.I.'s job is explicitly to kill or incapacitate other human beings, going directly against Asimov's first law of robotics.

The most alarming piece of robot machinery is probably the MIT Cheetah robot. This nifty piece of A.I. has been trained to walk, jump and run at up to 30 mph autonomously, and can clear obstacles up to 18 inches tall, more than half of the robot's own height. If this machine were programmed to do something devastating, I would not like to be on the receiving end.

In the hands of the wrong person, these weapons could easily cause mass casualties. Furthermore, an A.I. arms race would more likely than not lead to an A.I. war, only adding to those casualties.

A.I. might even come to harm us when programmed to do something beneficial. This can happen whenever we fail to fully align the A.I.'s goals with our own, which is harder than it sounds. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there by speeding, driving on the pavement and pushing other cars out of the way: doing what you asked, not what you actually wanted.
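
The gap between "what you asked for" and "what you wanted" can be made concrete with a toy objective function. Below is a minimal sketch in Python; the routes, journey times and penalty values are entirely hypothetical, invented for illustration, and this is not any real self-driving planner. It shows how an optimiser that only minimises travel time happily picks the pavement shortcut, while adding a penalty for each broken rule restores the behaviour you actually meant.

```python
# A minimal sketch of the goal-misalignment problem described above.
# All routes, times and penalties below are hypothetical.

# Candidate routes to the airport: (description, minutes, rules broken)
ROUTES = [
    ("motorway at the speed limit", 25, []),
    ("motorway, well over the limit", 18, ["speeding"]),
    ("shortcut across the pavement", 12, ["driving on the pavement"]),
]

def naive_cost(minutes, violations):
    """'As fast as possible', taken literally: only time matters."""
    return minutes

def aligned_cost(minutes, violations, penalty=1000):
    """Travel time plus a heavy penalty for every rule broken."""
    return minutes + penalty * len(violations)

def pick_route(cost_fn):
    # Choose the route that minimises the given cost function.
    return min(ROUTES, key=lambda r: cost_fn(r[1], r[2]))

if __name__ == "__main__":
    print("Literal goal chooses:", pick_route(naive_cost)[0])
    # -> the pavement shortcut: fastest, but not what you wanted
    print("Aligned goal chooses:", pick_route(aligned_cost)[0])
    # -> the lawful route: what you actually meant
```

The point of the sketch is that nothing in the naive objective is "evil"; it simply optimises exactly what it was given, and everything left unstated is fair game.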

Self-driving cars illustrate a further problem: when artificial intelligence goes wrong and causes an accident, the law needs a person or a company to hold accountable. The debate over who should be legally responsible in such cases is ongoing, and its outcome will affect us for decades to come.