On April 13, a United Nations meeting was held to discuss a potential ban on the development of "killer robots". A UK spokesman stated, "We do not see the need for a prohibition as international humanitarian law already provides sufficient regulation". The idea behind these killer robots is that they would have the power to select and execute targets WITHOUT any human input.

In the UK's defense, it said that it does not plan to make any drones or planes that are completely autonomous; however, since the countries did not agree on a ban, others remain free to keep or build such robots.

How is a robot able to tell which threats are dangerous and which aren't? A human brain can see something and, for the most part, determine whether or not it is a threat. How can we rely on a robot that lacks distinctly human traits, such as fear, hate, a sense of honor and dignity, compassion, and love, all of which are considered desirable in combat?

In my opinion, this is insane. No robot should have total control over the potential to destroy something or someone. The fact that this is even being considered is scary to me. I can't imagine a world where a robot has more control over my life than I do: if a drone thinks I am a threat while I'm doing something harmless, it has every ability to kill me. And who's to say that it won't? What are the standards for being a threat? The definition? The criteria that determine whether or not I am a target and should die?
