Elon Musk’s Open Letter to the United Nations Warns Against Autonomous Killer Robots

“Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend,” the letter states, according to a press release issued by the Future of Life Institute, an organization whose board of advisers includes Musk and astrophysicist Stephen Hawking. “These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways.”

Musk has expressed his dire concerns over artificial intelligence and robotics several times in recent months, despite developing and deploying the technology in Tesla’s semi-autonomous vehicles.

“Musk of all people should know the future is always rife with uncertainty—after all, he helps construct it with each new revolutionary undertaking,” Ryan Hagemann, director of technology policy at the think tank the Niskanen Center, wrote in a blog post. “Imagine if there had been just a few additional regulatory barriers for SpaceX or Tesla to overcome.”

During a National Governors Association meeting in July, Musk described AI as the “biggest risk we face as a civilization.”

“Until people see robots going down the street killing people,” Musk said, “they don’t know how to react because it seems so ethereal.”

Musk said earlier in August that AI poses a greater risk than a nuclear-armed North Korea.

SEE: KILLER ROBOT REFERENCE PAGE

Tech-focused groups have criticized Musk for his seemingly anti-AI position.

The Information Technology and Innovation Foundation (ITIF), for example, called the Tesla CEO an “alarmist” in 2015 after he pledged $1 billion to prevent the proliferation of autonomous robots, adding that he and his ilk stoke fear about an upcoming artificial intelligence revolution.

Certain tech executives have also indirectly criticized Musk for his doomsday clamoring. Facebook CEO Mark Zuckerberg said in July that people who raise fears over the advent of AI are “pretty irresponsible.”

“I think you can build things and the world gets better. With AI especially, I am really optimistic,” Zuckerberg said, according to Axios. “I think people who are naysayers and try to drum up these doomsday scenarios — I just don’t understand it … In the next five to 10 years, AI is going to deliver so many improvements in the quality of our lives.”

While Zuckerberg did not name anyone in particular, and was ostensibly referring to general fears about such capabilities, he was in all likelihood alluding to comments made by fellow bigwig Musk, and perhaps even Hawking.

But uneasiness about autonomous systems does not come only from some of the people who purportedly understand them best. A large majority of both major political parties believe there should be some form of regulation of AI, according to a recent Morning Consult poll: 73 percent of Democrats, 74 percent of Republicans, and 65 percent of independents answered in the affirmative when asked whether the U.S. should impose regulations.

Other noteworthy signatories of the letter include Mustafa Suleyman, co-founder of Google’s DeepMind, and Esben Østergaard, founder and CTO of Universal Robots in Denmark.

Despite the apparent schism of opinion on AI in the larger tech field, Musk and several others in the robotics industry continue to voice deep apprehension about specific applications of the technology.

“We do not have long to act,” the letter urged. “Once this Pandora’s box is opened, it will be hard to close.”

READ THE LETTER: “An Open Letter to the United Nations Convention on Certain Conventional Weapons”

 

HAL 9000 Artificial Intelligence (from movie 2001)

“Robotics and AI have become one of the most prominent technological trends of our century. The fast increase of their use and development brings new and difficult challenges to our society,” writes Delvaux. Therefore, the reasoning goes, “robots and artificial intelligence (AI) would increase their interaction with humans,” raising “legal and ethical issues which require a prompt intervention at EU level.”

I’m Afraid I Can’t Do That

Science fiction meets science fact: the Three Laws of Robotics just appeared in a draft European Parliament committee report on robots and artificial intelligence titled Workshop on Robotics & Artificial Intelligence.

While it is a non-binding document, these rules could be adopted by the EU this month.

Read the full article: http://delano.lu/d/detail/news/im-afraid-i-cant-do/132457

If you’re not familiar with where the title of this article comes from, you MUST watch the famous scene (2:11 min) from the ground-breaking 1968 movie 2001.
The movie is partially based on Arthur C. Clarke’s short story “The Sentinel,” first published in a fantasy magazine in 1951.

In the clip, astronaut Dave argues with the HAL 9000 AI:

3 Laws For Robots

The Three Laws of Robotics (often shortened to The Three Laws or known as Asimov’s Laws) are a set of rules devised by the science fiction author Isaac Asimov involving artificial intelligence and explained by him in the archived video below.

The rules were first introduced in his 1942 short story “Runaround“, although they had been foreshadowed in a few earlier stories. The Three Laws, quoted as being from the “Handbook of Robotics, 56th Edition, 2058 A.D.”, are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
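
The Three Laws are, in effect, a strict priority ordering of constraints: each law yields only where it conflicts with a higher one. As a purely illustrative sketch (the predicate names and the dictionary-based action model below are invented for this example, since no real system can actually evaluate "harms a human"), the ordering can be modeled as rules checked in sequence:

```python
# Illustrative sketch only: the predicates (harms_human, etc.) are
# hypothetical stand-ins for judgments no current robot can reliably make.

def permitted(action):
    """Check a proposed action against Asimov's Three Laws, in priority order."""
    # First Law: may not injure a human, or allow harm through inaction.
    if action.get("harms_human") or action.get("allows_harm_by_inaction"):
        return False
    # Second Law: obey human orders, unless that conflicts with the First Law.
    if action.get("disobeys_order") and not action.get("order_would_harm_human"):
        return False
    # Third Law: protect own existence, unless that conflicts with Laws 1-2.
    if action.get("self_destructive") and not (
        action.get("ordered") or action.get("protects_human")
    ):
        return False
    return True

print(permitted({"harms_human": True}))     # blocked by the First Law
print(permitted({"disobeys_order": True}))  # blocked by the Second Law
print(permitted({}))                        # no rule violated
```

Note how the ordering does the work: disobeying an order is forbidden by the Second Law, but allowed when obeying would violate the First.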

#TxEduChat: Sunday, April 24th at 8 PM Central (6 PM Pacific)

Thanks to @Tom_Kilgore and the popular @TxEduChat for asking me to host another Twitter Chat!

I’m excited to have some fun and explore the topic:

AI & Robots: How can we “future proof” students?

Artificial intelligence and robots are developing at a RAPID pace!

With two new grandchildren, I’m investigating more seriously the advancing new technologies in an effort to understand the knowledge and skills necessary to achieve happiness and success in a technological future.

I’m certain that the ubiquitous nature of technology will have a considerable impact on the new generation, and I believe that adults (educators, parents, and others) will be critical in mentoring young people as they navigate these changes, along with new moral and ethical decisions humanity has never had to face.

Sunday’s TXeduchat is intended to be a fun-filled hour to consider the possibilities and get us all thinking about how to future proof our students in a world where deep learning, automation, artificial intelligence, and robotics are accelerating.

It will be hard to accomplish too much in the hour we have online, but a lot of amazing and brilliant people frequent #TxEduChat, so I’m confident it’s going to be a frenetically paced evening!

For those of you here for the first time, I’ll post a reference link below after the chat.

 

SAN FRANCISCO — Until recently, Robyn Ewing was a writer in Hollywood, developing TV scripts and pitching pilots to film studios.

Now she’s applying her creative talents toward building the personality of a different type of character — a virtual assistant, animated by artificial intelligence, that interacts with sick patients.

Ewing works with engineers on the software program, called Sophie, which can be downloaded to a smartphone. The virtual nurse gently reminds users to check their medication, asks them how they are feeling or if they are in pain, and then sends the data to a real doctor.
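
The workflow described — remind, ask, then forward to a clinician — is essentially a scripted check-in loop. A minimal, hypothetical sketch of that kind of loop follows; the questions, pain scale, and report format are invented for illustration and are not the actual Sophie product:

```python
# Hypothetical sketch of a virtual-nurse check-in; not the real Sophie app.
import json
from datetime import datetime, timezone

CHECK_IN_QUESTIONS = [
    ("took_medication", "Have you taken your medication today? (yes/no)"),
    ("pain_level", "On a scale of 0-10, how much pain are you in?"),
]

def build_report(answers):
    """Package a patient's answers so they can be forwarded to a real doctor."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "answers": answers,
        # Flag high pain scores so a clinician reviews them promptly.
        "flag_for_review": int(answers.get("pain_level", 0)) >= 7,
    })

# Example: answers a user might type in response to CHECK_IN_QUESTIONS.
report = build_report({"took_medication": "yes", "pain_level": 8})
print(report)
```

The writer's contribution in such a system is the wording of prompts like those in `CHECK_IN_QUESTIONS` — the "personality" layered over a simple data pipeline.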

As tech behemoths and a wave of start-ups double down on virtual assistants that can chat with human beings, writing for AI is becoming a hot job in Silicon Valley. Behind Apple’s Siri, Amazon’s Alexa and Microsoft’s Cortana are not just software engineers. Increasingly, there are poets, comedians, fiction writers, and other artistic types charged with engineering the personalities for a fast-growing crop of artificial intelligence tools.

“Maybe this will help pay back all the student loans,” joked Ewing, who has master’s degrees from the Iowa Writers’ Workshop and film school.

Unlike the fictional characters that Ewing developed in Hollywood, who are put through adventures, personal trials and plot twists, most virtual assistants today are designed to perform largely prosaic tasks, such as reading through email, sending meeting reminders or turning off the lights as you shout across the room.

But a new crop of virtual assistant start-ups, whose products will soon flood the market, have in mind more ambitious bots that can interact seamlessly with human beings.

Continue Reading…


The Future of Employment Research Report from Oxford University

In this report, Oxford University addresses the question: How susceptible are jobs to computerization?

In doing so, they build on the existing literature in two ways. First, drawing on recent advances in Machine Learning (ML) and Mobile Robotics (MR), they develop a novel methodology to categorize occupations according to their susceptibility to computerization.

702 Occupations Examined

Second, they implement this methodology to estimate the probability of computerization for 702 detailed occupations, and examine expected impacts of future computerization on US labor market outcomes.
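
The core of such a methodology is a probabilistic classifier: hand-label a small set of occupations as automatable or not, describe each occupation by features (for instance, how much creativity, social intelligence, or manual dexterity it demands), and fit a model that outputs a computerization probability for the remaining occupations. The toy stand-in below uses a logistic model with invented feature names, values, and weights — the report's actual model, features, and data differ:

```python
import math

# Toy features per occupation: higher = the job relies more on that skill.
# All names and values here are invented for illustration only.
OCCUPATIONS = {
    "telemarketer":     {"creativity": 0.1, "social": 0.3, "dexterity": 0.1},
    "surgeon":          {"creativity": 0.6, "social": 0.8, "dexterity": 0.9},
    "data-entry clerk": {"creativity": 0.1, "social": 0.1, "dexterity": 0.2},
}

# A fitted model would learn these weights from labeled examples; here they
# simply encode the intuition that creative, social, dexterous work resists
# automation (negative weights lower the computerization probability).
WEIGHTS = {"creativity": -3.0, "social": -3.0, "dexterity": -2.0}
BIAS = 2.5

def p_computerizable(features):
    """Logistic model: probability that an occupation is automatable."""
    score = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-score))

# Rank occupations from most to least susceptible to computerization.
for name, feats in sorted(OCCUPATIONS.items(),
                          key=lambda kv: -p_computerizable(kv[1])):
    print(f"{name}: {p_computerizable(feats):.2f}")
```

Scaled from three toy occupations to the 702 detailed occupations in the report, this is the shape of the exercise: features in, a probability per occupation out, then aggregation over the labor market.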

Continue Reading…

The rapid pace of artificial intelligence (AI) development has raised fears about whether robots could act unethically or soon choose to harm humans. Some are calling for bans on robotics research; others are calling for more research to understand how AI might be constrained. But how can robots learn ethical behavior if there is no “user manual” for being human?

Continue Reading…