
Five Things You Need to Know about Artificial Intelligence


When you hear "artificial intelligence," what image comes to mind? Is it the Hollywood dystopia of humanoid robots becoming conscious, turning evil, and spelling the end of the human race? Be careful, then, because AI is not everything it is depicted to be in pop culture. To help you get a better understanding of the topic, we have summarized five misconceptions you might have about artificial intelligence.

Misconception No. 1

AI needs a robot body.

What is an AI? Robots that perform repetitive tasks on assembly lines do not qualify as AI agents, because no intelligence is required in that process. Some people blame AI for taking lower-paid, repetitive jobs away from people, when it is really automation, or technology in general, that is to blame. We wouldn't say a car was AI when it put horses and carts out of business. Rather, the spam detection system in your email is an AI, because it can sort through information and decide for you whether that information is valuable. AI is a much broader concept than robots: it can be found in nearly every aspect of daily life, be it search engines such as Google, medical diagnosis, autonomous vehicles, or even the prediction of judicial decisions.
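A spam filter of this kind can be sketched in a few lines. The example below is a minimal naive Bayes classifier, not the algorithm any real email provider uses; every message and word in it is invented for illustration.

```python
# Minimal naive Bayes spam filter: a sketch of the kind of "narrow AI"
# described above. All training messages below are made up.
import math
from collections import Counter

def train(messages):
    """messages: list of (text, label) pairs, label is 'spam' or 'ham'."""
    word_counts = {"spam": Counter(), "ham": Counter()}
    label_counts = Counter()
    for text, label in messages:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Return the label with the higher smoothed log-probability."""
    vocab = set(word_counts["spam"]) | set(word_counts["ham"])
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # prior probability of the label
        score = math.log(label_counts[label] / sum(label_counts.values()))
        total = sum(word_counts[label].values())
        for word in text.lower().split():
            # Laplace smoothing keeps unseen words from zeroing the score
            score += math.log((word_counts[label][word] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

wc, lc = train([
    ("win free money now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting moved to tuesday", "ham"),
    ("lunch tomorrow with the team", "ham"),
])
print(classify("free money prize", wc, lc))       # spam
print(classify("team meeting tomorrow", wc, lc))  # ham
```

Note that the filter "decides which information is valuable" only in a narrow statistical sense: it has learned word frequencies from labeled examples and nothing else, which is exactly what makes it an AI without a body.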

So during case construction, your examples should go beyond Siri. The development of AI has implications for our society far beyond increased working efficiency. It will transform basic ideas about running businesses, treating diseases, and living life.

Misconception No. 2

AI can do everything human minds can.

The most common approach to building an AI is to define a specific task and design software to solve that particular problem. Guru Banavar, the head of the IBM team responsible for creating Watson, said, "We can teach a computer to recognize a car, but we can't ask that same computer, 'How many wheels does that car have?'" (1) This kind of AI agent is known as weak AI or narrow AI, as opposed to strong AI or artificial general intelligence.

Artificial general intelligence (AGI), or artificial superintelligence (ASI), is the concept people are more familiar with; examples include C-3PO from Star Wars and HAL 9000 from 2001: A Space Odyssey. Since the superintelligence era is still far in the future, if it ever comes, do not let the pop-culture image of AI dominate your discussion of its impact on society during debate rounds.

Misconception No. 3

The future of AI has to be the Terminator.

On the one hand, concerns about an intelligence explosion are shared by big names. Nick Bostrom argues in his book Superintelligence: Paths, Dangers, Strategies that "the creation of a superintelligent being represents a possible means to the extinction of mankind". (2) Stephen Hawking also warned that advanced AI "could spell the end of the human race". (3) On the other hand, there are skeptics such as Rodney Brooks, who believes that extreme AI predictions are "comparable to seeing more efficient internal combustion engines… and jumping to the conclusion that the warp drives are just around the corner." (4)

Most researchers do not agree that AI will inevitably lead to some sort of doomsday for humanity. In your analysis of a doomsday scenario, there are four major questions you need to answer for the judges.

  1. Is artificial general intelligence ever possible to achieve?
  2. If AGI is achieved, will the intelligent machines have the same ethics as humans?
  3. Can a machine have consciousness, or intentionally cause harm?
  4. Can a machine be sentient, and thus deserve certain rights?

Misconception No. 4

AI is only about the military.

The economic, political, and social implications of AI should not be underestimated. The implementation of AI in more and more domains could bring about a wide-scale loss of jobs. If a ride-hailing company like Didi can save a huge amount of money by running only self-driving cars, its drivers will be put out of work. Enormous wealth would be concentrated in relatively few hands, while enormous numbers of people go unemployed. If society cannot successfully deal with unemployment and wealth inequality, could there be another Great Depression?

Another problem lies in the big data we are feeding AI: systems pick up deeply ingrained racial and gender prejudices at the same time as they learn patterns of language use. The word-embedding association research published in Science shows that, in AI systems, men are more closely associated with work, math, and science, while women are more closely associated with family and the arts. When a computer searches résumés for computer programmers, men's résumés are more likely to pop to the top. (5) Consequently, women will be less likely to get jobs, African American people will be less likely to get bank loans, and racial and gender bias will be reinforced.
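The word-embedding association effect described above can be illustrated with a toy example. The two-dimensional vectors below are invented for demonstration (real studies use embeddings trained on large text corpora); the point is only that cosine similarity can expose which attribute a word leans toward.

```python
# Toy illustration of word-embedding association bias. The 2-D "embeddings"
# are made up; real word vectors come from training on large corpora.
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(a * a for a in v)))

# Hypothetical embeddings where the first axis loosely encodes "career",
# the second "family" -- a stand-in for patterns absorbed from biased text.
embeddings = {
    "he":     (0.9, 0.1),
    "she":    (0.1, 0.9),
    "career": (0.8, 0.2),
    "family": (0.2, 0.8),
}

def association(word, attr_a, attr_b):
    """How much more strongly `word` associates with attr_a than attr_b."""
    return cosine(embeddings[word], embeddings[attr_a]) - \
           cosine(embeddings[word], embeddings[attr_b])

# In this toy space, "he" leans toward "career" and "she" toward "family",
# mimicking the bias pattern a downstream system could then inherit.
print(association("he", "career", "family") > 0)   # True
print(association("she", "family", "career") > 0)  # True
```

A résumé-ranking system built on top of such vectors would inherit the skew without anyone writing an explicitly biased rule, which is why the bias is hard to spot and hard to remove.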

Misconception No. 5

AI turning evil is the only fear in a doomsday scenario.

If AI never turns evil or becomes conscious as predicted in films and literature, are we entirely safe? Zach Musgrave and Bryan W. Roberts write in The Atlantic, "Humans, not robots, are the real reason artificial intelligence is scary." Many AI specialists, led by Elon Musk, are more concerned about AI weapons. (6) With fully autonomous weapons, emotionless soldiers, and a possible AI arms race, this third revolution in warfare could go beyond human control.

There is a chance of AI turning evil and deciding to overthrow humans, and there is also a chance of AI simply working toward the evil goals of human beings. Many possible futures of AI development remain for you to research. Whatever argument you want to make, be sure to analyze the clues you have found in present-day reality, and the reasoning you have followed, before jumping to a final conclusion.

Citations

  1. Reese, Hope. "The 7 biggest myths about artificial intelligence." TechRepublic, 15 Mar 2015. Accessed 24 Aug 2017. <http://www.techrepublic.com/article/the-7-biggest-myths-about-artificial-intelligence/>.
  2. Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.
  3. "Artificial intelligence could spell the end of human race, Stephen Hawking warns." CTV News, 3 Dec 2014. Accessed 24 Aug 2017. <http://www.ctvnews.ca/mobile/sci-tech/artificial-intelligence-could-spell-the-end-of-human-race-stephen-hawking-warns-1.2130533>.
  4. Ford, Paul. "Our Fear of Artificial Intelligence." MIT Technology Review, 11 Feb 2015. Accessed 24 Aug 2017. <https://www.technologyreview.com/s/534871/our-fear-of-artificial-intelligence/>.
  5. Hutson, Matthew. "Even artificial intelligence can acquire biases against race and gender." Science, 10 Apr 2017. Accessed 25 Aug 2017. <http://www.sciencemag.org/news/2017/04/even-artificial-intelligence-can-acquire-biases-against-race-and-gender>.
  6. "An Open Letter to the United Nations Convention on Certain Conventional Weapons." Future of Life Institute. Accessed 25 Aug 2017. <https://futureoflife.org/autonomous-weapons-open-letter-2017>.

