Five Things You Need to Know about Artificial Intelligence


When you hear the words Artificial Intelligence, what image comes to mind? Is it the Hollywood dystopia of humanoid robots becoming conscious, turning evil, and spelling the end of the human race? Be careful, then, because AI is not what pop culture depicts. To help you get a better understanding of the topic, we have summarized five misconceptions you might have about artificial intelligence.

Misconception No. 1

AI needs a robot body.

What is an AI? Robots that perform repetitive tasks on assembly lines do not qualify as AI agents, because no intelligence is required in that process. Some people blame AI for taking lower-paid, repetitive jobs away from people, when it is really automation, or technology in general, that deserves the blame. We would not say that a car was AI when it put horses and carts out of business. The spam detection system in your email, by contrast, is an AI, because it can sort through information and decide for you whether that information is valuable. AI is a much broader concept than robots: it can be found in nearly every aspect of daily life, from search engines such as Google to medical diagnosis, autonomous vehicles, and even the prediction of judicial decisions.
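To make the spam-filter example concrete, here is a minimal sketch of the kind of statistical decision such a system makes. The tiny corpus and the naive Bayes scoring are illustrative assumptions, not the internals of any real email product:

```python
from collections import Counter
import math

# Tiny made-up training corpus, for illustration only.
spam = ["win money now", "free money offer", "win free prize"]
ham = ["meeting at noon", "project update attached", "lunch tomorrow"]

def word_counts(messages):
    counts = Counter()
    for message in messages:
        counts.update(message.split())
    return counts

spam_counts, ham_counts = word_counts(spam), word_counts(ham)
spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
vocab_size = len(set(spam_counts) | set(ham_counts))

def spam_score(message):
    # Naive Bayes log-odds with add-one smoothing:
    # a positive score means the words look more like spam than ham.
    score = 0.0
    for word in message.split():
        p_spam = (spam_counts[word] + 1) / (spam_total + vocab_size)
        p_ham = (ham_counts[word] + 1) / (ham_total + vocab_size)
        score += math.log(p_spam / p_ham)
    return score

print(spam_score("free money"))    # positive: looks like spam
print(spam_score("project lunch")) # negative: looks like normal mail
```

The point for your debate case is that the system *decides* based on patterns it learned from data, which is exactly what separates it from a fixed-motion assembly-line robot.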

So during case construction, your examples should go beyond Siri. The development of AI in our society has implications far beyond increased working efficiency: it will transform how we run businesses, treat diseases, and live our lives.

Misconception No. 2

AI can do everything human minds can.

The most common approach to building an AI is to define a specific task and design software to solve that particular problem. Guru Banavar, the head of the team at IBM responsible for creating Watson, said, "We can teach a computer to recognize a car, but we can't ask that same computer, 'How many wheels does that car have?'" (1) This kind of AI agent is known as Weak AI or Narrow AI, as opposed to Strong AI or Artificial General Intelligence.

Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI) is the concept people are more familiar with; examples include C-3PO from Star Wars and HAL 9000 from 2001: A Space Odyssey. Since the superintelligence era is still far in the future, if it ever comes, do not let the pop-culture image of AI dominate your discussion of its impact on society during debate rounds.

Misconception No. 3

The future of AI has to be the Terminator.

On the one hand, concerns about an intelligence explosion are shared by big names. Nick Bostrom argues in his book Superintelligence: Paths, Dangers, Strategies that "the creation of a superintelligent being represents a possible means to the extinction of mankind". (2) Stephen Hawking also warned that advanced AI "could spell the end of the human race". (3) On the other hand, there are skeptics such as Rodney Brooks, who believes that extreme AI predictions are "comparable to seeing more efficient internal combustion engines… and jumping to the conclusion that the warp drives are just around the corner." (4)

Most researchers do not agree that AI will inevitably lead to some sort of doomsday for humanity. In your analysis of a doomsday scenario, there are four major questions you need to answer for the judges.

  1. Is Artificial General Intelligence ever achievable?
  2. If AGI is achieved, will intelligent machines share human ethics?
  3. Can a machine have consciousness, or intentionally cause harm?
  4. Can a machine be sentient, and thus deserve certain rights?

Misconception No. 4

AI is only about the military.

The economic, political, and social implications of AI should not be underestimated. The implementation of AI in more and more domains would bring about a wide-scale destruction of jobs. When a company like Didi can save enormous amounts of money by using only self-driving vehicles, Didi drivers will be driven out of work. Enormous wealth would be concentrated in relatively few hands, while enormous numbers of people go unemployed. If society cannot successfully deal with unemployment and wealth inequality, could there be another Great Depression?

Another problem lies in the big data we are now feeding AI: AI systems pick up deeply ingrained racial and gender prejudices at the same time they learn the patterns of language use. Word-embedding association research published in Science shows that in AI systems, men are more closely associated with work, math, and science, while women are associated with family and the arts. When a computer searches résumés for computer programmers, men's résumés are more likely to pop to the top. (5) Consequently, women would be less likely to get jobs, African Americans would be less likely to get bank loans, and racial and gender bias would be intensified.
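The association test behind that Science finding compares distances between word vectors. Here is a minimal sketch of the idea; the three-dimensional "embeddings" below are made up for illustration (real systems learn vectors from billions of words of text):

```python
import math

# Toy embeddings, invented for this sketch only.
vec = {
    "man":    [0.9, 0.1, 0.2],
    "woman":  [0.1, 0.9, 0.2],
    "career": [0.8, 0.2, 0.3],
    "family": [0.2, 0.8, 0.3],
}

def cosine(a, b):
    # Cosine similarity: 1.0 means the vectors point the same way.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def association(word):
    # How much closer is this word to "career" than to "family"?
    return cosine(vec[word], vec["career"]) - cosine(vec[word], vec["family"])

print(association("man"))    # positive: "man" sits nearer "career"
print(association("woman"))  # negative: "woman" sits nearer "family"
```

Because the embeddings are learned from human-written text, any bias in that text shows up as exactly this kind of geometric asymmetry, and downstream tools such as résumé rankers inherit it.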

Misconception No. 5

AI turning evil is the only fear in Doomsday Scenario.

If AI never turns evil or becomes conscious as predicted in films and literature, are we entirely safe? Zach Musgrave and Bryan W. Roberts write in The Atlantic, "Humans, not robots, are the real reason artificial intelligence is scary." Many AI specialists, led by Elon Musk, are more concerned about AI weapons. (6) With fully autonomous weapons, impassive soldiers, and a possible AI arms race, a third revolution in warfare could go beyond human control.

There is a chance of AI turning evil and deciding to overthrow humans, but there is also a chance of AI simply serving the evil goals of human beings. There are many possible futures of AI development left for you to research. Whatever argument you want to make, be sure to analyze the evidence you have found in present-day reality, and the deductions you have drawn from it, before jumping to a final conclusion.


  1. Reese, Hope. "The 7 biggest myths about artificial intelligence." TechRepublic, 15 Mar. 2015. Accessed 24 Aug. 2017. <http://www.techrepublic.com/article/the-7-biggest-myths-about-artificial-intelligence/>.
  2. Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.
  3. "Artificial intelligence could spell the end of human race, Stephen Hawking warns." CTV News, 3 Dec. 2014. Accessed 24 Aug. 2017. <http://www.ctvnews.ca/mobile/sci-tech/artificial-intelligence-could-spell-the-end-of-human-race-stephen-hawking-warns-1.2130533>.
  4. Ford, Paul. "Our Fear of Artificial Intelligence." MIT Technology Review, 11 Feb. 2015. Accessed 24 Aug. 2017. <https://www.technologyreview.com/s/534871/our-fear-of-artificial-intelligence/>.
  5. Hutson, Matthew. "Even artificial intelligence can acquire biases against race and gender." Science, 10 Apr. 2017. Accessed 25 Aug. 2017. <http://www.sciencemag.org/news/2017/04/even-artificial-intelligence-can-acquire-biases-against-race-and-gender>.
  6. "An Open Letter to the United Nations Convention on Certain Conventional Weapons." Future of Life Institute. Accessed 25 Aug. 2017. <https://futureoflife.org/autonomous-weapons-open-letter-2017>.








