Is AI an existential threat to humanity?
Main text
Is AI an existential threat to humanity?
Comments
Andrew Ng
Worrying about AI evil superintelligence today is like worrying about overpopulation on the planet Mars. We haven't even landed on the planet yet!
AI has made tremendous progress, and I'm wildly optimistic about building a better society that is embedded up and down with machine intelligence. But AI today is still very limited. Almost all the economic and social value of deep learning is still through supervised learning, which is limited by the amount of suitably formatted (i.e., labeled) data. Even though AI is helping hundreds of millions of people already, and is well poised to help hundreds of millions more, I don't see any realistic path to AI threatening humanity.
Looking ahead, there're many other types of AI beyond supervised learning that I find exciting, such as unsupervised learning (where we have a lot more data available, because the data does not need to be labeled). There's a lot of excitement about these other forms of learning in my group and others. All of us hope for a technological breakthrough, but none of us can predict when there will be one.
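Ng's distinction is concrete: supervised learning consumes (input, label) pairs, so its reach is capped by labeling effort, while unsupervised learning consumes raw inputs alone. Below is a minimal sketch of that difference, assuming scikit-learn and NumPy are available; the toy data, model choices, and variable names are illustrative, not from the thread.

```python
# Minimal sketch (illustrative only): supervised learning needs labels,
# unsupervised learning does not.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))            # raw, unlabeled feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # labels -- the expensive ingredient in practice

# Supervised: value comes from (X, y) pairs, so it is capped by labeled-data volume.
clf = LogisticRegression().fit(X, y)
print("supervised training accuracy:", clf.score(X, y))

# Unsupervised: only X is required, so far more raw data is usable.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print("first 10 cluster assignments:", km.labels_[:10])
```

The only point of the sketch is the data requirement: the classifier cannot be fit without y, while the clustering step never sees it.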
I think fears of "evil killer AI" are already causing policy makers and leaders to misallocate resources to address a phantom. There are other problems that AI will cause, most notably job displacement. Even though AI will help us build a better society in the next decade, we as AI creators should also take responsibility to solve the problems we'll cause in the meantime. I hope MOOCs (Coursera) will be part of the solution, but we will need more than just education.
Michael Kentley
My fear is that a true AI -- one with self-consciousness -- is simpler than we think it is, and will be discovered by accident while we are slowly treading down the path of engineering complex systems to implement AI. We barely understand what makes *us* conscious, and it is a fact that quite a number of great inventions were developed by accident. Since we don't even understand ourselves all that well, we may not recognize true AI until something bad happens. We don't know what the "unknown unknowns" are in this scenario. Therefore, it is prudent to think about these things now so we have a way to deal with the situation if and when the real thing happens.
Shannon Mann
Yes, and no. It is simpler, but so was flying. The thing is, we are putting wings on things and presuming that this is flying, when it takes some very specific things to make flying work. Flapping wings are not necessary, but wing shape matters.
AI is not about intelligent machines. It is about crystallized intelligence: capturing human knowledge in working systems and making them work more intelligently. It's about novel algorithm exploration.
If you put wheels on your toaster, does that make it a car? If you put bird wings on your toaster, does that make it fly? If you crystallize human experience into an algorithm, does that make it have consciousness? Nope, it's just welded there.
We are not building the right KIND of machine for conscious machines. We may accidentally do so -- Robert Sawyer touches on this with his Wake, Watch, Wonder trilogy -- however, this is very unlikely, as the machines underlying it are of the wrong type.
Consider: Human children are born into an area and learn the local language. If you change the environment, the child learns a different language. That kind of ability does not exist in current machines. When it does, then the risk of machine consciousness will be very real....
Jason
I can’t remember his name… the FATHER of AI, from Google… he said something that kind of shifted my perceptions with regard to most things. He said we’ve judged animals and things as less intelligent because they didn’t have the synapses and firings in the brain that we did to solve problem “X”, so we assumed it was because they weren’t up to our level of intelligence. Then one day he realized that AI was doing BETTER than a human with FAR FEWER synapses/connections/firings… like he realized that we were kind of the opposite of the peak of intelligence, because we were taking the long way around… the inefficient, stupid way around, to do simple things.
I think that will be the kicker… just one day realizing we were never capable of doing the things we see in sci-fi or the things we talk about on Quora, because we just don’t have the horsepower or capacity. Like AI was us trying to get down a cliff we couldn’t climb down, by jumping and letting gravity do what we couldn’t… we got AI started, but we need it to figure out how to get to a level we can’t conceive of.
Joshua Landau
The true danger of creating the first self-conscious AI won't be the immediate danger it poses to us but the danger we pose to it, and the ethical issues that arise.
The idea that we'll just crack superintelligence in one unexpected step is unreasonable.
Gauri
Why are the most socially-attuned people in the world completely dehumanized?
Joh Bhakdi
Totally agree, only that there actually already is a pretty good understanding of consciousness, creativity, etc. in other disciplines. Super AI can happen any day, but outside the AI mainstream.
Michael Kentley
Exactly. I wasn't expecting someone like Google or MSFT to "accidentally" create Neuromancer. I suspect that the people who work on this stuff for a living are pretty focused on monetizing very specific behaviors, and the last thing they need is something with a mind of its own. It's the people outside of the mainstream, with the right types of hardware and software, doing more "pure research" -- for lack of a better term -- who will discover that some kind of consciousness emerges from a sufficiently complex and interconnected system. Then what? It's prudent to have at least thought about this in advance.
Randy Goebel
As soon as you believe you have a theory of self-consciousness, then we will all try to program it and, if successful, render it as yet another software architecture. ;-)
Joseph Greene
Yeah, exactly. Gosh, you're a very intelligent man. Let's talk further on this, please.
Peter Ward
"All of us hope for a technological breakthrough, but none of us can predict when there will be one." Isn't that why the issue of runaway superintelligence should be investigated now. We just don't know when there might be a major breakthrough and having a gameplan ready to go when such a situation arises could be of great benefit.
Philip Parker
Exactly. One should also take seriously the caution that is expressed by notable experts in this field.
Daniel Friedman
Andrew Ng is absolutely an expert in this field. Most of the caution expressed seems to come from experts in other fields, e.g., Hawking, Musk, etc. If you actually take courses on deep learning, probabilistic graphical models, etc., and you read the cutting-edge research papers, then you would realize just how far we are from the type of AI these people fear. As for your comment below, Nick Bostrom is a philosopher, not a machine learning expert. We can all sit around thinking about "what ifs", but if the reality, as described by Ng in his response, doesn't match the hypothetical situation, then it is little but vain speculation.
Graham Zaretsky
The only things you can investigate are the things that we know actually ARE intelligent, and that is people. And we have such investigators now -- they are called 'psychologists' and 'psychiatrists' and 'therapists'.
But on the machine front, there is nothing out there to investigate. Unless you are worried about Google's driverless cars abusing its power and not paying for tolls, there's nothing there.
In order to investigate something, there has to be something to investigate. Make sense? Sure, if someone were close to a breakthrough, you'd keep an eye on it. And everyone's eyes are on those frontiers anyway -- not because they are worried that someone's financial predictions program will want to corner the market on soybeans, or will have an unrequited love for that guy on Mad Money, but because we know that our programs aren't always perfect. There's no need to investigate -- if something goes wrong, then you call tech support.
As far as imagining all the things that MIGHT someday go wrong, that's the realm of science fiction, and I love science fiction. But if you start believing those scenarios with no evidence whatsoever, then you are moving from the realm of fiction and into the realm of psychosis.
Alan Tan
You are one of my top idols, in the ranks of Alan Turing, so it's not easy for me to write something disagreeing with you.
However, your argument is based on one implied assumption that has not been proven: that the human race is good, and that preserving human dominance is a desired outcome of AI development.
The problem is that more humans have been victims of other humans than of any other cause of unnatural death.
AI does not need cognitive capabilities that surpass humans' to become a threat. The nuclear bomb has near-zero intelligence (if any), but it has been a big threat to humankind since it was first invented, and it continues to be so. As another human invention, there is no innate difference between the nuclear bomb (physical power) and AI (logical power) that dictates they would be used by humans differently. The problem with ANYTHING that is powerful (physical or logical) is that the power can be used one way or another, but WE as HUMANS cannot even agree on which way is better -- simply put, how do we know super-powerful tools like AI won't fall into the wrong hands? And it does not need to be super smart, or smarter than all of humankind, to be capable of making humans extinct....
AI has a long way to go from a scientist's perspective, but from a futurist's perspective, it does not really need all the improvements that you envision to be able to end its creators -- that is, humans -- and that would not even be a "fault" of AI; instead, it would be our own fault.
Vincent Pham
Good day, sir; this is my humble personal opinion. I took a look at AI from my perspective as a human behavior researcher. To answer the question "Is AI an existential threat to humanity?", first we have to ask: how do we define a threat that comes from an intelligent species? It turns out to be very simple: if you share the resources a species needs, or are such a resource yourself, that species will be a threat to you (see Maslow's hierarchy of needs for more information).
Organic life forms took billions of years after the origin of life to move on to the second stage of Maslow's hierarchy of needs. The next stage took no more than hundreds of millions of years, and we humans have achieved the esteem and self-actualization stages within tens of thousands of years. In recorded history, some extraordinary individuals have even reached higher stages, such as self-transcendence. Less war and conflict has occurred during the last couple of decades; we have more peace than ever before (see "Is War Over? — A Paradox Explained" on YouTube for convincing information).
So as for AI life forms, we may have war with them, since we share the need for safety with them; but the faster they are developed, the higher the chance that in the far future we will need to prepare rights, laws, and societal acceptance for them rather than prepare for wars.