Topic discussion: Are you afraid of artificial intelligence?
Main text
Is AI an existential threat to humanity?
Comments
Adam D'Angelo
Is AI a potential threat to humanity?
In the near term AI serves as a tool that can magnify the amount of power an individual has. For example, someone could buy thousands of cheap drones, attach a gun to each of them, and develop AI software to send them around shooting people. If the software was good enough this could result in far more destruction than a normal terrorist attack. And I fully expect that the software part of this will become easy in the future if it isn't already today.
This is very different from the options of a terrorist group today, because right now they need humans to carry out attacks and there is a limit to the amount of damage that can be done per person. Having relatively simple AI in place of the human here brings the marginal cost of an attack down to zero and hurts the ability of law enforcement to stop attacks or retaliate. So, there is a risk that as AI gets better and better it at least destabilizes things.
This is totally independent of concerns about AI "taking over" with its own "free will". I think that is a risk too, but it is much further off, and I think the near term force magnifier issue is just as dangerous.
Tony Paternite
This scares me.
Taylor Sage
Not that scared, to be honest.
Hobby-sized drones armed with small arms are possible, but that is still deep in the proof-of-concept phase. Anything on the scale of, say, an MQ-1 Predator (the smallest armed RPA) is well outside the budget of a single non-state actor and easily countered (many active programs are dedicated to this exact threat right now).
Let's say, however, that small-scale drones could be reliably armed with small arms and given autonomous tasking from a centralized C2 (command and control). An attack of this nature would likely do much less damage than an actual armed individual with dollar-for-dollar comparable equipment. This assessment is based on the limited mobility of small-scale aerial platforms in confined spaces, as well as the expected reactions of a crowd during such an event (cover would not be hard to find). As you get deeper into the what-ifs, you run into the reality that smaller distributed autonomous attacks would have been conducted, in a non-lethal capacity, before this capability became a reality, allowing the threat to be thoroughly modeled.
It should be noted that many researchers have already addressed small-scale RPAs and have developed a wide range of highly effective solutions, many of them involving electronic countermeasures.
Most non-state actors are cheap when it comes to weapons for terrorist attacks. It is far cheaper to recruit some fundamentalists willing to die for a cause than to invest in an expensive technology. Furthermore, the tactic would be one-time use, and it directs the fear factor onto the technology rather than the ideology.
Steven Lu
The only problem with your entire argument is that you have neglected to consider explosives.
Binu Jasim
Drones are already there in Afghanistan, Yemen, etc., targeting terrorists and sometimes civilians too.
I don't see how a drone with AI could be a bigger threat than this. The hardest part is getting the drone to infiltrate a country, not shooting people (that can be controlled by humans via signals from a distance, even now). The drone would be picked up by radar and shot down well before it entered the US, Europe, or any other country terrorists are likely to target.
William Ritson
I assume the drone would be assembled inside the country. Imagine a civilian quadcopter with a handgun strapped to it and a system to fire it. With good enough software you could get it to autonomously fly around and indiscriminately kill people.
Once a terrorist group developed such software, they might assemble a dozen of them and hide them in a truck. They could then park the truck near a sporting event. Once they were out of the way, they could remotely trigger the drones to fly into the stands and attack. They could also continually reuse the software for multiple attacks with the same cheap off-the-shelf hardware.
That is not to say it's easy. Writing such an AI would currently still be very difficult, and there are probably cheaper and more effective attacks. But who knows what could happen in the next 20 years.
Koushtav Chakrabarty
If terrorists were to use drones and AI-controlled bots, I'm very sure law enforcement agencies would too. And the latter would invest in them well before terrorists if machines were found to be more reliable (UAVs, for instance, are already in use by the US, whereas not many terrorists can get their hands on such tech). Plus, if I remember correctly, a recent law passed in the US requires drones to be registered according to their specs. So yes, as AI-controlled machines improve, their uses will increase too, which will be both beneficial and harmful for humankind.
Joey Thaman
What do you think is the best course of action to deal with this?
Timothy Johnson
Perhaps the solution is to have your own drone security? I recently read "The Diamond Age," which mentions this problem.
Aurélien Emmanuel
Like everyone having their own guns? Seems like a nice idea; works pretty well in the USA.
Timothy Johnson
I agree that it would be unfortunate to start an arms race. I'm imagining security drones that would only be used against other drones. If you agree that terrorists with autonomous drones are a problem, what would you do about it?
Robert P. Collins
Retaliation is not the responsibility of law enforcement, at least not in a free society. I know that wasn't a major point in your answer, but it is always important to be clear about it.
Minh Hoang
I don't think AI is going to be a threat to humanity. Why should it exterminate humans when it can collaborate with us? The machine itself cannot have the thinking of a human, nor can a human think like a machine. Arbitrariness and variation are unique traits of the human brain, and if they were applied to a binary machine, it would not function. Therefore, if AI is ever developed to the point of self-awareness, it will find that humans are the key to its development from that point forward.
Andre Szykier
AI already affects your existence and socio-economic mobility. Here in the US it is called the Fair Isaac (FICO) credit score. A low score makes you a pariah. A high score puts you in the top 1%.
These algorithms are not sophisticated, but they argue for a future where deep-learning AI systems will decide your future and your potential value, or burden, to society.
Up to now, data scientists have pulled the levers, but AI systems are becoming self-learning and will be more precise than humans in pattern detection and prediction.
Connecting AI systems is the next challenge. This is a bigger threat than the Terminator-style automatons that movies favor.
As we master genetics and socio-economic models of the population determinants of wealth, AI could control who is allowed to procreate based on their healthy genes, their social status, and their intellectual capacity.
Sound like eugenics? I hope not. But in countries that have social complexities and need to industrialize at a rapid pace (India and Indonesia come to mind), AI could be misused by those in power to decide which demographics, ethnicities, even beliefs should be promoted and which should be deprecated.
With the focus on information-driven economics for future global growth, the real threat posed by AI is a federated system of real-time, big-data, deep-learning machines that begin to manage the future of humanity. No levers required, because no humans are needed to run them.
The Internet of Everything (IOE) may just be the first step. See below.
Chamath Palihapitiya
AI a threat to humanity? Should it be regulated?
No and no. I think the term AI is overloaded and mostly used by fear-mongering technophiles or wannabe intellectuals. AI, in its current state, is really a probabilistic set of heuristics or rules. If the A is ever to become I, there needs to be a fluid transition and unknown boundary between deterministic and probabilistic decision making - like the human mind. I don't see that around the corner. I do think we will get ever more precise capabilities in strictly defined systems (autonomous driving), where most of the hairiest and most ambiguous rules will be ratified or voted on, but I don't see an "intelligent" brain anywhere around the corner. I think it's mostly "smart" people trying to sound really smart...
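The deterministic-versus-probabilistic distinction being drawn here can be made concrete with a minimal Python sketch. Everything in it is invented for illustration: the braking scenario, the function names, and the logistic curve are hypothetical stand-ins for the "probabilistic set of heuristics" described above, not any real system.

```python
import math

# Deterministic decision: the outcome is completely fixed by a hand-written rule.
def rule_based_brake(distance_m: float) -> bool:
    """Hard-coded heuristic: brake whenever an obstacle is closer than 10 m."""
    return distance_m < 10.0

# Probabilistic decision: the system outputs a likelihood rather than a verdict,
# and a threshold (chosen by people or tuned from data) turns it into an action.
def learned_brake_probability(distance_m: float) -> float:
    """Toy stand-in for a fitted model mapping distance to P(should brake).
    The logistic curve and its weights are invented for this sketch; a real
    system would estimate them from data."""
    return 1.0 / (1.0 + math.exp(0.8 * (distance_m - 10.0)))

if __name__ == "__main__":
    for d in (3.0, 9.5, 12.0, 25.0):
        print(f"{d:5.1f} m  rule: {rule_based_brake(d)!s:5}  "
              f"P(brake): {learned_brake_probability(d):.3f}")
```

Current ML-based systems sit almost entirely on the probabilistic side of this sketch.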
Pete Griffiths
"there needs to be a fluid transition and unknown boundary between deterministic and probabilistic decision making - like the human mind."
I have no idea what, if anything, this means.
Stephen Kahn
Do you have "free will," Pete? Our mind says, "Yes. I make decisions, such as posting my comment." Current neuroscience casts doubt on our opinion of ourselves. Our "selves" seem to make decisions at a lower level than our consciousness tells us. Don't think about it too much. That way leads to madness.
Pete Griffiths
I don't know, Stephen. And I've given up thinking about it, for now at least. It does indeed lead to madness.
Thomas George
Differentiate between Deterministic and Probabilistic Systems
You might find this answer useful
What is machine learning?
Pete Griffiths
I know what machine learning is. I just don't understand what a 'fluid transition and unknown boundary' means.
Olakusibe Aremu-Oluwole
From my understanding:
Machines are programmed to follow a set of rules for any particular task, whereas AI, in the form known as machine learning, has the ability to observe and learn from its environment while performing its task; but to achieve this learning, it also follows a set of rules.
The human mind, by contrast, can be autonomous when making decisions, given to emotions (fear, anxiety, love and whatnot), morals, nature, and nurture.
Hence, unless machines have a human mind, there is no need to worry about AI threats.
Given this, it would take a very complex AI system to make a human mind; it is theoretically achievable, but it will take years of science and computing.
For now, the AI systems that exist are for marketing and business, though the medical field may push them a notch further; the bottom line is commercial success. But always remember, it is still humans who design the machines, so let's worry about the motives of those humans.
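The point that the learning itself still follows a set of rules can be seen in a tiny worked example. The perceptron below is a standard textbook algorithm; the AND-gate dataset, the learning rate, and the number of passes are chosen purely for illustration. The program ends up encoding a rule (logical AND) it was never explicitly given, yet every step of the "learning" is ordinary, fixed code.

```python
# A toy perceptron that learns the AND function from examples.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # (inputs, target)
w = [0.0, 0.0]   # weights, initially zero
b = 0.0          # bias
lr = 0.1         # learning rate

for _ in range(20):                      # a fixed number of passes over the data
    for (x1, x2), target in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        error = target - pred
        # These three updates are the entire "learning rule": the procedure
        # itself never changes, only the numbers it maintains.
        w[0] += lr * error * x1
        w[1] += lr * error * x2
        b += lr * error

print("weights:", w, "bias:", b)  # the learned numbers now implement AND
```

Whether this kind of rule-following learning can ever scale up to anything like a human mind is exactly what the comments above and below disagree about.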
Heiko Schmidt
I see it more in an economic context: 50% of all jobs will be replaced by AI-based tools within the next five years. Short term, this is good news because it increases productivity. Mid term it is not, because society cannot change its social systems and values quickly enough to make that digestible for the majority.
Our existing societies are ruled by the idea that people should share in the added value that is generated in proportion to how much of it they were able to create.
That is how people become billionaires.
But who shares in what, and by how much, if machines/AI are generating the added value?
And how can a large portion of humans adapt to the reality that no one needs their work anymore?
If people can't find a positive role within a society, they get radicalized.
If that is a large group of people, we do have an existential threat.
I would like to see more ideas and discussion about how to solve this real near-term problem, rather than speculation about the day after the singularity.