Technology and society: The rapid progress of artificial intelligence is both exciting and frightening. How worried should you be?
“Should we automate away all the jobs, including the fulfilling ones? Should we develop non-human minds that might eventually outnumber, outsmart...and replace us? Should we risk loss of control of our civilisation?” These questions were asked last month in an open letter from the Future of Life Institute, an NGO. It called for a six-month “pause” in the creation of the most advanced forms of artificial intelligence (AI), and was signed by tech luminaries including Elon Musk. It is the most prominent example yet of how rapid progress in AI has sparked anxiety about the potential dangers of the technology.
In particular, new “large language models” (LLMs)—the sort that powers ChatGPT, a chatbot made by OpenAI, a startup—have surprised even their creators with their unexpected talents as they have been scaled up. Such “emergent” abilities include everything from solving logic puzzles and writing computer code to identifying films from plot summaries written in emoji.
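To make the emoji trick concrete, here is a minimal sketch of how one might put such a puzzle to the model behind ChatGPT, using OpenAI's Python client as it existed at the time of writing. The placeholder key, the model name and the emoji riddle are illustrative assumptions, not details drawn from the article.

```python
# A minimal sketch (not a definitive recipe): asking the model behind ChatGPT
# to identify a film from a plot summary written in emoji.
# Assumes the openai Python client (v0.x-era API) and a valid key; both illustrative.
import openai

openai.api_key = "sk-..."  # hypothetical placeholder key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "user", "content": "Which film is this? 🦈 🏖️ 🚤 🩸"}
    ],
)
print(response.choices[0].message.content)  # e.g. "Jaws" (output not guaranteed)
```

Nothing in the request hints at film trivia; that such prompts often work at all is what the article means by an “emergent” ability.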
These models stand to transform humans’ relationship with computers, knowledge and even with themselves. Proponents of AI argue for its potential to solve big problems by developing new drugs, designing new materials to help fight climate change, or untangling the complexities of fusion power. To others, the fact that AIs’ capabilities are already outrunning their creators’ understanding risks bringing to life the science-fiction disaster scenario of the machine that outsmarts its inventor, often with fatal consequences.
This bubbling mixture of excitement and fear makes it hard to weigh the opportunities and risks. But lessons can be learned from other industries, and from past technological shifts. So what has changed to make AI so much more capable? How scared should you be? And what should governments do?
In a special Science section, we explore the workings of LLMs and their future direction. The first wave of modern AI systems, which emerged a decade ago, relied on carefully labelled training data. Once exposed to a sufficient number of labelled examples, they could learn to do things like recognise images or transcribe speech. Today’s systems do not require pre-labelling, and as a result can be trained using much larger data sets taken from online sources. LLMs can, in effect, be trained on the entire internet—which explains their capabilities, good and bad.
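The shift described here, from hand-labelled examples to raw text that labels itself, can be sketched in a few lines. Below is a toy illustration of the self-supervised “next-token” objective on which LLMs are trained; the tiny stand-in model, the vocabulary size and the random tokens are assumptions for illustration, not how any production system is built.

```python
# Toy sketch of the self-supervised objective behind LLMs: raw text provides
# its own labels, so no hand-labelling is needed. The model, sizes and data
# are illustrative stand-ins; real systems use transformers at vastly larger scale.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, dim = 1000, 64                       # assumed toy sizes
model = nn.Sequential(                           # stand-in for a transformer
    nn.Embedding(vocab_size, dim),
    nn.Linear(dim, vocab_size),
)

tokens = torch.randint(0, vocab_size, (8, 32))   # a batch of token ids ("raw text")
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # shift by one: next token = label

logits = model(inputs)                           # (batch, seq, vocab) predictions
loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
loss.backward()                                  # one learning step from unlabelled text
```

The point of the sketch is the shifted targets: the “labels” are simply the next tokens of the text itself, which is why whole swathes of the internet can serve as training data.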
Those capabilities became apparent to a wider public when ChatGPT was released in November. A million people had used it within a week; 100m within two months. It was soon being used to generate school essays and wedding speeches. ChatGPT’s popularity, and Microsoft’s move to incorporate it into Bing, its search engine, prompted rival firms to release chatbots too.
Some of these produced strange results. Bing Chat suggested to a journalist that he should leave his wife. ChatGPT has been accused of defamation by a law professor. LLMs produce answers that have the patina of truth, but often contain factual errors or outright fabrications. Even so, Microsoft, Google and other tech firms have begun to incorporate LLMs into their products, to help users create documents and perform other tasks.
The recent acceleration in both the power and visibility of AI systems, and growing awareness of their abilities and defects, have raised fears that the technology is now advancing so quickly that it cannot be safely controlled. Hence the call for a pause, and growing concern that AI could threaten not just jobs, factual accuracy and reputations, but the existence of humanity itself.
Extinction? Rebellion?
The fear that machines will steal jobs is centuries old. But so far new technology has created new jobs to replace the ones it has destroyed. Machines tend to be able to perform some tasks, not others, increasing demand for people who can do the jobs machines cannot. Could this time be different? A sudden dislocation in job markets cannot be ruled out, even if so far there is no sign of one. Previous technology has tended to replace unskilled tasks, but LLMs can perform some white-collar tasks, such as summarising documents and writing code.
The degree of existential risk posed by AI has been hotly debated. Experts are divided. In a survey of AI researchers carried out in 2022, 48% thought there was at least a 10% chance that AI’s impact would be “extremely bad (eg, human extinction)”. But 25% said the risk was 0%; the median researcher put the risk at 5%. The nightmare is that an advanced AI causes harm on a massive scale, by making poisons or viruses, or persuading humans to commit terrorist acts. It need not have evil intent: researchers worry that future AIs may have goals that do not align with those of their human creators.
Such scenarios should not be dismissed. But all involve a huge amount of guesswork, and a leap from today’s technology. And many imagine that future AIs will have unfettered access to energy, money and computing power, which are real constraints today, and could be denied to a rogue AI in future. Moreover, experts tend to overstate the risks in their area, compared with other forecasters. (And Mr Musk, who is launching his own AI startup, has an interest in his rivals downing tools.) Imposing heavy regulation, or indeed a pause, today seems an over-reaction. A pause would also be unenforceable.
Regulation is needed, but for more mundane reasons than saving humanity. Existing AI systems raise real concerns about bias, privacy and intellectual-property rights. As the technology advances, other problems could become apparent. The key is to balance the promise of AI with an assessment of the risks, and to be ready to adapt.
So far governments are taking three different approaches. At one end of the spectrum is Britain, which has proposed a “light-touch” approach with no new rules or regulatory bodies, but applies existing regulations to AI systems. The aim is to boost investment and turn Britain into an “AI superpower”. America has taken a similar approach, though the Biden administration is now seeking public views on what a rulebook might look like.
The EU is taking a tougher line. Its proposed law categorises different uses of AI by the degree of risk, and requires increasingly stringent monitoring and disclosure as the degree of risk rises from, say, music-recommendation to self-driving cars. Some uses of AI are banned altogether, such as subliminal advertising and remote biometrics. Firms that break the rules will be fined. For some critics, these regulations are too stifling.
But others say an even sterner approach is needed. Governments should treat AI like medicines, with a dedicated regulator, strict testing and pre-approval before public release. China is doing some of this, requiring firms to register AI products and undergo a security review before release. But safety may be less of a motive than politics: a key requirement is that AIs’ output reflects the “core value of socialism”.
What to do? The light-touch approach is unlikely to be enough. If AI is as important a technology as cars, planes and medicines—and there is good reason to believe that it is—then, like them, it will need new rules. Accordingly, the EU’s model is closest to the mark, though its classification system is overwrought and a principles-based approach would be more flexible. Compelling disclosure about how systems are trained, how they operate and how they are monitored, and requiring inspections, would be comparable to similar rules in other industries.
This could allow for tighter regulation over time, if needed. A dedicated regulator may then seem appropriate; so too may intergovernmental treaties, similar to those that govern nuclear weapons, should plausible evidence emerge of existential risk. To monitor that risk, governments could form a body modelled on CERN, a particle-physics laboratory, that could also study AI safety and ethics—areas where companies lack incentives to invest as much as society might wish.
This powerful technology poses new risks, but also offers extraordinary opportunities. Balancing the two means treading carefully. A measured approach today can provide the foundations on which further rules can be added in future. But the time to start building those foundations is now.