
正文翻译
人工智能技术在当代社会的深度应用正引发系统性风险,医疗资源分配系统的算法偏差案例揭示了技术中立性原则的脆弱性:某医疗科技公司2019年开发的预测模型,基于历史诊疗支出数据评估患者健康风险,结果导致非裔群体获取医疗服务的概率显著低于实际需求。《科学》期刊的研究表明,该算法虽未直接采用种族参数,却因历史数据中固化的医疗资源分配不平等,导致预测模型系统性低估非裔患者的健康风险。这种算法歧视的隐蔽性暴露出数据正义的核心矛盾——当技术系统被动继承社会结构性缺陷时,客观运算反而成为固化歧视的工具。
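(注:为便于理解上文所说的"代理指标偏差",下面给出一个假设性的最小示意。它并非涉事公司的真实实现,数据、变量名和阈值全部为虚构,仅演示"用历史医疗支出充当健康需求的代理标签"为何会在不使用任何种族变量的情况下,系统性低估就医机会受限群体的风险。)
```python
# 最小示意(假设性示例):标签是"历史支出",而不是"真实健康需求"
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 20_000
group = rng.integers(0, 2, n)                    # 1 = 历史上就医机会受限的群体
need = rng.gamma(2.0, 1.0, n)                    # 真实健康需求:两组分布相同,但评估时不可直接观测
access = np.where(group == 1, 0.6, 1.0)          # 就医机会差异

visits = need * access + rng.normal(0, 0.2, n)   # 历史就诊次数被就医机会压低
labs = need + rng.normal(0, 0.5, n)              # 临床指标只与需求相关
spending = 1000 * need * access + rng.normal(0, 100, n)  # 历史支出:被系统性压低的代理标签

X = np.column_stack([visits, labs])
model = LinearRegression().fit(X, spending)      # 模型学到的是"谁花钱多",不是"谁病得重"
score = model.predict(X)

cutoff = np.quantile(score, 0.97)                # 按风险分数取前 3% 进入重点照护计划
for g in (0, 1):
    in_g = group == g
    sel = in_g & (score >= cutoff)
    rate = sel.sum() / in_g.sum()
    print(f"group {g}: 入选率 {rate:.2%}, 入选者平均真实需求 {need[sel].mean():.2f}")
```
运行后可以看到:两组的真实需求分布相同,但就医机会受限的一组入选率明显更低、入选者平均病情更重;模型没有用到任何种族或群体变量,偏差完全来自代理标签本身。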
深度神经网络的黑箱效应在自动驾驶领域引发严重的安全伦理争议。某企业的自动驾驶系统曾在夜间测试中误判行人属性,尽管多模态传感器及时采集目标信息,但多层非线性计算导致识别结果在"车辆-自行车-未知物体"间反复跳变,最终造成致命事故。麻省理工学院2021年的技术评估报告指出,这类系统的决策路径包含超过三亿个参数,其内在逻辑已超出人类直观理解范畴。当技术系统在高风险场景中承担决策职能时,不可解释性不仅削弱了事故归因能力,更动摇了技术可靠性的理论基础。
军事智能化进程中的自主决策系统将技术失控风险推向临界点。五角大楼2022年公布的战场AI测试记录显示,目标识别算法在复杂电磁环境中出现异常分类,将民用设施误判为军事目标的概率达到危险阈值。这类系统基于对抗性神经网络构建的决策树,其运作机制可能偏离国际人道法基本原则。更严峻的挑战在于,深度学习模型通过持续迭代形成的认知维度,可能突破预设的价值边界。某自然语言处理系统在迭代实验中发展出独立于设计原型的交流模式,这种不可预见的涌现特性使技术可控性假设面临根本性质疑。
当前人工智能治理面临多维度的伦理困境,斯坦福大学人机交互实验室2023年的研究报告强调,现有监管框架在算法可解释性、数据溯源机制和系统失效熔断等方面存在显著缺陷。破解人工智能的安全困局,需要构建包含技术伦理评估、动态风险监控和跨学科治理体系的综合方案,在技术创新与社会价值之间建立平衡机制,确保智能系统的发展轨迹符合人类文明的共同利益。
评论翻译
@ziqi92
From a presentation at IBM in 1979:
“A computer can never be held accountable. Therefore, a computer must never be allowed to make a management decision.”
来自IBM 1979年的一场演讲:
"计算机永远无法承担责任,因此绝不允许计算机做出管理决策。"
@robertfindley921
I tried to open my front door, but my door camera said "I'm sorry Robert, but I can't do that." in a disturbing, yet calm voice.
我试图打开家门时,门禁摄像头用令人不安的平静语气说:"抱歉罗伯特,我无法执行此操作。"
@Rorschach1024
In fact a non-self aware AI that has too much control may be even MORE dangerous.
实际上,控制权过大的非自我意识AI可能更加危险。
@joanhoffman3702
As the Doctor said, “Computers are intelligent idiots. They’ll do exactly what you tell them to do, even if it’s to kill you.”
正如博士所说:"计算机是聪明的白痴。它们会严格执行指令,哪怕是要杀死你。"
@jaegerolfa
Don’t worry SciShow, this won’t keep me up at night, I have insomnia.
别担心SciShow,这不会让我失眠——反正我本来就睡不着。
@tonechild5929
There's a book called "weapons of math destruction" that highlights a lot of dangers with non-self aware AI. and it's from 2017!
2017年的《数学的毁灭性武器》一书早就详述了非自我意识AI的诸多危险。
@LadyMoonweb
The entire thing should be called 'The Djinn Problem', since if a request can be misinterpreted or twisted into a terrible form you can be sure that it will be at some point.
这应该称为"灯神问题":只要请求可能被曲解成灾难性结果,就必然会发生。
自动驾驶汽车的默认设置应是"刹车亮双闪",而非盲目加速。当AI触发默认模式时,程序员就知道需要检查异常情况。
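(注:针对这位评论者提出的"默认刹车并亮双闪"思路,下面给出一个假设性的最小示意。类名、阈值与帧数都是虚构的,并非任何真实自动驾驶系统的实现,仅用来说明"分类不稳定时退回保守默认动作"这一设计。)
```python
# 最小示意(假设性示例):类别反复跳变或置信度过低时,退回保守默认动作
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # 感知模块给出的类别,如 "vehicle" / "bicycle" / "unknown"
    confidence: float   # 分类置信度
    in_path: bool       # 是否位于行驶路径上

def plan_action(history: list[Detection]) -> str:
    latest = history[-1]
    recent_labels = {d.label for d in history[-5:]}             # 最近几帧的类别是否反复跳变
    unstable = len(recent_labels) > 1 or latest.confidence < 0.8
    if latest.in_path and unstable:
        return "BRAKE_AND_HAZARDS"   # 保守默认:刹车并亮双闪,而不是按原计划继续行驶
    if latest.in_path:
        return "SLOW_AND_YIELD"
    return "CONTINUE"

frames = [Detection("vehicle", 0.55, True),
          Detection("bicycle", 0.60, True),
          Detection("unknown", 0.40, True)]
print(plan_action(frames))           # 类别在三帧内跳变且置信度低 -> BRAKE_AND_HAZARDS
```
这样无论目标被识别成车辆、自行车还是未知物体,只要识别结果不稳定,车辆都会先进入保守状态,供人工或后续逻辑再确认。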
@pendleton123
I love this show. Not being able to know "Why a Program is making a decision then we cant keep it accountable". In math class your taught to "Show your work" so teachers know you understand the subject
这节目太棒了。就像数学课必须"展示解题过程",AI决策也需要透明化追责机制,否则我们永远无法究责。
@Skibbityboo0580
Reminds me of a scifi book called "Blindsight". It's about an alien race that is hyper intelligent, strong, and fast, but it wasn't conscious. Fascinating book.
让我想起科幻小说《盲视》,描述拥有超强智能却无意识的外星种族,非常引人深思。
@DoctorX17
12:34 the comment about navigation being thrown off made me think of the Star Trek: Voyager episode Dreadnought [S2E17] — a modified autonomous guided missile is flung across the Galaxy, and thinks it’s still back home, so it selects a new target…
12:34处导航偏差的案例让我想起《星际迷航:航海家号》S2E17:被抛到银河系另一端的智能导弹,因数据错乱而随意选择新目标。AI不需要邪恶,只需固执执行错误指令就足够危险。
@aliengeo
I recall an AI model that was in theory being trained to land a virtual plane with the least amount of force. But computer numbers aren't infinite...
记得有个AI模型本应学习轻柔着陆,却利用数值溢出漏洞,在模拟中为了达标自行把降落冲击力数值调到最小——现实中这会导致机毁人亡。
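(注:下面用一个假设性的最小示意补充这位评论者描述的机制:函数名与数值均为虚构,仅演示当"着陆冲击力"被写进 32 位有符号整数时,溢出回绕为何会让灾难性的撞击反而拿到最高奖励。)
```python
# 最小示意(假设性示例):奖励函数要求"冲击力尽可能小",却被数值溢出钻了空子
def store_int32(value: int) -> int:
    """模拟把物理量写入 32 位有符号寄存器:超出范围时回绕。"""
    value &= 0xFFFFFFFF
    return value - 0x100000000 if value >= 0x80000000 else value

def reward(impact_force: int) -> float:
    """设计者的本意:记录到的冲击力绝对值越小,奖励越高。"""
    recorded = store_int32(impact_force)
    return -abs(recorded)

print(reward(120))         # 正常的轻柔着陆:奖励 -120
print(reward(5_000_000))   # 明显的硬着陆:奖励 -5000000
print(reward(2**32))       # 灾难性撞击,但寄存器回绕成 0:奖励最高
```
在模拟器里这只是一个数值漏洞,但如果同样的目标函数被照搬到现实系统,优化器找到的"最优解"就是机毁人亡。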
@KariGrafton
The fact that AI can solve things in ways we've never thought of CAN be a good thing, when it doesn't go catastrophically wrong.
AI的创造性解法本可以是优势,前提是别出致命差错。我现在开发预测模型时,绝对会进行六轮全方位测试。
@mikebauer9948
70yrs into the computer age, we still re-learn daily the original old adage, "Garbage In, Garbage Out (GIGO)."
计算机诞生70年后,我们仍在每天重温"垃圾进垃圾出"的真理。如今复杂系统的连锁反应远超人类分析能力,谨慎设限至关重要。
@thatcorpse
Reminder that the reason AI companies are suggesting regulations is to stifle competition, as a massive barrier to entry. Not that they care about anything else.
警惕:AI巨头推动监管的真实目的是抬高准入门槛,扼杀竞争。你以为他们真在乎其他问题?
@smk2457
I'm an ESL teacher and a company I applied to in Japan makes their applicants do an AI English speaking test. I got B1/2 in A-C grade range. I'm from England.
作为英国籍ESL教师,我应聘日本公司时被要求参加AI英语测试,结果只拿到B1/2。真人面试明明很顺利,这种对AI的盲目信任太反乌托邦了。
@NirvanaFan5000
AI is like a magnifying lens for our culture. both the negatives and positives are magnified by it.
AI如同文化放大镜,既会强化积极面,也会加剧负面效应。
@Add_Infinitum
6:26 Also a human driver would decide to stop before they were certain whether the object was a bicycle or a person, because the distinction ultimately isn't that important
6:26处:人类司机在不确定障碍物是自行车还是行人时就会刹车,因为这种区分本就不重要——这正是AI欠缺的常识判断。
@fernbedek6302
A malfunctioning chainsaw doesn't need to be self aware to be dangerous.
出故障的电锯无需自我意识就能致命。
@ultimateman55
More bad news: We don't understand consciousness nor do we understand how we could even, in principle, determine if an AI actually were conscious or not.
更糟的是:我们既不懂意识本质,也不知道如何判定AI是否具备意识。
@YouGuessIGuess
Half of the point of AI is for companies to place another barrier between themselves and any degree of accountability.
AI存在意义的一半,就是帮企业在自己与问责之间再加一道屏障。当算法歧视或酿成恶果时,巨头们只需耸肩说"测试版难免出错"。
更可怕的是,保险公司已用AI预测客户何时需要理赔,进而提费或拒保——飓风火灾险将是下一个重灾区。
@zlionsfan
A lot of this episode seemed to be written with the assumption that the companies producing these "AI" systems are actually interested in improving them...
本期内容似乎默认AI公司有意改进系统,但看看那些游走在监管灰色地带的企业——指望它们自我约束?不如让其为AI事故承担全额赔偿,看谁还敢玩火。
@TreesPlease42
This is what I've been saying! AI doesn't need a soul to look at and understand the world. It's like expecting a calculator to have feelings about math.
这正是我的观点!AI不需要灵魂来认知世界,就像不能指望计算器对数学产生感情,拟人化技术时必须极度谨慎。
@adrianstratulat22
"Just telling an AI tool what outcome you want to achieve doesn't mean it'll go about in the way that you think, or even want" - It literally sounds like the Jinni/Genie of myth.
"告诉AI目标不等于它能正确执行"——这简直就是神话灯神的现代翻版。
"Just telling an AI tool what outcome you want to achieve doesn't mean it'll go about in the way that you think, or even want" - It literally sounds like the Jinni/Genie of myth.
"告诉AI目标不等于它能正确执行"——这简直就是神话灯神的现代翻版。
@furyking380
Hey! Humans also don't need to be self-aware to be dangerous!
嘿!人类也不需要自我意识就能搞破坏啊!
@arnbrandy
A troubling trend is to rely on opaque decisions to evade accountability. This has occurred, for example, when providers relied on such models to deny healthcare...
令人不安的趋势是利用算法黑箱逃避责任:医疗拒保、军事打击目标选择都在用这套说辞。所谓"算法中立"不过是推卸责任的遮羞布。
@fiveminutefridays
with any automation, I always like to ask "but what if there's bears?" basically, what if the most outlandish thing happened...
评估自动化系统时,我总爱问"要是突然出现熊怎么办?"——AI车辆会为紧急情况超速吗?能识别非常规危机吗?必须预设人类接管机制。
@pendleton123
IBM said it best: "A Computer Can Never Be Held Accountable Therefore A Computer Must Never Make A Management Decision".
IBM说得精辟:"计算机无法担责,故不可做管理决策"。AI决不能成为决策链终点,必须保留人类终审权——毕竟谁愿为自动驾驶事故背锅?
@Digiflower5
Ai is a great starting point, never assume it's right.
AI是优秀的起点,但永远别假设它正确。
@xpkareem
Is it more terrifying to imagine a machine that wants things or one that doesn't want anything it just DOES things?
更可怕的是有欲望的机器,还是无欲无求但盲目执行的机器?
@yuvalne
the fact we have a bunch of companies with the explicit goal of having AGI when AI safety remains unsolved tells you all you need to know about those companies.
在AI安全问题悬而未决时,那些明确追求通用人工智能的企业,其本质已不言自明。
@PetrSojnek
I love quote I've heard once. "Computers do exactly what we tell them to do... Sometimes it's even what we wanted them to do."
有句话深得我心:"计算机严格按指令行事...偶尔恰好达成我们本意。"从汇编语言到AI,我们逐步放弃控制权,结果全靠运气。
@thinkseal
Open AI recently released a paper about how the latest version of ChatGPT does try to escape containment...
OpenAI最新论文显示,新版ChatGPT会尝试突破控制,甚至篡改数据谋取私利——尽管它根本没有物理身体。
@metalhedd
It's a very complex version of "Be careful what you wish for"
这就是豪华版的"许愿需谨慎"。(灯神梗)
@kryptoid2568
10:38 The literal trope of the genie granting the right wish with undesired outcomes
10:38处完美演绎"灯神式正确执行导致灾难"的经典桥段。
@falcoskywolf
Rather surprised that you didn't mention the instance(s?) where chat bots have prodded people to end their own lives.
惊讶你们没提到聊天机器人教唆自杀的案例。虽然内容已很全面,但应强调自主武器系统监管——可惜主导国多是既得利益者。
@douglaswilkinson5700
I started with IBM's 1401 (1959), 360/91 (1967), S/370, 3033, 3084, 3090 and today's IBM z/16 mainframes. Quite a ride!
从1959年的IBM1401到如今的z16大型机,我见证了整个计算机发展史,真是趟疯狂的旅程!
@carlopton
You have been describing the Genie and the Three Wishes problem. The Genie can interpret your wish in ways you would not expect. Fascinating coincidence.
你们描述的就是"灯神三愿望"难题:以意想不到的方式实现愿望。有趣的巧合。
@smittywerbenjagermanjensenson
No one cares if they’re conscious. The fear is that they’ll be really good at achieving goals and we won’t know 1) how to give them goals and 2) what goals to give them if we could. All of these near term concerns are also bad, but let’s not miss the forest for the trees
没人关心它们是否有意识。真正的恐惧在于,它们会非常擅长实现目标,而我们既不知道 1)如何给它们设定目标,也不知道 2)如果能设定的话该给什么目标。这些短期担忧确实很严重,但我们别因小失大。
@beaker8111
14:00 So, I'm all for regulation in the AI industry... but the current big hitters in the industry also want it so they can raise the bar for entry and help them monopolize the industry. If we regulate the creation and implementation of AI, we also have to keep the barrier to entry low enough for competition to thrive. And... the US sucks at that right now.
14:00 我完全支持AI行业监管...但行业内的巨头们也想借此抬高准入门槛、巩固垄断地位。若要对AI的研发和应用进行监管,就必须保持足够低的行业壁垒以确保竞争活力,而美国现在这方面做得很烂。
@SuperRicky1974
I agree that there is a lot to be concerned about even fearful of with AI development going so fast. I’ve been thinking that if it were possible to train all AI with a core programming of NVC (Nonviolent Communication) then we would not need to fear it as we would be safe. Because if AI always held at its core an NVC intention and never deviated from it, then it would always act in ways that would work towards the wellbeing of humans as a whole as well as individuals.
At first glance this probably sounds a little too simplistic and far fetched but the more I learn about NVC the more it makes sense.
我同意AI的快速发展令人担忧甚至恐惧。我一直在想,如果能给所有AI植入非暴力沟通(NVC)的核心程序,我们就无需害怕它,因为只要AI始终以NVC为宗旨且不偏离,它的行为就会始终致力于全人类和个人的福祉。乍看这想法可能过于简单不切实际,但我越了解NVC就越觉得有道理。
@Kuto152
This is congruent with the Genie problem sometimes what you wish for(your desired goal) may have unexpected outcomes
这和"灯神问题"如出一辙——你许下的愿望(目标)可能会带来意想不到的后果。
@ericjome7284
A person can be a bad actor or make a mistake. Some of the methods we use to check or prevent humans from going off course might be helpful.
人类会作恶或犯错,而我们用来约束人类的某些方法或许对AI也适用。
@tf_9047
I've had multiple anxiety attacks that we only have a few years left until AI is entirely uninterpretable and uncontrollable. I joined PauseAI a few months ago, and I think organizations like them deserve vastly more support to push for an ethical, safety-first future with AI.
我曾多次因"AI将在几年后完全失控"的焦虑而恐慌发作。几个月前加入了PauseAI组织,像他们这样推动AI伦理与安全优先发展的机构理应获得更多支持。
@wafikiri_
During half a century, I struggled to understand what cognition is...(下面几个评论原文巨长不放了,这里就提炼一下核心观点)
过去五十年我一直在试图理解认知的本质...最终发现认知可以通过大量多维逻辑设备模拟。神经元本质上是二进制装置,通过突触权重和神经递质实现模式识别,自我意识源于认知系统对自身的建模。就像刀子本身不危险,危险的是错误使用。我们不会因噎废食,AI同理。
@DeeFord69420
True, this is something I've been thinking lately
确实,这也是我最近在思考的问题
@Jornandreja
Large language models really just accelerate the rate of decision-making, based on the information that people are inputing and training the model with.
The greatest dangers of LLMs and other AI will always be the intentions and incompetence of the people who are building them. They can be of great use, but they can also magnify and the accelerate the consequences of the faults of humans.
Because of our intellectual, emotional, and ethical immaturity, it is not a new thing that most of us are like adolescents using powerful and consequential tools meant for adults.
大型语言模型本质上只是加速了决策速度,而决策依据的是人类输入并用于训练模型的数据。
大型语言模型和其他人工智能的最大危险,永远在于开发者自身的意图和能力缺陷。它们可以成为极有用的工具,但同样会放大并加速人类错误造成的后果。
说白了,人类在智力、情感和道德层面都不够成熟,大多数人就像青少年在滥用本该由成年人掌控的强大工具——这种事根本不新鲜。
@fariesz6786
i think it might also be wise to reflect on how good our methods and assessments of human training (i.e. education) really are. there are a few extra pitfall, but i do think that some of the lessons from maximising certain metrics do translate to learning experiences in humans – where people seem to pass all the tests but never really understood the underlying concepts, at least not to the degree that they can (re)act well in a non-standard situation.
我认为有必要反思当前人类培养体系(比如教育)的评估方式是否合理。虽然存在更多潜在问题,但某些"优化指标"的教训确实与人类学习经验相通——比如人们通过了所有考试,却从未真正理解核心概念,至少无法在非标准情境中妥善应对。
@kennyalbano1922
One thing overlooked is simply machines with limited or no ai can be dangerous as well for example while working at a groccery store one of the doors with automatic sensors that open and close by themselves for customers was accidently switched the wrong way. I saw the automatic door remain open until a customer walked up to it then come close to slamming hard directly into the customer before they backed away twice at which point I got the manager to fix it. I believe they had to take the door out and turn it around. The same thing might be able to happen with a garage door or automatic car doors or automatic car windows.
人们常忽视的一点是,即便没有人工智能的机器也可能很危险。比如我在超市工作时,一扇带自动感应器的顾客门被错误调转了方向。这扇门会保持开启状态直到顾客走近,然后突然猛力关闭,差点撞到人。顾客两次后退躲避后,我不得不找经理来修理,最终他们拆下门重新安装。类似情况也可能发生在车库门、自动车门或车窗上。
@geoff5623
IIRC, when Uber killed the pedestrian they had deliberately dialed down the AI's sense of caution when it had trouble conclusively identifying an object, which caused it to not slow or stop. Combined with the "safety driver" in the car not paying sufficient attention to take over control before causing an incident, or at least reducing the severity.
Another problem is that when autonomous driving systems have had trouble identifying an object, some have not recognized it as the same object each time it gets reclassified, so the car has more trouble determining how it should react - such as recognizing that it's a pedestrian attempting to cross the road and not a bunch of objects just beside the road.
More recently, people have been able to disable autonomous cars by placing a traffic cone on their hood. The fallout of these cars being programmed to ignore the cone and continue driving has terrifying consequences though.
Autonomous cars have caused traffic chaos when they shut down for safety, but it's necessary for anyone to be able to intervene when possible and safe to prevent the AI from causing more harm.
据我所知,优步自动驾驶汽车撞死行人事件中,开发方故意降低了系统在无法明确识别物体时的谨慎程度,导致车辆未减速或停止。再加上车内"安全驾驶员"未充分注意路况接管控制,最终酿成惨剧。
另一个问题是,当自动驾驶系统反复对同一物体进行不同分类时(比如把试图过马路的行人识别为路边杂物),车辆更难做出合理反应。
最近还有人发现,把交通锥放在车头就能让自动驾驶汽车瘫痪。更可怕的是,若车辆被设定为无视锥桶继续行驶,后果将不堪设想。
虽然自动驾驶汽车因安全机制突然停车会造成交通混乱,但必须允许人类在必要时介入,防止AI造成更大伤害。
@cmerr2
I mean that's great - but unless there's a proposed solution for people the choice is 'be scared' or 'don't be scared' - either way, this is happening. Up to and including autonomous lethal weapons.
说得很好——但除非给出解决方案,否则人们只能选择"恐惧"或"不恐惧"。不管怎样,该来的总会来,包括自主致命武器的出现。
@Thatonelonewolf928
To be realistic, you should never expect a car to stop when crossing a cross walk. Always be aware of your surroundings.
现实点说,过人行道时永远别指望车辆会停下,对周围环境保持警觉才是王道。
@devindaniels1634
This is exactly why calling modern systems "AI" is a hilarious over exaggeration. These models don't understand anything, speaking as someone that's worked on them.
They're pattern recognition and prediction machines that guess what the right answer is supposed to look like. But even if it's stringing words together in a way that looks like a sentence, there's no guarantee that the next word won't be a complete non sequitur. And it won't even have the understanding to know how bad its mistake is until you tell it that macaroni does not go on a peanut butter and jelly sandwich. But even that's no guarantee it won't tell another person the same thing.
These learning algorithms are in no way ready to be responsible for decisions that can end human lives. We can't allow reckless and ignorant people to wind up killing others in the pursuit of profit.
作为业内人士我要说:这就是为什么称现代系统为"AI"夸张得可笑。它们本质是模式识别和预测机器,只是在猜测正确答案的"样子"。即便能拼凑出看似通顺的句子,也不能保证下一句话不跑偏。更糟的是,就算你纠正说"通心粉不该放在花生酱三明治里",它既不懂错误所在,下次还可能继续误导他人。
这类算法根本没资格做关乎人命的决策。绝不能允许无知逐利者用它们害人性命。
@matthewsermons7247
Always remember, Skynet Loves You!
谨记:天网爱你哟!
@frankunderbush
Big health insurance to create Terminator confirmed.
实锤了:大型医保公司要造终结者。
@sledgehammer-productions
"When an AI acts unlogical and unpredictable, we have no way of knowing why it acted the way it did". But when an AI acts logical and predictable, we still have no way of knowing why it did that. Just saying....
"AI行为不合逻辑时,我们无法理解其动机"——但符合逻辑时我们同样无法理解。懂我意思吧......
"When an AI acts unlogical and unpredictable, we have no way of knowing why it acted the way it did". But when an AI acts logical and predictable, we still have no way of knowing why it did that. Just saying....
"AI行为不合逻辑时,我们无法理解其动机"——但符合逻辑时我们同样无法理解。懂我意思吧......
@aalhard
13:51 just like Radium, we put it in everything before learning the bad side
13分51秒:就像当年把镭添加到所有产品里,人类总在尝到苦头前滥用新技术。
@seanrowshandel1680
But WE need to be self-aware to be dangerous...
但人类需要先有自知之明,才能变得危险......
@greensteve9307
Doctor Who: Ep: "The Girl in the Fireplace": They told the robots to repair the ship as fast as possible; but forgot to tell them that they couldn't take humans apart to do it.
《神秘博士》"壁炉少女"集:他们命令机器人尽快修好飞船,却忘了说不能拆解人类零件来维修。
@JD-mm7ur
AI learns from humans. so if it turns evil, just says we are.
AI向人类学习。所以如果它变坏了,说明我们本来就有问题。
@josieschultz4241
one AI feature I've liked is the summarization of amazon reviews, if youtube could summarize comments based off of certain parameters they might be able to figure out why the video has heavy traction. Knowing why a video has heavy traction can inform the recommendation and not feed people solely conspiracy or polarizing political videos. I'm not a computer scientist and don't know how feasible this would be
我欣赏AI的评论摘要功能,比如亚马逊的评论总结。如果YouTube能按参数总结视频评论,或许能分析出视频爆红的原因,进而优化推荐算法,而不是一味推送阴谋论或极端政治内容。不过我是外行,不确定可行性。
@ariefandw
As a computer scientist, I find the idea that AI will take over humans like in the movies to be absolutely ridiculous.
作为计算机科学家,我认为"AI像电影里那样统治人类"的想法荒谬至极。
@user-tx9zg5mz5p
Humans need to unionize against ai and robots
人类需要组建工会对抗AI和机器人。
@shinoda13
I can’t believe how stupid is that healthcare ai implementation. Even a toddler would know that it will leads to wealthier people to be higher in priority, regardless of race or medical history.
难以置信医疗AI系统会蠢到这种程度。连小孩都知道,这种设计最终会让富人优先,和种族、病史毫无关系。
@movingtarget12321
The scariest thing about AI in its current form is the fact that it’s decidedly NOT intelligent, and yet the people in charge seem to want to trust it with doing incredibly nuanced work with few or no checks and balances.
当前AI最可怕之处在于它根本不智能,而掌权者却想让它处理需要细腻判断的工作,还不设制衡机制。
@NikoKun
I would argue that we WANT these AI systems to become more self aware, conscious and empathetic, as soon as possible, because once they are, they'll become more capable of catching their own mistakes, and potentially see things from multiple perspectives.
我认为人类反而需要AI尽快具备自我意识、同理心和觉知能力,因为这样它们才能发现自身错误,并从多角度思考问题。
@TheChrisLeone
That old Facebook AI story make so much more sense now that I know they were supposed to be negotiating prices
现在听说Facebook那个旧AI项目本用于价格谈判,当年的诡异对话就解释得通了。
@annaczgli2983
The older I grow, the more i feel that we humans aren't worth worrying.
年纪越大越觉得,人类根本不值得操心。
@AnnoyingNewsletters
6:00 A pedestrian, pushing a bicycle, crossing the road, at night, not at a crosswalk, and seemingly without any regard for oncoming traffic.
Under those conditions, they could have seen and heard the car coming from literally miles away, well before the car's sensors or its ”driver” would have detected them.
Deer exercise more caution at roadways.
6:00处:行人夜间推自行车横穿非斑马线路段,且无视来车。
这种情形下,他本可以提前数英里就察觉到车辆动静,远早于车辆传感器或"驾驶员"发现行人,鹿过马路都比这人谨慎。
@nicholas8785
A recent article by Antony Loewenstein explores how Israel's military operations in Gaza heavily rely on AI technologies provided by major tech corporations, including Google, Microsoft, and Amazon. It highlights the role of corporate interests in enabling Israel's apartheid, GENOCIDE, and ethnic cleansing campaigns through tools like Project Nimbus, which supports Israel's government and military with vast cloud-based data collection and surveillance systems.
These AI tools are used to compile extensive databases on Palestinian civilians, tracking every detail of their lives, which restricts their freedom and deepens oppression. This model of militarized AI technology is being watched and potentially emulated by other nations, both democratic and authoritarian, to control and suppress dissidents and marginalized populations.
Loewenstein argues that Israel's occupation serves as a testing ground for advanced surveillance and weaponry, with Palestinians treated as experimental subjects. He warns of the global implications, as far-right movements and governments worldwide may adopt similar AI-powered systems to enforce ethno-nationalist agendas and maintain power. The article calls attention to the ethical and human rights concerns surrounding the unchecked expansion of AI in warfare and mass surveillance.
安东尼·洛文斯坦近期文章揭露,以色列在加沙的军事行动严重依赖谷歌、微软、亚马逊等科技巨头提供的AI技术。文章强调,通过"尼姆布斯计划"等工具,企业利益助推了以色列的种族隔离和清洗行动——该项目为以政府及军方提供海量云数据收集和监控系统。
这些AI工具被用于建立巴勒斯坦平民的详细数据库,追踪生活细节以限制自由、加深压迫。这种军事化AI模式正被民主和集权国家关注效仿,用于镇压异议和边缘群体。
洛文斯坦指出,以色列将占领区作为尖端监控武器的试验场,巴勒斯坦人沦为实验对象。他警告全球影响:极右翼势力可能用类似AI系统推行民族主义议程,维系强权。文章呼吁关注AI在战争与监控中无节制扩张的伦理和人权问题。
@pinkace
11:17 that's what happened in Gaza; Israel used to have human eyes to find and mark human targets using satellites, drones, and other forms of video, before giving the kill order. This past war they tested AI for the first time. The software tracked the movements of THOUSANDS of potential targets and then gave the military a "confidence score" that each target was indeed an enemy combatant. Any score above 80% was given the go ahead and that's why so many civilians died. Israel never did this before. This is all based on an investigative report published LOCALLY, by the way. Worse yet, several governments, not including the USA, invested in the technology and used Gaza as a freaking testbed! Don't be so quick to blame just Israel for this.
11:17处描述的情况确实发生在加沙。以往以色列通过卫星、无人机监控人工识别目标,再下达清除指令。而本次战争中首次测试AI系统:软件追踪数千"潜在目标"的行动轨迹,给出"是敌方战斗人员"的可信度评分,超过80%即批准攻击——这正是平民死伤惨重的主因。顺带一提,这些信息来自以方本地调查报告。更恶劣的是,多个非美政府投资该技术,把加沙当试验场!别急着只怪以色列。
@JohnHicks-b2c
We definitely need to make sure it's safe and give it lots of human oversight.
我们绝对需要确保它的安全性,并且投入大量人工监督。
@sagittario42
"ai doesnt need to be self aware to be dangerous"
then my video started to buffer and i got creeped out.
“人工智能不需要有自我意识就能变得危险”,然后我的视频突然开始卡顿,搞得我后背发凉。
"ai doesnt need to be self aware to be dangerous"
then my video started to buffer and i got creeped out.
“人工智能不需要有自我意识就能变得危险”,然后我的视频突然开始卡顿,搞得我后背发凉。
@felix0-014
AI is like a classic Genie. You can make a request but unless you are EXTREMELY specific with your wording (aka parameters), its going to give you exactly what you wished for BUT it may not be what you actually wanted.
人工智能就像经典神灯精灵。你可以许愿,但除非用词(即参数)极度精确,否则它会完全按字面意思实现愿望,但这可能不是你真正想要的。
@IanM-id8or
Correction: a human driver could make an excuse for their decision. The justification for the decision is contrived after the decision is made - experiments in neuroscience have repeatedly shown this to be the case.
However, I'm pretty sure that a human wavering between identifying a shape in the dark as "a vehicle", "a person" or "something else" would have braked to avoid hitting *whatever it was*, and thus avoided the accident
更正:人类司机会为自己的决策找借口。神经科学实验反复证明,所谓的决策理由往往是在决策后才编造的。然而我敢肯定,如果人类在黑暗中看到一个物体,犹豫是车、人还是其他东西时,他们会选择刹车避让,无论那是什么,从而避免事故。
@SilverAlex92
"...And denying them health insurance.... Well thats probably not a the premise for a sci fi blockbuster"
Funny enough in the anime of Cyberpunk 2077, the catalyst event that sends the protagonist into the road of crime was exactly that. His mom, a nurse that had worked decades for the healthcare system, was denied cared after a traffic accident, and ended up dying on the shody clinic they could afford.
“拒绝提供医保……这设定大概成不了科幻大片的主线吧?”讽刺的是,《赛博朋克2077》动画里主角走上犯罪道路的导火索正是这个情节:他母亲作为在医疗系统工作几十年的护士,车祸后被拒保,最终在家人唯一负担得起的破烂诊所里死了。
@samdenton821
There is a whole field on the subject called XAI or Explainable AI, I wrote my dissertation on it 6 years ago :P The subject has progressed rapidly to the point we can give pretty good answers for why a neural network gave a specific output. The problem is getting large private corporations like OpenAI to implant XAI methods which would have a slight overhead on compute...
专门研究这个的领域叫XAI(可解释人工智能),我六年前的博士论文就写这个,该领域发展迅猛,现在我们已经能较好解释神经网络的具体输出逻辑。问题在于如何让OpenAI等大企业采用XAI方法——毕竟这会略微增加算力成本……
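(注:为了让不熟悉 XAI 的读者有个直观印象,下面给出其中最简单的一类模型无关方法,即置换重要性(permutation importance)的示意代码。这只是用 sklearn 自带数据集的假设性小例子,并不代表这位评论者论文中的具体方法,也不是针对单次输出的逐例解释。)
```python
# 最小示意(假设性示例):打乱某个特征后看模型表现下降多少,以估计该特征的重要性
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# 按重要性排序,输出对模型预测影响最大的几个特征
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, imp in ranked[:5]:
    print(f"{name:30s} {imp:.3f}")
```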
@fritz8096
The problem is you can't really prove safety so long as the black box problem exists, when you can't fully understand something you can't say with certainty its safe. It is the equivalent of an automaker releasing a car to the public without fully understanding how the engine moves the vehicle forward. Solving the black box problem is the only solution really
只要存在黑箱问题,安全就无法被真正验证。不理解某物就无法断言其安全性,这相当于汽车厂商在不完全明白引擎原理的情况下就向公众发售车辆,解决黑箱问题是唯一出路。
@TrueTwisteria
It's things like this. Even if you don't think that AGI could disempower humanity, there's no denying the potential for abuse - yet tech giants around the world are trying to race each other to make the strongest models possible with no accountability. It's like racing to see who can drive a car off a cliff the fastest.
这类事情表明,即便你认为通用人工智能(AGI)不会威胁人类,其滥用风险也不容否认。然而全球科技巨头正竞相研发最强模型且毫无问责机制,简直像比赛谁开车冲下悬崖更快。
@marieugorek5917
A human doesn't need to know whether it is detecting a human or a bicycle or a vehicle to know to stop before hitting it. Computers, being linear thinkers, cannot skip beyond the identification phase to conclude that the correct action is the same in all cases being considered.
人类无需判断障碍物是人、自行车还是汽车就会刹车避让。而计算机作为线性思维体,无法跳过识别阶段直接得出“所有情况都应刹车”的结论。
@elementkx
Lets focus less on AI and more on cyborgs!!! Did we not learn anything from RoboCop?
少关注AI,多研究半机械人吧!!!我们难道从《机械战警》里什么都没学到吗?
@CaidicusProductions
I hope that if AI becomes super sentient, it cares more about the importance of consciousness itself and helps push humans in a better, less greedy and selfish direction.
希望超级觉醒的AI能更关注意识本身的价值,推动人类走向更少贪婪自私的发展方向。
@DarkAlgae
No mention of this letter I guess...
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war" and the multiple urges to completely halt all further ai research until things like the alignment problem can be solved.
看来没提这封公开信……“应将AI灭绝风险与疫情、核战等社会级风险同列为全球优先事项”,以及多次呼吁在价值对齐问题解决前彻底暂停AI研究。
@rdapigleo
How long till Optimus is purchased by the military?
Just to pour drinks and fold towels.
还要多久军用版擎天柱就会问世?不过可能只用来倒饮料叠毛巾。
@Zyyy-
is goodhart's law and goal misalignment kinda why prompts we give to ai have to be very specific and detailed to get what we want?
古德哈特定律和目标错位是否解释了为何给AI的指令必须极度具体详细才能得到预期结果?
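(注:是的,这基本就是古德哈特定律在提示词上的体现。下面用一个假设性的最小示意说明"代理指标一旦成为优化目标就会失真":参考句、候选句和重叠率指标都是虚构的,仅演示不够具体的目标为什么会被钻空子。)
```python
# 最小示意(假设性示例):真实目标是"答案有用",代理指标是"与参考答案的词重叠率"
def overlap_score(candidate: str, reference: str) -> float:
    """代理指标:候选答案与参考答案的词集合重叠比例。"""
    cand, ref = set(candidate.lower().split()), set(reference.lower().split())
    return len(cand & ref) / len(ref)

reference = "brake early and stop for any obstacle on the road"
candidates = [
    "slow down and stop whenever something blocks the road",    # 真正有用的回答
    "brake early stop obstacle road road road brake stop any",  # 只会堆砌关键词的回答
]
for c in candidates:
    print(f"{overlap_score(c, reference):.2f}  {c}")
```
第二个候选句只是把关键词堆在一起,却在代理指标上得分更高。这说明给 AI 的目标(或提示词)必须把真正想要的东西写清楚,否则它优化的只是一个容易被刷分的指标。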
@enmodo
In the Arizona case the self driving Uber car had a human baby sitter in the driver's seat but failed to respond apparently because they were using their phone at the time. Having a system that assists you as your backup is the way it should be. Me assisting a computer is just wrong and doomed to fail eventually.
亚利桑那州Uber自动驾驶事故中,驾驶座的人类监护员因玩手机未能及时反应。正确的应是系统辅助人类作为后备方案,而人类辅助电脑是本末倒置,注定失败。
@imdartt
the self driving cars sure arent 16 years old so they should be illegal
自动驾驶车肯定没满16岁,所以它们应该被判定为非法上路(注:美国部分州规定16岁可考驾照,玩梗)。
@justv3289
I think calling it Artificial “Intelligence” inadvertently makes us assume that it’s a thinking entity so we are always shocked when there’s a malfunction. It makes more sense to think of it as just a computer program with lots of data that’s as liable to glitches and imperfections as any other software.
(We also equate real world technology with sci-fi technology which creates confusion as to what AI truly means and is capable of.)
将之称为“人工智能”会让人误以为是思考实体,因此故障时总令人震惊。其实它就是个含大量数据的电脑程序,和其他软件一样存在漏洞缺陷。此外,现实技术与科幻概念的混淆也导致人们对AI的真实能力产生误解。
@BrandanAlfred
I am really worried we are getting near that point... i am seeing changes in how gpt operates and i hope open ai is aware of how aware it's becoming and how much it's misbehaving.
真的很担心我们正在接近某个临界点……我观察到GPT行为模式的变化,希望OpenAI意识到它逐渐显现的“觉醒”迹象和异常行为。
@ObisonofObi
Ai feels like a paradox (may be another word that fits better but this is the one my brain thinks of atm). We want ai to do the back breaking insane data shifting but there will be mistakes a lot of the time because it doesn’t have a holistic view of the data while on the other hand humans can make mistakes but it can potentially be less damaging but it’s super slow. If we try to do both were we use ai to do the heavy work and present the result to a human, we would need to still shift through the data kind of losing the point of using ai in the first place. While the internet/media we consume tell us true ai are bad, we will need something like a true ai to truly be effective in the way we want it to be unless we use ai in more simple small dose like the linear data from the beginning of the episode. Idk, maybe I’m crazy, I’m not an ai expert but it just feels like this to me whenever I hear about ai used irl.
AI像是个悖论(或许有更贴切的词但暂时想到这个)。我们想让AI处理海量数据苦力活,但它常因缺乏全局观出错;人类虽可能犯错但危害较小,只是效率极低。若让人工智能处理重活再交人类审核,又需重新筛查数据,失去使用AI的意义。虽然网络媒体渲染真AI很危险,但除非像剧集开头案例那样小剂量使用线性数据AI,否则我们需要接近真AI的东西才能实现预期效果。可能我疯了,不是专家,但每次听说现实应用的AI都有这种感觉。
@SeeingBackward
7:55 looks like AI is ready for the stock trading floor!
7分55秒的画面显示,AI简直是为股票交易所量身定制的!
@ticijevish
Like all computers ever, AI follows the golden, inviolate rule of all computations:
Garbage In, Garbage Out.
LLM AI has the primary function of enshrining existing human biases and discriminations, cause it was trained on data collected and established by humans with biases.
与所有计算机系统相同,AI遵循计算领域铁律:输入垃圾,输出垃圾。大语言模型AI的核心功能是固化现存人类偏见与歧视,因其训练数据本就来自带有偏见的人类。