
Should the Training of AI Models More Powerful Than GPT-4 Be Paused?

Source: collected and edited by 安全牛 (Aqniu)


The release of GPT-4 has once again set off a global wave of enthusiasm for AI applications. At the same time, a number of prominent computer scientists and technology industry figures have voiced concern about the rapid pace of AI development, warning that it poses unpredictable risks to human society.


On March 29 (Beijing time), Tesla CEO Elon Musk, Turing Award laureate Yoshua Bengio, Apple co-founder Steve Wozniak, Sapiens author Yuval Noah Harari, and others jointly signed an open letter calling on all AI labs worldwide to immediately pause the training of AI systems more powerful than GPT-4 for at least six months, so that humanity can effectively manage the risks. If commercial AI research organizations cannot pause their development quickly, the letter argues, governments should step in with effective regulatory measures to enforce a moratorium.


The letter lays out its reasons for pausing AI model training in detail: AI has become powerful enough to compete with humans in certain domains and will bring profound change to human society. Everyone must therefore weigh AI's potential risks: information channels flooded with fake news and propaganda, large numbers of jobs automated away, and AI that may one day grow smarter and more powerful than humans, costing us control of human civilization. Powerful AI systems should continue to be developed and trained only once we are confident that their effects will be positive and their risks manageable.


As of 11 a.m. today, the open letter had gathered 1,344 signatures.



Have the major technology companies really pushed AI forward too quickly, to the point of threatening human survival? In fact, OpenAI co-founder Sam Altman has himself expressed unease about ChatGPT's explosive adoption, saying he is a bit "scared" of how AI could affect labor markets, elections, and the spread of disinformation, and that AI needs joint oversight from government and society, with user feedback and rule-making playing an important role in curbing AI's negative effects.


If even the builders of AI systems cannot fully understand, predict, or effectively control their risks, and the corresponding safety planning and management have failed to keep pace, then current AI research and development may indeed have slipped into an "out-of-control" race.








Appendix: full text of the open letter


AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.


Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization? Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system's potential effects. OpenAI's recent statement regarding artificial general intelligence states that "At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models." We agree. That point is now.


Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.


AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.


AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal. In parallel, AI developers must work with policymakers to dramatically accelerate development of robust AI governance systems. These should at a minimum include: new and capable regulatory authorities dedicated to AI; oversight and tracking of highly capable AI systems and large pools of computational capability; provenance and watermarking systems to help distinguish real from synthetic and to track model leaks; a robust auditing and certification ecosystem; liability for AI-caused harm; robust public funding for technical AI safety research; and well-resourced institutions for coping with the dramatic economic and political disruptions (especially to democracy) that AI will cause.


Humanity can enjoy a flourishing future with AI. Having succeeded in creating powerful AI systems, we can now enjoy an "AI summer" in which we reap the rewards, engineer these systems for the clear benefit of all, and give society a chance to adapt. Society has hit pause on other technologies with potentially catastrophic effects on society. We can do so here. Let's enjoy a long AI summer, not rush unprepared into a fall.

