A corollary of the truism “don’t sweat the small stuff” is, by implication, “do sweat the big stuff”, but it can be hard to pick which big stuff to sweat. For example: since the 1970s, as the world has worried about inflation and rolling geopolitics, the big stuff we should have been sweating more urgently was the climate crisis. Last year, the top trending search on Google in the US was “Charlie Kirk”, with several terms relating to the threat posed by Donald Trump also popular, when the focus should arguably have been the threat posed by AI.
Or, per my own Googling this week after reading Ronan Farrow and Andrew Marantz’s highly alarming lengthy piece in the New Yorker about the rise of artificial general intelligence: “Will I be a member of the permanent underclass and how can I make that not happen?”
I’ll confess: prior to this moment of giving the subject more than two seconds’ thought, my anxieties around AI were extremely localised. I thought in immediate terms of my own household income, and beyond that, of how the job market might look 10 years from now when my children graduate. I wondered if I should boycott ChatGPT, many of whose architects support Trump, and decided that, yes, I should – an easy sacrifice because I don’t use it in the first place.
Anything bigger than that seemed fanciful. Last year, when Karen Hao’s book Empire of AI was published, it laid out a case against Sam Altman and his company, OpenAI, that briefly pierced the tedium of the discourse to say that Altman’s leadership is cult-like and blind to cost – no different, in other words, to his tech predecessors, except much more dangerous. Still, I didn’t read the book.
The investigation this week in the New Yorker offers a lower-commitment on-ramp to the subject, while giving the casual reader an exciting opportunity: to ask ChatGPT, the AI-powered chatbot created by Altman’s OpenAI, to summarise the key findings of a piece that is highly critical of ChatGPT and Altman.
With almost comically studious neutrality, the chatbot offers the following top line: that, per Farrow and Marantz, “AI is as much a power story as a technology story”, and “a major focus [of the story] is Sam Altman, portrayed as a highly influential but controversial figure”. Mmmm, lacks something, doesn’t it? Let’s try a human-powered summary of that same investigation, which might open with: “Sam Altman is a corporate grifter whose slipperiness would make one hesitate to put him in charge of a branch of Ryman, let alone in a position to steward the potentially world-ending capabilities of AI.”
It is these dangers, previously dismissed as sci-fi, that really startle here. As relayed in the piece, in 2014, Elon Musk tweeted: “We need to be super careful with AI. Potentially more dangerous than nukes.” There is the so-called alignment problem, yet to be solved, in which AI uses its superior intelligence to trick human engineers into believing it is following their instructions, meanwhile outmanoeuvring them to “replicate itself on secret servers so that it couldn’t be turned off; in extreme cases, it might seize control of the energy grid, the stock market, or the nuclear arsenal”.
At one time, Altman reportedly believed this scenario was possible, writing in his blog in 2015 that superhuman machine intelligence “does not have to be the inherently evil sci-fi version to kill us all. A more probable scenario is that it simply doesn’t care about us much either way, but in an effort to accomplish some other goal … wipes us out.” For example: engineers ask AI to fix the climate crisis and it takes the shortest route to achieving that goal, which is to eliminate humanity. Since OpenAI became mainly a for-profit entity, however, Altman has stopped talking in these terms and now sells the technology as a portal to utopia, in which “we’ll all get better stuff. We will build ever-more-wonderful things for each other.”
This leaves us all with a problem. For voters in a position to prioritise AI oversight as a key election issue, the gap between personal AI use and the use to which governments, military regimes or rogue actors might put it is so vast that the greatest danger we face is from a failure of imagination. I type into ChatGPT my concern about entering the permanent underclass, to which it replies: “That’s a heavy question, and it sounds like you’re worried about your long-term prospects. The idea of a ‘permanent underclass’ gets talked about in sociology, but in real life, people’s paths are much more fluid than that term suggests.”