xAI Dev Leaks API Key for Private SpaceX, Tesla LLMs
An employee at Elon Musk’s artificial intelligence company xAI leaked a private key on GitHub that for the past two months could have allowed anyone to query private xAI large language models (LLMs) which appear to have been custom-made for working with internal data from Musk’s companies, including SpaceX, Tesla and Twitter/X, KrebsOnSecurity has learned.
Image: Shutterstock, @sdx15.
Philippe Caturegli, “chief hacking officer” at the security consultancy Seralys, was the first to publicize the leak of credentials for an x.ai application programming interface (API) exposed in the GitHub code repository of a technical staff member at xAI.
Caturegli’s post on LinkedIn caught the attention of researchers at GitGuardian, a company that specializes in detecting and remediating exposed secrets in public and proprietary environments. GitGuardian’s systems constantly scan GitHub and other code repositories for exposed API keys, and fire off automated alerts to affected users.
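GitGuardian’s actual detection pipeline is proprietary, but the core idea behind this kind of scanning can be sketched in a few lines: match vendor-style key prefixes, then filter candidates by Shannon entropy to weed out placeholders and dummy values. The `xai-` prefix format and the entropy threshold below are illustrative assumptions, not GitGuardian’s real rules.

```python
import math
import re

# Illustrative pattern for vendor-prefixed API keys (the "xai-" format here
# is an assumption for demonstration, not x.ai's documented key format).
KEY_PATTERN = re.compile(r"\b(?:xai|sk|ghp)-[A-Za-z0-9_-]{20,}\b")

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of s."""
    counts = {c: s.count(c) for c in set(s)}
    n = len(s)
    return -sum((k / n) * math.log2(k / n) for k in counts.values())

def find_candidate_keys(text: str, min_entropy: float = 3.5) -> list[str]:
    """Return substrings that look like leaked API keys: they match a known
    key shape AND are random enough to be a real secret, not a placeholder."""
    return [m for m in KEY_PATTERN.findall(text) if shannon_entropy(m) >= min_entropy]
```

A string like `xai-aaaaaaaaaaaaaaaaaaaaaaaa` matches the pattern but is filtered out by the entropy check, which is how scanners avoid alerting on obvious dummy credentials in example code.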
GitGuardian’s Eric Fourrier told KrebsOnSecurity the exposed API key had access to several unreleased models of Grok, the AI chatbot developed by xAI. In total, GitGuardian found the key had access to at least 60 fine-tuned and private LLMs.
“The credentials can be used to access the X.ai API with the identity of the user,” GitGuardian wrote in an email explaining their findings to xAI. “The associated account not only has access to public Grok models (grok-2-1212, etc) but also to what appears to be unreleased (grok-2.5V), development (research-grok-2p5v-1018), and private models (tweet-rejector, grok-spacex-2024-11-04).”
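To see why a leak like this matters, note that OpenAI-compatible APIs such as x.ai’s gate everything on a single bearer token. A minimal sketch, assuming the standard `/v1/models` listing endpoint, of the request anyone holding the leaked key could construct (the key below is a placeholder, and the request is only built, never sent):

```python
import urllib.request

def build_list_models_request(api_key: str) -> urllib.request.Request:
    """Construct (but do not send) the model-listing request an attacker
    would issue with a leaked key. Endpoint path assumed from x.ai's
    published, OpenAI-compatible API conventions."""
    return urllib.request.Request(
        "https://api.x.ai/v1/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )

# Placeholder key -- sending this request with urllib.request.urlopen(req)
# would return JSON enumerating every model visible to the key's account.
req = build_list_models_request("xai-PLACEHOLDER")
```

That single call is all it would take to enumerate the public, unreleased, development, and private models GitGuardian described in its email to xAI.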
Fourrier found GitGuardian had alerted the xAI employee about the exposed API key nearly two months earlier, on March 2. But as of April 30, when GitGuardian directly alerted xAI’s security team to the exposure, the key was still valid and usable. xAI told GitGuardian to report the matter through its bug bounty program at HackerOne, but just a few hours later the repository containing the API key was removed from GitHub.
“It looks like some of these internal LLMs were fine-tuned on SpaceX data, and some were fine-tuned with Tesla data,” Fourrier said. “I definitely don’t think a Grok model that’s fine-tuned on SpaceX data is intended to be exposed publicly.”
xAI did not respond to a request for comment. Nor did the 28-year-old xAI technical staff member whose key was exposed.
Carole Winqwist, chief marketing officer at GitGuardian, said giving potentially hostile users free access to private LLMs is a recipe for disaster.
“If you’re an attacker and you have direct access to the model and the back end interface for things like Grok, it’s definitely something you can use for further attacking,” she said. “An attacker could use it for prompt injection, to tweak the (LLM) model to serve their purposes, or try to implant code into the supply chain.”
The inadvertent exposure of internal LLMs for xAI comes as Musk’s so-called Department of Government Efficiency (DOGE) has been feeding sensitive government records into artificial intelligence tools. In February, The Washington Post reported DOGE officials were feeding data from across the Education Department into AI tools to probe the agency’s programs and spending.
The Post said DOGE plans to replicate this process across many departments and agencies, accessing the back-end software at different parts of the government and then using AI technology to extract and sift through information about spending on employees and programs.
“Feeding sensitive data into AI software puts it into the possession of a system’s operator, increasing the chances it will be leaked or swept up in cyberattacks,” Post reporters wrote.
Wired reported in March that DOGE has deployed a proprietary chatbot called GSAi to 1,500 federal workers at the General Services Administration, part of an effort to automate tasks previously done by humans as DOGE continues its purge of the federal workforce.
A Reuters report last month said Trump administration officials told some U.S. government employees that DOGE is using AI to surveil at least one federal agency’s communications for hostility to President Trump and his agenda. Reuters wrote that the DOGE team has heavily deployed Musk’s Grok AI chatbot as part of their work slashing the federal government, although Reuters said it could not establish exactly how Grok was being used.
Caturegli said while there is no indication that federal government or user data could be accessed through the exposed x.ai API key, these private models are likely trained on proprietary data and may unintentionally expose details related to internal development efforts at xAI, Twitter, or SpaceX.
“The fact that this key was publicly exposed for two months and granted access to internal models is concerning,” Caturegli said. “This kind of long-lived credential exposure highlights weak key management and insufficient internal monitoring, raising questions about safeguards around developer access and broader operational security.”
This entry was posted on Thursday 1st of May 2025 08:52 PM