The American people entrust the federal government with sensitive personal information about their health, finances, and other aspects of their lives, on the understanding that it will not be disclosed or improperly used without their consent. The Democrats are concerned that the use of unapproved and unaccountable third-party AI software could lead to breaches of this trust.
There are concerns that Musk, who founded xAI and whose electric car company, Tesla, is pivoting toward robotics and AI, could be using his access to sensitive government data for personal enrichment. Musk could potentially leverage this data to enhance his proprietary AI model, Grok.
Federal agencies are bound by multiple statutory requirements in their use of AI software. Key among these are the Federal Risk and Authorization Management Program, which standardizes the government's approach to cloud services and ensures AI-based tools are properly assessed for security risks, and the Advancing American AI Act, which requires federal agencies to maintain an inventory of their AI use cases and make those inventories available to the public.
DOGE operatives have reportedly deployed a proprietary chatbot, GSAi, to approximately 1,500 federal workers. Other agencies, including the departments of Treasury and Health and Human Services, have considered using similar chatbots. The United States Army is currently using software dubbed CamoGPT to scan its records systems for any references to diversity, equity, inclusion, and accessibility.
The central purpose of the requests is to press the agencies to demonstrate that any use of AI is legal and that steps are being taken to safeguard Americans' private data.