A former Google employee accused the company of breaching AI ethics policies by helping an Israeli military contractor analyze drone footage in 2024, according to a whistleblower complaint seen by The Washington Post and reported on Sunday.
The complaint was filed with the United States Securities and Exchange Commission (SEC) and alleged that Google breached its own “AI principles,” which, at the time of the incident detailed in the report, stated that the company would not use AI to conduct surveillance in a manner “violating internationally accepted norms” or in connection with weapons.
According to the WP, documents in the complaint showed that Google’s cloud-computing division received a customer support request allegedly sent from an IDF email address.
The customer name attached to the support request matched an employee of CloudEx, an Israeli tech company. The SEC complaint additionally alleged that CloudEx is an IDF contractor.
Internal documents included with the complaint indicated that the customer requested help after noticing a bug while attempting to use Google’s Gemini AI to analyze aerial footage, an issue that allegedly caused the software to occasionally fail to identify drones, soldiers, and other objects, the WP reported.
Google’s cloud unit customer support staff reportedly responded with suggestions and ran internal tests related to the request.
After several message exchanges between the CloudEx employee and the support staffer, the issue reportedly resolved itself.
An additional Google staffer was copied on the support request, the SEC complaint documents showed. The whistleblower alleged that this second staffer worked on the IDF’s Google Cloud account.
The complaint alleged that the aerial footage was related to Israeli operations in Gaza during the Israel-Hamas War, but provided no specific evidence supporting that claim.
The whistleblower additionally asserted that the customer service response contradicted Google’s publicly stated policies, which had previously been filed with federal regulators, and claimed that it violated securities laws, further accusing the company of misleading regulators and investors.
According to the WP, Google has previously stated that any work it has done with the Israeli government or officials was “not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.”
Ex-employee accuses Google of upholding 'double standard'
The former employee who filed the complaint told the WP that many of their projects at Google had gone through internal AI ethics review processes.
“That process is robust and as employees we are regularly reminded of how important the company’s AI Principles are,” they said in the statement, which was provided anonymously out of alleged fear of retaliation by Google. They went on to claim that “when it came to Israel and Gaza, the opposite was true.”
The former employee elaborated that they filed the SEC complaint because they felt the company needed to be “held accountable” for the alleged “double standard.”
A spokesperson for Google disputed the “double standard” claim and asserted that the customer support did not violate the company’s AI ethics policies because the requester’s usage of Gemini services was too small to be considered “meaningful.”
In a statement provided to the WP, the spokesperson stated that “the [customer support] ticket originated from an account with less than a couple hundred dollars of monthly spend on AI products, which makes any meaningful usage of AI impossible.”
Google’s “cloud video intelligence” service offers object tracking free for the first 1,000 minutes and costs 15 cents per additional minute, the WP reported, citing Google documentation.
The spokesperson elaborated that Google customer support “answered a general use question, as [they] would for any customer, with standard, help desk information, and did not provide any further technical assistance.”
SEC complaints can be filed by any individual and do not automatically lead to further investigation, the report noted.
In February 2025, Google revised the section of its AI policy that had pledged to bar the use of its technology for surveillance and weapons, saying it needed to help democratically elected governments keep pace with global AI development, the WP additionally noted.