
Albert Saniger, co-founder of AI shopping app Nate, faces federal fraud charges after the Department of Justice (DoJ) alleged that his company used human contractors in the Philippines to manually process orders while falsely marketing the service to investors as fully AI-powered.
Although Saniger had claimed since 2018 that Nate could complete one-click purchases across e-commerce sites using AI, prosecutors say the startup instead relied on hundreds of call center workers.
The venture ended in a DoJ indictment alleging that Nate’s automation was not as advertised, with the department claiming the app’s true automation rate was effectively zero.
The DoJ’s indictment highlights growing scrutiny of AI hype in fundraising, with Nate accused of securing investments under “materially false pretenses.”
Instead of depending on AI, Nate apparently used hundreds of human contractors working from a Philippines-based call center to manually execute transactions, while leading investors to believe the company had capabilities far greater than it actually did.
AI Fraud Leaves Investors with Losses
The AI shopping company raised over $50 million in venture capital from blue-chip investors such as Coatue, Forerunner Ventures, and Renegade Partners, including a $38 million Series A round in 2021.
Saniger repeatedly insisted that Nate’s AI could complete transactions “without human intervention,” except in edge cases where the system ran into errors. According to the DoJ, this was false.
Although Nate employed data scientists and acquired some AI technology, its reliance on human labor for work sold as automated was a key contributor to the company’s downfall. A 2022 investigative report by The Information had already criticized Saniger’s misleading presentation of his company’s capabilities.
The DoJ’s indictment further notes that Nate went bankrupt and was forced to sell its assets in January 2023, leaving investors with near-total losses. This is not the first case in which a startup has been accused of lying about its AI capabilities.
In 2023, a similar controversy emerged when an “AI” drive-through software startup was found to rely heavily on human workers in the Philippines.
https://twitter.com/SDNYnews/status/1910079478660845653
Similarly, a recent Business Insider report revealed that EvenUp, an AI-driven legal tech unicorn, had relied on human workers to do much of its work, adding to the growing list of alleged AI fraud cases.
Albert Saniger did not respond to TechCrunch’s requests for comment. He has not served as Nate’s CEO since 2023.
Final Thoughts
The Nate case serves as a reminder of the ethical challenges surrounding AI deployment and highlights the risks of deceptive practices in the tech industry. It underscores the critical need for transparency, honesty, and accountability in AI applications to prevent such schemes and protect investors’ interests.
When companies mislead the public about the capabilities of their AI systems, it not only exposes them to legal consequences but also undermines consumer trust, which can take years to rebuild.
Time will tell whether the industry can strike a balance between innovation and ethical responsibility and build a more trustworthy environment for both consumers and businesses, or whether AI-enabled fraud will remain a defining problem of our time.