Ian Carroll Exposes McDonald’s AI Hiring Tool’s Data Breach

Researchers uncovered a massive leak on McDonald’s employment application website through a vulnerability in its AI recruiting assistant software.

Security researchers and podcasters Ian Carroll and Sam Curry exposed a massive data leak of more than 64 million job applicant records at none other than McDonald’s AI-powered hiring portal.

The data, which included real names, email addresses, and résumés, was exposed through a weakly secured Paradox.ai chatbot backend; the breach stemmed from an admin portal protected only by the username “admin” and the password “123456.”

McDonald’s applicants were screened by Paradox.ai’s Olivia hiring bot, one of the most widely used AI recruiting tools. Although Paradox described most of the records as “test entries,” at least five real applicants were confirmed to have been exposed. In an effort to defend itself, McDonald’s blamed the vendor and vowed to impose stricter AI hiring protection rules.

Paradox.ai, for its part, now plans to launch a bug bounty program for the recruiting chatbot to catch similar issues in the future.

A Chatbot at the Front Door 

McDonald’s job applicants begin their job search with Olivia, an AI hiring assistant that automates up-front filtering. Paradox.ai’s chat-based tool asks candidates about their schedules, collects résumés, and sends out personality tests.

But some Reddit users complained of strange behavior, reporting that Olivia misunderstood direct questions or led them through endless loops, wasting their time.

Those complaints prompted Carroll and Curry to dig deeper into the system’s architecture and its vulnerabilities.

Carroll and Curry’s findings also heighten concerns over bias in AI recruitment, especially when systems like Olivia have the power to decide which real people are screened out during the hiring process.

Alarming Security Flaws 

The researchers found a staff login page on McHire.com. Testing common login combinations, they struck gold with the username “admin” and password “123456.” This unlocked backend access to Paradox.ai’s administrative dashboard for the hiring bot.

With no multi-factor authentication in the way, they entered a test McDonald’s “restaurant” in the system and began to browse. By changing applicant ID numbers in their requests, they could pull up the names, email addresses, and chat messages of real job applicants, in records stretching back years and totaling more than 64 million entries.
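Walking through applicant IDs like this is a classic insecure direct object reference (IDOR): the server hands back whatever record ID the client asks for without checking ownership. The sketch below illustrates the pattern and its usual fix in a generic way; the endpoint paths, field names, and tiny in-memory “database” are hypothetical and are not Paradox.ai’s actual API.

```python
# Minimal IDOR sketch (hypothetical endpoints and data, not Paradox.ai's API).
from flask import Flask, abort, g, jsonify

app = Flask(__name__)

# Hypothetical applicant records, keyed by sequential ID.
APPLICANTS = {
    1: {"name": "A. Applicant", "email": "a@example.com", "restaurant_id": 42},
    2: {"name": "B. Applicant", "email": "b@example.com", "restaurant_id": 7},
}

@app.before_request
def fake_auth():
    # Stand-in for real authentication: assume the caller manages restaurant 42.
    g.restaurant_id = 42

# Vulnerable pattern: any authenticated caller can simply walk IDs 1, 2, 3, ...
@app.route("/api/applicants/<int:applicant_id>")
def get_applicant_vulnerable(applicant_id):
    record = APPLICANTS.get(applicant_id)
    if record is None:
        abort(404)
    return jsonify(record)

# Safer pattern: only return records that belong to the caller's restaurant,
# and answer 404 either way so attackers can't probe which IDs exist.
@app.route("/api/v2/applicants/<int:applicant_id>")
def get_applicant_checked(applicant_id):
    record = APPLICANTS.get(applicant_id)
    if record is None or record["restaurant_id"] != g.restaurant_id:
        abort(404)
    return jsonify(record)
```

Under this sketch, a request for /api/applicants/2 leaks another restaurant’s applicant, while /api/v2/applicants/2 returns 404 because the ownership check fails.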

Although Paradox.ai said most of the records were test data, the information of at least five confirmed real applicants was exposed. In a blog post, the company admitted the admin account hadn’t been used since 2019.

“We do not take this matter lightly,” said Paradox.ai’s legal officer Stephanie King. “We own this.”  

“We’re disappointed by this unacceptable vulnerability,” McDonald’s said in its own statement, while reaffirming its focus on data protection safeguards for its AI-driven hiring tools.

The flaw also highlights how AI sourcing assistants could be abused if not properly secured, especially when they handle job applications from vulnerable individuals seeking entry-level work. Curry warned that attackers could have used the exposed data to impersonate recruiters and trick applicants into handing over further personal information.

“I have nothing but respect for McDonald’s employees,” Carroll said. As AI recruitment assistants grow in popularity, so do the risks of cutting corners on cybersecurity.

