Huawei Denies Whistleblower Claims That Its Pangu AI Model Copied Alibaba's

On July 6, Huawei denied allegations by GitHub user HonestAGI that its Pangu AI model was incrementally trained on Alibaba’s technology, sparking debate over transparency in China’s fiercely competitive AI race.

Even though Huawei’s Noah’s Ark Lab insisted Pangu was developed independently on its Ascend hardware, an anonymous employee alleged on GitHub that third-party models were used to catch up with rivals such as DeepSeek.

After the whistleblower’s claims surfaced, pressure mounted in China’s consolidating large-language-model (LLM) market, where Huawei seeks to compete with Alibaba and ByteDance despite scrutiny over the origins of its training data.

Huawei Cites Full Compliance with Open-Source Standards 

The Shenzhen-based giant further clarified that while Pangu models did use “certain open-source codes” from other models during development, all usage complied with the relevant licensing agreements.

“We strictly followed the requirements for open-source licences and clearly labelled the codes,” the lab noted in its statement released on Saturday. 

Despite this assurance, the original GitHub repository containing the detailed allegations has since been removed, leaving only a summary explanation. Nevertheless, the controversy persisted over the weekend. 

On Sunday, an individual claiming to be a Huawei Pangu employee posted a detailed rebuttal on GitHub, alleging that the company had indeed trained its model using third-party foundations due to competitive pressure. The anonymous author claimed the move was aimed at closing the gap with rivals like DeepSeek, whose open-source R1 model triggered a wave of interest earlier this year.

Huawei declined to comment on the employee’s claims, and the author did not respond to inquiries from the South China Morning Post, owned by Alibaba. 

China’s AI Race Intensifies Amid Growing Market Consolidation 

The dispute comes at a time when China’s tech ecosystem is engaged in a high-stakes race to dominate the foundational AI model landscape. More than 200 large language models (LLMs) had been released by the close of 2023, while consolidation has already begun, with some firms, such as 01.AI, scaling back model development efforts.

The space is now dominated by China’s biggest tech players: Alibaba, ByteDance, Tencent, and DeepSeek. Other significant players, such as Baidu, Zhipu AI, and MiniMax, are also building competitive LLMs.

Huawei, the beacon of China’s defiance against US technology embargoes, is eager to remain competitive in the rapidly evolving AI sector. Founded in 2012, its Noah’s Ark Lab operates research sites across several cities, including Hong Kong, Shenzhen, and Shanghai. The lab focuses on AI research in natural language processing, computer vision, and recommendation systems. In March 2025, Huawei appointed Wang Yunhe as the new head of the lab, replacing Yao Jun.

As the foundation model market keeps evolving, credibility and transparency are now just as important as performance, a lesson Huawei is learning firsthand.
