
In Denver, risk specialist Tony Cox is developing an AI system to scan scientific studies. The project, funded by the American Chemistry Council, could directly impact AI in health policy, with wide-ranging implications for how the dangers of pollution are measured.
Cox’s research is part of a broader wave of AI influence in epidemiology, in which AI is used not just to analyze data but to challenge assumed links between environmental pollution and health outcomes.
But some scientists question the tool’s underlying mission, claiming it is geared more toward undermining regulations than improving science. Because corporate funding backed its development, critics are also asking what happens when such a tool is deployed where industry and public health meet.
Who Calls for Safe and Ethical AI for Health?
In emails to industry experts, Cox described his AI system as applying “critical thinking at scale.”
The tool screens scientific papers and flags conclusions that, in its analysis, confuse correlation with causation. But Cox’s emails reveal a pattern: he argued with ChatGPT until it walked back definitive statements about the harms of pollution, then shared those exchanges with oil and chemical industry researchers.
Experts warn this tactic could cross lines drawn by the WHO’s guidelines on AI for health, which urge caution when AI is used to reinterpret health data that carries policy implications.
The American Chemistry Council, which funds Cox’s work, argues that the technology has the potential to improve AI in health regulation by enhancing scientific transparency and credibility. Critics counter that the funding source, and the tool’s original focus on countering pollution studies, suggest an agenda closer to industrial interests than to public health.
“Science denialism often sounds convincing because it contains some truthiness to it or elements of truth or elements of valid points, but it’s often based on either overemphasis or omission and doesn’t portray a full picture,” said Chris Frey, associate dean for research and infrastructure at North Carolina State University.
Science Behind Pollution Regulations
In his research, Cox says his goal is to create tools that let researchers, editors, and even the public evaluate claims with neutral logic, envisioning that AI for health risk assessment could one day be applied by journals and regulators.
He admits, however, that the tool is more skeptical of links between toxins and harm than many human editors are, a bias that critics charge could slow the rollout of needed regulation.
In one case, Cox used ChatGPT to challenge the prevailing link between fine particle air pollution (PM2.5) and lung disease. The bot initially sided with the scientific consensus, but under repeated questioning it ultimately abandoned its stance.
Cox boasted about this debate to industry leaders. It is a troubling indicator, one expert maintained, of how public health policy informed by ChatGPT could be shaped not by data integrity but by relentless reframing. With journals already publishing these AI-assisted debates and others testing the tool, scientists worry about a world where machine-generated skepticism is mistaken for objectivity.
In a tech era that demands transparency in AI decision-making in the health sector, a pressing question emerges: Who owns the tools that determine scientific truth, and who decides what we do with them?
AI in health policy will soon play a fundamental role in environmental risk assessment, but critics warn that without strict oversight, it could become a powerful tool to delay or weaken regulations instead of strengthening them.