Apple Begins Tests to Integrate AI into iPhones

Apple is developing a new model that collects and processes data directly on its smartphones, and the AI integration is expected to reach future iPhone models.

Recent research papers reveal that Apple has been testing ways to integrate AI tools into iPhones, although it has not yet disclosed any features based on these tools.

According to a research paper titled “LLM in a Flash,” published by the company on December 12, Apple has succeeded in developing an AI model and running it locally on iPhones. The approach takes battery consumption into account and allows data to be analyzed and processed entirely on the phone, without the need to connect to cloud servers.
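For context, this kind of on-device inference is the pattern Apple already supports through its Core ML framework. The sketch below is only a minimal illustration of that pattern, assuming a hypothetical compiled Core ML model class named LocalLLM with a text input and output; it is not Apple’s actual implementation from the paper.

```swift
import CoreML

// Minimal sketch of on-device inference, assuming a hypothetical
// compiled Core ML model class "LocalLLM" with a text input named
// "prompt" and a text output named "text". The key point mirrors
// the paper's claim: the prompt and the result never leave the
// phone, and no cloud server is contacted at any step.
do {
    let config = MLModelConfiguration()
    config.computeUnits = .all  // let Core ML pick CPU, GPU, or Neural Engine

    let model = try LocalLLM(configuration: config)
    let output = try model.prediction(prompt: "Summarize my unread messages")
    print(output.text)
} catch {
    print("On-device inference failed: \(error)")
}
```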

This is not the first sign of Apple’s intention to offer AI on its iPhones while processing data on the device itself to protect users’ privacy. Bloomberg published a report in October saying that Apple’s AI team is working on a new version of the company’s smart assistant, Siri, based on generative AI, with a release to users expected in 2024.

The report also noted that the company has concerns about the technology used in the new version of Siri, as it may take longer to roll out across its services and other apps.

Bloomberg added that Apple is developing a text-focused generative AI model called “Ajax,” known internally as “Apple GPT,” as an equivalent to ChatGPT. The company has already made the platform available internally so that employees can use it for text-based tasks.

Models That “See” Better

Alongside that paper, Apple published two more research papers earlier this month. One of them describes an AI model named “HUGS,” which specializes in creating animated 3D digital human models. The model relies on single-angle videos containing a limited number of frames, between 50 and 100 at most.

The model takes only about 30 minutes to produce an animated 3D digital model, which it does by separating the human figure from the filmed scene and making it fully animatable.

With this model, Apple’s research team has succeeded in speeding up the creation of a 3D human body from a single-angle video, working roughly 100 times faster than other AI models in both training and practical application.

The new technique adopted by Apple makes it significantly easier to extract and separate elements from videos and add them to entirely new ones. This advancement could pave the way for many industries to benefit from the technology. In remote meetings, for instance, it could enhance privacy by replacing participants with 3D human avatars in a shared space, making communication feel more realistic. There is also potential for using the technology to create innovative content via smartphone cameras.

Earlier this year, Apple’s CEO Tim Cook confirmed that the company is focusing intensively on the AI market and investing heavily in generative AI technology, an announcement made during the most recent quarterly earnings call in November. Additionally, some reports have revealed that the company is moving toward spending millions of dollars daily to run and train AI models.

All-in-One Model

A research paper published back in October, authored by a large team of researchers from Apple, Google, and Columbia University, revealed progress on an all-in-one project called Ferret, which has the potential to run on portable smart devices such as smartphones.

The new model is promising for several reasons. First, it does not require exceptional processing power or storage to deliver its full capabilities. It is also open source, an unprecedented step for Apple in the AI market.

