Apple enables LLMs to recognize actions from sound, advancing health monitoring and smart fitness

Ollie Chang, Taipei; Charlene Chen, DIGITIMES Asia

Apple has demonstrated that large language models (LLMs) can accurately identify user activities by combining textual representations of audio and motion data, without accessing the raw audio itself. This multimodal approach opens new possibilities for health monitoring and smart...
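The key idea is that sensor streams are converted to short text labels before any language model sees them. As a rough sketch of that pattern (not Apple's published pipeline; the label vocabularies, prompt wording, and `query_llm` placeholder below are assumptions), an audio tagger and a motion classifier might each emit labels that are fused into a single zero-shot prompt:

```python
# Sketch of text-based fusion for activity recognition.
# NOTE: illustrative only -- the label sets, prompt format, and the
# query_llm() placeholder are assumptions, not Apple's actual method.

from typing import List


def build_activity_prompt(audio_tags: List[str], motion_labels: List[str]) -> str:
    """Fuse the textual outputs of an audio tagger and a motion (IMU)
    classifier into one zero-shot prompt. No raw audio is included."""
    return (
        "Sounds detected (from an audio tagger): " + ", ".join(audio_tags) + "\n"
        "Motion detected (from accelerometer/gyroscope): " + ", ".join(motion_labels) + "\n"
        "Based only on these text labels, which single activity is the user "
        "most likely performing? Answer with one activity name."
    )


def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for any LLM call; swap in a real client."""
    return "<LLM response here>"


if __name__ == "__main__":
    prompt = build_activity_prompt(
        audio_tags=["water running", "dish clinking"],
        motion_labels=["standing", "repetitive arm movement"],
    )
    print(prompt)
    print("Predicted activity:", query_llm(prompt))
```

Because the model only ever sees text labels rather than waveforms, this style of pipeline can keep raw audio on-device, which is the privacy angle highlighted in the finding.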