On August 11, 2025, Bloomberg’s “Power On” reported that Apple is preparing an “AI voice control” capability for iPhone: users will be able to use natural language to directly perform tasks such as search, editing, and placing orders within third-party apps. The feature is expected to roll out alongside Siri’s underlying architecture overhaul in spring 2026, with internal testing already underway on popular apps including Uber and YouTube.
At the core is App Intents: developers declare app actions that the system can invoke, so Siri can complete tasks inside an app rather than merely opening it or jumping to a page. Examples include "find a photo, make a quick edit, and send it," or "scroll within a shopping app and add items to the cart." Apple's developer documentation already folds this path into its integration guidance for Siri and Apple Intelligence.
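To make the "declare an action, let the system invoke it" idea concrete, here is a minimal sketch using Apple's App Intents framework. The intent name, parameters, and dialog text are illustrative assumptions modeled on the photo example above, not code from Apple or any shipping app:

```swift
import AppIntents

// Hypothetical intent for the "find a photo, edit it, send it" scenario.
// The type name and parameters are invented for illustration.
struct EditAndSendPhotoIntent: AppIntent {
    static var title: LocalizedStringResource = "Edit and Send Photo"
    static var description = IntentDescription("Applies a quick edit to a photo and shares it.")

    @Parameter(title: "Search Term")
    var searchTerm: String

    @Parameter(title: "Recipient")
    var recipient: String

    func perform() async throws -> some IntentResult & ProvidesDialog {
        // In a real app, this is where the app's own logic would find the
        // photo matching `searchTerm`, apply the edit, and start the share flow.
        return .result(dialog: "Sent the edited photo to \(recipient).")
    }
}
```

Because the action is declared to the system rather than hidden behind UI, Siri (or a future voice-control layer) can resolve the parameters from natural language and call `perform()` directly.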
The rollout will pair system updates with staged unlocking of scenarios. Multiple outlets citing internal sources say Apple plans to ship voice control with the iOS 26 cycle next spring, but the first release will restrict sensitive categories (such as finance and health) and open up gradually as apps prove ready; the engineering team is still validating stability and accuracy across a sufficiently large set of apps.
Today's Siri leans on SiriKit's domain-limited scenarios (messaging, payments, ride-hailing, and so on) and interactions such as opening an app or running a Shortcut. The new approach upgrades Siri from "can understand" to "can execute": App Intents exposes each action directly to the system, so voice can complete operations within the UI just as if the user had tapped through manually.
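The contrast above can be sketched with the App Intents side of the equation: instead of fitting an app into a fixed SiriKit domain, the developer registers spoken phrases that route straight to an action. The shopping intent and phrases below are hypothetical, modeled on the "add items to the cart" example:

```swift
import AppIntents

// Hypothetical shopping action; the type name is invented for illustration.
struct AddToCartIntent: AppIntent {
    static var title: LocalizedStringResource = "Add to Cart"

    @Parameter(title: "Item Name")
    var itemName: String

    func perform() async throws -> some IntentResult {
        // The app's own logic would look up the item and add it to the cart.
        return .result()
    }
}

// Registers phrases so the system can route a spoken request to the intent
// without the user opening the app first.
struct ShoppingAppShortcuts: AppShortcutsProvider {
    static var appShortcuts: [AppShortcut] {
        AppShortcut(
            intent: AddToCartIntent(),
            phrases: ["Add \(\.$itemName) to my cart in \(.applicationName)"],
            shortTitle: "Add to Cart",
            systemImageName: "cart"
        )
    }
}
```

The design difference is the direction of integration: SiriKit asks "which predefined domain does this app fit?", while App Intents asks "which of this app's actions should the system know about?"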
Meanwhile, combining Apple Intelligence's on-device models with Private Cloud Compute means many voice tasks can run offline or with minimal cloud involvement, which helps the feature reach higher-barrier scenarios such as payments and communications; this is also the biggest difference in capability and trust from past Siri.
If voice control becomes genuinely usable across high-frequency apps, the hardware upside is an interaction-driven upgrade cycle (with voice-first synergies across AirPods, CarPlay, and more), while on software and services the voice entry point becomes a new distribution and conversion channel, improving subscription and transaction conversion. For investors, three things to watch: first, the speed of developer adoption and whether flagship showcases emerge; second, stability and word of mouth in first-wave markets such as the U.S. when the spring rollout lands; third, whether the hardware and scenario bundles this fall and next spring create differentiated selling points.
The timeline remains uncertain. Bloomberg and several tech outlets have noted internal accuracy concerns in high-risk scenarios (such as banking and health), as well as region-by-region, phased rollouts; if developer adoption lags expectations or quality falls short, both word of mouth and the pace of diffusion will suffer.
After logging in to the uSMART HK app, tap "Search" at the top right of the page and enter a stock code to open the details page, where you can view trading details and historical trends. Then tap the "Trade" button at the bottom right, select the "Buy/Sell" option, fill in the order conditions, and submit your order.
(Image source: uSMART HK app)
