Apple Intelligence Is Gambling on Privacy as a Killer Feature

by Alan North


As Apple’s Worldwide Developers Conference keynote concluded on Monday, market watchers couldn’t help but notice that the company’s stock price was down, perhaps a reaction to Apple’s relatively low-key approach to incorporating AI compared to most of its competitors. Still, Apple Intelligence–based features and upgrades were plentiful, and while some are powered by the company’s privacy- and security-focused cloud platform, Private Cloud Compute, many run locally on Apple Intelligence–enabled devices.

Apple’s new Messages screening feature automatically moves texts from phone numbers and accounts you’ve never interacted with into an “Unknown Sender” folder. The feature detects time-sensitive messages like login codes or food delivery updates and still delivers them to your main inbox, but it also scans for messages that look like scams and routes those to a separate spam folder. All of this sorting happens locally using Apple Intelligence. Similarly, the expanded Call Screening feature will automatically and locally pick up calls from untrusted numbers, ask for details about the caller, and transcribe the answers so you can decide whether to take the call. Even Live Translation adds real-time language translation to calls and messages using local processing.
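To make the triage logic concrete, here is a minimal sketch of what routing like this might look like. Apple has not published the underlying API, so every type and rule below is a hypothetical stand-in; the shipping feature presumably relies on an on-device Apple Intelligence model rather than keyword matching.

```swift
// Illustrative sketch only: Apple has not published the Messages screening
// API, so every type and rule here is a hypothetical stand-in for the kind
// of on-device triage the feature performs.
import Foundation

enum MessageFolder {
    case inbox          // trusted senders and time-sensitive messages
    case unknownSender  // first contact from a number you've never messaged
    case spam           // messages that look like scams
}

struct IncomingMessage {
    let sender: String
    let body: String
}

struct LocalMessageScreener {
    let knownSenders: Set<String>

    // Keyword heuristics stand in for what is presumably an on-device
    // model in the real feature.
    private func looksTimeSensitive(_ body: String) -> Bool {
        let cues = ["login code", "verification code", "delivery", "order is arriving"]
        return cues.contains { body.lowercased().contains($0) }
    }

    private func looksLikeScam(_ body: String) -> Bool {
        let cues = ["you've won", "act now", "wire transfer", "gift card"]
        return cues.contains { body.lowercased().contains($0) }
    }

    // All routing happens on the device; nothing is sent to a server.
    func route(_ message: IncomingMessage) -> MessageFolder {
        if looksLikeScam(message.body) { return .spam }
        if knownSenders.contains(message.sender) || looksTimeSensitive(message.body) {
            return .inbox
        }
        return .unknownSender
    }
}

let screener = LocalMessageScreener(knownSenders: ["+15551234567"])
let otp = IncomingMessage(sender: "+15559876543", body: "Your login code is 482913")
print(screener.route(otp)) // inbox: time-sensitive, so it skips "Unknown Sender"
```

The key property is that the classifier and the data stay on the device together, which is exactly what makes the feature interesting from a privacy standpoint.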

From a privacy perspective, local processing is the gold standard for AI features. Data never leaves your device, so there’s no risk of it ending up somewhere unintended on a round trip through the cloud. And the new features, spam and “Unknown Sender” sorting for Messages, call screening for untrusted phone numbers, and Live Translation tools, all seem designed to use privacy as a differentiator in an already crowded AI field.

In addition to being privacy-friendly, local processing has other benefits: it lets AI-based services work offline and speeds up certain tasks, since data doesn’t have to be sent to the cloud, processed, and returned to the device. If AI features are going to be widely available and accessible, though, most companies are constrained by the older, low-end devices many of their customers use, which may not be able to handle local AI. Apple has less need to be inclusive, because it makes both the hardware and the software and has already restricted Apple Intelligence to recent device models.

There are other limitations to Apple Intelligence, too, and the company offers opt-in integrations with some third-party generative AI services to expand its functionality. For OpenAI’s ChatGPT, for example, users must turn the integration on, and Apple services will then prompt them to confirm each time they submit a query to ChatGPT. Users can also choose to log in to a ChatGPT account, in which case their queries are subject to OpenAI’s normal policies, or use ChatGPT without logging in. In the latter case, Apple says, it does not connect an Apple ID or other identifier to queries and obfuscates users’ IP addresses.
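That flow boils down to a few explicit gates, sketched below. This is a simplified, hypothetical rendering of the consent steps the article describes; the type name, fields, and messages are illustrative stand-ins, not Apple’s actual code.

```swift
// Illustrative sketch of the opt-in gates described above; the type and
// the return strings are hypothetical, not Apple's implementation.
import Foundation

struct ChatGPTIntegration {
    var enabled = false          // off by default; the user must opt in
    var account: String? = nil   // optional OpenAI login

    // Every query requires a fresh, explicit confirmation from the user.
    func submit(query: String, userConfirmed: Bool) -> String {
        guard enabled else { return "Integration disabled; query not sent." }
        guard userConfirmed else { return "User declined; query not sent." }
        if let account = account {
            // Logged-in queries fall under OpenAI's normal account policies.
            return "Sent to ChatGPT under account \(account)."
        }
        // Anonymous path: per Apple, no Apple ID or other identifier is
        // attached, and the user's IP address is obfuscated.
        return "Sent anonymously, with no linked identifier and an obfuscated IP."
    }
}

var integration = ChatGPTIntegration()
integration.enabled = true  // the user flips the integration on
print(integration.submit(query: "Summarize this note", userConfirmed: true))
```

The design choice worth noticing is that the default path is the most private one: nothing is sent unless the user has opted in and confirmed the specific query.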

Apple invested heavily in developing Private Cloud Compute to maintain strong security and privacy guarantees for AI processing in the cloud. Other companies have begun building similar secure AI cloud schemes for products and services that specifically center privacy as a crucial feature. But the fact that Apple still reaches for local processing for new features whenever possible may indicate that privacy isn’t just an intellectual priority in the company’s approach to AI. It may be a business strategy.


