Key Takeaways
- WhatsApp is rolling out new cloud-based AI features like message summarization and writing assistance.
- A new system called “Private Processing” is designed to use AI while preserving the privacy of end-to-end encrypted chats.
- Unlike standard Meta AI, Private Processing aims to prevent Meta or WhatsApp from accessing message content used for AI tasks.
- The system uses secure hardware and limits data exposure, but security experts note potential risks remain.
- Users must opt in to these AI features and can control their use per chat with a new “Advanced Chat Privacy” setting.
WhatsApp, the messaging app used by billions globally, is introducing new AI capabilities.
These features, set to launch soon, will offer helpful tools like summarizing long chats or helping you compose messages.
This move follows parent company Meta integrating AI, built on its Llama model, across its platforms. You might have already seen the Meta AI assistant button in WhatsApp.
However, many users hesitated because interacting with that assistant isn’t protected by the same end-to-end encryption as regular WhatsApp messages.
To address this, WhatsApp developed “Private Processing.” According to WIRED, this system is specifically built to handle AI tasks without exposing your private message content to Meta or WhatsApp.
Normally, end-to-end encryption means only the sender and receiver can see messages. Cloud-based AI usually needs access to data, creating a conflict.
Private Processing relies on specialized hardware, known as Trusted Execution Environments (TEEs), to create a secure, isolated silo that handles data only temporarily. The system is designed to minimize data retention and to detect tampering.
The AI features are strictly opt-in; you have to choose to turn them on. WhatsApp has also added an “Advanced Chat Privacy” setting. If anyone in a chat enables it, other participants are blocked from using that chat’s messages with AI features, alongside other privacy restrictions.
While initial security reviews seem positive, experts caution that adding any off-device AI processing increases risks compared to purely end-to-end encrypted systems.
Matt Green, a cryptographer at Johns Hopkins University, acknowledged WhatsApp’s security efforts but told WIRED, “More private data will go off device, and the machines that process this data will be a target for hackers and nation state adversaries.”
WhatsApp emphasizes its commitment to security, inviting third-party audits and planning to make components open source for scrutiny.
This approach differs from Apple’s strategy for its AI features, which prioritizes processing data directly on newer devices and falls back to its recently announced Private Cloud Compute servers only when a request exceeds the device’s capabilities.
WhatsApp serves a vast user base with widely varying hardware, making on-device processing impractical for everyone. Its focus is therefore on making cloud processing as secure and private as possible.
Why add AI to a secure messenger at all? WhatsApp head Will Cathcart explained to WIRED that users want these tools, stating, “We think building a private way to do that is important.”
Ultimately, WhatsApp aims to offer modern AI conveniences while upholding its core privacy promises, though it acknowledges the inherent challenges and risks involved.