At Apple’s WWDC 2025 conference, the company detailed its shift toward integrating generative AI directly on devices. One of the key announcements was the introduction of the Foundation Models framework, a Swift-based developer interface that grants access to Apple’s own large language models (LLMs) for use in third-party apps. This marks a significant technical change in how AI can be implemented across Apple platforms—especially in privacy-sensitive contexts.
What Is Apple’s Foundation Models Framework?
The Foundation Models framework is a new set of APIs designed for use within Apple’s ecosystem (iOS, iPadOS, macOS, and watchOS). Developers can access Apple’s proprietary LLMs directly on-device, without sending data to the cloud. This design reduces dependency on internet connectivity and mitigates risks related to data privacy and third-party servers.
The framework is written in Swift and is designed to integrate easily into existing app architectures. Developers can call LLM-based features such as text summarization, question answering, context-aware automation, and image generation—all locally on supported Apple hardware.
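As a rough sketch of what this looks like in practice, the snippet below asks the on-device model to summarize a piece of text. It assumes the LanguageModelSession type and respond(to:) method shown in Apple's WWDC sessions; exact names, initializers, and OS availability should be checked against the shipping SDK.

```swift
import FoundationModels

// Minimal sketch: ask the on-device model to summarize user-provided text.
// LanguageModelSession and respond(to:) follow Apple's WWDC 2025 material;
// verify exact signatures (and required OS versions) in the current SDK.
func summarize(_ text: String) async throws -> String {
    let session = LanguageModelSession(
        instructions: "Summarize the user's text in two sentences or fewer."
    )
    let response = try await session.respond(to: "Summarize: \(text)")
    return response.content  // inference runs entirely on-device
}
```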
How Do Apple’s On-Device AI Models Work?
According to Apple, the models accessible through the framework are derived from a compact foundation model of roughly 3 billion parameters, first detailed in a 2024 technical report and optimized for on-device inference. It supports:
- Text generation and natural language understanding
- Summarization and content transformation
- Tool invocation within apps
- Image creation and manipulation
All inference happens on the user’s device, improving response time, avoiding API costs, and aligning with Apple’s privacy-first design.
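Because everything runs locally, availability depends on the device supporting Apple Intelligence, so apps are expected to check whether the model can run before requesting inference. The sketch below assumes the SystemLanguageModel availability API described in Apple's session material; the exact cases are illustrative.

```swift
import FoundationModels

// Sketch: gate an AI feature on whether the on-device model can run here.
// SystemLanguageModel.default.availability and its cases are taken from
// Apple's WWDC 2025 material; treat them as assumptions to verify.
func isOnDeviceModelReady() -> Bool {
    switch SystemLanguageModel.default.availability {
    case .available:
        return true
    case .unavailable(let reason):
        // e.g. Apple Intelligence disabled, unsupported hardware, model still downloading
        print("On-device model unavailable: \(reason)")
        return false
    @unknown default:
        return false
    }
}
```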
Use Cases for Developers
Apple highlighted several specific scenarios during the event to demonstrate potential use cases for developers:
- Education apps generating quizzes from student notes (see the sketch after this list).
- Outdoor apps offering trail suggestions via natural language queries.
- Writing tools that provide real-time rephrasing or summarization.
- Messaging and call apps with built-in translation features.
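To make the quiz scenario concrete, the sketch below uses guided generation to turn notes into a typed Swift value rather than free-form text. The @Generable and @Guide annotations and the respond(to:generating:) overload follow Apple's WWDC examples and should be treated as assumptions to verify against the SDK.

```swift
import FoundationModels

// Sketch of the quiz use case: guided generation returns a typed value
// instead of raw text. @Generable, @Guide, and respond(to:generating:)
// follow Apple's WWDC 2025 examples; confirm names in the shipping SDK.
@Generable
struct Quiz {
    @Guide(description: "Three short questions based on the notes")
    var questions: [String]
}

func makeQuiz(from notes: String) async throws -> Quiz {
    let session = LanguageModelSession()
    let response = try await session.respond(
        to: "Create a short quiz from these study notes: \(notes)",
        generating: Quiz.self
    )
    return response.content
}
```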
Integration into System Features
These same models are used internally to power features such as:
- Visual Intelligence for recognizing and describing images.
- Writing Tools for editing and refining text.
- Shortcuts that respond to natural language.
- Genmoji creation from descriptive prompts.
Apple Watch Application: Workout Buddy
In watchOS 26, the Workout Buddy feature uses local AI to analyze fitness history and offer tailored recommendations—all without needing a network connection.
Frequently Asked Questions (FAQ)
Q: What is Apple’s Foundation Models framework?
A: It’s a developer API introduced at WWDC 2025 that allows apps to access Apple’s on-device large language models for text generation, summarization, and more, without any cloud processing.
Q: Are Apple’s AI models cloud-based?
A: No. Apple’s models are designed for local execution, meaning they run entirely on-device. This preserves privacy and ensures the models function without an internet connection.
Q: How big is Apple’s on-device model?
A: The model is approximately 3 billion parameters, making it compact enough for mobile use but still capable of complex tasks like summarization and tool calling.
Q: Can third-party developers use the same AI as Apple’s apps?
A: Yes. Apple has made the same model used in system features like Genmoji and Shortcuts available to developers through the Foundation Models framework.
Q: What kinds of apps can use on-device AI?
A: Any app that benefits from language understanding or generation, such as education, health, productivity, or communication apps, can use the framework to add intelligent features.
Q: Does the use of this model require internet access?
A: No. The AI functions run offline, providing speed, cost savings, and user privacy.