Capacitor LLM Plugin
Capacitor SDK to run LLM models locally on iOS and Android, with native Apple Intelligence support.
Features
- 🤖 Run LLM models directly on device for privacy and offline capabilities
- 🍎 Native Apple Intelligence integration on iOS 18.0+
- 🤖 MediaPipe tasks-genai support on Android
- 📦 Support for multiple model formats (`.gguf`, `.task`, `.litertlm`)
- ⚡ Hardware acceleration for fast inference
- 💾 Model download and caching support
- 🔄 Real-time streaming responses
Installation
```sh
npm install @capgo/capacitor-llm
npx cap sync
```

```sh
yarn add @capgo/capacitor-llm
npx cap sync
```

```sh
pnpm add @capgo/capacitor-llm
npx cap sync
```

```sh
bun add @capgo/capacitor-llm
npx cap sync
```
Platform Configuration
iOS Configuration
- iOS 18.0+: Uses Apple Intelligence by default (no model needed)
- iOS < 18.0: Requires MediaPipe custom models (experimental)
Android Configuration
Place model files in your Android assets folder:
- Path: `android/app/src/main/assets/`
- Supported formats: `.task`, `.litertlm`
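As a sketch of that setup step (the model filename below is only an example; use whichever model file you actually downloaded):

```shell
# Create the assets folder if it does not exist, then copy the model in.
# "gemma-3-270m-it-int8.task" is an example filename, not a required name.
mkdir -p android/app/src/main/assets
cp gemma-3-270m-it-int8.task android/app/src/main/assets/
```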
Recommended Models
Gemma-3 Models (Best Performance)
- 270M - Smallest, most efficient for mobile
- 1B - Larger text generation model
- 2B - Cross-platform experimental
Usage
```typescript
import { CapacitorLLM } from '@capgo/capacitor-llm';
import { Capacitor } from '@capacitor/core';

// Check if the LLM is ready
const { readiness } = await CapacitorLLM.getReadiness();
console.log('LLM readiness:', readiness);

// Set the model path based on platform
await CapacitorLLM.setModelPath({
  path: Capacitor.getPlatform() === 'ios'
    ? 'gemma-3-270m.gguf' // iOS model
    : '/android_asset/gemma-3-270m-it-int8.task' // Android model
});

// Create a chat session
const { id: chatId } = await CapacitorLLM.createChat();

// Listen for AI responses (register before sending so no chunks are missed)
CapacitorLLM.addListener('onAiText', (event) => {
  console.log('AI response:', event.text);
});

// Listen for completion
CapacitorLLM.addListener('onAiCompletion', () => {
  console.log('AI completed response');
});

// Send a message
await CapacitorLLM.sendMessage({
  chatId,
  message: 'Hello! How are you today?'
});
```
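The two listeners above can be combined into a promise that resolves with the full response once `onAiCompletion` fires. This is a minimal sketch, not part of the plugin API: the `LlmLike` interface and `collectResponse` helper are our own names, modeled on the events the plugin documents.

```typescript
// Sketch: accumulate streamed text chunks and resolve on completion.
type TextListener = (event: { text?: string }) => void;

// Minimal surface we need from the plugin (names modeled on CapacitorLLM).
interface LlmLike {
  addListener(eventName: 'onAiText' | 'onAiCompletion', fn: TextListener): void;
}

function collectResponse(llm: LlmLike): Promise<string> {
  return new Promise((resolve) => {
    let buffer = '';
    llm.addListener('onAiText', (event) => {
      buffer += event.text ?? ''; // append each streamed chunk
    });
    llm.addListener('onAiCompletion', () => resolve(buffer));
  });
}
```

Call `collectResponse` (or register the listeners) before `sendMessage`, so no early chunks are missed.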
Advanced Features
Download Models
```typescript
// Download a model from a URL
await CapacitorLLM.downloadModel({
  url: 'https://example.com/model.task',
  filename: 'model.task'
});
```
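The `filename` to cache under can be derived from the download URL itself. A small sketch (the helper name is ours, not part of the plugin):

```typescript
// Sketch: derive the cache filename from the last path segment of a URL.
function filenameFromUrl(url: string): string {
  const pathname = new URL(url).pathname;
  return pathname.substring(pathname.lastIndexOf('/') + 1);
}
```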
Model Management
```typescript
// Set a specific model
await CapacitorLLM.setModel({
  model: 'gemma-3-1b'
});

// Check readiness
const { readiness } = await CapacitorLLM.getReadiness();
if (readiness === 'ready') {
  // Model is loaded and ready
}
```
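Downloading or loading a model takes time, so it can help to poll `getReadiness()` until it reports `'ready'`. A hedged sketch — the helper and its injected getter are our own; in an app you would pass `() => CapacitorLLM.getReadiness()`:

```typescript
// Sketch: poll a readiness getter until 'ready' or a timeout elapses.
async function waitUntilReady(
  getReadiness: () => Promise<{ readiness: string }>,
  timeoutMs = 10_000,
  intervalMs = 250,
): Promise<boolean> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    const { readiness } = await getReadiness();
    if (readiness === 'ready') return true;
    if (Date.now() + intervalMs > deadline) return false; // would overshoot the timeout
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```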
API Methods
- `createChat()` - Create a new chat session
- `sendMessage()` - Send a message to the LLM
- `getReadiness()` - Check if the LLM is ready
- `setModel()` - Set the active model
- `setModelPath()` - Set the path to the model file
- `downloadModel()` - Download a model from a URL
Events
- `onAiText` - Fired when the AI generates text
- `onAiCompletion` - Fired when the AI completes a response
Platform Support
| Platform | Supported | Requirements |
| --- | --- | --- |
| iOS | ✅ | iOS 13.0+ (18.0+ for Apple Intelligence) |
| Android | ✅ | API 24+ |
| Web | ❌ | Not supported |
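Since the web platform is unsupported, it is worth guarding plugin calls by platform. A minimal sketch (the helper is ours; in an app you would pass `Capacitor.getPlatform()`):

```typescript
// Sketch: only iOS and Android can run the on-device LLM.
function isLlmSupported(platform: string): boolean {
  return platform === 'ios' || platform === 'android';
}
```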
Contributing
We welcome contributions! Please see our Contributing Guide for more details.
License
This plugin is licensed under the MIT License. See LICENSE for more information.
Support
If you encounter any issues or have questions, please file an issue on our GitHub repository.