Getting Started
Installation
```bash
npm install @capgo/capacitor-llm
npx cap sync
```

```bash
yarn add @capgo/capacitor-llm
npx cap sync
```

```bash
pnpm add @capgo/capacitor-llm
npx cap sync
```

```bash
bun add @capgo/capacitor-llm
npx cap sync
```

Platform Configuration
iOS Configuration
- iOS 18.0+: Uses Apple Intelligence by default (no model needed)
- iOS < 18.0: Requires MediaPipe custom models (experimental)
Place model files in your iOS app bundle for older iOS versions.
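Since the cutoff is the iOS major version, the decision can be made from the OS version string (available, for example, as `osVersion` from the `@capacitor/device` plugin). A minimal sketch; `needsCustomModel` is a hypothetical helper, not part of this plugin:

```typescript
// Hypothetical helper: Apple Intelligence requires iOS 18.0+, so any
// earlier major version needs a bundled MediaPipe model instead.
function needsCustomModel(osVersion: string): boolean {
  const major = parseInt(osVersion.split('.')[0], 10);
  // Treat unparseable versions conservatively as "needs custom model".
  return Number.isNaN(major) || major < 18;
}
```

For example, `needsCustomModel('17.5')` is `true`, while `needsCustomModel('18.1')` is `false`.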
Android Configuration
Place model files in your Android assets folder:
```
android/app/src/main/assets/
```

Supported formats: `.task`, `.litertlm`
Recommended Models
Gemma-3 Models (Best Performance)
- 270M - Smallest, most efficient for mobile
- 1B - Larger text generation model
- 2B - Cross-platform experimental
Download models from Hugging Face or other model repositories.
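The Usage section below picks the model path inline per platform; factoring that choice into a helper keeps it easy to unit-test. The filenames are the examples used throughout this guide, not canonical model names, and `modelPathFor` is a hypothetical helper:

```typescript
// Hypothetical helper mirroring the platform check used in this guide:
// a bundled .gguf file on iOS, an asset-folder .task file on Android.
function modelPathFor(platform: string): string {
  return platform === 'ios'
    ? 'gemma-3-270m.gguf'
    : '/android_asset/gemma-3-270m-it-int8.task';
}
```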
Usage
Import the plugin and initialize:
```typescript
import { CapacitorLLM } from '@capgo/capacitor-llm';
import { Capacitor } from '@capacitor/core';

// Check if the LLM is ready
const { readiness } = await CapacitorLLM.getReadiness();
console.log('LLM readiness:', readiness);

// Set the model path based on platform
await CapacitorLLM.setModelPath({
  path: Capacitor.getPlatform() === 'ios'
    ? 'gemma-3-270m.gguf' // iOS model
    : '/android_asset/gemma-3-270m-it-int8.task' // Android model
});

// Create a chat session
const { id: chatId } = await CapacitorLLM.createChat();

// Send a message
await CapacitorLLM.sendMessage({
  chatId,
  message: 'Hello! How are you today?'
});

// Listen for AI responses
CapacitorLLM.addListener('onAiText', (event) => {
  console.log('AI response:', event.text);
});

// Listen for completion
CapacitorLLM.addListener('onAiCompletion', (event) => {
  console.log('AI completed response');
});
```

Advanced Features
Download Models
```typescript
// Download a model from a URL
await CapacitorLLM.downloadModel({
  url: 'https://example.com/model.task',
  filename: 'model.task'
});
```

Model Management

```typescript
// Set a specific model
await CapacitorLLM.setModel({
  model: 'gemma-3-1b'
});

// Check readiness
const { readiness } = await CapacitorLLM.getReadiness();
if (readiness === 'ready') {
  // Model is loaded and ready
}
```

API Methods
createChat()
Create a new chat session.
```typescript
const { id: chatId } = await CapacitorLLM.createChat();
```

Returns: `Promise<{ id: string }>`
sendMessage(…)
Send a message to the LLM.
```typescript
await CapacitorLLM.sendMessage({
  chatId: 'chat-id',
  message: 'What is the weather like?'
});
```

| Param | Type | Description |
|---|---|---|
| chatId | string | Chat session ID |
| message | string | Message to send |
getReadiness()
Check if the LLM is ready to use.
```typescript
const { readiness } = await CapacitorLLM.getReadiness();
```

Returns: `Promise<{ readiness: string }>`

Possible values:
- `ready` - Model is loaded and ready
- `loading` - Model is being loaded
- `not_ready` - Model not yet loaded
- `error` - Error loading model
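One way to consume these values is a small mapping to UI status text. A sketch; the messages are illustrative and `readinessMessage` is not part of the plugin API:

```typescript
// Map each documented readiness value to a display string.
function readinessMessage(readiness: string): string {
  switch (readiness) {
    case 'ready': return 'Model loaded';
    case 'loading': return 'Loading model...';
    case 'not_ready': return 'Model not yet loaded';
    case 'error': return 'Error loading model';
    default: return `Unknown state: ${readiness}`;
  }
}
```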
setModel(…)
Set the active model.
```typescript
await CapacitorLLM.setModel({
  model: 'gemma-3-270m'
});
```

| Param | Type | Description |
|---|---|---|
| model | string | Model name |
setModelPath(…)
Set the path to the model file.

```typescript
await CapacitorLLM.setModelPath({
  path: '/android_asset/model.task'
});
```

| Param | Type | Description |
|---|---|---|
| path | string | Path to model file |
downloadModel(…)
Download a model from a URL.

```typescript
await CapacitorLLM.downloadModel({
  url: 'https://example.com/model.task',
  filename: 'model.task'
});
```

| Param | Type | Description |
|---|---|---|
| url | string | URL to download from |
| filename | string | Filename to save as |
Events
onAiText
Fired as the AI streams generated text.

```typescript
CapacitorLLM.addListener('onAiText', (event) => {
  console.log('AI text:', event.text);
  console.log('Chat ID:', event.chatId);
});
```

Event Data:
- `text` (string) - Generated text
- `chatId` (string) - Chat session ID
onAiCompletion
Fired when the AI completes a response.

```typescript
CapacitorLLM.addListener('onAiCompletion', (event) => {
  console.log('Completed for chat:', event.chatId);
});
```

Event Data:
- `chatId` (string) - Chat session ID
Complete Example
```typescript
import { CapacitorLLM } from '@capgo/capacitor-llm';
import { Capacitor } from '@capacitor/core';

class AIService {
  private chatId: string | null = null;
  private messageBuffer: string = '';

  async initialize() {
    // Set up the model path for the current platform
    const platform = Capacitor.getPlatform();
    const modelPath = platform === 'ios'
      ? 'gemma-3-270m.gguf'
      : '/android_asset/gemma-3-270m-it-int8.task';

    await CapacitorLLM.setModelPath({ path: modelPath });

    // Poll until the model is ready
    while (true) {
      const { readiness } = await CapacitorLLM.getReadiness();
      if (readiness === 'ready') break;
      if (readiness === 'error') {
        throw new Error('Failed to load model');
      }
      await new Promise(resolve => setTimeout(resolve, 500));
    }

    // Create a chat session
    const { id } = await CapacitorLLM.createChat();
    this.chatId = id;

    // Set up event listeners
    this.setupListeners();
  }

  private setupListeners() {
    CapacitorLLM.addListener('onAiText', (event) => {
      if (event.chatId === this.chatId) {
        this.messageBuffer += event.text;
        this.onTextReceived(event.text);
      }
    });

    CapacitorLLM.addListener('onAiCompletion', (event) => {
      if (event.chatId === this.chatId) {
        this.onMessageComplete(this.messageBuffer);
        this.messageBuffer = '';
      }
    });
  }

  async sendMessage(message: string) {
    if (!this.chatId) {
      throw new Error('Chat not initialized');
    }

    await CapacitorLLM.sendMessage({ chatId: this.chatId, message });
  }

  onTextReceived(text: string) {
    // Update UI with streaming text
    console.log('Received:', text);
  }

  onMessageComplete(fullMessage: string) {
    // Handle the complete message
    console.log('Complete message:', fullMessage);
  }
}

// Usage
const ai = new AIService();
await ai.initialize();
await ai.sendMessage('Tell me about AI');
```

Platform Support
| Platform | Supported | Requirements |
|---|---|---|
| iOS | ✅ | iOS 13.0+ (18.0+ for Apple Intelligence) |
| Android | ✅ | API 24+ |
| Web | ❌ | Not supported |
Best Practices
1. Model Selection: Choose models based on device capabilities
   - Use 270M for most mobile devices
   - Use 1B for high-end devices with more RAM
   - Test performance on target devices

2. Memory Management: Clear chat sessions when done

   ```typescript
   // Create a new chat for each new conversation
   const { id } = await CapacitorLLM.createChat();
   ```

3. Error Handling: Always check readiness before use

   ```typescript
   const { readiness } = await CapacitorLLM.getReadiness();
   if (readiness !== 'ready') {
     // Handle the not-ready state
   }
   ```

4. Streaming UI: Update the UI incrementally with streaming text
   - Show text as it arrives via `onAiText`
   - Mark completion with `onAiCompletion`

5. Model Download: Download models during app setup, not on first use

   ```typescript
   // During app initialization
   await CapacitorLLM.downloadModel({
     url: 'https://your-cdn.com/model.task',
     filename: 'model.task'
   });
   ```
Troubleshooting
Model not loading
- Verify model file is in correct location
- Check model format matches platform (.gguf for iOS, .task for Android)
- Ensure sufficient device storage
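A quick way to rule out a format mismatch is to check the filename against the platform before calling `setModelPath`. A sketch, assuming the formats listed above (`.gguf` on iOS; `.task`/`.litertlm` on Android); `formatMatchesPlatform` is a hypothetical helper:

```typescript
// Hypothetical pre-flight check: does the model file's extension
// match what this platform's runtime expects?
function formatMatchesPlatform(filename: string, platform: string): boolean {
  if (platform === 'ios') return filename.endsWith('.gguf');
  return filename.endsWith('.task') || filename.endsWith('.litertlm');
}
```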
Poor performance
- Try smaller model (270M instead of 1B)
- Close other apps to free memory
- Test on actual device, not simulator
No responses
- Check that the readiness status is `ready`
- Verify event listeners are set up before sending messages
- Check console for errors
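The listener-ordering point matters because events delivered before a listener is registered are simply dropped. A plain-TypeScript sketch of that failure mode (no plugin involved; `TinyEmitter` is a stand-in, not the plugin's event system):

```typescript
// Minimal event emitter illustrating why listeners must be registered
// before sendMessage: events fired earlier reach no one.
type Listener = (text: string) => void;

class TinyEmitter {
  private listeners: Listener[] = [];
  addListener(fn: Listener): void { this.listeners.push(fn); }
  emit(text: string): void { this.listeners.forEach(fn => fn(text)); }
}

const emitter = new TinyEmitter();
emitter.emit('lost');                 // no listener yet: dropped
const received: string[] = [];
emitter.addListener(t => received.push(t));
emitter.emit('kept');                 // delivered
```

So register `onAiText` and `onAiCompletion` listeners before the first `sendMessage` call.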