
Getting Started

  1. Install the package

    npm i @capgo/capacitor-speech-recognition
  2. Sync with the native projects

    npx cap sync
  3. Configure platform permissions (see below)

On iOS, add the following keys to your app's Info.plist file:

<key>NSSpeechRecognitionUsageDescription</key>
<string>We need access to speech recognition to transcribe your voice</string>
<key>NSMicrophoneUsageDescription</key>
<string>We need access to your microphone to record audio for transcription</string>

On Android, the plugin automatically adds the required RECORD_AUDIO permission to AndroidManifest.xml. No additional configuration is needed.
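For reference, the entry the plugin merges into the final manifest is the standard Android permission declaration (shown only for illustration; you do not need to add it yourself):

<uses-permission android:name="android.permission.RECORD_AUDIO" />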

Before using speech recognition, check that it is available on the device:

import { SpeechRecognition } from '@capgo/capacitor-speech-recognition';

const checkAvailability = async () => {
  const { available } = await SpeechRecognition.available();
  if (!available) {
    console.warn('Speech recognition is not supported on this device');
    return false;
  }
  return true;
};

Request the necessary permissions before starting recognition:

const requestPermissions = async () => {
  const { speechRecognition } = await SpeechRecognition.requestPermissions();
  if (speechRecognition === 'granted') {
    console.log('Permission granted');
    return true;
  } else {
    console.log('Permission denied');
    return false;
  }
};

Start listening for speech, with optional configuration:

// Basic usage
await SpeechRecognition.start({
  language: 'en-US',
  maxResults: 3,
  partialResults: true,
});

// With all options
await SpeechRecognition.start({
  language: 'en-US',
  maxResults: 5,
  prompt: 'Speak now...', // Android only
  popup: false, // Android only
  partialResults: true,
  addPunctuation: true, // iOS 16+ only
  allowForSilence: 2000, // Android only, milliseconds
});

Subscribe to partial results while recognition is active:

const partialListener = await SpeechRecognition.addListener(
  'partialResults',
  (event) => {
    const transcription = event.matches?.[0];
    console.log('Partial result:', transcription);
  }
);

// Don't forget to remove the listener when done
await partialListener.remove();

Stop listening and clean up resources:

await SpeechRecognition.stop();

Here is a complete example showing how to use the plugin:

import type { PluginListenerHandle } from '@capacitor/core';
import { SpeechRecognition } from '@capgo/capacitor-speech-recognition';

export class VoiceRecognitionService {
  private partialListener: PluginListenerHandle | null = null;
  private isListening = false;

  async initialize(): Promise<boolean> {
    // Check availability
    const { available } = await SpeechRecognition.available();
    if (!available) {
      throw new Error('Speech recognition not available');
    }
    // Request permissions
    const { speechRecognition } = await SpeechRecognition.requestPermissions();
    if (speechRecognition !== 'granted') {
      throw new Error('Permission denied');
    }
    return true;
  }

  async startListening(
    onPartialResult: (text: string) => void,
    onFinalResult: (text: string) => void
  ): Promise<void> {
    if (this.isListening) {
      console.warn('Already listening');
      return;
    }
    try {
      // Set up partial results listener
      this.partialListener = await SpeechRecognition.addListener(
        'partialResults',
        (event) => {
          const text = event.matches?.[0] || '';
          onPartialResult(text);
        }
      );
      // Start recognition
      const result = await SpeechRecognition.start({
        language: 'en-US',
        maxResults: 3,
        partialResults: true,
        addPunctuation: true,
      });
      this.isListening = true;
      // Deliver the final result when start() resolves with matches
      if (result.matches && result.matches.length > 0) {
        onFinalResult(result.matches[0]);
      }
    } catch (error) {
      console.error('Error starting speech recognition:', error);
      throw error;
    }
  }

  async stopListening(): Promise<void> {
    if (!this.isListening) {
      return;
    }
    try {
      await SpeechRecognition.stop();
      // Clean up listener
      if (this.partialListener) {
        await this.partialListener.remove();
        this.partialListener = null;
      }
      this.isListening = false;
    } catch (error) {
      console.error('Error stopping speech recognition:', error);
      throw error;
    }
  }

  async getSupportedLanguages(): Promise<string[]> {
    const { languages } = await SpeechRecognition.getSupportedLanguages();
    return languages;
  }

  async checkListeningState(): Promise<boolean> {
    const { listening } = await SpeechRecognition.isListening();
    return listening;
  }
}
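For illustration, a minimal sketch of how the service above might be wired up in application code:

const voice = new VoiceRecognitionService();

await voice.initialize();
await voice.startListening(
  (partial) => console.log('Partial:', partial), // streamed while the user speaks
  (final) => console.log('Final:', final) // delivered when recognition completes
);

// ... later, e.g. when the user taps a stop button
await voice.stopListening();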

Checks whether the native speech recognition service is available on the current device.

const result = await SpeechRecognition.available();
// Returns: { available: boolean }

Starts audio capture and speech transcription.

interface SpeechRecognitionStartOptions {
  language?: string; // Locale identifier (e.g., 'en-US')
  maxResults?: number; // Maximum number of results (default: 5)
  prompt?: string; // Android only: Dialog prompt
  popup?: boolean; // Android only: Show system dialog
  partialResults?: boolean; // Stream partial results
  addPunctuation?: boolean; // iOS 16+ only: Add punctuation
  allowForSilence?: number; // Android only: Silence timeout in ms
}
const result = await SpeechRecognition.start(options);
// Returns: { matches?: string[] }

Stops listening and releases native resources.

await SpeechRecognition.stop();

Gets the locales supported by the underlying recognizer.

Note: Android 13+ devices no longer expose this list; in that case languages will be empty.

const result = await SpeechRecognition.getSupportedLanguages();
// Returns: { languages: string[] }
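Since the list can be empty on Android 13+, a defensive pattern is to fall back to the locales your app ships with (the fallback list below is an arbitrary example):

const getLanguages = async (): Promise<string[]> => {
  const { languages } = await SpeechRecognition.getSupportedLanguages();
  // Android 13+ returns an empty list; fall back to locales your app supports
  return languages.length > 0 ? languages : ['en-US', 'ko-KR'];
};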

Returns whether the plugin is actively listening for speech.

const result = await SpeechRecognition.isListening();
// Returns: { listening: boolean }

Gets the current permission state.

const result = await SpeechRecognition.checkPermissions();
// Returns: { speechRecognition: 'prompt' | 'prompt-with-rationale' | 'granted' | 'denied' }

Requests the microphone and speech recognition permissions.

const result = await SpeechRecognition.requestPermissions();
// Returns: { speechRecognition: 'prompt' | 'prompt-with-rationale' | 'granted' | 'denied' }

Listen for partial transcription updates while partialResults is enabled.

const listener = await SpeechRecognition.addListener(
  'partialResults',
  (event: { matches: string[] }) => {
    console.log('Partial:', event.matches?.[0]);
  }
);

Listen for segmented recognition results.

const listener = await SpeechRecognition.addListener(
  'segmentResults',
  (event: { matches: string[] }) => {
    console.log('Segment:', event.matches?.[0]);
  }
);
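These segment events appear to be tied to Android's segmented sessions (see the Android notes below). As an assumption not confirmed by these docs, a session configured with allowForSilence would emit them roughly like this:

// Assumption: segment events come from Android segmented sessions,
// whose boundaries are configured via allowForSilence
await SpeechRecognition.addListener('segmentResults', (event) => {
  console.log('Segment:', event.matches?.[0]);
});

await SpeechRecognition.start({
  language: 'en-US',
  partialResults: true,
  allowForSilence: 2000, // Android only: close a segment after ~2s of silence
});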

Listen for segmented session completion events.

const listener = await SpeechRecognition.addListener(
  'endOfSegmentedSession',
  () => {
    console.log('Segmented session ended');
  }
);

Listen for changes to the native listening state.

const listener = await SpeechRecognition.addListener(
  'listeningState',
  (event: { status: 'started' | 'stopped' }) => {
    console.log('Listening state:', event.status);
  }
);

Removes all registered listeners.

await SpeechRecognition.removeAllListeners();
  1. Always check availability and permissions

    const { available } = await SpeechRecognition.available();
    if (!available) return;
    const { speechRecognition } = await SpeechRecognition.requestPermissions();
    if (speechRecognition !== 'granted') return;
  2. Clean up listeners. Always remove listeners when they're no longer needed to prevent memory leaks:

    await listener.remove();
    // or
    await SpeechRecognition.removeAllListeners();
  3. Handle errors gracefully

    try {
      await SpeechRecognition.start({ language: 'en-US' });
    } catch (error) {
      console.error('Speech recognition failed:', error);
      // Show user-friendly error message
    }
  4. Provide visual feedback. Use the listeningState event to show users when the app is actively listening (see the sketch after this list).

  5. Test with different accents and languages. Speech recognition accuracy varies by language and accent; test thoroughly with your target audience.
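As referenced in item 4, a minimal sketch of driving a UI indicator from the listeningState event (setMicIndicator is a hypothetical UI helper):

declare function setMicIndicator(active: boolean): void; // hypothetical UI helper

await SpeechRecognition.addListener('listeningState', (event) => {
  // Toggle a microphone indicator whenever the native state changes
  setMicIndicator(event.status === 'started');
});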

iOS
  • Requires iOS 10.0+
  • Uses the native SFSpeechRecognizer
  • Supports punctuation on iOS 16+
  • Requires both microphone and speech recognition permissions
  • Recognition may fail if the device language doesn't match the requested language

Android
  • Requires Android 6.0 (API 23)+
  • Uses the SpeechRecognizer API
  • Supports segmented sessions with configurable silence detection
  • Android 13+ doesn't expose the list of supported languages
  • Some devices may show the system recognition UI

Web
  • Limited support via the Web Speech API
  • Not all browsers support speech recognition
  • Requires an HTTPS connection
  • May behave differently across browsers

If permissions are denied, guide users to app settings:

const { speechRecognition } = await SpeechRecognition.checkPermissions();
if (speechRecognition === 'denied') {
  // Show instructions to enable permissions in Settings
}
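One way to take users there directly is a settings plugin. The sketch below assumes the third-party capacitor-native-settings package, which is not part of this plugin:

// Assumes: npm i capacitor-native-settings (third-party plugin)
import { NativeSettings, AndroidSettings, IOSSettings } from 'capacitor-native-settings';

const openAppSettings = async () => {
  await NativeSettings.open({
    optionAndroid: AndroidSettings.ApplicationDetails, // app info screen
    optionIOS: IOSSettings.App, // this app's page in the Settings app
  });
};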
If recognition returns no results:

  • Check that the microphone is working
  • Ensure a quiet environment
  • Verify the language code matches device capabilities
  • Check the network connection (some platforms require it)

If listening stops unexpectedly or the state gets out of sync:

  • Use isListening() to check state
  • Listen to listeningState events
  • Implement auto-restart logic if needed (see the sketch below)
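A minimal auto-restart sketch built on the listeningState event (the shouldKeepListening flag is a hypothetical app-level setting, not part of the plugin):

let shouldKeepListening = true; // hypothetical app-level flag

await SpeechRecognition.addListener('listeningState', async (event) => {
  if (event.status === 'stopped' && shouldKeepListening) {
    // Restart recognition when the native side stops unexpectedly
    await SpeechRecognition.start({ language: 'en-US', partialResults: true });
  }
});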