Advanced Features
Refer to the features below to improve accuracy, increase resilience to background noise, and more.
🔍 Context Biasing (Hotwords)
You can boost recognition of important or uncommon phrases by specifying hotwords during the WebSocket request.
Using Hotwords
Define your hotwords as a JSON array. You can specify a higher "boosting score" to give extra emphasis to longer phrases (recommended!). The default score is currently 1.5, which should be sufficient for single words.
hindi_hotwords = [
    { "phrase": "बोधी" },
    { "phrase": "स्पीच रिकग्निशन", "score": 2.5 }
]
Once you have defined your JSON array of hotwords, pass it to the model as part of the configuration message:
await ws.send(
    json.dumps(
        {
            "config": {
                "sample_rate": 8000,
                "transaction_id": str(uuid.uuid4()),
                "model": "hi-banking-v2-8khz",
                "hotwords": hindi_hotwords
            }
        }
    )
)
Best Practices
✅ Use uncommon words
Target domain-specific or rare phrases like "बोधी स्पीच रिकग्निशन"
✅ Use local script
Always write in Devanagari (e.g. बोधी, not bodhi)
✅ Avoid punctuation
Remove quotes, commas, periods
✅ Use higher scores for longer phrases
e.g. "बोधी स्पीच रिकग्निशन" -> 2.5 vs "बोधी" -> 1.5
Avoid copying hotwords from other providers without validation. Bodhi may already support commonly spoken Hindi words natively.
Warnings
Avoid very short particles like "का", "की", "ए", etc.
Don't boost every word in a sentence; only boost uncommon or error-prone segments.
Multi-word phrases work better for commonly missed expressions; individual tokens work better for rare words.
Avoid boosting words that are already recognized correctly.
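Putting the best practices and warnings above together, here is a minimal sketch of preparing a hotword list before sending it. The build_hotwords helper and its scoring heuristic are illustrative, not part of the Bodhi API:
import re

def build_hotwords(phrases):
    # Illustrative helper: cleans candidate phrases and assigns
    # boosting scores following the practices above.
    hotwords = []
    for phrase in phrases:
        # Remove quotes, commas, periods (and the Devanagari danda)
        cleaned = re.sub(r'["\',.।]', "", phrase).strip()
        # Skip very short particles such as "का", "की", "ए"
        if len(cleaned) <= 2:
            continue
        if " " in cleaned:
            # Longer phrases benefit from a higher boosting score
            hotwords.append({"phrase": cleaned, "score": 2.5})
        else:
            # Single words generally do fine with the default score (1.5)
            hotwords.append({"phrase": cleaned})
    return hotwords

hindi_hotwords = build_hotwords(["बोधी", "स्पीच रिकग्निशन", "का"])
# -> [{"phrase": "बोधी"}, {"phrase": "स्पीच रिकग्निशन", "score": 2.5}]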
🧠 Confidence Scoring
Bodhi returns both segment-level and word-level confidence for all finalized results. You can use these scores to decide whether to act on a response. For instance, in extremely noisy surroundings, confidence thresholds can be used to reduce unintended transcriptions.
Segment Confidence
{
    "text": "मुझे जानकारी चाहिए",
    "is_final": true,
    "confidence": 0.79
}
Word-level Confidence
{
    "text": "मुझे जानकारी चाहिए",
    "is_final": true,
    "confidence": 0.79,
    "words": [
        { "word": "मुझे", "confidence": 0.82 },
        { "word": "जानकारी", "confidence": 0.91 },
        { "word": "चाहिए", "confidence": 0.72 }
    ]
}
Recommended Usage
Filter out weak segments
Depending on the language, a suitable confidence threshold typically falls somewhere in the range of 0.65 - 0.75 (see the sketch below).
Tune per language and per use case
Call-center and dictation use cases need different thresholds, and the ideal value can also vary by language.
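A minimal sketch of this kind of filtering is shown below; the threshold value and the accept_segment helper are illustrative, not part of the Bodhi client:
import json

CONFIDENCE_THRESHOLD = 0.70  # tune per language and use case (roughly 0.65 - 0.75)

def accept_segment(raw_message):
    # Keep only confident, finalized segments.
    result = json.loads(raw_message)
    if not result.get("is_final"):
        return None  # partial hypothesis, skip
    if result.get("confidence", 0.0) < CONFIDENCE_THRESHOLD:
        return None  # weak segment, e.g. from very noisy audio
    return result["text"]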
🧹 Partial Result Exclusion
To avoid reacting to incomplete guesses, use the exclude_partial
flag in the configuration. The client will only receive complete transcripts.
{
    "config": {
        "sample_rate": sample_rate,
        "transaction_id": str(uuid.uuid4()),
        "model": "hi-banking-v2-8khz",
        "exclude_partial": True
    }
}
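As a rough sketch (assuming ws is the same open WebSocket connection used in the hotwords example), the receive loop can then treat every message as final:
import json

# With exclude_partial enabled, every transcript message the client
# receives is already a finalized result, so no is_final check is needed.
async for message in ws:
    result = json.loads(message)
    print(result["text"])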
🔢 Parse Numbers into Numerals
Bodhi supports converting spoken number words into actual digits using the parse_number
flag in the WebSocket config.
This is useful when transcribing sentences that include monetary values, phone numbers, addresses, or quantities — especially for use cases like banking, insurance, and logistics.
✅ How to Enable
In your config:
{
    "config": {
        "sample_rate": sample_rate,
        "transaction_id": str(uuid.uuid4()),
        "model": "hi-banking-v2-8khz",
        "parse_number": True
    }
}
🧾 Example
Without parse_number
"घर बनाने के लिए मुझे पच्चीस लाख का लोन चाहिए"
With parse_number: True
"घर बनाने के लिए मुझे 2500000 का लोन चाहिए"
🌐 Language Support
This feature is currently available for:
Hindi (hi)
Malayalam (ml)
Kannada (kn)
Gujarati (gu)
Marathi (mr)
Want support for another language? Reach out to support@navanatech.in
📦 Aux Metadata
Set aux: true
in your config to receive server-side diagnostic metadata along with your transcript response.
This is useful for logging, benchmarking, or correlating timestamps across systems.
✅ How to Enable
{
"aux": true
}
📘 What You Get
When enabled, each final transcript message will include an aux_info
block:
"aux_info": {
"request_time": 0.172,
"received_request_time": "2025-05-19T09:32:11.459Z"
}
request_time
Total time in seconds that the server spent handling this request (excluding network transfer delays).
received_request_time
The timestamp (UTC) when the server received the initial WebSocket connection or request.
This can help you:
Profile server-side performance
Track session start times
Debug slow or idle sessions
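As an illustrative sketch, you might log these fields from each final message for benchmarking; the helper name is hypothetical and assumes the response shape shown above:
import json
import logging

logger = logging.getLogger("bodhi.aux")

def log_aux_metadata(raw_message):
    # Record server-side timing for benchmarking and debugging.
    result = json.loads(raw_message)
    aux = result.get("aux_info")
    if aux:
        logger.info(
            "request received at %s, handled in %.3f s",
            aux["received_request_time"],
            aux["request_time"],
        )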