Response Structure

This page provides a detailed look at the response structure to expect from the streaming API.

The schema below shows the structure of responses from the Bodhi API, and the field descriptions that follow explain the meaning of each field, helping you better understand the data returned from the API.

{
  "call_id": "<uuid>",
  "segment_id": <int>,
  "eos": <boolean>,
  "type": "<string>",
  "text": "<string>",
  "segment_meta": {
    "tokens": [],
    "timestamps": [],
    "start_time": <float>,
    "confidence": <float>,
    "words": [
      {
        "word": "<string>",
        "confidence": <float>
      }
    ]
  }
}
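
If it helps to see the schema in code, the sketch below maps one raw WebSocket message onto plain Python dataclasses. The field names mirror the schema above; the parse_response helper and the class names are illustrative only, not part of any Bodhi SDK.

import json
from dataclasses import dataclass
from typing import List


@dataclass
class Word:
    word: str
    confidence: float


@dataclass
class SegmentMeta:
    tokens: List[str]
    timestamps: List[float]
    start_time: float
    confidence: float
    words: List[Word]


@dataclass
class BodhiResponse:
    call_id: str
    segment_id: int
    eos: bool
    type: str  # "partial" or "complete"
    text: str
    segment_meta: SegmentMeta


def parse_response(raw: str) -> BodhiResponse:
    # One WebSocket message is one JSON object shaped like the schema above.
    data = json.loads(raw)
    meta = data.get("segment_meta", {})
    return BodhiResponse(
        call_id=data["call_id"],
        segment_id=data["segment_id"],
        eos=data["eos"],
        type=data["type"],
        text=data["text"],
        segment_meta=SegmentMeta(
            tokens=meta.get("tokens", []),
            timestamps=meta.get("timestamps", []),
            start_time=meta.get("start_time", 0.0),
            confidence=meta.get("confidence", 0.0),
            words=[Word(**w) for w in meta.get("words", [])],
        ),
    )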

Field Descriptions

call_id (string)

Unique identifier associated with every streaming connection

segment_id (int)

Integer associated with every speech segment during the entire active socket connection

eos (bool)

Marks the end of the streaming connection when "eos" is true.

type (string)

Possible values: "partial" | "complete"

  • partial: Partial transcript returned for every streaming audio chunk.

  • complete: Complete/final transcript generated for each speech segment.

    • Generated once per segment_id, i.e., when the end of the speech segment is reached.

See the sketch after the field descriptions for one way a client can handle the two response types.

text (string)

The transcript that has been processed thus far.

segment_meta (object)

  • tokens: Array of strings representing individual text pieces (or "tokens") recognized from the audio. Tokens may include words or parts of words.

  • timestamps: Array of numerical values indicating when each token was detected in the segment/sentence (in seconds). Each timestamp aligns with the tokens array, so the i-th timestamp represents the time at which the i-th token was spoken. Useful for measuring latency (see the sketch after the field descriptions).

  • start_time: Starting point (in seconds) of the current segment in the overall audio timeline.

  • confidence: Segment-level confidence; a float between 0.0 and 1.0.

  • words: Array of word-level objects (only populated when type is "complete").

    • word: The recognized word.

    • confidence: Float value between 0.0 and 1.0 representing the model’s confidence in the recognized word.
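
As a usage sketch, the snippet below routes a parsed response (the BodhiResponse from the parsing sketch above) by its type, prints a rough latency estimate from the token timestamps, and flags low-confidence words. The confidence threshold, the helper name, and the assumptions that audio is streamed in real time and that timestamps are relative to the segment's start_time are illustrative choices, not part of the API.

import time

LOW_CONFIDENCE = 0.5  # illustrative threshold, not a value defined by the API


def handle_response(resp, stream_started_at):
    # resp is a BodhiResponse from parse_response; stream_started_at is the
    # time.monotonic() value captured when audio streaming began.
    if resp.eos:
        # "eos": true marks the end of the streaming connection.
        print("\nend of stream")
        return

    if resp.type == "partial":
        # Interim hypothesis for the current chunk; safe to overwrite in a UI.
        print(f"\r[partial seg {resp.segment_id}] {resp.text}", end="", flush=True)
        return

    # type == "complete": final transcript for this segment_id.
    meta = resp.segment_meta
    print(f"\n[final seg {resp.segment_id}] {resp.text} "
          f"(segment confidence {meta.confidence:.2f})")

    # Rough latency estimate for real-time streaming, assuming token
    # timestamps are relative to the segment's start_time: compare where the
    # last token sits in the audio timeline with the wall-clock time elapsed.
    if meta.timestamps:
        last_token_spoken_at = meta.start_time + meta.timestamps[-1]
        elapsed = time.monotonic() - stream_started_at
        print(f"  approx. latency: {elapsed - last_token_spoken_at:.2f}s")

    # Word-level confidences are only populated on "complete" responses.
    shaky = [w.word for w in meta.words if w.confidence < LOW_CONFIDENCE]
    if shaky:
        print(f"  low-confidence words: {shaky}")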
