I was investigating libcaption as a way of inserting CEA-708 captions into live streams, but it only creates EIA-608 SEI messages.
I was wondering if GStreamer has a way to take video packets with caption data and return them with 708 captions — that is, use it as a binding to process captured video frames. The easiest thing to test would be inserting an SRT file while publishing a video file to RTMP.
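For the SRT-to-RTMP test case, something like the following pipeline might work. This is only a sketch, not a verified command: it assumes the `tttocea708` and `cccombiner` elements from gst-plugins-rs are installed, that `subparse` output negotiates with `tttocea708`, and that the encoder writes the attached caption meta out as A/53 SEI messages.

```
gst-launch-1.0 \
  filesrc location=input.mp4 ! decodebin ! videoconvert ! cc.sink \
  filesrc location=subs.srt ! subparse ! tttocea708 ! cc.caption \
  cccombiner name=cc ! x264enc ! h264parse ! flvmux ! \
  rtmpsink location=rtmp://example.invalid/live
```

Here `cccombiner` attaches the caption buffers to the video buffers as `GstVideoCaptionMeta`, which the H.264 encoder can then serialize into the bitstream; the file names and RTMP URL are placeholders.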
Something in those functions might help me — at least inserting SRT to test that 708 packets work. The purpose is to test what can support Japanese Unicode charsets. Most of those functions are for reading and converting, but I think I need encoding. I'd need to call the function directly via bindings somehow, sending it a video packet and getting back a packet with captions inserted.
Japanese characters probably require encoding the kana into the undefined P16 code space using some externally defined code space. This scenario is not really supported as-is by things like tttocea708, which currently only supports the character space as defined in the CTA-708 specification (which does not include any Asian languages).
Support for externally provided code spaces can be implemented in tttocea708 if needed.
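To make the P16 idea concrete, here is a minimal sketch of what such an externally defined code space could look like. The C0 control code 0x18 (P16) in CTA-708 is followed by two bytes forming one 16-bit character, but the spec leaves that 16-bit code space undefined, so encoder and decoder must agree on a mapping out of band — UTF-16BE is assumed here, and the helper names are hypothetical, not part of any library.

```python
# Hypothetical sketch: packing Japanese text into CTA-708 P16 commands
# and wrapping the result in a service block. Assumes sender and
# receiver have agreed that P16 payloads are UTF-16BE code units.

P16 = 0x18  # C0 code: the next two bytes form one 16-bit character

def p16_encode(text: str) -> bytes:
    """Encode each character as a 3-byte P16 command (0x18 + 16-bit code)."""
    out = bytearray()
    for ch in text:
        unit = ord(ch)
        if unit > 0xFFFF:
            raise ValueError("characters outside the BMP would need surrogates")
        out += bytes([P16, unit >> 8, unit & 0xFF])
    return bytes(out)

def service_block(service_number: int, data: bytes) -> bytes:
    """Wrap caption data in a service block header: number (3 bits) + size (5 bits)."""
    if not (1 <= service_number <= 6) or len(data) > 31:
        raise ValueError("standard service 1-6, block size at most 31 bytes")
    return bytes([(service_number << 5) | len(data)]) + data

payload = p16_encode("こん")        # two kana -> six bytes of P16 commands
block = service_block(1, payload)   # service block carrying the commands
```

A real decoder would treat these bytes as reserved and ignore or garble them unless it implements the same agreed-upon mapping, which is why this needs support on both ends.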
I’m still trying to figure out how to pull this off, and it seems support for such charsets is very difficult. The goal is to capture a video frame from what is being ingested and insert live captions as 708 back into the frame. The input won’t be specifically timed text, but text data on demand. CEA-708 supports such charsets, but I’ve failed to find sample content, or a tool to author some myself, as a reference.