RTSP: how to deal with messages like "decreasing timestamp" and "Timestamping error on input streams"

Hi.

I have the following pipeline:

 rtspsrc ! rtph264depay ! h264parse ! tee name=tp \
  tp. ! queue ! splitmuxsink \
  tp. ! queue ! avdec_h264 ! jpegenc ! appsink

It seems to work, but eventually I see the following messages in my log and then GStreamer hangs:

17:40:39.253 [160132:16] [WRN] [myapp] Gst.DebugMessage
2:20:04.547474531 160132 0x7f183c000b70 WARN            videodecoder gstvideodecoder.c:3133:gst_video_decoder_prepare_finish_frame:<avdec_h264-0> decreasing timestamp (0:00:00.800011111 < 2:19:53.392688889)
17:40:39.307 [160132:16] [WRN] [myapp] Gst.DebugMessage
2:20:04.601351726 160132 0x7f183c000b70 WARN            videodecoder gstvideodecoder.c:3133:gst_video_decoder_prepare_finish_frame:<avdec_h264-0> decreasing timestamp (0:00:00.839988889 < 2:19:53.392688889)
17:40:39.369 [160132:16] [WRN] [myapp] Gst.DebugMessage
2:20:04.663791558 160132 0x7f183c000b70 WARN            videodecoder gstvideodecoder.c:3133:gst_video_decoder_prepare_finish_frame:<avdec_h264-0> decreasing timestamp (0:00:00.920000000 < 2:19:53.392688889)
17:40:39.628 [160132:33] [WRN] [myapp] Gst.DebugMessage
2:20:04.922066135 160132 0x7f183c001030 WARN            splitmuxsink gstsplitmuxsink.c:2594:handle_gathered_gop:<splitmuxsink0> error: Timestamping error on input streams
17:40:39.628 [160132:33] [WRN] [myapp] Gst.DebugMessage
2:20:04.922213087 160132 0x7f183c001030 WARN            splitmuxsink gstsplitmuxsink.c:2594:handle_gathered_gop:<splitmuxsink0> error: Queued GOP time is negative -2:19:52.272722222

What do the messages "decreasing timestamp" and "Timestamping error on input streams" mean? I guess there are some problems with the stream source, but is it possible to handle them automatically, e.g. restart the stream or tune some parameters on the fly? This is quite critical, because after these messages GStreamer simply hangs and does nothing. At the very least I need to surface the error, so someone in charge can restart the camera or the service.

Thanks in advance.

It’s hard to know what’s happening here without a stream capture.

You could try the h264timestamper element to see if that fixes anything for you.

PS: I would also have separate h264parsers in each branch, though I don’t think that changes anything in relation to your issue.
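A sketch of how those two suggestions could look together, assuming the rest of the pipeline stays as posted (the RTSP URL and the splitmuxsink/appsink settings are placeholders, not tested against your setup):

```shell
# Hypothetical variant of the original pipeline: h264timestamper inserted
# after parsing to (re)generate PTS/DTS, and a separate h264parse per branch.
gst-launch-1.0 rtspsrc location=rtsp://camera/stream \
  ! rtph264depay ! h264parse ! h264timestamper ! tee name=tp \
  tp. ! queue ! h264parse ! splitmuxsink location=chunk%05d.mp4 \
  tp. ! queue ! h264parse ! avdec_h264 ! jpegenc ! appsink
```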


Thank you for the reply. But I don’t understand the possible nature of these messages; I can’t interpret them. For example:

17:40:39.253 [160132:16] [WRN] [myapp] Gst.DebugMessage
2:20:04.547474531 160132 0x7f183c000b70 WARN            videodecoder gstvideodecoder.c:3133:gst_video_decoder_prepare_finish_frame:<avdec_h264-0> decreasing timestamp (0:00:00.800011111 < 2:19:53.392688889)

What are 0:00:00.800011111 and 2:19:53.392688889, and where do these timestamps come from? What format is used for these numbers, something like dd:hh:mm.sssssss? What does it mean at all: that some element couldn’t set a proper timestamp on a buffer, i.e. it should have been > 2:19:53.392688889 but wasn’t?

PS: I would also have separate h264parsers in each branch, though I don’t think that changes anything in relation to your issue.

Also, can you explain why it is better to have separate h264parse elements in each branch? What benefits will it give?

What are the reasons behind these messages, and how does one deal with them?

gstsplitmuxsink.c:2594:handle_gathered_gop: error: Timestamping error on input streams
gstsplitmuxsink.c:2594:handle_gathered_gop: error: Queued GOP time is negative -2:19:52.272722222

What can one do about this error, and what is its possible cause: camera clock glitches (I use reference-timestamp-meta) or network delay? How should the problem be treated: restart the camera, or the pipeline?

It looks like there’s some kind of timestamp jump happening here for some reason. That’s a bit unexpected in this scenario. Would need to track down where that comes from, e.g. via a GST_DEBUG log.
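As an aside, regarding the earlier question about the numbers: GStreamer prints clock times as hours:minutes:seconds followed by nine fractional digits (nanoseconds). A small sketch that parses such values, which makes the warning readable as "the new buffer's timestamp is far below the previous one":

```python
def parse_gst_time(s: str) -> int:
    """Parse a GStreamer clock-time string (H:MM:SS.nnnnnnnnn) to nanoseconds."""
    hours, minutes, seconds = s.split(":")
    whole, _, frac = seconds.partition(".")
    ns = int(frac.ljust(9, "0")) if frac else 0
    return ((int(hours) * 60 + int(minutes)) * 60 + int(whole)) * 1_000_000_000 + ns

# The decoder warning compares the new buffer's timestamp against the
# previously seen one; here the new one is almost 2h20m earlier:
new_ts = parse_gst_time("0:00:00.800011111")    # 800011111 ns
prev_ts = parse_gst_time("2:19:53.392688889")   # 8393392688889 ns
print(new_ts < prev_ts)                          # True: timestamps went backwards
```

Such a backwards jump in a branch feeding splitmuxsink is also what makes the queued GOP duration come out negative.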


Thank you for reply!

Is there any way to deal with it?

Maybe, it will depend on what’s causing it.
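One way to narrow down the cause is to capture a debug log around the failure. A sketch (the category names and levels are just a starting point, and `./myapp` stands in for however you launch your application):

```shell
# Write a GStreamer debug log for the relevant categories to a file.
# Level 6 (LOG) is verbose; lower it if the log grows too fast.
GST_DEBUG=videodecoder:6,h264parse:6,splitmuxsink:6,rtpjitterbuffer:6 \
GST_DEBUG_FILE=gst-debug.log \
./myapp
```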

I have posted a similar issue, with a reproducer, here:


In our case the cause seems to be timestamp rewrites once we get RTCP sync. Waiting until we got RTCP sync before allowing buffers to reach the splitmuxsink helped us.


Thank you so much for reply!

Can you explain how you did it? Was it a pipeline change, a change to some element setting, or custom code?

Thanks in advance.

Hi, sorry for the late reply. We added a probe and dropped buffers until we got the RTCP sync signal on the jitterbuffer, I believe.
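A minimal sketch of that gating logic, kept as plain Python with the GStreamer wiring only in comments (the `"handle-sync"` signal on rtpjitterbuffer and the pad-probe return values are how we understood it, but verify against your GStreamer version):

```python
class RtcpSyncGate:
    """Drop buffers until RTCP sync has been observed.

    In the real pipeline this would be a dropping pad probe installed
    upstream of splitmuxsink, returning Gst.PadProbeReturn.DROP until
    the rtpjitterbuffer's "handle-sync" signal has fired, then
    Gst.PadProbeReturn.OK afterwards.
    """

    def __init__(self):
        self.synced = False

    def on_handle_sync(self, *args):
        # Connected to the jitterbuffer's "handle-sync" signal, which
        # fires once RTCP sender reports provide proper clock sync.
        self.synced = True

    def probe(self) -> str:
        # Decision made in the pad probe for every buffer.
        return "OK" if self.synced else "DROP"

gate = RtcpSyncGate()
print(gate.probe())   # "DROP": buffers are discarded before sync
gate.on_handle_sync()
print(gate.probe())   # "OK": buffers pass once RTCP sync arrived
```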
