Currently, the paused indicator is never undone; instead, we rely on
the update timer to eventually overwrite it when the recording
duration is updated. If the user toggles pause quickly enough, the
indicator can stack several times before it is erased, especially if
the unpaused branch in the update timer never gets a chance to run.
Instead of mutating the recordTime text on pause and requiring an
undo, extract the UI update into a separate function that computes
the full text from the current state. Call this function whenever
pause is toggled, forcing an accurate UI update that either does or
does not include the paused text.
An added benefit is that the paused indicator now disappears
immediately.
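A minimal sketch of the idea, assuming a Qt label; the function and
parameter names here are illustrative, not the actual OBS UI symbols:

    #include <QLabel>
    #include <QString>

    // Sketch only: recompute the full label text from the current
    // state instead of appending a suffix that must later be undone.
    void UpdateRecordTimeLabel(QLabel *recordTime, int totalSeconds,
                               bool recordingPaused)
    {
        QString text = QStringLiteral("%1:%2:%3")
            .arg(totalSeconds / 3600, 2, 10, QChar('0'))
            .arg((totalSeconds / 60) % 60, 2, 10, QChar('0'))
            .arg(totalSeconds % 60, 2, 10, QChar('0'));
        if (recordingPaused)
            text += QStringLiteral(" (PAUSED)");
        recordTime->setText(text); // full recompute, nothing to undo
    }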
When this call was first introduced in eab10d48b2, it sat at the top
of this block, albeit after the calls to `pause`. Over time it has
slowly drifted lower and lower in the block. It should really be the
first thing in the block, so that subsequent calls have accurate
information about the pause state when updating themselves.
This fixes an issue where, when MAX_CODECS was equal to the number of
supported codecs (3), the list was left without a null terminator,
causing a crash when iterating over its elements.
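For illustration only (hypothetical code, not the actual plugin): a
NULL-terminated list needs one slot beyond MAX_CODECS, otherwise
filling every entry leaves no room for the sentinel:

    #define MAX_CODECS 3
    // +1 reserves the sentinel even when all 3 codecs are present
    static const char *codecs[MAX_CODECS + 1] = {0};

    for (const char **c = codecs; *c; c++) {
        // without the reserved slot, this walks past the array
    }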
Commit 2fc13540 introduced a typo that caused the defined height of
the fourth plane in I40A to be 0 instead of the original frame
height. This changes a 0 to a 3 so that the value is populated
correctly.
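As a hypothetical illustration (not the actual diff): I40A carries
four planes (Y, U, V, and a full-resolution alpha plane), so the
height entry at index 3 must be the frame height:

    uint32_t plane_height[4];
    plane_height[0] = height;     // Y
    plane_height[1] = height / 2; // U (4:2:0 chroma)
    plane_height[2] = height / 2; // V
    plane_height[3] = height;     // alpha; the typo left this at 0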
Under high graphics thread pressure it can take significant time to
acquire the graphics lock. This change releases the OpenGL texture
right after rendering, avoiding the second lock acquisition after the
frame is sent to FFmpeg; a sketch of the reordering follows the
numbers below. This improves the median, 99th percentile, and maximum
encode times in a near encoder overload scenario, and modestly raises
the ceiling before encoder overload in my test scene.
Master:
min=0 ms, median=4.29 ms, max=33.072 ms, 99th percentile=8.877 ms
min=0 ms, median=4.438 ms, max=77.157 ms, 99th percentile=9.853 ms
min=0 ms, median=4.527 ms, max=57.292 ms, 99th percentile=9.282 ms
This commit:
min=0.97 ms, median=3.009 ms, max=13.215 ms, 99th percentile=5.899 ms
min=1.181 ms, median=2.91 ms, max=9.854 ms, 99th percentile=5.56 ms
min=0.461 ms, median=3.013 ms, max=10.693 ms, 99th percentile=5.871 ms
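A rough sketch of the reordering, using a plain mutex as a stand-in
for the graphics lock (all of these names are placeholders, not real
OBS functions):

    #include <mutex>

    std::mutex graphics_lock;       // stand-in for the graphics lock
    void render_to_texture() {}     // placeholders for the real work
    void copy_texture_to_frame() {}
    void release_texture() {}
    void send_frame_to_ffmpeg() {}

    void output_frame()
    {
        {
            std::lock_guard<std::mutex> lk(graphics_lock);
            render_to_texture();
            copy_texture_to_frame();
            // Moved up: releasing here avoids re-taking the
            // contended lock after the frame has been sent.
            release_texture();
        }
        send_frame_to_ffmpeg(); // runs outside the critical section
    }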
Support building the AJA plugins with either the new libajantv2
library or the deprecated ntv2 library. The finder scripts have been
updated to search for libajantv2 and fall back to ntv2 if it is not
found. This allows this PR to be merged without requiring a
corresponding update to the pre-built obs-deps packages.
Qt Gui virtualkeyboard plugin was removed in Qt 6.x.
Qt Network Bearer Management was removed in Qt 6.0.
Qt Multimedia mediaservice and audio plugins were removed in Qt 6.x.
Commit 2fc13540f implemented copying a video frame into a video frame
with a different linesize. However, when the linesizes actually
differed, the frame was not copied correctly.
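A minimal sketch of a linesize-aware copy (hypothetical helper; a
single memcpy of linesize * height is only valid when the source and
destination linesizes match):

    #include <stdint.h>
    #include <string.h>

    // Copy one plane row by row so that differing row strides
    // (linesizes) are handled correctly.
    static void copy_plane(uint8_t *dst, size_t dst_linesize,
                           const uint8_t *src, size_t src_linesize,
                           size_t row_bytes, uint32_t height)
    {
        for (uint32_t y = 0; y < height; y++)
            memcpy(dst + y * dst_linesize, src + y * src_linesize,
                   row_bytes); // bytes of visible pixels per row
    }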
The existing caption insertion implementation adds H.264-specific
SEI NALs, so caption insertion is now skipped unless the video stream
is H.264. This prevents corruption of AV1/HEVC/etc. We now keep a
per-track list of caption data. When caption insertion is triggered,
the data is added to the list of each actively encoding video track.
During interleaved send, we pull the data from the caption list of
the corresponding video track. This ensures that captions get copied
to all video tracks.
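A rough sketch of the insertion step (caption_track_push and the
surrounding variables are hypothetical placeholders;
obs_encoder_get_codec is the real libobs call):

    // data/size hold the caption payload to queue per track.
    for (size_t i = 0; i < num_video_tracks; i++) {
        obs_encoder_t *enc = video_encoders[i];
        if (!enc || strcmp(obs_encoder_get_codec(enc), "h264") != 0)
            continue; // skip AV1/HEVC/etc. so their bitstreams are
                      // not corrupted by H.264-specific SEI NALs
        caption_track_push(&caption_tracks[i], data, size);
    }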
Rewrites the `struct video_frame` implementation, resolving a few
hidden bugs and improving memory alignment reliability. This should
also make it much easier to implement and maintain texture formats in
this part of the codebase.
This is one of the few remaining actions in this repo that was still
using node16. Updating will remove the associated warnings. Also, pin to
the v0.5.0 commit.
Adds the `whip_output_audio` and `whip_output_video` output kinds,
which selectively advertise only video or only audio support. Using
these kinds is effectively the same as using the AV version: create
the output, assign your video or audio encoder, and you're good to
go. libobs has no support for "optional" outputs; with an AV output,
start will fail if you assign only a video or only an audio encoder.
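For example, a video-only WHIP output could be set up like this (a
sketch using the standard libobs output API; the settings, service,
and encoder variables are placeholders):

    obs_output_t *out = obs_output_create("whip_output_video",
                                          "whip video", settings,
                                          NULL);
    obs_output_set_service(out, service);
    obs_output_set_video_encoder(out, video_encoder);
    // no audio encoder required: this kind only advertises video
    obs_output_start(out);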
We used to swap in a buffer with (0,0,0,1) for all vertex colors for
drop shadows and outlines. However, this vertex buffer couldn't be
uploaded separately from the vertex data in OBS, so we were
reuploading the text vertices every frame. Instead, let's pass this
constant color as a uniform and save 500us (or more when handles are
visible), a significant portion of my test scene's render time.
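The uniform path looks roughly like this (a sketch; the effect handle
and the "color" parameter name are illustrative):

    struct vec4 color;
    vec4_set(&color, 0.0f, 0.0f, 0.0f, 1.0f); // the old (0,0,0,1)
    gs_effect_set_vec4(gs_effect_get_param_by_name(effect, "color"),
                       &color);
    // the draw reuses the existing vertex buffer; no per-vertex
    // color reupload is needed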
Pass down the texture/host memory choice for fallback encoders.
During fallback we don't (and can't) initialize a shared texture
pool, so we should use the regular host memory path. This fixes usage
on multi-GPU systems and enables texture encoders.