
ref_iface_IVDXVideoDecoder_SetTargetFormat

shekh edited this page Apr 15, 2018 · 1 revision

VirtualDub Plugin SDK 1.2

IVDXVideoDecoder interface

IVDXVideoDecoder::SetTargetFormat

Sets the desired image format for decoded frames.

bool SetTargetFormat(int format, bool useDIBAlignment);

Parameters

format Desired format for decoded frames. This must be one of the values from the VDXPixmapFormat enumeration. If it is zero (unknown), the default format is selected.
useDIBAlignment Set to true if the frame buffer should conform to the Windows DIB layout. This avoids extra memory copies when a DIB is required. For RGB formats, the bitmap should be stored bottom-up with scanlines aligned to doubleword boundaries, i.e. pitch = -4*ceil(width * bitdepth / 32). For YCbCr formats, the bitmap should be stored top-down with scanlines having byte alignment, i.e. pitch = ceil(width * bitdepth / 8).

Thread safety

This method is not thread-safe.

Errors

This method may not report errors via SetError(); an unsupported format is indicated solely by a false return value.

Return value

True if the format was successfully changed, or false if the format is not supported.

Remarks

After SetTargetFormat() succeeds, the decoder state is reset as if Reset() were called.

SetTargetFormat(0) allows the decoder to select the most appropriate format for decoding, usually the one closest to the native decoded format of the compressed stream. A SetTargetFormat(0) call should never fail, because failure would mean the decoder supports no output formats at all, which would make it useless.


Copyright (C) 2007-2012 Avery Lee.
