WO2020118220 - SHARED CANDIDATE LIST AND PARALLEL CANDIDATE LIST DERIVATION FOR VIDEO CODING

Note: Text based on automatic optical character recognition processes. Only the PDF version has legal value.

WHAT IS CLAIMED IS:

1. A method of coding video data, the method comprising:

determining a first area size threshold;

determining a second area size threshold, wherein the second area size threshold is smaller than the first area size threshold;

partitioning a block of video data into a plurality of partitions;

in response to determining that a first partition of the partitioned block is smaller than or equal to the first area size threshold, determining that two or more blocks within the first partition belong to a parallel estimation area;

coding the two or more blocks within the first partition in parallel;

in response to determining that a second partition of the partitioned block is smaller than or equal to the second area size threshold, determining that two or more blocks within the second partition belong to an area for a shared candidate list;

coding the two or more blocks within the second partition using the shared candidate list; and

outputting coded video comprising coded versions of the two or more blocks of the first partition and the two or more blocks of the second partition.
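
As a rough illustration of claim 1's two-threshold test, the C++ sketch below classifies a partition by its area. The struct names, the use of width * height as the area measure, and the example threshold values are assumptions made for illustration; this is not the codec's actual implementation.

    #include <cstdint>
    #include <vector>

    struct Block { int width; int height; };

    struct Partition {
        int width;
        int height;
        std::vector<Block> blocks;              // two or more blocks inside the partition
        bool inParallelEstimationArea = false;
        bool inSharedCandidateListArea = false;
    };

    // Classify one partition against the two area size thresholds of claim 1.
    void classifyPartition(Partition &p,
                           std::int64_t firstAreaThreshold,    // larger threshold, e.g. 64*64 (assumed)
                           std::int64_t secondAreaThreshold) { // smaller threshold, e.g. 32*32 (assumed)
        const std::int64_t area = static_cast<std::int64_t>(p.width) * p.height;

        // Partition no larger than the first threshold: its blocks form a
        // parallel estimation area and can be coded in parallel.
        if (area <= firstAreaThreshold)
            p.inParallelEstimationArea = true;

        // Partition no larger than the second (smaller) threshold: its blocks
        // additionally share a single candidate list.
        if (area <= secondAreaThreshold)
            p.inSharedCandidateListArea = true;
    }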

2. The method of claim 1, wherein coding the two or more blocks within the first partition in parallel comprises performing parallel merge estimation for the two or more blocks in the parallel estimation area.

3. The method of claim 1, wherein coding the two or more blocks within the area for the shared candidate list using the shared candidate list comprises:

for a first block of the two or more blocks within the area for the shared candidate list, selecting a first motion vector candidate from the shared candidate list;

locating a first predictive block for the first block with the first motion vector candidate;

for a second block of the two or more blocks within the area for the shared candidate list, selecting a second motion vector candidate from the shared candidate list; and

locating a second predictive block for the second block with the second motion vector candidate.

4. The method of claim 3, wherein coding the two or more blocks within the area for the shared candidate list using the shared candidate list comprises:

determining a block within the area for the shared candidate list is coded in an advanced motion vector predictor (AMVP) mode; and

coding the block in the AMVP mode using the shared candidate list.

5. The method of claim 3, wherein coding the two or more blocks within the area for the shared candidate list using the shared candidate list comprises:

determining a block within the area for the shared candidate list is coded in an affine prediction mode; and

coding the block in the affine prediction mode using the shared candidate list.
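
Claims 3 to 5 have every block in the shared-candidate-list area pick its own candidate from one list that is derived once for the whole area, whether the block is coded in merge, AMVP, or affine mode. The sketch below is a simplified illustration of that reuse; MotionVector, Mode, and buildSharedCandidateList are placeholders, not the codec's actual data structures.

    #include <cstddef>
    #include <vector>

    struct MotionVector { int x; int y; };

    enum class Mode { Merge, Amvp, Affine };   // claims 3, 4 and 5 respectively

    struct CodedBlock {
        Mode mode;
        int  chosenCandidateIdx;   // candidate index selected for this block
    };

    // Placeholder list derivation: a real codec would gather spatial/temporal
    // neighbours of the shared area; a fixed list keeps the sketch self-contained.
    std::vector<MotionVector> buildSharedCandidateList() {
        return { {0, 0}, {4, -2}, {-8, 1} };
    }

    void codeSharedArea(std::vector<CodedBlock> &blocks) {
        // The list is built once, at the level of the shared area, instead of
        // once per block from that block's own neighbours.
        const std::vector<MotionVector> sharedList = buildSharedCandidateList();

        for (CodedBlock &b : blocks) {
            // Each block, regardless of mode, selects its candidate from the
            // same shared list and uses it to locate its predictive block.
            const MotionVector mv = sharedList[static_cast<std::size_t>(b.chosenCandidateIdx)];
            (void)mv;       // locating the predictive block is omitted in this sketch
            (void)b.mode;
        }
    }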

6. The method of claim 1, wherein coding comprises decoding, and wherein outputting the coded video comprises outputting the coded video for display.

7. The method of claim 1, wherein coding comprises encoding, and wherein outputting the coded video comprises generating a bitstream of encoded video data.

8. A device for coding video data, the device comprising:

a memory configured to store video data;

one or more processors implemented in circuitry and configured to:

determine a first area size threshold;

determine a second area size threshold, wherein the second area size threshold is smaller than the first area size threshold;

partition a block of video data into a plurality of partitions;

in response to determining that a first partition of the partitioned block is smaller than or equal to the first area size threshold, determine that two or more blocks within the first partition belong to a parallel estimation area;

code the two or more blocks within the first partition in parallel;

in response to determining that a second partition of the partitioned block is smaller than or equal to the second area size threshold, determine that two or more blocks within the second partition belong to an area for a shared candidate list;

code the two or more blocks within the second partition using the shared candidate list; and

output coded video comprising coded versions of the two or more blocks of the first partition and the two or more blocks of the second partition.

9. The device of claim 8, wherein to code the two or more blocks within the first partition in parallel, the one or more processors are further configured to perform parallel merge estimation for the two or more blocks in the parallel estimation area.

10. The device of claim 8, wherein to code the two or more blocks within the area for the shared candidate list using the shared candidate list, the one or more processors are further configured to:

for a first block of the two or more blocks within the area for the shared candidate list, select a first motion vector candidate from the shared candidate list;

locate a first predictive block for the first block with the first motion vector candidate;

for a second block of the two or more blocks within the area for the shared candidate list, select a second motion vector candidate from the shared candidate list; and

locate a second predictive block for the second block with the second motion vector candidate.

11. The device of claim 10, wherein to code the two or more blocks within the area for the shared candidate list using the shared candidate list, the one or more processors are further configured to:

determine a block within the area for the shared candidate list is coded in an advanced motion vector predictor (AMVP) mode; and

code the block in the AMVP mode using the shared candidate list.

12. The device of claim 10, wherein to code the two or more blocks within the area for the shared candidate list using the shared candidate list, the one or more processors are further configured to:

determine a block within the area for the shared candidate list is coded in an affine prediction mode; and

code the block in the affine prediction mode using the shared candidate list.

13. The device of claim 8, further comprising a display configured to display decoded video data.

14. The device of claim 8, wherein the device comprises one or more of a camera, a computer, a mobile device, a broadcast receiver device, or a set-top box.

15. The device of claim 8, wherein the device comprises a video decoder.

16. The device of claim 8, wherein the device comprises a video encoder.

17. A method of coding video data, the method comprising:

maintaining a buffer of history-based motion vector candidates, wherein the history-based motion vector candidates comprise motion vectors used for inter-predicting previously coded blocks of the video data;

for a first block of a first partition of the video data, determining a predictive block for the first block using a first motion vector;

in response to the first block being greater than a threshold size, updating the buffer of history-based motion vector candidates with the first motion vector;

for a second block of a second partition of the video data, determining a predictive block for the second block using a second motion vector;

in response to the second block being less than or equal to the threshold size, refraining from updating the buffer of history-based motion vector candidates with the second motion vector; and

outputting coded video comprising coded versions of the first block and the second block.
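
A minimal sketch of the size-gated history update in claim 17: only blocks larger than the threshold contribute their motion vector to the history-based candidate buffer. The FIFO capacity and the use of width * height as the size measure are assumptions, not details taken from the claim.

    #include <cstddef>
    #include <cstdint>
    #include <deque>

    struct MotionVector { int x; int y; };

    class HistoryBuffer {
    public:
        explicit HistoryBuffer(std::size_t capacity) : capacity_(capacity) {}

        // Called after a block has been inter-predicted with motion vector mv.
        void maybeUpdate(const MotionVector &mv, int blockWidth, int blockHeight,
                         std::int64_t thresholdArea) {
            const std::int64_t area =
                static_cast<std::int64_t>(blockWidth) * blockHeight;
            if (area <= thresholdArea)
                return;  // block at or below the threshold: refrain from updating

            // Larger block: push its motion vector into the history buffer
            // (kept here as a simple FIFO).
            candidates_.push_back(mv);
            if (candidates_.size() > capacity_)
                candidates_.pop_front();
        }

        const std::deque<MotionVector> &candidates() const { return candidates_; }

    private:
        std::size_t capacity_;
        std::deque<MotionVector> candidates_;
    };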

18. The method of claim 17, further comprising:

for a third block of video data, generating a merge candidate list by adding a motion vector from the buffer of history-based motion vector candidates to the merge candidate list.
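
Claim 18 then draws on that buffer when a later block builds its merge candidate list. The following self-contained sketch appends history-based candidates to an existing merge list; the duplicate check and the maximum list size are simplifying assumptions.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    struct MotionVector { int x; int y; };

    // Append history-based candidates to a merge list already holding the
    // spatial/temporal candidates.
    std::vector<MotionVector> buildMergeList(std::vector<MotionVector> mergeList,
                                             const std::vector<MotionVector> &history,
                                             std::size_t maxCandidates) {
        for (const MotionVector &mv : history) {
            if (mergeList.size() >= maxCandidates)
                break;
            const bool duplicate = std::any_of(
                mergeList.begin(), mergeList.end(),
                [&](const MotionVector &c) { return c.x == mv.x && c.y == mv.y; });
            if (!duplicate)
                mergeList.push_back(mv);  // history candidate added to the merge list
        }
        return mergeList;
    }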

19. The method of claim 17, further comprising:

in response to determining that the first partition is smaller than or equal to a first threshold for area size, determining that blocks within the first partition belong to a parallel estimation area;

in response to determining that the second partition is smaller than or equal to a second threshold for area size, determining that blocks within the second partition belong to an area for a shared candidate list, wherein the second threshold for area size is smaller than the first threshold for area size.

20. The method of claim 19, further comprising:

performing parallel merge estimation for one or more blocks in the parallel estimation area.

21. The method of claim 19, further comprising:

coding blocks within the area for the shared candidate list using the shared candidate list.

22. The method of claim 21, wherein coding blocks within the area for the shared candidate list using the shared candidate list comprises:

determining a block within the area for the shared candidate list is coded in an advanced motion vector predictor (AMVP) mode; and

coding the block in the AMVP mode using the shared candidate list.

23. The method of claim 21, wherein coding blocks within the area for the shared candidate list using the shared candidate list comprises:

determining a block within the area for the shared candidate list is coded in an affine prediction mode; and

coding the block in the affine prediction mode using the shared candidate list.

24. The method of claim 17, wherein coding comprises decoding, and wherein outputting the coded video comprises outputting the coded video for display.

25. The method of claim 17, wherein coding comprises encoding, and wherein outputting the coded video comprises generating a bitstream of encoded video data.

26. A device for coding video data, the device comprising:

a memory configured to store video data;

one or more processors implemented in circuitry and configured to:

maintain a buffer of history-based motion vector candidates, wherein the history-based motion vector candidates comprise motion vectors used for inter-predicting previously coded blocks of the video data;

for a first block of a first partition of the video data, determine a predictive block for the first block using a first motion vector;

in response to the first block being greater than a threshold size, update the buffer of history-based motion vector candidates with the first motion vector;

for a second block of a second partition of the video data, determine a predictive block for the second block using a second motion vector;

in response to the second block being less than or equal to the threshold size, refrain from updating the buffer of history-based motion vector candidates with the second motion vector; and

output coded video comprising coded versions of the first block and the second block.

27. The device of claim 26, wherein the one or more processors are further configured to:

for a third block of video data, generate a merge candidate list by adding a motion vector from the buffer of history-based motion vector candidates to the merge candidate list.

28. The device of claim 26, wherein the one or more processors are further configured to:

in response to determining that the first partition is smaller than or equal to a first threshold for area size, determine that blocks within the first partition belong to a parallel estimation area;

in response to determining that the second partition is smaller than or equal to a second threshold for area size, determine that blocks within the second partition belong to an area for a shared candidate list, wherein the second threshold for area size is smaller than the first threshold for area size.

29. The device of claim 28, wherein the one or more processors are further configured to:

perform parallel merge estimation for one or more blocks in the parallel estimation area.

30. The device of claim 28, wherein the one or more processors are further configured to:

code blocks within the area for the shared candidate list using the shared candidate list.

31. The device of claim 30, wherein to code blocks within the area for the shared candidate list using the shared candidate list, the one or more processors are further configured to:

determine a block within the area for the shared candidate list is coded in an advanced motion vector predictor (AMVP) mode; and

code the block in the AMVP mode using the shared candidate list.

32. The device of claim 30, wherein to code blocks within the area for the shared candidate list using the shared candidate list, the one or more processors are further configured to:

determine a block within the area for the shared candidate list is coded in an affine prediction mode; and

code the block in the affine prediction mode using the shared candidate list.

33. The device of claim 26, further comprising a display configured to display decoded video data.

34. The device of claim 26, wherein the device comprises one or more of a camera, a computer, a mobile device, a broadcast receiver device, or a set-top box.

35. The device of claim 26, wherein the device comprises a video decoder.

36. The device of claim 26, wherein the device comprises a video encoder.