(EP2042996) Multi-media processor cache with cache line locking and unlocking
Note: This text was obtained by automatic optical character recognition processes.
For legal purposes, please refer to the PDF version.
Claims

1. A method comprising:

receiving a read request in a batch of read requests for a read-modify-write function, wherein the read request requests data at a memory location in a memory;

identifying a cache line in a cache storing a copy of the requested data;

determining whether the cache line is locked;

when the cache line is not locked, processing the read request and locking the cache line to accesses by additional read requests in the batch of read requests; and

when the cache line is locked, holding the read request until the cache line is unlocked.
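
For orientation, a minimal C sketch of the method of claim 1, assuming a hypothetical cache_line_t guarded by a single lock flag and a small hold queue for deferred reads; the names, sizes, and single-flag granularity are illustrative assumptions, not taken from the patent text.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical cache line: one lock flag guards the whole line. */
typedef struct {
    uint32_t tag;
    bool     locked;
    uint8_t  data[64];
} cache_line_t;

/* Hypothetical queue of read requests held until the line unlocks. */
typedef struct {
    uint32_t held_addrs[16];
    size_t   count;
} hold_queue_t;

/* Process one read request of a batch belonging to a read-modify-write
 * function: if the target line is unlocked, service the read and lock the
 * line against additional reads in the batch; otherwise hold the request. */
static bool service_read(cache_line_t *line, uint32_t addr,
                         hold_queue_t *held, uint8_t *out, size_t n)
{
    if (line->locked) {
        /* Line is locked by an earlier read: hold this request until the
         * matching write of the read-modify-write pair unlocks the line. */
        if (held->count < 16)
            held->held_addrs[held->count++] = addr;
        return false;
    }
    /* Line is free: read the data out, then lock against further reads
     * from this batch. */
    for (size_t i = 0; i < n; i++)
        out[i] = line->data[i];
    line->locked = true;
    return true;
}
```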


  2. The method of claim 1, wherein identifying a cache line storing the requested data comprises identifying a section of the cache line that includes the requested data.
  3. The method of claim 1, wherein determining whether the cache line is locked comprises determining whether a lock bit associated with a section of the cache line that includes the requested data is set.
  4. The method of claim 1, wherein locking the cache line to accesses by additional read requests in the batch of read requests comprises setting a lock bit associated with a section of the cache line that includes the requested data.
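
Claims 2 to 4 move the lock from the whole line to the section of the line that holds the requested data. A sketch assuming a 64-byte line split into 16-byte sections, each guarded by one bit of a lock mask; both sizes are assumptions for illustration only.

```c
#include <stdbool.h>
#include <stdint.h>

#define LINE_BYTES    64u
#define SECTION_BYTES 16u   /* assumed section granularity */
#define NUM_SECTIONS  (LINE_BYTES / SECTION_BYTES)

typedef struct {
    uint32_t tag;
    uint8_t  lock_bits;          /* one lock bit per section */
    uint8_t  data[LINE_BYTES];
} cache_line_t;

/* Map a byte offset within the line to its section index (claim 2). */
static unsigned section_of(uint32_t offset)
{
    return (offset % LINE_BYTES) / SECTION_BYTES;
}

/* Claim 3: the section is locked when its lock bit is set. */
static bool section_locked(const cache_line_t *line, uint32_t offset)
{
    return (line->lock_bits >> section_of(offset)) & 1u;
}

/* Claim 4: lock the section that includes the requested data. */
static void lock_section(cache_line_t *line, uint32_t offset)
{
    line->lock_bits |= (uint8_t)(1u << section_of(offset));
}
```
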
  5. The method of claim 1, further comprising:

receiving a write request in a batch of write requests for the read-modify-write function, wherein the write request requests to update data previously read out from the cache line in the cache;

unlocking the cache line; and

processing the write request.


  6. The method of claim 5, wherein unlocking the cache line comprises unsetting a lock bit associated with a section of the cache line that includes the requested data.
  7. The method of claim 5, further comprising, after unlocking the cache line, processing the held read request and locking the cache line to accesses by additional read requests in a subsequent batch of read requests.
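
Claims 5 to 7 describe the write half of the read-modify-write pair: the matching write unlocks the line, and a held read may then proceed and re-lock the line for the next batch. A sketch under the same illustrative cache_line_t and hold queue as above.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef struct {
    uint32_t tag;
    bool     locked;
    uint8_t  data[64];
} cache_line_t;

typedef struct {
    uint32_t held_addrs[16];
    size_t   count;
} hold_queue_t;

/* Claims 5 and 6: a write from the write batch of the same
 * read-modify-write function updates the previously read data and
 * clears the lock. */
static void service_write(cache_line_t *line, const uint8_t *updated, size_t n)
{
    line->locked = false;              /* unlock the line (claim 5)      */
    memcpy(line->data, updated, n);    /* then process the write request */
}

/* Claim 7: once the line is unlocked, replay a held read and re-lock
 * the line against additional reads of the subsequent batch. */
static bool replay_held_read(cache_line_t *line, hold_queue_t *held,
                             uint8_t *out, size_t n)
{
    if (held->count == 0 || line->locked)
        return false;
    held->count--;                     /* pop one held request */
    memcpy(out, line->data, n);
    line->locked = true;
    return true;
}
```
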
  8. The method of claim 1, wherein identifying a cache line storing the requested data comprises:

identifying a set of potential cache locations in the cache assigned to store the requested data; and

determining whether the requested data is stored at one of the potential cache locations.


  9. The method of claim 8, wherein determining whether the requested data is stored at one of the potential cache locations comprises:

searching cache lines at the potential cache locations for an address tag associated with the memory location that includes the requested data; and

upon finding the address tag associated with the memory location, identifying the one of the cache lines that includes the address tag as storing the requested data.
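
Claims 8 and 9 describe an associative lookup: the set of potential cache locations assigned to the address is searched for the matching address tag. A sketch assuming a 4-way set-associative organization; the associativity is an assumption.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_WAYS 4   /* assumed associativity */

typedef struct {
    uint32_t tag;
    bool     valid;
} cache_line_t;

typedef struct {
    cache_line_t ways[NUM_WAYS];   /* potential cache locations for one set */
} cache_set_t;

/* Claims 8 and 9: the set indexed by the address groups the potential
 * locations; the requested data is present if some way in that set holds
 * the address tag. Returns the hit way, or -1 on a miss. */
static int find_way(const cache_set_t *set, uint32_t tag)
{
    for (int way = 0; way < NUM_WAYS; way++) {
        if (set->ways[way].valid && set->ways[way].tag == tag)
            return way;   /* the line containing the tag stores the data */
    }
    return -1;            /* tag not found in any potential location */
}
```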


  10. The method of claim 8, wherein determining whether the requested data is stored at one of the potential cache locations comprises:

searching cache lines at the potential cache locations for an address tag associated with the memory location that includes the requested data;

upon not finding the address tag associated with the memory location, retrieving content of the memory location from the memory;

storing the content of the memory location within a cache line at one of the potential cache locations; and

identifying the cache line as storing the requested data.
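
Claim 10 covers the miss path: when no potential location carries the address tag, the memory content is retrieved, stored in one of the potential locations, and that line is then identified as holding the requested data. A sketch with a trivially chosen victim way and a placeholder memory fetch, both illustrative.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define NUM_WAYS   4
#define LINE_BYTES 64

typedef struct {
    uint32_t tag;
    bool     valid;
    uint8_t  data[LINE_BYTES];
} cache_line_t;

typedef struct {
    cache_line_t ways[NUM_WAYS];
} cache_set_t;

/* Stand-in for the backing memory read; in hardware this would be a
 * fill request issued to the memory controller. */
static void fetch_from_memory(uint32_t tag, uint8_t *dst)
{
    (void)tag;
    memset(dst, 0, LINE_BYTES);   /* placeholder content */
}

/* Claim 10: on a miss, retrieve the memory content, store it in one of
 * the potential locations (way 0 here, as a trivial replacement choice),
 * and report that line as storing the requested data. */
static int fill_on_miss(cache_set_t *set, uint32_t tag)
{
    int victim = 0;   /* illustrative replacement policy */
    fetch_from_memory(tag, set->ways[victim].data);
    set->ways[victim].tag   = tag;
    set->ways[victim].valid = true;
    return victim;
}
```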


  11. The method of claim 1, wherein receiving a batch of read requests for a read-modify-write function comprises receiving a batch of read requests from a hidden primitive and pixel rejection module within a graphics processing unit (GPU) pipeline for a pixel depth comparison function.
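
Claim 11 names the intended use: a pixel depth comparison issued by a hidden primitive and pixel rejection module, which is a read-modify-write over a depth buffer. A sketch of the modify step only, with hypothetical fragment fields; the cached, lock-guarded read and write around it are those of claims 1 and 5.

```c
#include <stdint.h>

/* Claim 11's use case: the read batch pulls stored depths through the
 * cache, the comparison below is the "modify" step, and the write batch
 * stores the surviving depths back. */
typedef struct {
    uint32_t x, y;
    uint16_t depth;    /* incoming fragment depth (hypothetical layout) */
} fragment_t;

/* Modify step for one fragment: keep the nearer depth. */
static uint16_t depth_test(uint16_t stored_depth, const fragment_t *frag)
{
    return (frag->depth < stored_depth) ? frag->depth : stored_depth;
}
```
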
  12. A processing unit comprising:

a cache including cache lines that store copies of content of memory locations of a memory;

a processing pipeline that sends a read request in a batch of read requests for a read-modify-write function, wherein the read request requests data at a memory location in the memory; and

a cache controller coupled to the processing pipeline and the cache that identifies a cache line in the cache storing a copy of the requested data and determines whether the cache line is locked,

wherein, when the cache line is not locked, the cache processes the read request and the cache controller locks the cache line to accesses by additional read requests in the batch of read requests, and
wherein, when the cache line is locked, the cache controller holds the read request until the cache line is unlocked.
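
Claim 12 recasts the method as structure: a cache of lines, a cache controller, and a processing pipeline coupled together. A sketch of how those parts might be laid out in C; all field names and sizes are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_LINES  256
#define LINE_BYTES 64

/* One cache line holding a copy of memory content (claim 12). */
typedef struct {
    uint32_t tag;
    bool     valid;
    bool     locked;
    uint8_t  data[LINE_BYTES];
} cache_line_t;

/* The cache itself: an array of lines. */
typedef struct {
    cache_line_t lines[NUM_LINES];
} cache_t;

/* The cache controller sits between the pipeline and the cache: it maps
 * requests to lines, checks and sets locks, and holds requests that
 * target locked lines. */
typedef struct {
    cache_t *cache;
    uint32_t held_addrs[16];
    unsigned held_count;
} cache_controller_t;

/* The processing unit of claim 12 couples a processing pipeline
 * (represented only by an opaque handle here), the cache, and its
 * controller. */
typedef struct {
    void               *processing_pipeline;
    cache_t             cache;
    cache_controller_t  controller;
} processing_unit_t;
```
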
  13. The processing unit of claim 12, wherein the cache controller identifies a section of the cache line that includes the requested data.
  14. The processing unit of claim 12, wherein the cache controller determines whether a lock bit associated with a section of the cache line that includes the requested data is set to determine whether the cache line is locked.
  15. The processing unit of claim 14, wherein the cache line includes multiple lock bits, and each lock bit is associated with a section of the cache line.
  16. The processing unit of claim 12, wherein the cache controller sets a lock bit associated with a section of the cache line that includes the requested data to lock the cache line to accesses by additional read requests in the batch of read requests.
  17. The processing unit of claim 12, wherein:

the processing unit sends a write request in a batch of write requests for the read-modify-write function, wherein the write request requests to update data previously read out from the cache line in the cache;

the cache controller unlocks the cache line; and

the cache processes the write request.


  18. The processing unit of claim 17, wherein the cache controller unsets a lock bit associated with a section of the cache line that includes the requested data to unlock the cache line.
  19. The processing unit of claim 17, wherein, after the cache controller unlocks the cache line, the cache processes the held read request and the cache controller locks the cache line to accesses by additional read requests in a subsequent batch of read requests.
  20. The processing unit of claim 12, wherein the cache controller:

identifies a set of potential cache locations in the cache assigned to store the requested data; and

determines whether the requested data is stored at one of the potential cache locations.


  21. The processing unit of claim 20, wherein the cache controller:

searches cache lines at the potential cache locations for an address tag associated with the memory location that includes the requested data; and

upon finding the address tag associated with the memory location, identifies the one of the cache lines that includes the address tag as storing the requested data.


  22. The processing unit of claim 20, wherein the cache controller:

searches cache lines at the potential cache locations for an address tag associated with the memory location that includes the requested data;

upon not finding the address tag associated with the memory location, retrieves content of the memory location from the memory;

stores the content of the memory location within a cache line at one of the potential cache locations; and

identifies the cache line as storing the requested data.


  23. The processing unit of claim 12, wherein the processing pipeline that sends a batch of read requests for a read-modify-write function comprises a graphics processing unit (GPU) pipeline that sends a batch of read requests for a pixel depth comparison function from a hidden primitive and pixel rejection module within the GPU pipeline.
  24. A computer-readable medium comprising instructions that cause a computer to:

receive a read request in a batch of read requests for a read-modify-write function, wherein the read request requests data at a memory location in a memory;

identify a cache line in a cache storing a copy of the requested data;

determine whether the cache line is locked;

when the cache line is not locked, process the read request and lock the cache line to accesses by additional read requests in the batch of read requests; and

when the cache line is locked, hold the read request until the cache line is unlocked.


  25. An apparatus comprising:

means for caching content of memory locations of a memory;

means for sending a read request in a batch of read requests for a read-modify-write function, wherein the read request requests data at a memory location in the memory; and

means for controlling the means for caching by identifying a cache line in the means for caching that stores a copy of the requested data and determining whether the cache line is locked,

wherein, when the cache line is not locked, the means for caching processes the read request and the means for controlling locks the cache line to accesses by additional read requests in the batch of read requests, and
wherein, when the cache line is locked, the means for controlling holds the read request until the cache line is unlocked.