WO2020072112 - METHOD FOR MAINTAINING CACHE CONSISTENCY DURING REORDERING

Publication Number WO/2020/072112
Publication Date 09.04.2020
International Application No. PCT/US2019/039919
International Filing Date 28.06.2019
IPC
G06F 12/0864 (2016.01)
  G     PHYSICS
  06    COMPUTING; CALCULATING OR COUNTING
  F     ELECTRIC DIGITAL DATA PROCESSING
  12    Accessing, addressing or allocating within memory systems or architectures
  02    Addressing or allocation; Relocation
  08    in hierarchically structured memory systems, e.g. virtual memory systems
  0802  Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
  0864  using pseudo-associative means, e.g. set-associative or hashing
G06F 12/1027 (2016.01)
  G     PHYSICS
  06    COMPUTING; CALCULATING OR COUNTING
  F     ELECTRIC DIGITAL DATA PROCESSING
  12    Accessing, addressing or allocating within memory systems or architectures
  02    Addressing or allocation; Relocation
  08    in hierarchically structured memory systems, e.g. virtual memory systems
  10    Address translation
  1027  using associative or pseudo-associative address translation means, e.g. translation look-aside buffer
G06F 12/1072 (2016.01)
  G     PHYSICS
  06    COMPUTING; CALCULATING OR COUNTING
  F     ELECTRIC DIGITAL DATA PROCESSING
  12    Accessing, addressing or allocating within memory systems or architectures
  02    Addressing or allocation; Relocation
  08    in hierarchically structured memory systems, e.g. virtual memory systems
  10    Address translation
  1072  Decentralised address translation, e.g. in distributed shared memory systems
G06F 13/16 (2006.01)
  G     PHYSICS
  06    COMPUTING; CALCULATING OR COUNTING
  F     ELECTRIC DIGITAL DATA PROCESSING
  13    Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
  14    Handling requests for interconnection or transfer
  16    for access to memory bus
CPC
G06F 12/0864
  G     PHYSICS
  06    COMPUTING; CALCULATING; COUNTING
  F     ELECTRIC DIGITAL DATA PROCESSING
  12    Accessing, addressing or allocating within memory systems or architectures
  02    Addressing or allocation; Relocation
  08    in hierarchically structured memory systems, e.g. virtual memory systems
  0802  Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches
  0864  using pseudo-associative means, e.g. set-associative or hashing
G06F 12/1027
  G     PHYSICS
  06    COMPUTING; CALCULATING; COUNTING
  F     ELECTRIC DIGITAL DATA PROCESSING
  12    Accessing, addressing or allocating within memory systems or architectures
  02    Addressing or allocation; Relocation
  08    in hierarchically structured memory systems, e.g. virtual memory systems
  10    Address translation
  1027  using associative or pseudo-associative address translation means, e.g. translation look-aside buffer [TLB]
G06F 12/1072
  G     PHYSICS
  06    COMPUTING; CALCULATING; COUNTING
  F     ELECTRIC DIGITAL DATA PROCESSING
  12    Accessing, addressing or allocating within memory systems or architectures
  02    Addressing or allocation; Relocation
  08    in hierarchically structured memory systems, e.g. virtual memory systems
  10    Address translation
  1072  Decentralised address translation, e.g. in distributed shared memory systems
G06F 2212/1024
  G     PHYSICS
  06    COMPUTING; CALCULATING; COUNTING
  F     ELECTRIC DIGITAL DATA PROCESSING
  2212  Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
  10    Providing a specific technical effect
  1016  Performance improvement
  1024  Latency reduction
G06F 2212/401
  G     PHYSICS
  06    COMPUTING; CALCULATING; COUNTING
  F     ELECTRIC DIGITAL DATA PROCESSING
  2212  Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
  40    Specific encoding of data in memory or cache
  401   Compressed data
G06F 2212/656
  G     PHYSICS
  06    COMPUTING; CALCULATING; COUNTING
  F     ELECTRIC DIGITAL DATA PROCESSING
  2212  Indexing scheme relating to accessing, addressing or allocation within memory systems or architectures
  65    Details of virtual memory and virtual address translation
  656   Address space sharing
Applicants
  • ADVANCED MICRO DEVICES, INC. [US]/[US]
Inventors
  • DONLEY, Greggory D.
  • BROUSSARD, Bryan P.
Agents
  • RANKIN, Rory D.
Priority Data
16/150,520  03.10.2018  US
Publication Language English (EN)
Filing Language English (EN)
Designated States
Title
(EN) METHOD FOR MAINTAINING CACHE CONSISTENCY DURING REORDERING
Abstract
(EN)
Systems, apparatuses, and methods for performing efficient data transfer in a computing system are disclosed. A computing system includes multiple fabric interfaces in clients and a fabric. A packet transmitter in the fabric interface includes multiple queues, each for storing packets of a respective type, and a corresponding address history cache for each queue. Queue arbiters in the packet transmitter select candidate packets for issue and determine when address history caches on both sides of the link store the upper portion of the address. The packet transmitter sends a source identifier and a pointer for the request in the packet on the link, rather than the entire request address, which reduces the size of the packet. The queue arbiters support out-of-order issue from the queues. The queue arbiters detect conflicts with out-of-order issue and adjust the outbound packets and fields stored in the queue entries to avoid data corruption.
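
The abstract describes the link-compression scheme only at a high level. The Python sketch below is an illustrative reading of that description, not the disclosed implementation: the names (AddressHistoryCache, send_request, receive_request), the cache geometry, and the 12-bit lower/upper address split are assumptions introduced here. It shows how a transmitter and receiver that keep matching address history caches can replace the upper portion of a request address with a source identifier and a short cache pointer.

```python
# Hypothetical sketch (names and sizes are assumptions, not from the publication):
# both ends of a link keep identical "address history caches" so a packet can carry
# a short pointer into that cache instead of the full upper address bits.

LOWER_BITS = 12      # assumed split: lower bits are always carried in the packet
NUM_ENTRIES = 16     # assumed cache size; the pointer then fits in 4 bits


class AddressHistoryCache:
    """Direct-mapped table of upper address portions, indexed by a few address bits."""

    def __init__(self):
        self.entries = [None] * NUM_ENTRIES

    @staticmethod
    def index(upper):
        return upper % NUM_ENTRIES  # simple index function; a real design may differ

    def lookup(self, upper):
        idx = self.index(upper)
        return idx if self.entries[idx] == upper else None

    def install(self, upper):
        idx = self.index(upper)
        self.entries[idx] = upper
        return idx


def send_request(address, source_id, tx_cache, rx_cache, link):
    """Emit either a compressed or a full packet for one request address."""
    upper, lower = address >> LOWER_BITS, address & ((1 << LOWER_BITS) - 1)
    pointer = tx_cache.lookup(upper)
    if pointer is not None:
        # Both caches already hold the upper portion: send only a source
        # identifier, a pointer, and the lower bits, shrinking the packet.
        link.append({"type": "compressed", "src": source_id,
                     "ptr": pointer, "lower": lower})
    else:
        # Miss: send the full address and install the upper portion on both
        # sides of the link so later requests to the same region compress.
        tx_cache.install(upper)
        rx_cache.install(upper)
        link.append({"type": "full", "src": source_id, "addr": address})


def receive_request(packet, rx_cache):
    """Rebuild the full request address at the receiving fabric interface."""
    if packet["type"] == "compressed":
        upper = rx_cache.entries[packet["ptr"]]
        return (upper << LOWER_BITS) | packet["lower"]
    return packet["addr"]
```

The abstract also states that the queue arbiters support out-of-order issue and detect conflicts; a real transmitter would additionally have to guard against a compressed packet being reordered ahead of the full packet that installs its cache entry, adjusting the outbound packets and queue-entry fields accordingly. The sketch above omits that hazard handling.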
Also published as
Latest bibliographic data on file with the International Bureau