(WO2019068013) FABRIC CONTROL PROTOCOL FOR DATA CENTER NETWORKS WITH PACKET SPRAYING OVER MULTIPLE ALTERNATE DATA PATHS

WHAT IS CLAIMED IS:

1. A network system comprising:

a plurality of servers;

a switch fabric comprising a plurality of core switches; and

a plurality of access nodes, each of the access nodes coupled to a subset of the servers and coupled to a subset of the core switches, wherein the access nodes include a source access node and a destination access node each executing a fabric control protocol (FCP),

wherein the source access node is configured to send an FCP request message for an amount of data to be transferred in a packet flow from a source server coupled to the source access node to a destination server coupled to the destination access node, and in response to receipt of an FCP grant message indicating an amount of bandwidth reserved for the packet flow, spray FCP packets of the packet flow across a plurality of parallel data paths in accordance with the reserved bandwidth, and

wherein the destination access node is configured to, in response to receipt of the FCP request message, perform grant scheduling and send the FCP grant message indicating the amount of bandwidth reserved for the packet flow, and in response to receiving the FCP packets of the packet flow, deliver the data transferred in the packet flow to the destination server.
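For illustration, the request–grant exchange of claim 1 can be sketched as follows. This is a minimal sketch only: the class names (FcpRequest, FcpGrant, SourceAccessNode, DestinationAccessNode), their fields, and the round-robin spraying are assumptions chosen to mirror the claim language, not the patent's implementation.

```python
from dataclasses import dataclass

# Hypothetical message and node classes mirroring the claim language; the
# names and fields are illustrative only, not taken from the patent.

@dataclass
class FcpRequest:
    flow_id: int
    requested_bytes: int   # amount of data to be transferred in the packet flow

@dataclass
class FcpGrant:
    flow_id: int
    granted_bytes: int     # amount of bandwidth reserved for the packet flow


class DestinationAccessNode:
    """Performs grant scheduling in response to FCP request messages."""

    def __init__(self, capacity_bytes: int):
        self.available = capacity_bytes

    def on_request(self, req: FcpRequest) -> FcpGrant:
        # Reserve no more than what is currently available; a real scheduler
        # would weigh all pending requests (see the grant-scheduling sketch
        # after claim 12).
        granted = min(req.requested_bytes, self.available)
        self.available -= granted
        return FcpGrant(req.flow_id, granted)


class SourceAccessNode:
    """Sends an FCP request and, on grant, sprays packets across parallel paths."""

    def __init__(self, paths: list[str]):
        self.paths = paths

    def transfer(self, dest: DestinationAccessNode, flow_id: int, data: bytes,
                 mtu: int = 1500) -> None:
        grant = dest.on_request(FcpRequest(flow_id, len(data)))
        sent, seq = 0, 0
        # Spray the granted amount across the parallel paths, one packet at a time.
        while sent < grant.granted_bytes:
            chunk = data[sent:sent + min(mtu, grant.granted_bytes - sent)]
            path = self.paths[seq % len(self.paths)]
            print(f"flow {flow_id} seq {seq}: {len(chunk)} bytes via {path}")
            sent += len(chunk)
            seq += 1


if __name__ == "__main__":
    dest = DestinationAccessNode(capacity_bytes=1_000_000)
    src = SourceAccessNode(paths=["core-0", "core-1", "core-2", "core-3"])
    src.transfer(dest, flow_id=1, data=b"x" * 6000)
```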

2. The network system of claim 1, wherein the source access node has full mesh connectivity to a subset of the access nodes included in a logical rack as a first-level network fanout, and wherein the source access node is configured to spray the FCP packets of the packet flow across the first-level network fanout to the subset of the access nodes included in the logical rack.

3. The network system of claim 2, wherein each of the access nodes has full mesh connectivity to the subset of the core switches as a second-level network fanout, and wherein each of the subset of the access nodes included in the logical rack is configured to spray the FCP packets of the packet flow across the second-level network fanout to the subset of the core switches.
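The two-level fanout of claims 2 and 3 can be pictured as the Cartesian product of the rack's access nodes and the core switches, with each pair forming one parallel path available for spraying. The node and switch names below are made up for illustration.

```python
import itertools

# Hypothetical topology: four access nodes in the source logical rack
# (first-level fanout) and four core switches reachable from each of them
# (second-level fanout). Names are illustrative only.
rack_access_nodes = ["an-0", "an-1", "an-2", "an-3"]
core_switches = ["core-0", "core-1", "core-2", "core-3"]

# Full mesh at both levels means every (access node, core switch) pair is a
# usable parallel path, so packets of one flow can be sprayed over all of them.
parallel_paths = list(itertools.product(rack_access_nodes, core_switches))

packets = [f"pkt-{i}" for i in range(8)]
for i, pkt in enumerate(packets):
    first_hop, second_hop = parallel_paths[i % len(parallel_paths)]
    print(f"{pkt}: source -> {first_hop} (first level) -> {second_hop} (second level)")
```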

4. The network system of claim 1, wherein, to spray the FCP packets in accordance with the reserved bandwidth, the source access node is configured to spray the FCP packets of the packet flow until an amount of data that is less than or equal to the reserved bandwidth for the packet flow is sent, stopping at a packet boundary.
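A minimal sketch of the stopping rule in claim 4, assuming a byte-granular reservation: whole packets are sent only while the running total stays within the grant. The function name is hypothetical.

```python
def spray_within_grant(packet_sizes: list[int], reserved_bytes: int) -> list[int]:
    """Return the sizes of the packets actually sent: whole packets only,
    and never more in total than the reserved amount."""
    sent, total = [], 0
    for size in packet_sizes:
        if total + size > reserved_bytes:
            break                      # stop at a packet boundary
        sent.append(size)
        total += size
    return sent


# Example: with a 4000-byte reservation only the first two 1500-byte packets
# go out; the third would overshoot the grant, so spraying stops before it.
print(spray_within_grant([1500, 1500, 1500, 1500], reserved_bytes=4000))
```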

5. The network system of claim 1, wherein, to spray the FCP packets of the packet flow across the plurality of parallel data paths, the source access node is configured to spray the FCP packets of the packet flow by directing each of the FCP packets to a least loaded one of the parallel data paths selected based on a byte count per path.
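Claim 5's least-loaded selection can be sketched with a simple byte counter per path; the path names below are illustrative.

```python
# Hypothetical per-path byte counters; path names are illustrative.
bytes_per_path = {"path-0": 0, "path-1": 0, "path-2": 0, "path-3": 0}

def pick_least_loaded(counters: dict[str, int]) -> str:
    """Select the parallel path with the smallest byte count so far."""
    return min(counters, key=counters.get)

# Large packets naturally steer later packets toward the less-loaded paths.
for size in [1500, 1500, 9000, 1500, 1500]:
    path = pick_least_loaded(bytes_per_path)
    bytes_per_path[path] += size
    print(f"{size:5d} bytes -> {path} (path total now {bytes_per_path[path]})")
```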

6. The network system of claim 1, wherein, to spray the FCP packets of the packet flow across the plurality of parallel data paths, the source access node is configured to spray the FCP packets of the packet flow by directing each of the packets to a randomly, pseudo-randomly, or round-robin selected one of the parallel data paths.

7. The network system of claim 1, wherein, to spray the FCP packets of the packet flow across the plurality of parallel data paths, the source access node is configured to spray the FCP packets of the packet flow by directing each of the packets to a weighted randomly selected one of the parallel data paths in proportion to available bandwidth in the switch fabric.
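Claims 6 and 7 cover random, pseudo-random, round-robin, and bandwidth-weighted random path selection. The sketch below shows the weighted variant of claim 7, assuming per-path available-bandwidth figures that are purely illustrative.

```python
import random

# Hypothetical available bandwidth per path (Gbps); values are illustrative.
available_gbps = {"path-0": 25, "path-1": 25, "path-2": 10, "path-3": 40}

def pick_weighted_random(bandwidth: dict[str, float]) -> str:
    """Choose a path at random, weighted in proportion to its available
    bandwidth in the fabric; random.choices handles the weighting."""
    paths = list(bandwidth)
    return random.choices(paths, weights=[bandwidth[p] for p in paths], k=1)[0]

# A degraded or congested path (path-2 here) attracts proportionally fewer packets.
counts = {p: 0 for p in available_gbps}
for _ in range(10_000):
    counts[pick_weighted_random(available_gbps)] += 1
print(counts)
```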

8. The network system of claim 1, wherein, to spray the FCP packets of the packet flow across the plurality of parallel data paths, the source access node is configured to randomly set a different user datagram protocol (UDP) source port in a UDP portion of a header for each of the FCP packets of the packet flow, wherein the plurality of core switches compute a hash of N-fields from the UDP portion of the header for each of the FCP packets and, based on the randomly set UDP source port for each of the FCP packets, select one of the parallel data paths on which to spray the FCP packet.
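Claim 8 relies on hash-based path selection in the core switches: because the source randomizes the UDP source port per packet, the switches' N-field hash spreads one flow over many paths. The sketch below assumes a 5-tuple as the hashed fields and SHA-256 as the hash function; the claim specifies neither.

```python
import hashlib
import random

NUM_PATHS = 4

def core_switch_path(src_ip: str, dst_ip: str, src_port: int, dst_port: int,
                     proto: int = 17) -> int:
    """Hash N header fields (a 5-tuple here, by assumption) and map the result
    onto one of the parallel data paths, as a core switch would."""
    key = f"{src_ip},{dst_ip},{src_port},{dst_port},{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PATHS

# Because the source access node randomizes the UDP source port per FCP packet,
# consecutive packets of the same flow hash onto different paths.
for seq in range(6):
    sport = random.randint(1024, 65535)
    path = core_switch_path("10.0.0.1", "10.0.1.1", sport, 9000)
    print(f"seq {seq}: UDP source port {sport:5d} -> path {path}")
```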

9. The network system of claim 1, wherein the source access node is configured to, in response to receipt of the FCP grant message, perform FCP packet segmentation including encapsulation of one or more outbound packets of the packet flow within payloads of the FCP packets.

10. The network system of claim 1, wherein, in response to receiving the FCP packets of the packet flow, the destination access node is configured to reorder the FCP packets into an original sequence of the packet flow prior to delivering the data transferred in the packet flow to the destination server.

11. The network system of claim 10, wherein the source access node assigns a packet sequence number to each of the FCP packets of the packet flow, and wherein the destination access node reorders the FCP packets based on the packet sequence number of each of the FCP packets.
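The reordering described in claims 10 and 11 amounts to buffering out-of-order arrivals until the next expected sequence number is present. A minimal sketch, with a hypothetical Reorderer class:

```python
import heapq

class Reorderer:
    """Restore the original packet order at the destination access node using
    the per-packet sequence number assigned by the source."""

    def __init__(self):
        self.next_seq = 0
        self.pending = []          # min-heap of (seq, payload)

    def receive(self, seq: int, payload: bytes) -> list[bytes]:
        heapq.heappush(self.pending, (seq, payload))
        in_order = []
        # Release packets only while the next expected sequence number is present.
        while self.pending and self.pending[0][0] == self.next_seq:
            in_order.append(heapq.heappop(self.pending)[1])
            self.next_seq += 1
        return in_order

# Packets sprayed over different paths may arrive out of order.
r = Reorderer()
for seq in [0, 2, 3, 1, 4]:
    delivered = r.receive(seq, f"payload-{seq}".encode())
    print(f"got seq {seq}, deliver now: {[p.decode() for p in delivered]}")
```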

12. The network system of claim 1, wherein the source access node indicates a flow weight indicating a number of packet flows for which bandwidth is requested in the FCP request message, and wherein the destination access node performs fair bandwidth distribution during grant scheduling based on the flow weight.
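One way to read claim 12's fair distribution is a weighted split of the destination's egress capacity across requesting sources, where each source's weight is the number of flows it aggregates. The sketch below is an assumption of that reading; unused shares are not redistributed here.

```python
def schedule_grants(requests: dict[str, tuple[int, int]],
                    egress_capacity: int) -> dict[str, int]:
    """Split the destination's egress capacity among requesting source access
    nodes in proportion to their flow weights, never granting more than a
    source actually asked for.

    requests maps source name -> (requested_bytes, flow_weight)."""
    total_weight = sum(weight for _, weight in requests.values())
    grants = {}
    for src, (requested, weight) in requests.items():
        fair_share = egress_capacity * weight // total_weight
        grants[src] = min(requested, fair_share)
    return grants

# Source B aggregates four flows, so it receives four times A's share of the
# contended capacity (subject to what it asked for).
print(schedule_grants({"src-A": (100_000, 1), "src-B": (100_000, 4)},
                      egress_capacity=50_000))
```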

13. The network system of claim 1, wherein the destination access node computes a scale down factor for the source access node based on a global view of packet flows in the switch fabric, and indicates the scale down factor in the FCP grant message, and wherein the source access node adjusts an FCP request window based on the scale down factor.
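Claim 13 leaves the scale-down computation and the window adjustment unspecified. The sketch below assumes a simple reading: the destination reduces its global view to the number of active senders, and the source divides its request window by that factor. All names and the multiplicative update are hypothetical.

```python
class RequestWindow:
    """Source-side FCP request window, shrunk by the scale-down factor that the
    destination carries in its grant messages. Illustrative only."""

    def __init__(self, max_outstanding_bytes: int):
        self.max_window = max_outstanding_bytes
        self.window = max_outstanding_bytes

    def on_grant(self, scale_down: float) -> None:
        # A scale_down of 1.0 means no contention was observed at the
        # destination; larger values shrink how much the source may request.
        self.window = max(1, int(self.max_window / max(scale_down, 1.0)))


def compute_scale_down(active_senders: int) -> float:
    # The destination's "global view" reduced to a single number here: with N
    # senders competing for one egress port, each should scale back N-fold.
    return float(max(active_senders, 1))


win = RequestWindow(max_outstanding_bytes=256 * 1024)
for senders in [1, 4, 2]:
    win.on_grant(compute_scale_down(senders))
    print(f"{senders} active senders -> request window {win.window} bytes")
```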

14. The network system of claim 1, wherein the source access node performs adaptive rate control of FCP request messages based on detected failures in the switch fabric, and wherein the destination access node performs adaptive rate control of FCP grant messages based on detected failures in the switch fabric.

15. The network system of claim 1, wherein the source access node and the destination access node are each configured to authenticate and not encrypt FCP headers of FCP data packets and FCP control packets including FCP request messages and FCP grant messages, wherein the FCP headers include FCP sequence numbers.
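Claim 15 calls for FCP headers that are authenticated but left unencrypted, so fields such as the FCP sequence number remain readable in flight. A sketch of that property using HMAC-SHA256 over a made-up 8-byte header layout; neither the layout nor the truncated tag length comes from the patent.

```python
import hashlib
import hmac
import struct

KEY = b"example-shared-key"   # placeholder; key management is out of scope here

def protect_header(seq: int, flow_id: int) -> bytes:
    """Build an FCP-style header that stays in the clear (so the peer can read
    the sequence number) but carries an authentication tag. The 8-byte layout
    and the 16-byte truncated HMAC are assumptions, not the patent's wire format."""
    header = struct.pack("!II", seq, flow_id)          # plaintext header fields
    tag = hmac.new(KEY, header, hashlib.sha256).digest()[:16]
    return header + tag

def verify_header(wire: bytes) -> tuple[int, int]:
    header, tag = wire[:8], wire[8:]
    expected = hmac.new(KEY, header, hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(tag, expected):
        raise ValueError("FCP header authentication failed")
    return struct.unpack("!II", header)

wire = protect_header(seq=42, flow_id=7)
print("header bytes in the clear:", wire[:8].hex())
print("verified (seq, flow_id):", verify_header(wire))
```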

16. The network system of claim 1, wherein the destination access node performs explicit congestion notification (ECN) marking of FCP packets based on a global view of packet flows in the switch fabric.

17. The network system of claim 1, wherein the source access node is further configured to send non-FCP packets on one of the plurality of parallel data paths in accordance with equal cost multi-path (ECMP) load balancing and any additional bandwidth left over after the reserved bandwidth for the FCP packets.

18. The network system of claim 1, wherein the source access node is further configured to spray unsolicited FCP packets across the plurality of parallel data paths in accordance with a configured amount of bandwidth, wherein the unsolicited FCP packets are used for low latency data traffic without the use of FCP request messages and FCP grant messages.

19. The network system of claim 1, wherein, based on end-to-end admission control mechanisms of the FCP and packet spraying in proportion to available bandwidth, the switch fabric comprises a drop-free fabric at high efficiency without use of link level flow control.

20. A method comprising:

establishing a logical tunnel over a plurality of parallel data paths between a source access node and a destination access node within a computer network, wherein the source and destination access nodes are respectively coupled to one or more servers, wherein the source and destination access nodes are connected by an intermediate network comprising a switch fabric having a plurality of core switches, and wherein the source and destination access nodes are each executing a fabric control protocol (FCP);

sending, by the source access node, an FCP request message for an amount of data to be transferred in a packet flow from a source server coupled to the source access node to a destination server coupled to the destination access node; and

in response to receipt of an FCP grant message indicating an amount of bandwidth reserved for the packet flow, forwarding FCP packets of the packet flow by spraying, by the source access node, the FCP packets across the plurality of parallel data paths in accordance with the reserved bandwidth.

21. The method of claim 20, further comprising:

receiving, by the source access node, outbound packets of the packet flow from the source server and directed to the destination server; and

in response to receipt of the FCP grant message, performing, by the source access node, FCP packet segmentation including encapsulating one or more of the outbound packets within payloads of the FCP packets.

22. The method of claim 20, wherein spraying the FCP packets in accordance with the reserved bandwidth comprises spraying, by the source access node, the FCP packets of the packet flow until an amount of data that is less than or equal to the reserved bandwidth for the packet flow is sent, stopping at a packet boundary.

23. The method of claim 20, further comprising sending, by the source access node, non-FCP packets on one of the plurality of parallel data paths in accordance with equal cost multi-path (ECMP) load balancing and any additional bandwidth left over after the reserved bandwidth for the FCP packets.

24. The method of claim 20, further comprising forwarding unsolicited FCP packets by spraying, by the source access node, the unsolicited FCP packets across the plurality of parallel data paths in accordance with a configured amount of bandwidth, wherein the unsolicited FCP packets are used for low latency data traffic without the use of FCP request messages and FCP grant messages.

25. The method of claim 20, wherein the source access node has full mesh connectivity to a subset of the access nodes included in a logical rack as a first-level network fanout, the method further comprising spraying, by the source access node, the FCP packets of the packet flow across the first-level network fanout to the subset of the access nodes included in the logical rack.

26. The method of claim 25, wherein each of the access nodes has full mesh connectivity to the subset of the core switches as a second-level network fanout, the method further comprising spraying, by each of the subset of the access nodes included in the logical rack, the FCP packets of the packet flow across the second-level network fanout to the subset of the core switches.

27. A method comprising:

establishing a logical tunnel over a plurality of parallel paths between a source access node and a destination access node within a computer network, wherein the source and destination access nodes are respectively coupled to one or more servers, wherein the source and destination access nodes are connected by an intermediate network comprising a switch fabric having a plurality of core switches, and wherein the source and destination access nodes are each executing a fabric control protocol (FCP);

in response to receipt of an FCP request message for an amount of data to be transferred in a packet flow from a source server coupled to the source access node to a destination server coupled to the destination access node, performing, by the destination access node, grant scheduling;

sending, by the destination access node, an FCP grant message indicating an amount of bandwidth reserved for the packet flow; and

in response to receiving FCP packets of the packet flow, delivering, by the destination access node, the data transferred in the packet flow to the destination server.

28. The method of claim 27, further comprising, in response to receiving FCP packets of the packet flow:

reordering the FCP packets into an original sequence of the packet flow;

extracting outbound packets of the packet flow from the reordered FCP packets; and

delivering the outbound packets to the destination server.

29. The method of claim 28, wherein reordering the FCP packets comprises reordering the FCP packets based on a packet sequence number assigned to each of the FCP packets by the source access node.