WO2007009228 - A METHOD TO EXTEND THE PHYSICAL REACH OF AN INFINIBAND NETWORK

Note: Text based on automatic Optical Character Recognition processes. Please use the PDF version for legal matters

THE EMBODIMENTS OF THE INVENTION FOR WHICH WE CLAIM AN EXCLUSIVE PROPERTY OR PRIVILEGE ARE DEFINED AS FOLLOWS:

1. A method of carrying InfiniBand packets over a long distance connection, comprising:
encapsulating InfiniBand packets within another protocol;
transmitting the encapsulated packets over the long distance connection (WAN);
de-encapsulating the InfiniBand packets by removing the encapsulation and recovering the InfiniBand packets;
maintaining an InfiniBand physical link state machine over the WAN; and
maintaining an InfiniBand style flow control over the WAN.
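For illustration only, the encapsulate/transmit/de-encapsulate steps of claim 1 can be sketched as a round trip; the 4-byte length header and the function names here are assumptions for the sketch, not the claimed protocol, which may be any OSI layer 1 - 4 encapsulation:

```python
import struct

def encapsulate(ib_packet: bytes) -> bytes:
    # Illustrative outer framing: a 4-byte big-endian length field.
    # A real implementation would use an OSI layer 1-4 protocol header.
    return struct.pack(">I", len(ib_packet)) + ib_packet

def de_encapsulate(wan_frame: bytes) -> bytes:
    # Remove the encapsulation and recover the original InfiniBand packet.
    (length,) = struct.unpack(">I", wan_frame[:4])
    return wan_frame[4:4 + length]

ib_packet = b"\x00" * 64  # stand-in for an InfiniBand packet
assert de_encapsulate(encapsulate(ib_packet)) == ib_packet
```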

2. The method of claim 1 wherein the protocol is a protocol of the OSI 7 Layer reference model selected from the group consisting of layer 1, layer 2, layer 3 and layer 4.

3. The method of claim 1 or 2, wherein transmitting encapsulated InfiniBand further comprises extending an InfiniBand link distance over the WAN to distances greater than about 100 km.

4. The method of claim 3 wherein increasing link distance further comprises increasing the InfiniBand credit advertised on the link beyond about 12KiB per VL.

5. The method of claim 4 wherein increasing the available credit comprises increasing the number of bytes per advertised credit block.

6. The method of claim 4 wherein increasing the available credit comprises increasing the number of credit blocks per advertisement.

7. The method of claim 4 wherein increasing the available credit comprises both increasing the number of credit blocks and the number of bytes per block in each advertisement.
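For illustration, the credit arithmetic of claims 4 - 7 amounts to a simple product; the baseline factorization of 192 blocks of 64 bytes is an assumption chosen to reach the approximately 12 KiB per-VL figure of claim 4:

```python
def advertised_credit(blocks_per_advertisement: int, bytes_per_block: int) -> int:
    # Total credit advertised for one VL, in bytes.
    return blocks_per_advertisement * bytes_per_block

baseline     = advertised_credit(192, 64)      # ~12 KiB per VL (claim 4's limit)
bigger_blocks = advertised_credit(192, 1024)   # claim 5: more bytes per credit block
more_blocks  = advertised_credit(4096, 64)     # claim 6: more blocks per advertisement
both         = advertised_credit(4096, 1024)   # claim 7: both increased together

assert baseline == 12 * 1024
assert bigger_blocks > baseline and more_blocks > baseline and both > more_blocks
```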

8. The method of any one of claims 1 - 7 wherein maintaining the flow control semantics comprises the sending unit choosing the egress VL at the receiving unit for the de-encapsulated InfiniBand packets.

9. The method of any one of claims 1 - 8 wherein maintaining the InfiniBand physical link state machine further comprises exchanging non-InfiniBand packets across the WAN; wherein doing so establishes that an end-to-end path exists in the WAN, the exchanging of packets being selected from the group consisting of PPP LCP packets, Ethernet ARP exchanges, TCP session initializations, and establishing ATM SVCs.

10. The method of any one of claims 1 - 9 wherein maintaining InfiniBand style flow control further comprises buffering packets received on the WAN port in a buffer memory that exceeds 128KiB.
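The buffer size of claim 10 follows from the bandwidth-delay product of a long link: credits advertised at one end are not replenished until an acknowledgement crosses the WAN and returns. A sketch, assuming a fiber propagation speed of roughly 2e8 m/s (an assumption, not a figure from the claims):

```python
def required_buffer_bytes(link_rate_Bps: float, distance_m: float,
                          propagation_mps: float = 2.0e8) -> float:
    # Bandwidth-delay product over the round trip: in-flight data that must
    # be buffered to keep a credit-based link running at full rate.
    rtt_s = 2 * distance_m / propagation_mps
    return link_rate_Bps * rtt_s

# 1 gigabyte/s (claim 12) over a 100 km link (claim 3): roughly 1 MB of
# in-flight data, well beyond the 128 KiB threshold of claim 10.
needed = required_buffer_bytes(1e9, 100e3)
assert abs(needed - 1_000_000) < 1.0
assert needed > 128 * 1024
```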

11. An apparatus for carrying InfiniBand packets consisting of logic circuits, comprising:
an InfiniBand interface coupled to an InfiniBand routing and QOS component, wherein
the InfiniBand routing and QOS block's InfiniBand to WAN path is coupled to an encapsulation/de-encapsulation component (ENCAP);
the ENCAP component's IB to WAN path is coupled to a WAN interface;
the WAN interface's WAN to IB path is coupled to an ENCAP component;
the ENCAP component's WAN to IB path is coupled to a Bulk Buffer Memory;
the Bulk Buffer Memory is coupled to the WAN to IB path of an InfiniBand interface;
a Credit Management unit generates credits for the WAN and produces back pressure onto the InfiniBand interface;
the ENCAP component is coupled to the Credit Management Unit for encapsulating and de-encapsulating credit data; and
a Management block provides an InfiniBand Subnet Management Agent, WAN end-to-end negotiation and management services.

12. The apparatus of claim 11 wherein the apparatus can maintain transfer rates of about 1 gigabyte per second of InfiniBand packets simultaneously in each direction.

13. The apparatus of claim 11 or 12, wherein the InfiniBand interface contains additional flow control buffering units to transition from a WAN clock domain to an InfiniBand clock domain.

14. The apparatus of claim 11, 12 or 13 wherein the ENCAP component is capable of supporting a plurality of networks including any of IPv6, UDP in IPv6, DCCP in IPv6, ATM AAL5 or GFP.

15. The apparatus of any one of claims 11 - 14 wherein the WAN interface further comprises:
a framer unit capable of supporting a plurality of network formats, including any of SONET/SDH, 10GBASE-R, InfiniBand and 10GBASE-W; and
an optical subsystem capable of supporting any of SONET/SDH, 10GBASE-R, or InfiniBand.

16. The apparatus of claim 15 wherein the optical subsystem is further capable of reaching distances greater than specified by IBTA InfiniBand Architecture Release 1.2 alone or when coupled with other equipment such as SONET/SDH multiplexers, optical regenerators, packet routers, cell switches or otherwise.

17. The apparatus of any one of claims 11 - 16 wherein the bulk buffer memory can take packets out of the plurality of FIFO structures in an order different from the order that the packets were received.

18. The apparatus of any one of claims 11 - 17 wherein the credit management unit advertises more credits than defined by the InfiniBand specification through increasing the credit block size and/or increasing the number of blocks per advertisement.

19. The apparatus of any one of claims 11 - 18 wherein the management block of claim 11 further comprises:
a general purpose processor; and
a mechanism to send and receive packets on both the WAN and IB interfaces.

20. The apparatus of any one of claims 11 - 19 wherein the bulk buffer memory further comprises:
a plurality of DDR2 memory modules (DIMMS);
wherein control logic maintains a plurality of FIFO structures within the DDR2 memory; and
wherein each FIFO structure is used to buffer a WAN to InfiniBand VL; and
wherein the packet flow out of the memory is regulated to ensure that no packets are discarded due to congestion at the InfiniBand interface.
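For illustration, the buffer organization of claim 20 (one FIFO per WAN-to-InfiniBand VL, with outflow regulated so nothing is dropped at the InfiniBand interface) can be sketched as follows; the class, method names and VL count are assumptions of the sketch, not claimed structure:

```python
from collections import deque

class BulkBufferMemory:
    """Sketch of per-VL FIFO buffering for packets arriving from the WAN."""

    def __init__(self, num_vls=16):
        # One FIFO structure per WAN-to-InfiniBand virtual lane.
        self.fifos = [deque() for _ in range(num_vls)]

    def enqueue(self, vl, packet):
        self.fifos[vl].append(packet)

    def dequeue(self, vl, ib_port_has_credit):
        # Outflow is regulated: a packet leaves only when the InfiniBand
        # interface can accept it, so no packet is discarded to congestion.
        if ib_port_has_credit and self.fifos[vl]:
            return self.fifos[vl].popleft()
        return None

buf = BulkBufferMemory()
buf.enqueue(3, b"pkt")
assert buf.dequeue(3, ib_port_has_credit=False) is None  # held back, not dropped
assert buf.dequeue(3, ib_port_has_credit=True) == b"pkt"
```

Because each VL has its own FIFO, packets may leave the memory in a different order than they arrived overall, as claim 17 describes, while order within a VL is preserved.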

21. The apparatus of claim 11 for maintaining a maximum transfer rate of 1 gigabyte per second of InfiniBand packets simultaneously in each direction, further comprising: additional flow control buffering units to transition from a WAN clock domain to an IB clock domain;
wherein the ENCAP component is capable of supporting a plurality of networks including any of IPv6, UDP in IPv6, DCCP in IPv6, ATM AAL5 or GFP;
a framer unit capable of supporting a plurality of network formats, including any of SONET/SDH, 10GBASE-R, InfiniBand and 10GBASE-W;
an optical subsystem capable of supporting any of SONET/SDH, 10GBASE-R, or InfiniBand;
wherein the optical subsystem is further capable of reaching distances greater than specified by IBTA InfiniBand Architecture Release 1.2 alone or when coupled with other equipment such as SONET/SDH multiplexers, optical regenerators, packet routers, cell switches or otherwise;
wherein the Bulk Buffer Memory can take packets out of the plurality of FIFO structures in an order different from the order that the packets were received and wherein the Bulk Buffer Memory further comprises:
a plurality of DDR2 memory modules (DIMMS);
wherein control logic maintains a plurality of FIFO structures within the DDR2 memory; and
wherein each FIFO structure is used to buffer a WAN to InfiniBand VL; and
wherein the packet flow out of the memory is regulated to ensure that no packets are discarded due to congestion at the InfiniBand interface, wherein the Credit Management unit advertises more credits than defined by the InfiniBand specification through increasing the credit block size and/or increasing the number of blocks per advertisement; and
wherein the Management block further comprises:
a general purpose processor; and
a mechanism to send and receive packets on both the WAN and IB interfaces.

22. The apparatus of any one of claims 11 - 21 wherein the ENCAP component performs a null encapsulation and emits the InfiniBand packets unchanged.

23. The apparatus of any one of claims 11 - 22 wherein the bulk buffer memory further comprises:
a plurality of SRAM memory chips;
wherein control logic maintains a plurality of FIFO structures within the QDR memory; and
wherein each FIFO structure is used to buffer a WAN to InfiniBand VL; and
wherein the packet flow out of the memory is regulated to ensure that no packets are discarded due to congestion at the InfiniBand interface.

24. The apparatus of claim 23 wherein the SRAM memory chips are QDR2 SRAM.

25. The apparatus of claim 21,
wherein the bulk buffer memory comprises a plurality of SRAM memory chips; and
wherein control logic maintains a plurality of FIFO structures within the QDR memory; and
wherein each FIFO structure is used to buffer a WAN to InfiniBand VL; and
wherein the packet flow out of the memory is regulated to ensure that no packets are discarded due to congestion at the InfiniBand interface.

26. The apparatus of any one of claims 11 - 25 wherein the InfiniBand packets are placed within the payload structure of IPv6 packets.

27. The apparatus of any one of claims 11 - 26 wherein the credit data is encoded in extension headers within the IPv6 header.

28. The apparatus of claim 27 wherein the credit data is encoded in extension headers within the IPv6 header.

29. The apparatus of any one of claims 11 - 28:
wherein the ENCAP component frames the InfiniBand packets in a manner that is compatible with the 66/64b coding scheme defined by IEEE802.3ae clause 49; and
wherein the ENCAP component can remove the clause 49 compatible framing and recover the original InfiniBand packet.

30. The apparatus of claim 29 wherein the credit data is encoded in ordered sets in the 66/64b code.
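For illustration, the framing of claims 29 - 30 can be sketched in greatly simplified form: each 64-bit block gains a 2-bit sync header, giving the 66-bit blocks of the 66/64b code. Real IEEE 802.3ae clause 49 coding also scrambles the payload and defines control-block formats and ordered sets; all of that is omitted here, so this is a sketch of the block structure only:

```python
def frame_66_64b(payload64: bytes) -> str:
    # Prefix one 8-byte (64-bit) block with a 2-bit sync header
    # ("01" = all-data block in the 66/64b code).
    assert len(payload64) == 8
    bits = "".join(f"{b:08b}" for b in payload64)
    return "01" + bits

def deframe_66_64b(block66: str) -> bytes:
    # Strip the sync header and recover the original 8 payload bytes.
    assert block66[:2] in ("01", "10")  # the two valid sync headers
    bits = block66[2:]
    return bytes(int(bits[i:i + 8], 2) for i in range(0, 64, 8))

data = b"IBPACKET"
assert len(frame_66_64b(data)) == 66
assert deframe_66_64b(frame_66_64b(data)) == data
```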

31. The apparatus of any one of claims 11 - 30 wherein the InfiniBand packets are placed within the payload structure of UDP or DCCP datagrams carried within IPv6 or IPv4 packets.

32. The apparatus of any one of claims 11 - 30 wherein the InfiniBand packets are segmented into ATM Cells according to the ATM Adaptation Layer 5 (AAL5).

33. The apparatus of any one of claims 11 - 30 wherein the InfiniBand packets are placed within the payload structure of a Generic Framing Protocol packet and placed within a SONET/SDH frame.

34. The apparatus of any one of claims 11 - 33 wherein the credit data is encoded in the payload structure of the encapsulation.

35. A system, comprising:
a first InfiniBand fabric coupled to a first device;
a first device coupled to a second device;
a second device coupled to a second InfiniBand fabric;
wherein the first and second devices are further comprised of:
logic circuitry to encapsulate and de-encapsulate InfiniBand packets into another network protocol; and
logic circuitry to buffer the InfiniBand packets; and
a network interface that carries the encapsulated InfiniBand packets.

36. The system of claim 35 wherein the first device and second device are further indirectly coupled over an extended WAN network, the extended WAN network comprising one or more of SONET/SDH multiplexers, optical regenerators, packet routers, and cell switches.

37. The system of claim 35 or 36 wherein the flow rate of packets into the ENCAP component may be limited by the device based upon conditions within the network or administrative configuration to a rate less than or equal to the maximum rate possible.

38. The system of claim 35, 36 or 37 wherein the system further comprises:
a packet or cell switched or routed network exists between the two devices;

wherein more than two devices can be connected to this network; and
wherein each end device can encapsulate and address packets to more than one destination device.

39. The system of any one of claims 35 - 38 further comprising the apparatus of claim 21.

40. The system of any one of claims 35 - 38 further comprising the apparatus of claim 25.

41. The system of any one of claims 35 - 38 further comprising:
two InfiniBand fabrics having disjoint LID address spaces and different subnet prefixes;
a packet routing component integrated into the devices wherein:
logic circuitry determines the LID address of a given InfiniBand packet by examining the destination GID in the GRH; and
wherein logic circuitry can replace the LID, SL, VL or other components of the InfiniBand packet using information from the GRH.