Earthnet google
Unhappy with the cost per gigabit of bandwidth with current Ethernet switches and adapters, two of the cloud computing giants – Google and Microsoft – have teamed up with two switch chip providers – Broadcom and Mellanox Technologies – and one upstart switch maker – Arista Networks – to create a specification for Ethernet speeds that are different from those officially sanctioned by the IEEE. To get better port density and lower costs, the cloud providers want Ethernet to run at 25 Gb/sec and 50 Gb/sec inside the rack rather than the 10 Gb/sec, 40 Gb/sec, and 100 Gb/sec speeds that are currently available.

With the exception of the initial 3 Mb/sec and 10 Mb/sec speeds set back in the early 1980s, when Ethernet was just starting to be commercialized after running in the Xerox PARC labs for nearly a decade, Ethernet speeds have been created by the group of networking vendors participating in the IEEE. In March, the IEEE held a meeting in China and Microsoft put out what is referred to as a "call for interest," or CFI, with the idea of establishing a 25 Gb/sec Ethernet standard with a 50 Gb/sec speed bump for certain applications. But cloud builders like Google and Microsoft have compelling reasons to want such speeds for their Ethernet, and so they are going to form a consortium of their own, called the 25 Gigabit Ethernet Consortium of course, and make it happen, while at the same time adhering to the IEEE 802.3 specification that governs Ethernet.

Anshul Sadana, senior vice president of customer engineering at Arista Networks, explained the situation to EnterpriseTech. With a 40 Gb/sec switch, the current Ethernet specification calls for four lanes of traffic coming off the serializer/deserializer (SerDes) chips running at 10 Gb/sec speeds. (Actually, because of encoding overhead, Sadana explained, they run at 11.25 Gb/sec, but this is not how people talk about it.) On a 100 Gb/sec link, there are two ways to get there: ten lanes running at 10 Gb/sec speeds or four lanes at 25 Gb/sec speeds. The use of these parallel links (running at 10 Gb/sec or 25 Gb/sec) leads to some design choices in both network interface cards and in switches that the cloud providers say do not align with their needs. While they are always happy to have more bandwidth, they are not willing to get it at a higher cost or with lower port density on their switches.

"SerDes have evolved from 1 Gb/sec to 10 Gb/sec to 25 Gb/sec, and they are all aligned to achieve the IEEE speeds," says Sadana. "But parallel lanes are not the most cost optimized, especially for the datacenter, where you can put a large number of servers in a rack and you need the right uplinks for those servers."

So, for instance, in a standard rack today, a top-of-rack switch running at 10 Gb/sec might have 48 downlink ports to servers with maybe four or eight uplinks to the aggregation layer of the network. But if you move to 40 Gb/sec downlinks to the servers, switches typically only have 32 or 36 ports – and that is not enough to cover the machines in the rack, so you end up having to buy two 40 Gb/sec switches and living with orphaned ports. "With 40 Gb/sec, if you need four lanes, there are more elements on the switch chip, there is more power drawn, and this results in a lower port density compared to what could be achieved with single-lane devices," says Sadana.

While the cost per gigabit has come down over time as switches have evolved from 1 Gb/sec to 10 Gb/sec to 40 Gb/sec, Sadana says that a single lane of 25 Gb/sec uses one pair of wires and is the sweet spot in terms of the lowest cost per gigabit.
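To make the lane arithmetic above concrete, here is a minimal Python sketch. It assumes only the figures quoted in the article (four 10 Gb/sec lanes for 40 Gb/sec, ten 10 Gb/sec or four 25 Gb/sec lanes for 100 Gb/sec, and the roughly 11.25 Gb/sec signaling rate Sadana cites); the link_speed helper is made up for illustration and is not part of any product or standard.

    # Lane arithmetic from the article: a link's nominal speed is the number of
    # SerDes lanes times the nominal per-lane rate. The 11.25 Gb/sec figure is
    # the signaling rate Sadana cites for a nominal 10 Gb/sec lane.

    def link_speed(lanes, lane_rate_gbps):
        """Nominal Ethernet link speed as lanes x per-lane rate, in Gb/sec."""
        return lanes * lane_rate_gbps

    # 40 Gb/sec today: four lanes at a nominal 10 Gb/sec each.
    assert link_speed(4, 10) == 40

    # Two ways to reach 100 Gb/sec: ten 10 Gb/sec lanes or four 25 Gb/sec lanes.
    assert link_speed(10, 10) == 100
    assert link_speed(4, 25) == 100

    # Encoding overhead: a "10 Gb/sec" lane actually signals at about 11.25 Gb/sec.
    wire_rate_gbps = 11.25
    overhead = (wire_rate_gbps - 10) / 10
    print(f"Signaling overhead on a nominal 10 Gb/sec lane: {overhead:.1%}")  # 12.5%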

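The rack-level trade-off can be sketched the same way. The port counts below (a 48-port 10 Gb/sec top-of-rack switch versus a 32-port 40 Gb/sec switch, plus a hypothetical 48-port 25 Gb/sec box) follow the article; the switch prices are invented placeholders, there only to show how extra switches and orphaned ports feed into cost per gigabit.

    # Rough port-density comparison for a 48-server rack. Port counts follow the
    # article; prices are hypothetical placeholders for illustration only.
    servers_per_rack = 48  # one downlink per server

    # name: (ports per switch, Gb/sec per port, assumed price per switch in $)
    configs = {
        "10 Gb/sec ToR, 48 ports": (48, 10, 8_000),
        "40 Gb/sec ToR, 32 ports": (32, 40, 20_000),
        "25 Gb/sec ToR, 48 ports": (48, 25, 12_000),
    }

    for name, (ports, gbps, price) in configs.items():
        switches = -(-servers_per_rack // ports)        # ceiling division
        orphaned = switches * ports - servers_per_rack  # ports nobody uses
        cost_per_gbit = (switches * price) / (servers_per_rack * gbps)
        print(f"{name}: {switches} switch(es), {orphaned} orphaned ports, "
              f"${cost_per_gbit:.2f} per Gb/sec of server bandwidth")

Whatever prices you plug in, the 40 Gb/sec configuration needs a second switch and strands 16 ports in a 48-server rack, which is the effect Sadana describes.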