
Sunday, July 24, 2011

Yes, the math is misleading ...

I have had more people reading this than I anticipated, and a couple of good comments which I want to reply to properly, so I decided to make it another post ...


Mark, thank you for your comments. They really add to my post as they provide more insight and context, which was lacking in the interview: the hypothetical 3MW DC is for a cloud provider, built with L3 from the access layer.

And in that sense, I stand by my comment: the math presented in that interview is indeed misleading. I will explain below why I say this ... But first, to your comments :-)

1. When I mentioned "claim to support 384 10GE ports" I did not mean to imply that I doubt it is the reality. I am writing here based on my knowledge and experience only :-) I take your word for it, of course.
2. Good catch on the Arista 7050-64 power consumption. Believe me, it was not intentional. I have corrected it :-)
3. Agreed that cloud providers like standardization and could prefer to operationalize cabling with a pair of ToRs per rack. Again, the interview did not mention which kind of customer we are talking about ... Enterprise customers could think differently and would very much welcome racking ToRs every other rack, because it simplifies network operations by quite a bit (managing 84 devices vs 250; see the sketch after this list).
4. There was no "Cisco defense" because I do not take this as any attack :-) ... and I write here for the fun of it. I simply stated the fact that you picked platforms which don't allow a fair comparison. You say that Cisco's best-practice design is with the 5548 and FCoE ... Where did you read a paper from Cisco that recommends such a thing for an MSDP? ... However, an Enterprise datacenter with a high degree of virtualization will in many cases need to support storage targets on FC, and FCoE is a great solution to optimize the cost of deploying such infrastructure. L2 across the racks is also a common requirement in this case, not just for vMotion, but for supporting many clustered applications. L3 access simply (and sadly) does not apply in most Enterprise DCs ...
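
To put rough numbers on the operational point in item 3, here is a back-of-the-envelope sketch in Python. The rack count and the switch-to-rack ratio are assumptions on my part (chosen so the arithmetic reproduces the 84-vs-250 device counts quoted above), not figures from the interview:

```python
import math

# Hypothetical reconstruction -- the rack count and switch ratio are
# placeholders chosen to reproduce the 84-vs-250 figure, not numbers
# taken from the interview.
racks = 250

# Option A: an access switch in every rack (one managed ToR per rack).
devices_tor_per_rack = racks                                 # 250 devices

# Option B: denser access switches shared across neighboring racks;
# one switch per three racks happens to reproduce the quoted figure.
racks_per_access_switch = 3
devices_shared = math.ceil(racks / racks_per_access_switch)  # 84 devices

print(f"ToR per rack:  {devices_tor_per_rack} managed devices")
print(f"shared access: {devices_shared} managed devices")
```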

As I said, your comment brings insight, because the interview says nothing about what the hypothetical datacenter was for, or what kind of logical design you would use. No context at all. You say it is to be built using L3 at the access layer. I concur with you that the limitations I mentioned do not necessarily apply there, since the access switches won't need to learn that many ARP entries, and the distribution switches can work with route summarization from the access, so smaller tables could do the job.
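
As a rough illustration of why the L3-at-the-access design relaxes those table limits, here is a quick sketch of the scaling arithmetic. The 5,000-server and ~250-ToR counts are the ballpark figures from this discussion; the virtualization ratio and per-ToR subnetting are assumptions for illustration only:

```python
# Rough table-size arithmetic: L2 to the distribution vs L3 at the access.
# Server/ToR counts are ballpark figures from this discussion; the
# virtualization ratio and subnetting are assumed for illustration.

servers = 5000
tors = 250
vms_per_server = 10   # assumed ratio; virtualization raises table pressure

# L2 design: the distribution layer sees every host MAC/ARP entry directly.
l2_distribution_entries = servers * vms_per_server       # 50,000 entries

# L3 design: each ToR terminates its own subnet and advertises a summary,
# so the distribution/spine FIB needs roughly one prefix per ToR.
l3_spine_prefixes = tors                                 # ~250 prefixes
l3_tor_arp_entries = (servers // tors) * vms_per_server  # ~200 local entries

print(f"L2 distribution table: ~{l2_distribution_entries} host entries")
print(f"L3 spine FIB:          ~{l3_spine_prefixes} summarized prefixes")
print(f"L3 ToR ARP table:      ~{l3_tor_arp_entries} local entries")
```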

I am well aware that most MSDPs use L3 from the ToR layer. I am sure that you are also well aware that most Enterprise virtualized and non-virtualized datacenters do not use L3 from the ToR. A 5,000-server DC could be found in either market space. I am sure you are also well aware that Cisco's recommended design with Nexus 5500 and Nexus 7000 (also leveraging FCoE) is intended primarily for Enterprise datacenters, where it provides a lot of flexibility.

I still can't see what the topology looks like in your design anyway, with 12 spine switches. I just cannot see how to spread 16 uplinks per ToR/leaf across 12 spine switches and keep equal-cost paths ... My guess is that the design is made up of pairs: six pairs of 7500s, each pair aggregating about 40 ToRs, with 8 links from each ToR to each of the two 7500s. But if this is the case, each pair of 7500s has no connection to the other pairs unless a core layer is deployed, which perhaps is assumed in the example (but not factored into the power calculations??).
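
The divisibility problem is easy to check. A quick sketch, using only the numbers quoted above (12 spines, 16 uplinks per ToR, 8 links per 7500, ~40 ToRs per pair):

```python
# ECMP fan-out check: can each ToR's uplinks be spread evenly over the spines?
uplinks_per_tor = 16
spines = 12

links_per_spine, remainder = divmod(uplinks_per_tor, spines)
print(f"16 uplinks over 12 spines: {links_per_spine} per spine, "
      f"{remainder} left over -> uneven, breaks equal-cost balancing")

# The pairwise reading: six independent pairs of 7500s, 8 uplinks to each
# member of a pair, each pair aggregating ~40 ToRs (240 ToRs total).
pairs = 6
links_per_7500 = 8
tors_per_pair = 40
assert pairs * 2 == spines
assert links_per_7500 * 2 == uplinks_per_tor
print(f"six pairs: {pairs * tors_per_pair} ToRs total, "
      f"but no ToR-to-ToR path across pairs without an extra core layer")
```

Either reading leaves a gap: uneven ECMP on one hand, or an unaccounted-for core layer on the other.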

So again, I think your comment confirms my point, while also acknowledging that for non-L3 designs, the limitations I mentioned do apply.

What do I mean by saying you confirm my point? The math in the interview is presented without context, so it creates the perception that the comparison applies to any and every environment. In fact, it only applies in a specific environment (an L3 design), and completely reverses in others (an L2 design ... probably the more common one, btw). So you picked an area to your advantage, exaggerated it (by picking Cisco's most sub-optimal platform for such a design), and presented it as general. That is misleading.

To answer Doug ... Let me first say that I respect you a lot, and I have a great deal of respect for Arista. It is a company with a lot of talent. But talent lives in many places, including Cisco. My blog post was to state that the math is misleading. Nothing more. I still believe it is, as stated above. You say "Nillo had to significantly modify the design, away from the specification" ... what? :-) ... what specification? None was provided during the interview ... how could I change it?!

To your challenge, two things to say:

1. Can YOU spec a 3MW datacenter where you can do live vMotion between any two racks and allow any server to mount any storage target from any point in the network? :-D
2. I write here for the fun of it and in my free time ... oh ... and I choose what I write about ;-) (not really the "when", because my free time isn't always mine ;-) ).

... I thought nobody would care a bit about what I write ... wow ...

2 comments:

  1. Jayshree is speaking in generalities; there is no attempt to mislead. She clearly articulates the fact that power, space and cooling are critical, as are low oversubscription ratios and latency, in any large-scale high-performance data center. We can go back and forth about what platform and configuration would work best; in the end, Arista does build massive data centers with a much smaller power and space footprint than any other vendor.

    Typically when I have a question, I ask. Why didn’t you just ask what the design specs were for the data center Jayshree was talking about?

    If you or anyone out there has a question about Arista please send me an email and just ask, mark@aristanetworks.com…

  2. Mark, again, thanks for reading this and adding your comments. To answer your question: when I am told something, I ask if I need to. But this time I was reading something written for a general audience, so I think it is fair to just comment on what I read :-)

    On your statement above, I have already expressed my (honest) respect for Jayshree and Arista alike. I think competition, from new and established vendors alike, is great for the industry and particularly for customers. I think Arista can bring forward very compelling solutions in the areas you describe, but so can Cisco (and others).

    I also think that in DC designs which need to accommodate diversity and legacy integration, and which require flexibility in the types of workloads, storage and services, Cisco brings the best value today (power, space and performance taken into consideration).
