Basically, it is sending FC frames over Ethernet. This localizes the
traffic unless you route based on MAC addresses. So you send 2146 bytes
of FC frame plus 18 bytes of Ethernet overhead as an FCoE "standard"
packet. The 18 bytes of Ethernet get stripped and you have a straight FC
frame that can go through any FC network. Now you can have 10G Ethernet
pipes into existing FC SANs. Limited market potential as far as I can
see. The key argument is that it is much easier to implement than iSCSI,
has less overhead, and keeps all the benefits of FC. End-to-end credits
are simulated using the PAUSE command on Ethernet, and MAC addresses are
mapped into FC addresses. The biggest knock is that it will not route on
the "global" scale like TCP/IP would.
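To make the encapsulation arithmetic concrete, here is a toy Python
sketch using the numbers quoted above. Only the 2146/18 byte figures and
the registered FCoE EtherType are taken as given; the field layout is
schematic (a real FCoE frame also carries its own encapsulation header
with SOF/EOF markers, omitted here):

# Toy sketch of the FC-in-Ethernet encapsulation described above.
FC_MAX_FRAME = 2146        # bytes of FC frame carried per Ethernet frame
ETHERNET_OVERHEAD = 18     # dst MAC + src MAC + EtherType + FCS = 6+6+2+4

def encapsulate(fc_frame: bytes, dst_mac: bytes, src_mac: bytes) -> bytes:
    """Wrap an FC frame, untouched, in a minimal Ethernet envelope."""
    assert len(fc_frame) <= FC_MAX_FRAME
    ethertype = b"\x89\x06"            # EtherType registered for FCoE
    fcs = b"\x00\x00\x00\x00"          # placeholder frame check sequence
    return dst_mac + src_mac + ethertype + fc_frame + fcs

def decapsulate(ethernet_frame: bytes) -> bytes:
    """Strip the Ethernet envelope; what remains is the FC frame."""
    return ethernet_frame[14:-4]       # drop 14-byte header and 4-byte FCS

if __name__ == "__main__":
    fc_frame = bytes(FC_MAX_FRAME)     # dummy full-size FC frame
    wire = encapsulate(fc_frame, b"\xaa" * 6, b"\xbb" * 6)
    assert len(wire) == FC_MAX_FRAME + ETHERNET_OVERHEAD   # 2164 on the wire
    assert decapsulate(wire) == fc_frame                   # frame is intact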
----- Original Message -----
Sent: Wednesday, April 25, 2007 8:09
Subject: Re: FW: Recent comments about FCoE and iSCSI

"John Hufferd" <jhufferd@xxxxxxxxxxx> wrote:
> To be sure you understand our position; Brocade is pushing iSCSI as
> an outreach protocol from the Data Center. We also believe iSCSI is
> very useful for installations that do not have a Fibre Channel
> infrastructure, and in that case we will be able to sell them our new
> iSCSI and TOE offload offerings.
> When I say iSCSI is an outreach protocol, this is a statement that
> iSCSI is very important to connect "stranded" servers to the Fibre
> Channel Fabric. That is, we sell iSCSI-to-FC Gateway devices which
> will permit iSCSI Servers (software or HBA iSCSI initiators) to
> connect to the Enterprise "Bet Your Business" FC Storage. This of
> course also applies to Desktops and Laptop systems, and systems at
> distance.
You make it sound like:
- most of the servers in the world have their storage on the network -
  and that is not the case
- FCP is basically better performing than iSCSI - and that is not true
  either
- gatewaying is expensive - and it is perhaps so, but only if you are
  completely relying on FCP storage (and there are plenty of good iSCSI
  storage vendors), and pushing the price onto the servers is not cheap
  either - at least not for the server
> Now with that positioning, it is important to understand the
> limitation to this strategy. The primary problem is that iSCSI to FC
> Bridging (Gatewaying) is relatively expensive (compared to simple FC
> connections). Though we have some of the best priced Gateways on the
> market, it is not cost feasible to replace all the server connections
> to FC storage with iSCSI for the hundreds to thousands of servers in
> the Data Center. And so, if there is to be a single type of
> connection to the servers in the Data Center, there must be an
> evolutionary replacement of Server Connections to Storage. That means
> there must be a bridge/Gateway approach. And as I mentioned before,
> there is just too much cost in the iSCSI to FC Gateway.
> The issue is the server requirement to have a single connection type
> to handle cluster messaging, general messaging, and storage. iSCSI is
> clearly an option for the storage; however, the gateway costs are too
> high for iSCSI to be used as the "normal" server connect into an FC
> based Fabric. That is true for the current 1GE; and for 10GE the cost
> is just out of sight. The reason for this is the requirement for
> TCP/IP termination and re-initiation with FC at the Gateway.
> Now with respect to FC over Ethernet, the important thing to
> understand is that it is not Ethernet as we have known it up to now.
> The Ethernet we are talking about is a type of Ethernet that can only
> be deployed in a constrained environment such as a Data Center. This
> form of Ethernet is called DCE (Data Center Ethernet) or CEE
> (Convergence Enhanced Ethernet). This form of Ethernet is a Loss-less
> type with multi-priority and Flow Control. This is NOT an Internet or
> Intranet type of Ethernet.
> FCoE is all about using the DCE (CEE) to carry FC frames. The rest of
> the Host and storage stack remains the same; the functions and
> features of the switches also remain the same and add the capability
> to provide Cluster Message Switching which has latency close to
> InfiniBand. Because the FC frames are transported to the switches
> intact via a DCE frame, the Bridging, if you want to call it that, is
> virtually non existent. Hence you can deliver the FC frames to FC
> devices, or send FC frames to DCE FCoE devices, just like one would
> do if it was all FC. And all this is done while performing Cluster
> message switching and general message trunking to the IP network.
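For concreteness about the "Loss-less type with multi-priority and Flow
Control" claim: losslessness here means link-level flow control, i.e. a
receiver about to run out of buffer tells its neighbour to pause instead
of dropping frames, much like the PAUSE-for-credits mapping mentioned at
the top of this thread. A toy model of that difference follows; the
buffer sizes, rates and thresholds are invented for illustration only:

# Toy model: a link using receiver-driven pause (never drops) versus a
# plain drop-when-full link. All numbers are invented for illustration.
from collections import deque

class Receiver:
    def __init__(self, capacity: int, lossless: bool):
        self.buf = deque()
        self.capacity = capacity
        self.lossless = lossless
        self.dropped = 0

    @property
    def paused(self) -> bool:
        # Lossless link: signal "pause" when the buffer is nearly full.
        return self.lossless and len(self.buf) >= self.capacity - 2

    def accept(self, frame: int) -> None:
        if len(self.buf) < self.capacity:
            self.buf.append(frame)
        else:
            self.dropped += 1          # can only happen on the plain link

    def drain(self, n: int) -> None:
        for _ in range(min(n, len(self.buf))):
            self.buf.popleft()

def run(lossless: bool) -> int:
    rx = Receiver(capacity=8, lossless=lossless)
    sent = 0
    for _ in range(100):
        for _ in range(3):             # sender offers up to 3 frames per tick
            if rx.paused:
                break                  # hold the frame at the sender; no loss
            rx.accept(sent)
            sent += 1
        rx.drain(2)                    # receiver processes 2 frames per tick
    return rx.dropped

print("drops with pause-based flow control:", run(lossless=True))    # 0
print("drops on a plain drop-when-full link:", run(lossless=False))  # > 0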
All of this is in the future of the yet to appear DCE/CEE and a layer 2
only world.
First you have some terms confused:
Bridging is the term commonly used for Layer-2 switching, and routing
is the term used for Layer-3 (switching).
Bridging has some advantages (less management) that have created a
movement towards an enterprise wide LAN. But this has a long way to go
and will require significant equipment and protocol changes.
Its proponents do not call for transportless networks, nor for lossless
networks.
The second trouble with your argument is that there are no known large
scale networking technologies that really work at full speed (high
speed) and are lossless (flow-controlled) and errorless, as FCoE
assumes.
TCP/IP has solved this issue for every generation using the proven
end-to-end principle (and is doing so now). And it is not by chance;
that is why all networking applications are built above Layer-3 rather
than dropping Layer-3, as FCoE does.
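To spell the end-to-end point out with a toy example: endpoints can
recover from a lossy network by acknowledgement and retransmission, so
the fabric itself does not have to be lossless. The sketch below is a
stop-and-wait caricature of that idea (real TCP uses sequence numbers,
windows and adaptive timers); the 20% loss probability is invented:

# End-to-end recovery over a lossy link: the sender retransmits until
# the receiver acknowledges, so every message gets through even though
# the network drops frames.
import random

random.seed(1)
LOSS_PROBABILITY = 0.2                 # invented network loss rate

def lossy_send(msg):
    """The "network": delivers msg or silently drops it."""
    return msg if random.random() > LOSS_PROBABILITY else None

def reliable_send(msg, max_tries=50):
    """The "transport layer": retransmit until the receiver gets it."""
    for attempt in range(1, max_tries + 1):
        if lossy_send(msg) is not None:   # receiver got it and would ACK
            return attempt
    raise RuntimeError("peer unreachable")

tries = [reliable_send(i) for i in range(1000)]
print("messages delivered:", len(tries))                    # all 1000
print("average transmissions per message:", sum(tries) / len(tries))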
Although I can understand the DCE arguments as a management statement,
I would prefer, like any rational engineer, to base my building blocks
on structures that are proven and long lasting. And those are still the
end-to-end TCP/IP that can accommodate even your FCP addicts. The IPS
WG has developed iFCP, which does exactly what FCoE claims to do.
> This means an evolutionary process is possible to the solution of
> getting a single Fabric connection for all networks connected to a
> server; further, the process has very low interconnection cost on the
> Data Center Fabric. And it maintains all the FC Fabric Services, and
> all the same Storage management.
> By the way, this is primarily a Server driven value statement; there
> seems to be little value in having FCoE on the storage controller.
> Therefore FC storage controllers (and FICON) will be the very last
> things that connect using FCoE, and that evolution will take at least
> a decade or more.
It is a server cost statement. It costs nothing to connect a modern
server to ethernet; it will cost a bundle to connect it to FCoE, and it
will force users into short lived bad choices.
> We see value in offering switches and Directors that can support DCE
> switching, FC switching as well as iSCSI interconnect, and the
> "Trunking" of general messaging to the Outfacing IP network. That
> said, we do not see FCoE going beyond the constraints of the Data
> Center.
Data Centers now grow to tens of thousands of nodes. There is no
Layer-2 technology for errorless/lossless operation at this scale, and
there is no good reason to pursue one. The only possible reason (good
reason) is the bridging infrastructure, but that infrastructure has a
completely different rationale than the flow control.
> This issue and message is quite different from the issues and messages
> we struggled with when we started iSCSI. There is a consortium of
> folks working on both the DCE (CEE) and the FCoE. Without the DCE the
> FCoE will not happen.
> None of the above cancels out the value of iSCSI in numerous
> environments.
It is good for all environments. Business considerations (and some
politics) keep it from "exploding", and large storage vendors are
completely indifferent to the network connection they are using.
You and I also have slightly different views of DCE. I expect DCE (which
still has a way to go) to improve the QoS in the data center (and for
storage too). You expect it to bring the loss rates down to the levels
that FCP assumes (FCP has no transport layer), and that is probably a
pipe dream. Today's transport solutions for loss mitigation are far more
cost effective - and that's why iFCP is a better proposition as a
transition technology than FCoE, and iSCSI with gateways is probably
better in the long run.
> John L Hufferd
> Sr. Executive Director of Technology
> Office Phone: (408) 333-5244; eFAX: (408) 904-4688
> Alt Office Phone: (408) 997-6136; Cell: (408) 627-9606
>
> From: John Hufferd
> Sent: Tuesday, April 24, 2007
> To: 'Julian_Satran@xxxxxxxxxx'
> Subject: Re: Recent comments about FCoE and iSCSI
> I think you are wrong on this one. The arguments are quite different
> than the ones we had in pre-iSCSI days. (By the way, I missed you at
> today's Renato meeting/conf call where Brocade took the IBM group
> through FCoE as it is being placed in our plans).
> I will send you more info when I get to my computer. But you were
> sent the Brocade charts. Please review them and I will follow up
> with more information.
> This does NOT replace iSCSI; it applies only to a Data Center
> environment with lossless DCE ethernet.
> John L.
> Sr. Ex. Director of Technology
> Brocade Communications
> Phone: (408) 333-5244
> Mobile: (408)
> eMail: jhufferd@xxxxxxxxxxx
> (Sent from my BlackBerry)
> ----- Original Message -----
> From: Julian Satran <Julian_Satran@xxxxxxxxxx>
> To: ips@xxxxxxxx
> Sent: Tue Apr 24 12:10:29 2007
> Subject: Recent comments about FCoE and iSCSI
> Dear All,
> The trade press is lately full of comments about the greatest
> reincarnation of Fiber Channel over ethernet.
> It made me try and summarize all the long and hot debates that
> preceded the advent of iSCSI.
> Although FCoE proponents make it look like no debate preceded iSCSI,
> that was not so - FCoE was considered even then and was dropped as a
> dumb idea.
> Here is a summary (as far as I can remember) of the main arguments.
> They are not bad arguments even in retrospect, and technically FCoE
> doesn't look better than it did then.
> Feel free to use this material in any form. I expect this group to
> seriously expand my arguments and make them public - in personal or
> collective form.
> And do not forget - it is a technical dispute - although we have some
> doubts about the way it is pursued.
> What a piece of nostalgia :-)
> Around 1997, when a team at IBM Research (Haifa and Almaden) started
> looking at connecting storage to servers using the "regular network"
> (the ubiquitous LAN), we considered many alternatives (another team
> even had a look at ATM - still a computer network candidate at the
> time). I won't take you through all of our rationale (and we went
> over some of it again at the end of 1999 with a team from CISCO
> before we convened the first IETF BOF in 2000 at Adelaide that
> resulted in iSCSI and all the rest), but some of the reasons we chose
> to drop Fiber Channel over raw Ethernet were:
> * Fiber Channel Protocol (SCSI over Fiber Channel Link) is
>   "mildly" effective because:
>     * it implements endpoints in a dedicated engine
>     * it has no transport layer (recovery is done at the
>       application layer under the assumption that the error rate
>       will be very low)
>     * the network is limited in physical span and logical span
>       (number of switches)
>     * flow-control/congestion control is achieved with a
>       mechanism adequate for a limited span network (credits). The
>       loss rate is almost nil and that allows FCP to avoid using a
>       transport (end-to-end) layer
>     * in FCP the switches are simple (addresses are local and the
>       buffering requirements can be limited through the credit
>       mechanism)
> * However FCP endpoints are inherently costlier than simple NICs -
>   the cost argument (initiators are more expensive)
> * The credit mechanism is highly unstable for larger networks (check
>   switch vendors' planning docs for the network size limits) - the
>   scaling argument
> * The assumption of low losses due to errors might radically change
>   when moving from 1 to 10 Gb/s - the scaling argument [a small
>   worked example follows this list]
> * Ethernet has no credit mechanism and any mechanism with a similar
>   effect increases the end point cost. Building a transport layer in
>   the protocol stack has always been the preferred choice of the
>   community - the community argument
> * The "performance penalty" of a complete protocol stack has always
>   been overstated (and overrated). Advances in protocol stack
>   implementation and finer tuning of the congestion control
>   mechanisms have conventional TCP/IP performing well even at 10
>   Gb/s and over. And the multicore processors that become dominant
>   on the computing scene have enough compute cycles available to
>   make any "offloading" appear as a mere code restructuring exercise
>   (see the stack reports from Intel, IBM etc.)
> * Building on a complete stack makes available a wealth of
>   operational and management mechanisms built over the years by the
>   networking community (routing, provisioning, security, service
>   location etc.) - the community argument
> * Higher level storage access over an IP network is widely available
>   and having both block and file served over the same connection
>   with the same support and management structure is compelling - the
>   community argument
> * Highly efficient networks are easy to build over IP with (shortest
>   path) routing while Layer 2 networks use bridging and are limited
>   by the logical tree structure that bridges must follow. The effort
>   to combine routers and bridges (rbridges) is promising to change
>   that but it will take some time to finalize (and we don't know how
>   it will operate). Until then the scale of Layer 2 networks is
>   going to be seriously limited - the scaling argument
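A small worked example of the error-rate scaling point above: with the
same (assumed, purely illustrative) bit-error rate, a 10 Gb/s link
pushes ten times as many bits per second as a 1 Gb/s link, and therefore
sees roughly ten times as many damaged frames per unit time:

# Rough arithmetic behind the "low loss" scaling argument.
BIT_ERROR_RATE = 1e-12        # assumed errors per bit, illustration only
FRAME_BITS = 2000 * 8         # assumed ~2 KB frames

for gbps in (1, 10):
    frames_per_second = gbps * 1e9 / FRAME_BITS
    # Probability that a frame contains at least one errored bit.
    frame_error_prob = 1 - (1 - BIT_ERROR_RATE) ** FRAME_BITS
    damaged_per_hour = frame_error_prob * frames_per_second * 3600
    print(f"{gbps:>2} Gb/s: ~{damaged_per_hour:.1f} damaged frames per hour")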
> As a side argument - a performance comparison made in 1998 showed
> SCSI over TCP (a predecessor of the later iSCSI) to perform better
> than FCP at 1 Gb/s for block sizes typical for OLTP (4-8KB). That was
> what convinced us to take the path that led to iSCSI - and we used
> plain vanilla x86 servers with plain-vanilla NICs and Linux (with
> similar measurements conducted on other platforms).
> The networking and storage community accepted these arguments and
> developed iSCSI and the companion protocols for service discovery,
> boot etc.
> The community also acknowledged the need to accommodate the existing
> FC infrastructure and extend it in a reasonable fashion, and
> developed 2 protocols: iFCP (to support hosts with FCP drivers and IP
> connections to connect to storage by a simple conversion from FCP to
> TCP packets) and FCIP to extend the reach of FCP through IP (connects
> FCP islands through TCP links). Both have been implemented and their
> foundation is solid.
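As a toy illustration of the "FCP islands through TCP links" idea: each
FC frame is carried, unchanged, as a length-prefixed record on a TCP
connection between the two islands, and TCP supplies ordering,
retransmission and congestion control end to end. The real FCIP
encapsulation header is defined by its own specification; this sketch
shows only the tunnelling concept:

# Toy "FC island over a TCP link" tunnel: length-prefixed FC frames on
# a TCP stream. The actual FCIP encapsulation format is not shown here.
import socket
import struct

def send_fc_frame(sock: socket.socket, fc_frame: bytes) -> None:
    """Ship one FC frame, unchanged, across the TCP link."""
    sock.sendall(struct.pack("!I", len(fc_frame)) + fc_frame)

def recv_exact(sock: socket.socket, n: int) -> bytes:
    chunks = []
    while n:
        chunk = sock.recv(n)
        if not chunk:
            raise ConnectionError("tunnel closed")
        chunks.append(chunk)
        n -= len(chunk)
    return b"".join(chunks)

def recv_fc_frame(sock: socket.socket) -> bytes:
    """Reassemble one FC frame on the far side of the TCP link."""
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return recv_exact(sock, length)

if __name__ == "__main__":
    a, b = socket.socketpair()         # stand-in for the inter-island link
    send_fc_frame(a, b"\x01" * 2146)
    print(len(recv_fc_frame(b)))       # 2146: the FC frame arrives intact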
> The current attempt of developing a "new-age" FCP over an Ethernet
> link is going against most of the arguments that have led to iSCSI
> etc. It ignores the networking layering practice, builds an
> application protocol directly above a link and thus limits scaling,
> adds elements at the link layer and application layer that make
> applications more expensive, and leaves aside the whole "ecosystem"
> that accompanies TCP/IP (and not Ethernet).
> In some related effort (and at a point also when we were developing
> iSCSI) we considered also moving away from SCSI (like some
> "non-standardized" but popular in some circles software did) and
> decided against. SCSI is a mature and well understood architecture
> for block storage and is implemented by many vendors. Moving away
> from it would not have been justified at the time.