Programmable Devices
CPLDs, FPGAs, SoC FPGAs, Configuration, and Transceivers

Ethernet question

Altera_Forum
Honored Contributor II
1,204 Views

I'm new to Ethernet interfaces and I have a few questions regarding my next project.

 

My requirements are to send data over Ethernet from a custom FPGA board to a PC over a dedicated link. The throughput requirement is roughly 800 Mbit/s. I also need to receive commands over this link, but at much lower rates.

I'm planning to use an external 88E1111 RGMII PHY and the TSE MAC on the FPGA side. From what I've researched so far, I need to implement a transport stack (either TCP or UDP). Also, from what I've read, an SOPC approach with a Nios will yield ~110 Mbit/s at best. Is there any reference design showing how to implement TCP/UDP in hardware? Will implementing these protocols in hardware give me the desired throughput?

 

thanks in advance
12 Replies
Altera_Forum
Honored Contributor II
468 Views

You are really pushing the limit of Gigabit Ethernet. By the time you take into account the overhead of the MAC, TCP, and IP protocols and the time you'll spend processing, you're not going to get the full 800 Mb/s. I'd really be surprised if your PC could keep up with this anyway; it has all the same issues, and the stack is implemented in software on the PC.
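
For a rough feel for that overhead: with standard 1500-byte frames, each packet carries fixed link-, IP-, and UDP-layer costs, so even the theoretical ceiling is below line rate before any processing time is counted. A quick back-of-the-envelope sketch (the per-frame constants are the standard Ethernet/IPv4/UDP overheads):

```python
# Theoretical UDP goodput on Gigabit Ethernet for a given MTU.
# Per frame on the wire: 8 B preamble/SFD + 14 B Ethernet header
# + 4 B FCS + 12 B inter-frame gap = 38 B of link-layer overhead.
LINK_RATE = 1_000_000_000  # bits per second

def udp_goodput(mtu=1500):
    wire_bytes = mtu + 38       # frame plus link-layer overhead
    payload = mtu - 20 - 8      # minus IPv4 and UDP headers
    return LINK_RATE * payload / wire_bytes

print(f"{udp_goodput() / 1e6:.0f} Mbit/s")  # ~957 Mbit/s of UDP payload
```

So 800 Mb/s leaves only ~150 Mb/s of headroom below the theoretical limit, and real-world processing eats into that quickly.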

 

Jake
Altera_Forum
Honored Contributor II
468 Views

Are you open to other transport methods? Most people are going PCI Express these days. Can your data be easily compressed? 

 

Jake
Altera_Forum
Honored Contributor II
468 Views

Actually, 800 Mbit/s is our upper limit. We might be able to get away with 500 Mbit/s, but from the sound of it, even that might be a challenge.

What would you say is a feasible upper limit for this type of interface?

 

PCIe is an option we're considering, but we'd like to run this board stand-alone.
Altera_Forum
Honored Contributor II
468 Views

In my opinion, you should be able to achieve nearly full Gigabit Ethernet hardware speed with an FPGA hardware network stack, reduced only by the protocol overhead. Comparable results are achieved, e.g., by network switches. I also expect that you can deliver the data stream on the FPGA side. But how about the PC side? What's the data sink? I suggest performing some basic tests with PCs to see if the intended throughput is feasible.

Altera_Forum
Honored Contributor II
468 Views

Wouldn't the PC's performance depend on the NIC? Does the NIC handle the network layer, or does the PC's processor?

Altera_Forum
Honored Contributor II
468 Views

The NIC handles the MAC and PHY layers. All higher layers (IP -> TCP/UDP -> telnet/HTTP/etc.) are normally done in software on the PC.

 

Jake
Altera_Forum
Honored Contributor II
468 Views

We are currently doing a similar thing with 10GbE in a Stratix II GX. Our data rate is 500 MB/s (4 Gbit/s) sustained. We are currently encapsulating our data in raw Ethernet packets, but have plans to switch to UDP/IP. The computer on the receiving end has no trouble keeping up with this data rate as long as copies are minimized.

 

Here are some issues to be aware of: 

 

1) The entire data path must be in hardware. The Nios cannot be in the data path at these rates. The hardware must therefore be able to split/merge the Ethernet link between the high-rate data packets and the control packets to/from the Nios.

 

2) Raw Ethernet or UDP/IP may be implemented in hardware with relative ease. TCP is much more complex and requires more resources, such as lots of RAM for the retransmit buffer. TCP will also put a much higher load on the receiving computer. I would not recommend TCP for the data packet format; TCP can still be used for the low-rate control to the Nios via a software stack.

 

3) Overhead can be reduced by using jumbo frames (up to ~9 kB), allowing throughput closer to the theoretical limit. The MAC, the receiving NIC, and any switches on the network segment must explicitly support them, and your packet buffers will need to be much larger. Jumbo frames may be required to hit 800 Mbit/s on GbE.
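
To illustrate point 2: for a fixed point-to-point link, almost every field of the IP and UDP headers is constant, which is why UDP/IP is easy to generate in hardware — only the length, ID, and checksum fields change per packet. A sketch of that fixed layout (Python's struct is used purely to show the byte positions; the addresses and ports are made-up examples):

```python
import struct

def checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum (one's-complement sum of 16-bit words)."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data)//2}H", data))
    total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def udp_ip_headers(payload_len: int, ident: int) -> bytes:
    src, dst = bytes([192, 168, 0, 2]), bytes([192, 168, 0, 1])  # example addresses
    ip = struct.pack("!BBHHHBBH4s4s",
                     0x45, 0,                # version/IHL, TOS
                     20 + 8 + payload_len,   # IPv4 total length
                     ident, 0x4000,          # ID, don't-fragment flag
                     64, 17, 0,              # TTL, proto=UDP, checksum placeholder
                     src, dst)
    ip = ip[:10] + struct.pack("!H", checksum(ip)) + ip[12:]  # fill in checksum
    udp = struct.pack("!HHHH", 5000, 5000, 8 + payload_len, 0)  # UDP checksum 0 = unused
    return ip + udp
```

The IPv4 header checksum is the only per-packet arithmetic needed if the UDP checksum is left at zero (which is legal for IPv4) — one reason this maps so cleanly onto hardware.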
Altera_Forum
Honored Contributor II
468 Views

 

--- Quote Start ---  

But how about the PC side? What's the data sink? I suggest performing some basic tests with PCs to see if the intended throughput is feasible.

--- Quote End ---  

 

 

+1 on testing with PCs. Just write a simple transmit program and receive program that connect to each other over a raw Ethernet or UDP socket. Have the receiver verify the data so it knows if packets start being dropped. Have the transmitter throttle to a given data rate. You can also configure the NICs to a specific MTU size.

 

With this configuration, you can easily test the effects of different protocol and frame size combinations.
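
A minimal sketch of such a test harness (plain UDP sockets with a sequence number in each datagram so the receiver can count drops; the port, packet size, and rates here are arbitrary choices, not from the thread):

```python
import socket, struct, time

PORT = 5005          # arbitrary test port
PKT_SIZE = 1400      # bytes per datagram, including the 4-byte sequence number

def send(n_packets: int, rate_mbps: float, addr="127.0.0.1"):
    """Send numbered datagrams, sleeping as needed to hold the target rate."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = PKT_SIZE * 8 / (rate_mbps * 1e6)   # seconds per packet
    start = time.monotonic()
    for seq in range(n_packets):
        s.sendto(struct.pack("!I", seq) + b"\x00" * (PKT_SIZE - 4), (addr, PORT))
        ahead = start + (seq + 1) * interval - time.monotonic()
        if ahead > 0:                 # throttle: don't run ahead of schedule
            time.sleep(ahead)
    s.close()

def receive(n_packets: int) -> int:
    """Count sequence-number gaps; returns the number of missing packets."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("127.0.0.1", PORT))
    s.settimeout(2.0)
    expected = lost = 0
    try:
        while expected < n_packets:
            (seq,) = struct.unpack("!I", s.recv(PKT_SIZE)[:4])
            lost += max(0, seq - expected)
            expected = max(expected, seq + 1)
    except socket.timeout:
        lost += n_packets - expected  # tail of the stream never arrived
    s.close()
    return lost
```

Run the receiver in one process and the sender in another, then sweep the rate upward until `receive` starts reporting losses.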
Altera_Forum
Honored Contributor II
468 Views

Thank you for the interesting insights. Can you achieve the stated speeds through UDP when you connect at the socket level (Winsock2 on Windows), or do you need to hook in at a lower driver level?

Altera_Forum
Honored Contributor II
468 Views

 

--- Quote Start ---  

Thank you for the interesting insights. Can you achieve the stated speeds through UDP when you connect at the socket level (Winsock2 on Windows), or do you need to hook in at a lower driver level?

--- Quote End ---  

 

 

On Linux, we can achieve these throughputs with a socket connection using UDP, so it goes through the OS's protocol stack and the standard sockets library. We have not benchmarked on Windows.

 

We had to do a fair amount of tweaking of driver settings to achieve 500 MB/s. The settings that made the most difference were the ring buffer size and the IRQ affinity. The ring buffer needed to be huge, and the NIC's IRQ affinity had to be set to one CPU, with all other IRQs assigned to the remaining CPUs, so that the NIC's IRQ essentially had a dedicated CPU on which it was the only interrupt being serviced.
Altera_Forum
Honored Contributor II
468 Views

Hello everybody, 

 

I have the same problem with 10GbE. 

I am sure that the Nios is not fast enough to provide a throughput of 6 Gbit/s. 

However, I'm looking for a hard IP that can provide the UDP protocol, but every stack seems to be soft IP. 

Do you know a vendor who provides a 10GbE UDP stack? 

 

regards.
Altera_Forum
Honored Contributor II
468 Views

 

--- Quote Start ---  

Thank you for the interesting insights. Can you achieve the stated speeds through UDP, when you connect at the socket level (Winsock2 on Windows), or do you need to hook in at a lower driver level? 

--- Quote End ---  

 

 

On the Cyclone III (100 MHz) with a Nios II and the TSE, we're close to 900 Mb/s with UDP. This includes a hardware-calculated Internet checksum, done while the data is being transferred into memory from other hardware to be sent by UDP. This is with highly optimized UDP packet code and TSE driver, using lwIP for a TCP connection for lower-bandwidth control. The PC can keep up with these UDP bursts, but it can and will drop packets. Surprisingly, we've found that even on a LAN, UDP can be error-free for extended periods while transferring lots of data (21 MB/s). But we developed a reliable UDP protocol to be safe.
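
For anyone curious what a "reliable UDP protocol" can look like in its simplest form, here is a stop-and-wait sketch (my own illustration, not the poster's protocol: sequence numbers plus ACKs with retransmit on timeout; the timeout and retry count are arbitrary):

```python
import socket, struct

def reliable_send(sock, data: bytes, seq: int, dest, timeout=0.2, retries=8):
    """Stop-and-wait: resend the datagram until the matching ACK arrives."""
    sock.settimeout(timeout)
    pkt = struct.pack("!I", seq) + data
    for _ in range(retries):
        sock.sendto(pkt, dest)
        try:
            ack, _ = sock.recvfrom(4)
            if struct.unpack("!I", ack)[0] == seq:
                return True        # delivery confirmed
        except socket.timeout:
            continue               # datagram or ACK lost: resend
    return False

def reliable_recv(sock, expected: int) -> bytes:
    """Return the payload for sequence `expected`, ACKing every copy seen."""
    while True:
        pkt, addr = sock.recvfrom(65535)
        (seq,) = struct.unpack("!I", pkt[:4])
        sock.sendto(struct.pack("!I", seq), addr)  # ACK (handles duplicates)
        if seq == expected:
            return pkt[4:]
```

Stop-and-wait caps throughput at one datagram per round trip; sustained rates like the 21 MB/s mentioned above need a sliding window, but the ACK/retransmit idea is the same.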