Hello all!
I'm working on gaining a better understanding of .sdc files and fully constraining my FPGA (Cyclone III LS) design. I have created clocks and set certain ones exclusive to others where appropriate. I have set some output constraints (the FPGA interfaces with a CPU) using the formulas:

Output max delay = max(board_data_delay) + tsu_ext - min(board_clk_delay)
Output min delay = min(board_data_delay) - th_ext - max(board_clk_delay)

So that gets me:

```tcl
set_output_delay -clock CPU_O_CLK40 -max 15.39 [get_ports "CPU_IO_DATA[31]"]
set_output_delay -clock CPU_O_CLK40 -max 15.41 [get_ports "CPU_IO_DATA[30]"]
```

and

```tcl
set_output_delay -clock CPU_O_CLK40 -min -11.01 [get_ports "CPU_IO_DATA[31]"]
set_output_delay -clock CPU_O_CLK40 -min -10.99 [get_ports "CPU_IO_DATA[30]"]
```

So my questions are:

1. What do I do to determine input delay formulas/values? Some of these signals are asynchronous to the clocks I'm using as well.
2. How do I determine output delays for signals I have little information on? There are some that exit the board, pass through another board, then land on a third board where they pass through buffers and interact with ancient ICs that I'm still gathering information on. I assume a "good guess" output delay is better than none at all.
3. Is there anything to be done about constraining internal modules? We have a third-party IP core that emulates the DMA controller our CPU would normally interface with. I'm battling an issue right now that seems like it could be timing-related within this core, and the behavior changes slightly as I bring out test points from within the core, thereby changing the routing and the timing. Frustrating.

Anyway, thanks in advance for everyone's help!
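For question 1, the input-delay constraints are commonly derived as the mirror image of the output formulas above, with the external device's clock-to-out (Tco) taking the place of its tsu/th. The following is only a sketch: the Tco and board-delay numbers here are hypothetical placeholders (the clock and port names are taken from the post), and any truly asynchronous pin names are made up for illustration.

```tcl
# Mirror of the output formulas (hypothetical values, for illustration only):
#   Input max delay = Tco_max(ext device) + max(board_data_delay) - min(board_clk_delay)
#   Input min delay = Tco_min(ext device) + min(board_data_delay) - max(board_clk_delay)
set_input_delay -clock CPU_O_CLK40 -max 7.25 [get_ports "CPU_IO_DATA[31]"]
set_input_delay -clock CPU_O_CLK40 -min 1.10 [get_ports "CPU_IO_DATA[31]"]

# Inputs that are genuinely asynchronous to every FPGA clock are typically
# cut from timing analysis (after being double-registered inside the FPGA)
# rather than given input delays. "CPU_I_ASYNC_*" is a placeholder pattern.
set_false_path -from [get_ports "CPU_I_ASYNC_*"]
```

The max constraint feeds setup analysis and the min constraint feeds hold analysis, so pessimistic (large max, small min) guesses are the safe direction when board data is incomplete.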
2 Replies
Ah, just one more thing.
If my outputs go to external ICs that don't use clocks, should I use whatever signal the data is latched on in the `-clock` field (e.g., a write enable for memory), or should I use an internal clock instead? Thanks.
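One common way to handle an output latched by a strobe rather than a free-running clock is to define a virtual clock representing the external device's capture event and reference that in `set_output_delay`. This is only a sketch under assumed conditions: the virtual-clock name, the 25 ns period, the delay values, and the `MEM_DATA` port pattern are all hypothetical.

```tcl
# A create_clock with no target port/pin defines a virtual clock; here it
# stands in for the write-enable event at the external memory.
# All names and numbers below are placeholders, not values from the design.
create_clock -name virt_we_clk -period 25.000

set_output_delay -clock virt_we_clk -max 5.0  [get_ports "MEM_DATA[*]"]
set_output_delay -clock virt_we_clk -min -2.0 [get_ports "MEM_DATA[*]"]
```

Since both the data and the write enable are launched by the same internal FPGA clock, the constraint effectively bounds the data-to-strobe skew seen at the external IC, which is usually what its datasheet specifies.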
Turns out the DMA IP core issue was caused by an interrupt occurring, and then a higher-priority interrupt happening before the first one was completely serviced.
But, if anyone could shed any light on my other questions, it'd make my day!