Intel® Quartus® Prime Software
Intel® Quartus® Prime Design Software, Design Entry, Synthesis, Simulation, Verification, Timing Analysis, System Design (Platform Designer, formerly Qsys)

About the Chip Planner in Quartus II

Altera_Forum
Honored Contributor II

I tried to change the placement and routing with the Chip Planner, and to estimate the delay time. 

For example, I feed a signal into a buffer and then out again. The delay times shown in the Chip Planner are: 

 

After I generate the fan-out connections: 

 

input port to the LE (the buffer): 2.590 ns 

delay inside the LE: 0.2 ns 

LE to the output port: 1.695 ns 

 

Then the total delay time from the input to the output signal should be around 5 ns. 
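Summing the three segments reported by the Chip Planner gives the estimate (a minimal sketch; the values are the ones listed above, and they actually sum to about 4.5 ns, which the post rounds to roughly 5 ns):

```python
# Path segments reported by the Chip Planner (ns), as listed above
input_to_le = 2.590   # input port to the LE (the buffer)
le_internal = 0.2     # delay inside the LE
le_to_output = 1.695  # LE to the output port

total_ns = input_to_le + le_internal + le_to_output
print(f"estimated pin-to-pin delay: {total_ns:.3f} ns")  # 4.485 ns, i.e. ~4.5 ns
```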

 

But when I use an oscilloscope to look at the delay between the input and output signals, it is around 15 ns. 

That is very different from the estimated 5 ns. 

 

I don't know if I am doing something wrong that makes them differ. 

 

Thanks for your help. 

 

--- 

My Quartus II is version 7.1, Web Edition.
Altera_Forum
Honored Contributor II

 

--- Quote Start ---  

OK, 

 

I assume that your paths are asynchronous. Is it possible that you post some of your listed paths? In the case of the Classic Timing Analyzer, you find them under tpd. 

 

Kind regards 

 

GPK 

--- Quote End ---  

 

 

Hi, pletz. 

 

I took some pictures of my test file, including the schematic, the tpd data, the Chip Planner view and the waveform. 

 

Maybe this will make my question clearer: 

the estimated delay is 6.34 ns, which is not the same as the 15 ns seen on the waveform. 

 

Thanks.
Altera_Forum
Honored Contributor II

 

--- Quote Start ---  

I wanted to supplement my previous remark regarding the reported pin-to-pin delay of 15 ns. For a pin driven by a short asynchronous logic path (through a single LE), I observe about 5 ns pin-to-pin delay with Cyclone II or III and the 3.3V LVTTL I/O standard. This delay is also indicated by Quartus timing simulation and, I suppose, by all other tools that use the same device database, e.g. the Pin Planner and the Timing Analyzer. 

 

With MAX II, which ccmkn is using, the basic pin-to-pin delay is slightly larger, about 7 ns, but far from the said 15 ns. The micro-timing parameters from the MAX II datasheet also result in a similar delay. 

 

When designing delay chains with cascaded LEs, you observe a rather uniform delay spacing of around 0.5 ns within a single LAB. Advancing to the next LAB, larger steps of e.g. 1.5 ns can be seen. So the basic problem with the said TDC design is achieving uniform delay chains across LAB boundaries, I think. Besides explicit assignment of LEs, it most likely requires partially parallel structures to get sufficient resolution when crossing LAB boundaries. 
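The non-uniform tap spacing described above can be illustrated with a toy model. This is only an illustration: the 0.5 ns per-LE delay and 1.5 ns LAB-crossing step are the rough figures quoted above, and the 10-LEs-per-LAB figure is an assumption (the MAX II LAB size).

```python
# Toy model of cumulative tap delays in a cascaded-LE delay chain.
# Assumptions (illustrative only): 0.5 ns per LE within a LAB,
# an extra 1.5 ns step when crossing into the next LAB,
# and 10 LEs per LAB.
LE_DELAY_NS = 0.5
LAB_CROSS_NS = 1.5
LES_PER_LAB = 10

def tap_delays(num_les):
    delays, t = [], 0.0
    for i in range(num_les):
        t += LE_DELAY_NS
        if i > 0 and i % LES_PER_LAB == 0:
            t += LAB_CROSS_NS  # non-uniform step at the LAB boundary
        delays.append(t)
    return delays

taps = tap_delays(25)
steps = [round(b - a, 3) for a, b in zip(taps, taps[1:])]
print(steps)  # mostly 0.5 ns, with 2.0 ns jumps at the LAB boundaries
```

The jumps at taps 10 and 20 are exactly the resolution problem the post describes: the chain's time bins are no longer uniform once a LAB boundary is crossed.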

--- Quote End ---  

 

 

I posted the tpd and Chip Planner files in my previous reply, and I don't see a pin-to-pin delay of about 7 ns. 

 

Do I misunderstand the meaning of pin-to-pin delay? 

 

And thanks for your advice about my design; I am sure it will help me a lot. 

 

By the way, 

I guess the problem may be the scope. Its bandwidth is 100 MHz, so maybe it just can't measure the delay time accurately. Maybe that is why the delay is 15 ns rather than the 6 ns of the tpd report, if there really isn't another delay source that I haven't figured out.
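The concern about the 100 MHz scope can be quantified with the standard rule of thumb for a first-order system: the 10–90% rise time is roughly 0.35 divided by the bandwidth, so a 100 MHz scope contributes about 3.5 ns of its own rise time (before probe and trigger effects), which is significant when measuring a delay of a few nanoseconds. A quick check of that rule of thumb:

```python
# Rule-of-thumb 10-90% rise time of a first-order (single-pole) system:
#   t_rise ≈ 0.35 / bandwidth
def rise_time_ns(bandwidth_hz):
    return 0.35 / bandwidth_hz * 1e9  # convert seconds to ns

print(rise_time_ns(100e6))  # 3.5 ns for a 100 MHz scope
```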
Altera_Forum
Honored Contributor II

I don't think we need to argue about a 6 to 7 ns difference. The actual pin-to-pin delay depends on the selected I/O cells and much more. I just wanted to mention that 7 ns would be a typical pin-to-pin delay for a short MAX II logic path involving a single LE. Your Chip Planner results basically seem to confirm this value. 

 

You can get a slower response with modified I/O parameters, like lower current strength or a slow slew rate, or with excessive capacitive load on the output.  

 

As far as I understand your application, the output-delay part of the pin-to-pin delay doesn't matter anyway. So, if you doubt the timing simulation or Chip Planner results, you have to perform differential timing measurements with signal capture inside the device. It may be meaningful to use a basic TDC design (if you already have one) for the test. 

 

As another supplement to my previous posting: the said uniform delay spacing can't be seen when routing the signals to output pins, because arbitrary routing delays are added in this case. But this isn't a model for TDC operation, I think. 

 

I also wonder about a suitable basic structure for the TDC design. I suppose it may be a chain of asynchronous latches that can be used as delay and storage element in one. Using synchronous FFs would imply different LEs for delay and storage, and would thus involve additional routing delays and delay variation, and require twice the number of LEs.
Altera_Forum
Honored Contributor II

 

--- Quote Start ---  

I don't think we need to argue about a 6 to 7 ns difference. The actual pin-to-pin delay depends on the selected I/O cells and much more. I just wanted to mention that 7 ns would be a typical pin-to-pin delay for a short MAX II logic path involving a single LE. Your Chip Planner results basically seem to confirm this value. 

 

You can get a slower response with modified I/O parameters, like lower current strength or a slow slew rate, or with excessive capacitive load on the output.  

 

As far as I understand your application, the output-delay part of the pin-to-pin delay doesn't matter anyway. So, if you doubt the timing simulation or Chip Planner results, you have to perform differential timing measurements with signal capture inside the device. It may be meaningful to use a basic TDC design (if you already have one) for the test. 

 

As another supplement to my previous posting: the said uniform delay spacing can't be seen when routing the signals to output pins, because arbitrary routing delays are added in this case. But this isn't a model for TDC operation, I think. 

 

I also wonder about a suitable basic structure for the TDC design. I suppose it may be a chain of asynchronous latches that can be used as delay and storage element in one. Using synchronous FFs would imply different LEs for delay and storage, and would thus involve additional routing delays and delay variation, and require twice the number of LEs. 

--- Quote End ---  

 

 

At first I thought the pin-to-pin delay was like the pin transition time; that was my mistake. It is surely meaningless to argue about the difference between 6 and 7 ns. 

 

The main problem is still that I cannot confirm, with the scope or any other method, that the delay time the tpd report gives me is right. Once I am sure the delay shown in the Chip Planner is correct, I can continue with my design. 

According to your kind reply, it seems you think the tpd report is right and reasonable, and that the scope measurement may be wrong. I am also wondering how I can capture the signal inside the device, as you suggested. Could you give me a hint? Thanks. 

 

Lastly, I think you are right. One of the reasons I use latches instead of FFs is that I have to store the data: a latch is level-triggered while an FF is edge-triggered.
Altera_Forum
Honored Contributor II

Generally, the FPGA architecture is optimized for fast synchronous processing. For this reason, capturing data with an edge-triggered FF is probably better defined, and at least more precisely predicted by the Quartus timing analysis. But in this case a chain of latches controlled by a common enable signal seems more suitable. It can be used to freeze the signal propagating through the delay chain and capture it, e.g., into a serial shift register for readout. 
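The latch-chain idea can be sketched behaviourally: while the enable is high the latches are transparent, and when it goes low every tap freezes at once, recording how far the edge has propagated. This is a plain-Python behavioural sketch, not HDL, and the 0.5 ns per-tap delay is an assumed value:

```python
# Behavioural sketch of a latch-based TDC: an edge propagates down a
# delay chain; a common enable freezes all latches simultaneously, and
# the frozen thermometer-code pattern encodes the edge arrival time.
TAP_DELAY_NS = 0.5  # assumed per-tap delay

def frozen_taps(edge_time_ns, freeze_time_ns, num_taps):
    # Tap i sees the edge at edge_time + (i+1)*TAP_DELAY_NS.
    # A tap freezes as 1 if the edge reached it before the enable dropped.
    return [1 if edge_time_ns + (i + 1) * TAP_DELAY_NS <= freeze_time_ns else 0
            for i in range(num_taps)]

pattern = frozen_taps(edge_time_ns=0.0, freeze_time_ns=2.6, num_taps=8)
print(pattern)  # [1, 1, 1, 1, 1, 0, 0, 0]: five taps reached in 2.6 ns
```

Counting the ones gives the interval in units of the tap delay, which is why uniform tap spacing (the LAB-boundary issue discussed earlier) matters so much for the resolution.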

 

I think the Quartus Simulator in timing simulation mode is able to visualize the circuit behaviour.