Guys,
I am simulating serial reception in Verilog, and I added a kind of phase noise to my digital signal. Instead of one bit lasting exactly 5 local clocks (the ideal case), I made it last 4.9 local clocks; I then get one bit error after 25 correctly received bits. If I make it 4.8, I get one bit error after 12 correct bits, and so on. I want to draw a graph of this data, with BER on the y axis and Eb/No (energy per bit over noise) on the x axis. I am not sure how to model the noise (No) here, or how I would calculate the energy per bit (Eb), since I am modeling in digital Verilog. Please help me.
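The drift mechanism described above can be sketched outside Verilog. Here is a minimal Python model (my own sketch, not the original testbench): the receiver samples mid-bit every 5 local clocks, while each transmitted bit actually occupies 4.9 (or 4.8) clocks, so the sampling point slips by 0.1 (or 0.2) clock per bit and the first error appears once the slip reaches about half a bit period:

```python
def first_error_bit(clocks_per_bit, nominal=5.0, max_bits=10_000):
    """Index of the first bit whose mid-bit sample lands outside the
    transmitted bit, given a receiver that assumes `nominal` clocks/bit
    while the line really carries `clocks_per_bit` clocks/bit."""
    for k in range(max_bits):
        sample_time = nominal * k + nominal / 2.0  # receiver samples mid-bit
        bit_start = clocks_per_bit * k             # actual bit k boundaries
        bit_end = clocks_per_bit * (k + 1)
        if not (bit_start <= sample_time < bit_end):
            return k                               # sample slipped into a neighbour
    return None                                    # no error within max_bits
```

Under these assumptions the model reproduces roughly the figures quoted above (an error after about 25 bits at 4.9 clocks/bit and about 12 bits at 4.8), which suggests those errors come from a deterministic frequency offset rather than random noise.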
6 Replies
The error-probability models that use the Eb/No ratio assume:
1. The noise is additive.
2. The noise is Gaussian.
Only in that case does the bit energy (Eb) make sense. You should generate a normally distributed pattern of random delays and measure the bit error rate (BER) over a sufficiently long frame of bits. Your x axis should be the delay and your y axis the BER.
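That measurement loop can be sketched in Python (an illustrative model, not the Verilog testbench): jitter each sampling instant by a Gaussian delay of standard deviation `sigma` and count how often the sample escapes its bit. A flip only results when the neighbouring bit differs, which for random data happens half the time:

```python
import random

def measured_ber(sigma, bit_time=5.0, nbits=100_000, seed=1):
    """Monte-Carlo BER estimate for Gaussian sampling-time jitter."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(nbits):
        delay = rng.gauss(0.0, sigma)
        # The sample leaves the intended bit when the jitter exceeds half a
        # bit; the neighbouring bit differs with probability 1/2 for random data.
        if abs(delay) > bit_time / 2.0 and rng.random() < 0.5:
            errors += 1
    return errors / nbits
```

Sweeping `sigma` and plotting the result gives the curve: larger jitter, higher BER.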
--- Quote Start --- The error-probability models that use the Eb/No ratio assume: 1. The noise is additive. 2. The noise is Gaussian. Only in that case does the bit energy (Eb) make sense. You should generate a normally distributed pattern of random delays and measure the bit error rate (BER) over a sufficiently long frame of bits. Your x axis should be the delay and your y axis the BER. --- Quote End ---
So if I delay my edges on a Gaussian-distribution basis, I can model my BER using BER vs Eb/No? OK, that's great. Let's assume I made my delays on a Gaussian basis. Why do you say my x axis is the delay? Isn't it supposed to be Eb/No? Where is Eb in that case? Thank you, you helped me picture this better.
You should find a valid ratio, analogous to Eb/No, but built from your random delay.
Something like bit time / delay. This would be a useful relative metric in your system.
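Assuming Gaussian jitter, that ratio can be pushed one step further: with timing noise of standard deviation sigma, a mid-bit sample is lost when the timing error exceeds half a bit time, so the dimensionless ratio bit_time/sigma plays the role that Eb/No plays in the classical curves. A small sketch of that analysis (my own working, under those assumptions):

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def predicted_ber(bit_time, sigma):
    """BER predicted from the ratio bit_time/sigma: the sample escapes its
    bit when |jitter| > bit_time/2 (probability 2*Q(bit_time/(2*sigma))),
    and only half of those events flip the bit for random data."""
    return q_func(bit_time / (2.0 * sigma))
```

Plotting BER against bit_time/sigma (or its value in dB) then mirrors the familiar BER-vs-Eb/No waterfall shape.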
--- Quote Start --- You should find a valid ratio, analogous to Eb/No, but built from your random delay. Something like bit time / delay. This would be a useful relative metric in your system. --- Quote End ---
Okay, I get you. So now, how do I do that in Verilog? Generate Gaussian random numbers and simulate with them? I get an error when I try to simulate with variable intervals.
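On generating the Gaussian numbers: Verilog-2001 already provides `$dist_normal(seed, mean, std)` (integer-valued), and a real-valued alternative is the Box-Muller transform, which turns two uniform draws (e.g. derived from `$random`) into one standard normal sample. Sketched here in Python; the same arithmetic ports to a Verilog `real` function:

```python
import math
import random

def gauss_box_muller(rng):
    """One N(0,1) sample from two uniforms via the Box-Muller transform."""
    u1 = max(rng.random(), 1e-12)  # clamp away from 0 to avoid log(0)
    u2 = rng.random()
    return math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
```

On the scheduling error: simulator delays are rounded to the `timescale` precision, so one common workaround is to scale the Gaussian value and round it to an integer number of precision units before using it as a `#` delay.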
A typical way to achieve this modeling is to generate a pseudo-random binary sequence (PRBS) with an LFSR (linear feedback shift register) and then decide on a rule that introduces an error into the original sequence. For example, you may decide that an errored bit is introduced whenever a particular pattern occurs in the LFSR state.
In this way your Eb/No is directly correlated with the probability of your pattern occurring in the PRBS stream, which is easily computable, because a PRBS generator of sufficient length approximates Gaussian noise well. I hope this helps. Cheers, OD
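A sketch of that scheme, in Python rather than Verilog for brevity, with hypothetical parameters: a PRBS-7 register (7-bit maximal-length LFSR, taps at bits 7 and 6) and a 3-bit trigger pattern. With a k-bit trigger pattern the injected error probability over a full period is about 2^-k, which is the knob that stands in for Eb/No:

```python
def lfsr7_states(seed=0b0000001, n=127):
    """Successive states of a 7-bit maximal-length Fibonacci LFSR
    (taps at bits 7 and 6); the period is 2**7 - 1 = 127."""
    states, state = [], seed
    for _ in range(n):
        states.append(state)
        fb = ((state >> 6) ^ (state >> 5)) & 1   # XOR of the two MSB taps
        state = ((state << 1) | fb) & 0x7F       # shift left, feed back into LSB
    return states

def error_flags(states, pattern=0b101, k=3):
    """Flip the output bit whenever the low k state bits match `pattern`;
    over one period this fires with probability close to 2**-k."""
    mask = (1 << k) - 1
    return [(s & mask) == pattern for s in states]
```

Over one 127-state period the 3-bit pattern fires 16 times (16/127 ≈ 1/8), so lengthening the pattern by one bit halves the injected error rate.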
Thanks guys.
This is what I have done. I defined a constant called DPERIOD that holds my one-bit time. In Verilog I cannot make an edge occur at a Gaussian random instant, since that is a floating-point value, and I cannot make the edge occur a floating-point fraction of a unit interval from the simulation time (can I?). So I made the occurrence of the edge a Gaussian random decision instead: after DPERIOD time, I generate a Gaussian random variable with mean 0 and variance 1. If this random variable is less than a threshold I specify, the edge occurs and the data changes level. If it is bigger than the threshold, the generation repeats until the condition is met, and then the edge occurs. This way, changing the threshold lets me control how much noise I want in my signal, and I work around the fact that I can't make the transition occur at a variable floating-point simulation time (again, can I?). Now I will plot my BER vs. THRESHOLD. Would this actually be a good replacement for BER vs. Eb/No? What do you think of the above?
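The accept/retry rule just described can be sketched in Python (my model of it, assuming one draw per regeneration interval): each attempt draws N(0,1) and the edge is released on the first draw below the threshold. The retry count is then geometric with success probability Phi(threshold), so each threshold maps to a mean extra delay, and that mapping is what would let a BER-vs-threshold curve be relabelled as BER versus a delay ratio:

```python
import random

def extra_retries(threshold, rng):
    """Retries before the edge fires: draw N(0,1) per attempt and release
    the edge on the first draw below `threshold` (a geometric count with
    success probability Phi(threshold))."""
    retries = 0
    while rng.gauss(0.0, 1.0) >= threshold:
        retries += 1
    return retries
```

One property of the scheme worth noting: the resulting edge delays are geometric multiples of the retry interval rather than Gaussian, so the threshold controls the mean delay but not the delay's distribution shape.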