A very important consideration in data communication is how fast we can send data, in bits per second, over a channel. The data rate depends on three factors: the bandwidth available, the number of signal levels used, and the quality of the channel (the level of noise), as summarized in the figure below.
Two theoretical formulas were developed to calculate the data rate:
- Nyquist for a noiseless channel
- Shannon for a noisy channel.
Fig: Data rate limits
# Channel Capacity:
We have seen that there are a variety of impairments that distort or corrupt a signal. For digital data, the question that then arises is to what extent these impairments limit the data rate that can be achieved. The rate at which data can be transmitted over a given communication path, or channel, under given conditions, is referred to as the channel capacity.
There are four concepts here that we are trying to relate to one another:
1) Data rate: This is the rate, in bits per second (bps), at which data can be communicated.
2) Bandwidth: This is the bandwidth of the transmitted signal as constrained by the transmitter and by the nature of the transmission medium, expressed in cycles per second, or hertz.
3) Noise: The average level of noise over the communications path.
4) Error rate: The rate at which errors occur, where an error is the reception of a 1 when a 0 was transmitted or the reception of a 0 when a 1 was transmitted.
Let us consider the case of a channel that is noise-free. In this environment, the limitation on data rate is simply the bandwidth of the signal. A formulation of this limitation, due to Nyquist, states that if the rate of signal transmission is 2W, then a signal with frequencies no greater than W is sufficient to carry the data rate. The converse is also true: Given a bandwidth of W, the highest signal rate that can be carried is 2W.
As an example, consider a voice channel being used, via modem, to transmit digital data. Assume a bandwidth of 3100 Hz. Then the capacity C of the channel is 2W = 6200 bps.
With multilevel signaling, the Nyquist formula becomes

$$C = 2W \log_2 L$$

where W is the bandwidth of the channel and L is the number of discrete signal or voltage levels.
Example: Consider a noiseless channel with a bandwidth of 3000 Hz transmitting a signal with two signal levels. The maximum bit rate can be calculated as

$$C = 2 \times 3000 \times \log_2 2 = 6000 \text{ bps}$$
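The Nyquist formula is easy to check numerically. The short Python sketch below (function name and values are illustrative, not from the text) reproduces the two examples above and shows how adding signal levels raises the rate for the same bandwidth:

```python
import math

def nyquist_capacity(bandwidth_hz: float, levels: int) -> float:
    """Noiseless-channel capacity per Nyquist: C = 2 * W * log2(L)."""
    return 2 * bandwidth_hz * math.log2(levels)

# Binary signaling over the 3100 Hz voice channel: 2 * 3100 * log2(2) = 6200 bps
print(nyquist_capacity(3100, 2))   # 6200.0

# Two-level signaling over a 3000 Hz channel: 2 * 3000 * log2(2) = 6000 bps
print(nyquist_capacity(3000, 2))   # 6000.0

# Four-level signaling carries 2 bits per signal element, doubling the rate
print(nyquist_capacity(3000, 4))   # 12000.0
```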
All of these concepts can be tied together neatly in a formula developed by the mathematician Claude Shannon. The higher the data rate, the shorter the duration of each bit, and so the more damage a given burst of unwanted noise can do.
For a given level of noise, we would expect that greater signal strength would improve the ability to correctly receive data in the presence of noise. The key parameter involved in this reasoning is the signal-to-noise ratio (SNR), which is the ratio of the power in a signal to the power contained in the noise that is present at a particular point in the transmission. Typically, this ratio is measured at the receiver, because it is at this point that an attempt is made to process the signal and eliminate the unwanted noise. For convenience, this ratio is often reported in decibels:

$$\mathrm{SNR_{dB}} = 10 \log_{10} \frac{P_{\text{signal}}}{P_{\text{noise}}}$$
This expresses the amount, in decibels, that the intended signal exceeds the noise level. A high SNR will mean a high-quality signal and a low number of required intermediate repeaters.
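As a minimal sketch of the decibel conversion, assuming the SNR is supplied as a plain power ratio (function names are illustrative):

```python
import math

def snr_db(signal_power: float, noise_power: float) -> float:
    """SNR in decibels: 10 * log10(P_signal / P_noise)."""
    return 10 * math.log10(signal_power / noise_power)

def db_to_ratio(db: float) -> float:
    """Convert an SNR given in dB back to a plain power ratio."""
    return 10 ** (db / 10)

print(snr_db(1000.0, 1.0))   # 30.0 -> a 1000:1 power ratio is 30 dB
print(db_to_ratio(30.0))     # 1000.0
```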
The signal-to-noise ratio is important in the transmission of digital data because it sets the upper bound on the achievable data rate. Shannon's result is that the maximum channel capacity, in bits per second, obeys the equation

$$C = W \log_2(1 + \mathrm{SNR})$$
where C is the capacity of the channel in bits per second, W is the bandwidth of the channel in hertz, and SNR is the signal-to-noise ratio expressed as a power ratio (not in decibels). As an example, consider a voice channel being used, via modem, to transmit digital data. Assume a bandwidth of 3100 Hz. A typical value of SNR for a voice-grade line is 30 dB, or a ratio of 1000:1. Thus,

$$C = 3100 \times \log_2(1 + 1000) \approx 30{,}898 \text{ bps}$$
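The same arithmetic can be expressed as a short Python sketch that combines the dB conversion with Shannon's formula (the function name is illustrative; the 3100 Hz and 30 dB figures are those of the example above):

```python
import math

def shannon_capacity(bandwidth_hz: float, snr_db: float) -> float:
    """Shannon capacity C = W * log2(1 + SNR), with the SNR supplied in dB."""
    snr = 10 ** (snr_db / 10)            # convert dB to a plain power ratio
    return bandwidth_hz * math.log2(1 + snr)

# Voice-grade line: 3100 Hz bandwidth, 30 dB SNR (a 1000:1 power ratio)
print(round(shannon_capacity(3100, 30)))   # 30898 bps (approximately)
```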