Degraded Signal Testing
Introduction
Two different testing philosophies have evolved in our communications networks: a “perfection” approach, under which the Device Under Test (DUT) is subjected to signals produced by test equipment whose performance exceeds the DUT’s required specifications, and a “worst-case” approach, under which the DUT is subjected to signals that simulate those it is likely to encounter in the field.
Traditionally, the telecommunications equipment industry has used the perfection approach, while the data communications equipment industry has favored the worst-case approach. The new IEEE 802.3ae standard (which requires that the test signal be purposefully degraded) means that some manufacturers seeking to have their equipment included in 10GE systems deployed in the network may need to employ worst-case testing techniques for the first time. The purpose of this discussion is to compare and contrast these two testing philosophies, including their effect on time-to-market, cost and interoperability.
“Perfection” Testing Approach
The “perfection” approach is a trial-and-error model under which equipment is initially tested under ideal conditions, with initial performance specification margins provided to account for the vagaries of real-world conditions.
Under this approach, if the equipment does not initially perform satisfactorily in the field (that is, if the initial specification margins are not sufficient), the equipment is returned to the manufacturer and retested, and the equipment or its specification margins are changed until it reaches an acceptable level of performance, with component vendors, equipment vendors and service providers each providing feedback. If internal specifications are tightened, factory yields may be reduced because more units are rejected during test, resulting in higher costs and longer manufacturing cycles.
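As a rough illustration of that yield effect, the short Python sketch below (built on a purely hypothetical, normally distributed test parameter and made-up limits, not on any real product data) shows how tightening an internal test limit rejects a larger fraction of otherwise identical production units:

import random

random.seed(0)

# Hypothetical transmitter parameter (e.g., margin above a required level, in dB),
# assumed normally distributed across 100,000 production units.
units = [random.gauss(0.0, 0.5) for _ in range(100_000)]

def yield_fraction(lower_limit: float) -> float:
    """Fraction of units that pass a one-sided internal test limit."""
    return sum(1 for u in units if u >= lower_limit) / len(units)

# Original internal limit vs. a tightened one (illustrative numbers only):
print(f"yield with -1.0 dB limit: {yield_fraction(-1.0):.1%}")   # roughly 98%
print(f"yield with -0.5 dB limit: {yield_fraction(-0.5):.1%}")   # roughly 84%

Every unit rejected by the tighter limit was acceptable under the original one, which is how specification tightening translates directly into cost.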
The benefit of following a testing philosophy with a feedback loop that adds margin is that, over time, the equipment will work in a great variety of applications. The downside is that early in the life cycle, as the system is deployed into a variety of applications, failures may appear due to inadequate testing margins, leading to production cost increases, delays and customer dissatisfaction. Some of these failures may occur only in marginal applications; but when such a unit is returned as a failure and the internal specification is tightened in response, the increased costs and delays effectively tax every customer.
Communications test equipment has traditionally been designed to produce as perfect a signal as possible. Indeed, test equipment is often marketed on the basis of how its signal compares to that of other test equipment, and manufacturers are continually improving these “perfect” signals.
This has resulted in a testing environment in which the transmitted test signal is likely to be far superior to that produced by a typical piece of transmission equipment. As a consequence, the receiver may pass in the test environment yet fail in the real world when it is subjected to a borderline transmitted signal. So margin is added to account for the difference, and the receiver is again tested with the perfect signal against a tighter specification.
If the added margin is not sufficient, customers may again return the system as a field failure; the returned system may again pass the factory specification; and costly investigations into the cause of the failure may ultimately ensue. The result is further changes to the equipment or further expansion of the internal margin in an attempt to reduce the failure rate. This approach ultimately produces equipment with very high reliability and with flexibility for many different networks, provided the feedback loop operates long enough and sufficient margin is added, albeit possibly in an unnecessarily costly and prolonged manner.
“Worst-Case” Testing Approach
In contrast, equipment tested under “worst-case” allowable parameters should function within those parameters and interoperate on the day it is initially shipped, without the need for an extended feedback loop. However, equipment that has been tested under worst-case allowable conditions may need additional qualification if it will be used outside its predefined allowable operating range.
While testing with a perfect signal has many adherents, it is a time-consuming process that may not ensure the tested equipment will ultimately function well, even after an extensive feedback process is completed and margin is added. Were proponents of this approach also to incorporate degraded signals in their testing, the feedback loop could be shortened, specification margins could be reduced, and overall cost savings would be likely.
Some manufacturers recognize the problems associated with testing against “perfect” test equipment and instead test with a “typical” or “golden” unit. Unfortunately, as production lines are expanded and multiple contract manufacturing locations are established, replicating the typical unit may become difficult or impossible. Further, margin still has to be added, since “typical” is not worst-case and there is variability from test station to test station.
10GE Use of Degraded Signals
The 10GE IEEE 802.3ae specification requires the use of degraded signals to test receivers and transmitters that will carry 10GE LAN or WAN traffic. For example, in the Stressed Eye Test for 10 Gb/s Receivers, the eye is partially closed vertically and horizontally while carrying a test pattern generated from a laser with a poor extinction ratio.
This Layer 1 test simulates a “worst-case” transmitter feeding the receiver under test. The standard requires that the specified Bit Error Rate be met with the degraded signal feeding the receiver. Because all 10GE manufacturers must meet the specification with these “worst-case” signals, interoperability is assured. In addition to the interoperability advantage, equipment tested to the standard can quickly be deployed in the network.
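To make the pass/fail arithmetic concrete, the following Python sketch computes the extinction ratio and vertical eye closure of a stressed test signal and checks a measured error count against a Bit Error Rate limit. It is an illustration only: the function names, the example signal levels and the 1e-12 error-rate target are assumptions made for this example, not values quoted from IEEE 802.3ae.

import math

def extinction_ratio_db(p1_mw, p0_mw):
    """Extinction ratio: optical power of the '1' level vs. the '0' level, in dB."""
    return 10.0 * math.log10(p1_mw / p0_mw)

def vertical_eye_closure_db(unstressed_opening, stressed_opening):
    """Vertical eye closure: how far the stressed eye opening has been reduced
    relative to an unimpaired signal, in dB."""
    return 10.0 * math.log10(unstressed_opening / stressed_opening)

def receiver_passes(bit_errors, bits_observed, ber_limit=1e-12):
    """Pass/fail verdict: the error ratio measured with the stressed signal
    applied must not exceed the allowed limit (1e-12 is an assumed target here)."""
    return (bit_errors / bits_observed) <= ber_limit

# Illustrative stressed-signal description (example numbers, not normative values):
er = extinction_ratio_db(p1_mw=0.50, p0_mw=0.15)       # deliberately poor extinction ratio
closure = vertical_eye_closure_db(1.00, 0.60)          # partially closed eye
print(f"extinction ratio = {er:.1f} dB, vertical eye closure = {closure:.1f} dB")

# Receiver under test: zero errors observed over 10^13 bits with the stressed signal applied.
print("receiver passes:", receiver_passes(bit_errors=0, bits_observed=10**13))

A receiver that meets the error-rate target while being driven by such a deliberately impaired signal can be expected to tolerate a compliant but borderline transmitter in the field.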
Conclusions
“Perfection” testing relies on added test margins to compensate for real-world conditions. These margins are often in a state of flux as a new standard is introduced into the network, and the feedback cycle of incrementally resolving issues and adding margin can unnecessarily delay widespread deployment.
Companies may still add margin to components and systems tested under a “worst-case” approach, but the margin is likely to be considerably smaller than if the testing were done with perfect signals. The degraded signal testing requirements of the IEEE 802.3ae standard should improve Layer 1 system interoperability, reduce costs, and shorten the trial-and-error feedback process between component suppliers, system manufacturers and service providers, speeding the deployment of 10GE systems into the network.