To examine where we stand in the Red Queen's race of product quality, National Instruments (NI) recently published its 10th annual Automated Test Outlook. This year's guest editor, Dr. James Truchard, retired on January 1st as NI's CEO after 40 years at the helm. In that time he gained a perspective on the industry that few others can match. Here, he looks at the past and the future of electronics test, then introduces five articles from prior years' editions that remain as relevant today as when they first appeared.
Since NI opened its doors in 1976, test has progressed dramatically from a hardware-based activity, in which software served only to set up the hardware before triggering a test, to a software-driven discipline. As electronics became more sophisticated, hardware setups became increasingly complicated and error-prone. Designing an effective test strategy required human-friendly software that communicated with engineers in their own language.
The most significant change in test was the introduction of flexible, modular, software-based instruments running on a host computer, replacing hard-wired boxes that bundled features engineers didn't need for a particular application and couldn't easily adapt to new and unfamiliar circumstances. From the beginning, the key was driving instruments from software, but it began with a hardware innovation: the General Purpose Interface Bus (GPIB, IEEE 488), with which engineers could trigger real I/O signals in parallel instead of laboring to set up simultaneous triggering over serial links such as RS-232. Triggering true parallel signals removed one major source of error and, later, allowed test software to take advantage of the multicore architectures of modern microprocessors.
Even with such tools, however, test engineers had to create test strategies using human interfaces optimized for computer scientists and other computer-savvy staff rather than the “in the trenches” engineers who wanted a simple way to build programs that adequately verified the performance of devices under test. Test engineers needed intuitive, graphical tools that allowed them to easily represent test logic without resorting to actual ones and zeros to get the equipment to execute correctly. Truchard comments, “The pace of change in modern electronics means that you can’t waste time doing by hand what a tool can easily do for you.” Out of that fundamental idea, LabVIEW was born.
Best of the Future
Dr. Truchard looks to a future that in many respects mirrors the past. Today's factories feature primarily software-driven functions that combine computer hardware, electromechanical systems (loosely referred to as "robots"), and human beings working cooperatively in an environment that emphasizes safety, efficiency, and cost. As technology advances, these systems will become increasingly interwoven. He comments, "The more you connect things, the more you'd be crazy not to take advantage of the data you can collect from billions of sensor nodes." Accomplishing these goals requires an increasingly user-friendly graphical environment, a transition comparable to the progression from machine language to assembly language (machine language with mnemonics to help programmers remember instructions), and then to modern high-level languages and graphics-oriented environments. To keep pace with increasingly complex technology, this transition will continue, underscoring the need for vendors and customers to cooperate so that products continue to supply the functions the market demands.
The individual articles in the collection concentrate on different aspects of this inexorable trend.
“Optimizing Test Organizations” addresses the perennial management perception that test incurs only cost with no benefit, so that the effort (and budget) spent on test often falls short of what is needed to achieve measurable results. Yet research confirms that effective test reduces development time, ensures that high-quality products deliver their claimed performance, and cuts product returns and their accompanying costs. The biggest drawback to this approach is the unavoidable time lag between implementing an efficient quality and test strategy and reaping its benefits.
“Reconfigurable Instrumentation” explores the introduction of software-defined instrumentation, also called virtual instrumentation, describing how many of today’s test problems require speed and complexity that far exceed the levels attainable purely in software. The author describes testing an RF receiver, which requires coding and decoding, modulation and demodulation, packing and unpacking, and similar data-intensive tasks within a single clock cycle. Such cases require pushing the software-created test steps down to user-programmable hardware, often field-programmable gate arrays (FPGAs)—reprogrammable chips with massive parallel I/O that can take on a wide variety of test tasks and execute them at hardware speeds.
The author of “Software-Centric Ecosystems” examines the fundamental difference between hardware-driven and software-driven environments. Defining a system primarily in hardware renders it inflexible and vendor-dependent. Software definitions, on the other hand, follow an algorithm embedded in a software system running on an operating system such as Windows or Android. Even macOS, Apple’s proprietary operating system, defines only the Mac’s internal behavior. With a compatible interface, such as PXI, LXI, or IVI (Interchangeable Virtual Instruments), any of these operating systems can drive the same kinds of test and measurement instruments. The author contends that the future will move even further toward software-driven systems, increasing their flexibility and reducing their implementation cost.
“Managed Test Systems” looks at the enormous flexibility realized when you assemble a system from individual modules. Although the number of individual components (controllers, chassis, and instruments) may increase, managers can reduce test costs by maximizing the utilization of resources. As an example, the author cites taking advantage of system-wide temperature sensing to adjust fan speeds, thereby striking a compromise between cooling the chassis and keeping the fan noise (and power consumption) to a minimum. This approach also lowers learning curves by iterating toward an optimized solution on one system, then replicating that system when necessary. It also allows upgrading or replacing individual functions without scrapping entire test systems.
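The temperature-to-fan-speed compromise described above can be sketched as a simple mapping from the hottest sensor reading to a fan duty cycle. This is a minimal illustration only; the function name, thresholds, and linear ramp are assumptions for the sketch, not an NI API or the article's actual algorithm.

```python
# Hypothetical sketch: derive a chassis fan speed from system-wide
# temperature readings, trading cooling headroom against fan noise
# and power consumption. All names and thresholds are illustrative.

def fan_speed_percent(sensor_temps_c, idle_c=35.0, max_c=70.0,
                      min_speed=20.0):
    """Map the hottest sensor reading to a fan speed in percent.

    At or below idle_c the fan runs at min_speed (quiet); at or above
    max_c it runs at 100%; in between it ramps linearly.
    """
    hottest = max(sensor_temps_c)
    if hottest <= idle_c:
        return min_speed
    if hottest >= max_c:
        return 100.0
    fraction = (hottest - idle_c) / (max_c - idle_c)
    return min_speed + fraction * (100.0 - min_speed)

print(fan_speed_percent([30.0, 28.5]))   # cool chassis -> 20.0
print(fan_speed_percent([52.5, 40.0]))   # halfway up the ramp -> 60.0
print(fan_speed_percent([75.0, 60.0]))   # over the limit -> 100.0
```

The linear ramp is the simplest policy that captures the trade-off; a production system would more likely use hysteresis or a PID loop to avoid fan-speed oscillation near the thresholds.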
The last item, “Driven By Necessity,” focuses on the changes in safety requirements and test philosophies for autonomous vehicles such as cars and aircraft. Manufacturers can increase their reliance on FPGAs and other customizable circuitry, then subject finished products to real-world simulations that come closer than ever before to actual human experience, increasing product quality and reducing time to market. This trend, too, will only grow over the next few years.
The source documents expand on all of these trends, and much more.
Source: Automated Test Outlook: 2017