Care Bit Density and Test Cube Clusters: Multi-Level Compression Opportunities
Abstract
Most of the recently discussed and commercially introduced test stimulus data compression techniques are based on the low care bit densities found in typical scan test vectors. Data volume and test times are reduced primarily by compressing the don't-care bit information. The original care bit density hence dominates the theoretical compression limits. Further compression can be achieved by focusing on opportunities to compress the care bit information in addition to the don't-care bit information. This paper discusses at a conceptual level how data compression based on test cube clustering effects, as used in Weighted Random Pattern methods, could be combined with care-bit-oriented methods to achieve multi-level test stimulus compression.

Introduction and Background

Test data compression using simple on-chip decoding hardware has become a very active topic in research as well as in commercial scan test generation tools. The recently introduced commercial scan test data compression tools take advantage of the fact that typical scan test vectors resulting from Automatic Test Pattern Generation (ATPG) contain relatively few care bits, which must be at specific logic values for target fault detection. The vast majority of bit values in the test vectors are don't-care bit values that are generated by more or less arbitrary fill algorithms. How these two scan test properties, the relatively low density of care bits and the freedom to choose don't-care bit values, can be utilized for test data compression was first described in [1]. The so-called LFSR-Coding algorithm introduced in that paper shows how an LFSR (Linear Feedback Shift Register) seed can be calculated such that cycling the LFSR with that seed produces the correct care bit values while filling the don't-care bits with pseudo-random values as a byproduct. Only the care bit values contribute to the seed calculation. It was shown that, under certain statistical assumptions, the size of the seed vector needed to represent the care bit values is only somewhat larger than the number of care bits. In other words, the original care bit density more or less determines the achievable compression ratio. For example, if the original care bit density in a test vector is 2.5%, then a stimulus data compression ratio approaching 40x can theoretically be achieved by using LFSR seeds to represent the care bits. Subsequent technical papers by different researchers have introduced various extensions and improvements to the original LFSR-Coding method and have shown empirically that the theoretical compression limit can indeed be approached [e.g., 2, 3, 4].

The seed calculation used in LFSR-Coding takes advantage of the fact that the stimulus values generated by cycling the LFSR are linear Boolean sums (XOR sums) of the seed values. The care bit values in a test vector thus create a set of linear equations that is solved to determine the appropriate seed vector for the test. We therefore refer to the encoding algorithm as linear Boolean encoding. The linear Boolean encoding concept can be generalized to work in conjunction with linear decoding circuits other than LFSRs. In fact, any combinational or sequential linear network could be used. The simplest form of a linear decoding network is to fan out from each scan input to multiple scan chains. This approach is known as broadcast scan and has also been shown to produce very good compression results [5, 6, 7].
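As a concrete illustration of linear Boolean encoding, the following minimal sketch builds the GF(2) equation system for a small Fibonacci-style LFSR and solves it for a seed. The tap positions, sizes, and helper names (lfsr_output_matrix, solve_gf2) are illustrative assumptions, not taken from [1].

```python
# A minimal sketch of linear Boolean encoding (LFSR-Coding) as described above,
# assuming a small Fibonacci-style LFSR. All names and sizes are illustrative.

def lfsr_output_matrix(taps, seed_len, num_bits):
    """Row t gives output bit t as a GF(2) combination (0/1 vector) of seed bits."""
    # Symbolic state: register cell i initially holds seed bit i.
    state = [[1 if i == j else 0 for j in range(seed_len)] for i in range(seed_len)]
    rows = []
    for _ in range(num_bits):
        rows.append(state[-1][:])                       # last cell feeds the scan chain
        feedback = [0] * seed_len
        for t in taps:                                  # feedback = XOR of tapped cells
            feedback = [a ^ b for a, b in zip(feedback, state[t])]
        state = [feedback] + state[:-1]                 # shift by one position
    return rows

def solve_gf2(rows, rhs):
    """Gauss-Jordan elimination over GF(2); returns one solution or None."""
    rows, rhs = [r[:] for r in rows], rhs[:]
    n, rank, pivots = len(rows[0]), 0, []
    for c in range(n):
        piv = next((i for i in range(rank, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        rhs[rank], rhs[piv] = rhs[piv], rhs[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][c]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
                rhs[i] ^= rhs[rank]
        pivots.append(c)
        rank += 1
    if any(rhs[i] for i in range(rank, len(rows))):
        return None                                     # no seed encodes this cube
    seed = [0] * n
    for i, c in enumerate(pivots):
        seed[c] = rhs[i]
    return seed

# Only the care bits generate equations; don't-care positions are simply not listed.
taps, seed_len, chain_len = [3, 4, 5, 7], 8, 32         # illustrative polynomial and sizes
cube = {3: 1, 10: 0, 17: 1, 25: 1}                      # {scan position: required value}
M = lfsr_output_matrix(taps, seed_len, chain_len)
positions = sorted(cube)
seed = solve_gf2([M[p] for p in positions], [cube[p] for p in positions])
print("seed:", seed)  # 8 stored bits reproduce all 4 care bits of a 32-bit scan load
```

In practice the decompressors and equation solvers are far more elaborate [2, 3, 4], but the cost structure is the same: only the care bits add equations, so the seed length, and with it the stored data volume, tracks the care bit count rather than the scan length.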
The on-chip decoding methods can be complemented by equally simple and effective tester-resident methods. One concept that is used in practice is to algorithmically generate the fill data for the don't-care bits on the tester. The RLE (Run Length Encoding) approach described in [7, 8] utilizes the repeat features of certain testers for this purpose. The ATPG fill algorithm is changed to repeat the last care bit value rather than using the more common pseudo-random fill. This simple change creates runs of repeating scan-in vectors that can be represented by a single broadside vector and a repeat op-code rather than loading the vectors directly into buffer memory. It has been shown [7] that the tester-resident RLE method can be combined with moderate on-chip broadcast scan (e.g., 10x or 20x fan-out) for very dramatic data volume reductions (100x).

All techniques described so far focus on compressing the don't-care bit values. However, there exists a long practical history of test data compression using a fundamentally different approach. Weighted Random Pattern (WRP) testing exploits the fact that test cubes (the care bits in a test vector) for multiple fault tests tend to form clusters with small Hamming distance between the respective cubes in each cluster [9]. That is, the cubes in a cluster differ from each other in very few bit positions. For example, the stuck-at fault test cubes for a multi-input AND gate differ from each other in at most 2 bit positions, irrespective of the width of the AND gate. Each stuck-at fault test for a 20-input AND gate, for instance, has 20 care bits. There are 20 + 1 = 21 different stuck-at fault tests associated with the 20-input AND gate. The total number of care bits for all tests hence is 420 bits. From an information content point of view, recognizing the clustering effect, each cube could be represented as the XOR sum of one common base cube for each cluster and a difference vector for each additional unique test cube within the respective cluster. The difference vectors for the AND gate example will have at most 2 non-zero positions, meaning that they are very amenable to data compression using sparse vector techniques. In other words, instead of storing 21 full test cubes for the AND gate, the tests would be represented by some form of data for a single base cube and by highly compressed data for the 21 difference vectors.

In the WRP approach, the base cube information is implicitly expressed by so-called weight value sets [9, 10] that more or less strongly bias the output values from a Pseudo-Random Pattern Generator (PRPG) towards the base cube values. Each test vector bit position is associated with a 4-bit weight value, meaning that representing the base vector in this way consumes 4 test vectors' worth of data volume. The care bit values of the vectors produced by cycling the weighted PRPG structure will, due to the biasing, be within a small Hamming distance of the base cube. Groups of "trial" vectors with different PRPG seeds are fault-simulated to determine which seeds actually produce useful test cubes. The resulting effective seeds can be interpreted as a compact data representation for the difference vectors that complement the base cube information encoded in the weight sets. Storage-efficient hardware/software methods for jumping from one effective PRPG seed to the next are described in [11].
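The following toy sketch makes the AND-gate arithmetic above concrete. It uses a naive sparse base-plus-difference encoding of my own for illustration, not WRP's actual weight-set and seed representation; the bit budget is an assumption.

```python
# Toy illustration of the clustering argument: represent every test cube of a
# 20-input AND gate as base_cube XOR a sparse difference vector.

N = 20                                    # AND-gate width
base = (1 << N) - 1                       # base cube: all inputs at 1 (the stuck-at-0 test)

# Stuck-at-1 test for input i: input i at 0, all other inputs at 1.
cubes = [base] + [base ^ (1 << i) for i in range(N)]          # 21 cubes, 20 care bits each

# Difference vectors relative to the base cube, stored as lists of flipped positions.
diffs = [[i for i in range(N) if (cube ^ base) >> i & 1] for cube in cubes]
assert all(len(d) <= 1 for d in diffs)    # distance <= 1 here because the base is itself a cube

raw_bits = len(cubes) * N                 # 21 * 20 = 420 care bits stored directly
pos_bits = N.bit_length()                 # 5 bits to name one of 20 positions
clustered_bits = N + sum(1 + len(d) * pos_bits for d in diffs)  # base + (flag + positions) per cube
print(raw_bits, "->", clustered_bits)     # 420 -> 141 under this naive encoding
```

WRP expresses the base cube through weight value sets and the difference information through effective PRPG seeds rather than explicit position lists; the sketch only shows why a clustered representation is so much smaller than storing every cube in full.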
For large circuits with many scan cells, the data volume for the seed information is negligible compared to the weight set information. Many years of experience with WRP suggest that the combined storage for the weight value sets and effective PRPG seeds tends to be 5x to 20x less than the storage required for a full stored pattern scan test set with equivalent fault coverage. WRP, often combined with PRPG seed jumping, has been and still is in production use for some of the industry’s most complex chips [12, 13]. The concepts discussed in the following show how methods utilizing low care bit density and methods taking advantage of test cube clustering could be used in concert to achieve higher compression ratios than can be achieved with either method by itself.
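As a rough indication of the potential, the following back-of-the-envelope model (my own idealization, not a result from this paper) combines the 2.5% care bit density quoted earlier with the AND-gate cluster figures from the naive sketch above.

```python
# Idealized model of how the two compression levels could stack, reusing only
# numbers already quoted in the text: a 2.5% care bit density (~40x from
# care-bit-oriented encoding alone) and the 420 -> 141 bit reduction for the
# AND-gate cluster under the naive sparse encoding sketched above.

care_bit_density = 0.025
level1_ratio = 1 / care_bit_density            # ~40x: stored data tracks the care bit count

cluster_raw, cluster_encoded = 420, 141        # AND-gate cluster example
level2_ratio = cluster_raw / cluster_encoded   # ~3x less care bit information per cluster

# If clustering first shrinks the care bit information that a care-bit-oriented
# encoder must still represent, the two ratios multiply in this idealized model.
print(f"{level1_ratio:.0f}x alone vs. roughly {level1_ratio * level2_ratio:.0f}x combined")
```

The point is only directional: any reduction of care bit information achieved at the clustering level raises the ceiling that a care-bit-oriented encoder can approach.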