Thursday, August 31, 2017

Manufacturing Variability Measurement and Control

Control charts have been traditionally used as the method of determining the performance of manufacturing processes over time by the statistical characterization of a measured parameter that is dependent on the process. They have been used effectively to determine if the manufacturing process is in statistical control. Control exists when the occurrence of events (failures) follows the statistical laws of the distribution from which the sample was taken.

Control charts are run charts with a centerline drawn at the manufacturing process average and control limit lines drawn at the tails of the distribution, at the ±3s points. They are derived from the distribution of sample averages X̄, where s is the standard deviation of the distribution of sample averages, related to the population standard deviation through the central limit theorem. If the manufacturing process is under statistical control, 99.73% of all observations are within the control limits of the process. Control charts by themselves do not improve quality; they merely indicate that the quality is in statistical “synchronization” with the quality level at the time when the charts were created.
There are two major types of control charts: variable charts, which plot continuous data from the observed parameters, and attribute charts, which are discrete and plot accept/reject data. Variable charts are also known as X̄, R charts for high volume and moving range (MR) charts for low volume. Attribute charts tend to show proportion or percent defective. There are four types of attribute charts: P charts, C charts, nP charts, and U charts (see Figure 3.1).
The selection of the parameters to be control charted is an important part of the six sigma quality process. Plotting too many parameters tends to dilute the benefit of the control charts, since they will all move in the same direction when the process is out of control. It is very important that the parameters selected for control charting be independent of each other and directly related to the overall performance of the product.
When introducing control charts to a manufacturing operation, it is beneficial to use elements that are universally recognized, such as temperature and relative humidity, or take readings from a process display monitor. In addition, the production operators have to be directly active in the charting process to increase their awareness and get them involved in the quality output of their jobs. Several shortcomings have been observed when initially introducing control charts. Some of these to avoid are:
Improper training of production operators. Collecting a daily sample and calculating the average and range of the sample data set might seem to be a simple task. Unfortunately, because of the poor skill set of operators in many manufacturing plants, extensive training has to be provided to make sure the manufacturing operator can perform the required data collection and calculation.
Using a software program for plotting data removes the focus from the data collection and interpretation of control charting. The issues of training and operating the software tools become the primary factors. Automatic means of plotting control charting should be introduced later in the quality improvement plan for production.
Selecting variables that are outside of the production group's direct sphere of influence, or that are difficult or impossible to control, could result in a negative perception of the quality effort. An example would be to plot the temperature and humidity of the production floor when there are no adequate environmental controls. The change in seasons will always bring an “out-of-control” condition.
In the latter stages of six sigma implementation, the low defect rate impacts the use of these charts. In many cases, successful implementation of six sigma may have rendered control charts obsolete, and the factory might switch over to TQM tools for keeping the quality level at the 3.4 PPM rate. The reason is that the defect rate is so low that only a few defects occur in a production day, and the engineers can pay attention to individual defects rather than the sampling plan of the control charts.



Control of Variable Processes and Its Relationship with Six Sigma

Variable processes are those in which direct measurements can be made of the quality characteristic in a periodic or daily sample. The daily samples are then compared with a historical record to see if the manufacturing process for the part is in control. In X̄, R charts, the sample measurements taken today are expected to fall within three standard deviations (3s) of the distribution of sample averages taken in the past. In moving range (MR) charts, the sample is compared with the 3σ limits of the population standard deviation derived from an R̄ estimator of σ. When a sample falls outside of the 3s limits, the process is declared not in control, and a corrective action process is initiated.

Another type of charting for quality in production is the precontrol chart. These charts directly compare the daily measurements to the part specifications. They require operators to make periodic measurements, before the start of each shift, and then at selected time intervals afterward. They require the operator to adjust the production machines if the measurements fall outside a green zone halfway between the nominal and specification limits.
Precontrol charts ignore the natural distribution of process or machine variability. Instead, they require a higher level of operator training and intervention in manufacturing to ensure that the production distribution stays within the halfway points of the specification limits on a daily basis. This is in direct opposition to the six sigma concept of analyzing and matching the process distribution to the specification limits only in the design phase, thus removing the need to do so every time parts are produced.
Moving range (MR) charts are used in low-volume applications. They take advantage of statistical methodology to reduce the sample size. They will be discussed further in Chapter 5. In high-volume manufacturing, where several measurements can be taken each day for production samples, X̄ and R control charts are used to monitor the average and the standard deviation of production. It is important to note that X̄ control charts are derived from the sample average distribution, which is always normal regardless of the parent distribution of the population. The population distribution, which is used for the six sigma calculations of the defect rate, is not always normal, as discussed in the previous chapter.
The X̄ chart shows whether the manufacturing process is centered around or shifted from the historical average. If there is a trend in the plotted data, then the process value, as indicated by the sample average X̄, is moving up or down. The causes of X̄ chart movements include faulty machine or process settings, improper operator training, and defective materials.
The R chart shows the uniformity or consistency of the manufacturing process. If the R chart is narrow, then the product is uniform. If the R chart is wide or out of control, then there is a nonuniform effect on the process, such as a poor repair or maintenance record, untrained operators, and nonuniform materials.
The variable control charts are generated by taking a historical record of the manufacturing process over a period of time. Shewhart, the father of control charts, recommends that “statistical control can not be reached until, under the same conditions, not less than 25 samples of four each have been taken to satisfy the required criterion.” These observations form the historical record of the process. All observations from now on are compared to this baseline.
From these observations, the sample average X̄ and the sample range R, which is the absolute value of the highest value minus the lowest value in the sample, are recorded. At the end of the observation period (25 samples), the average of the X̄s, designated as X̿ (the grand average), and the average of the Rs, designated as R̄, are recorded.
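The bookkeeping for these summary statistics can be sketched in Python. The 25 × 4 data set below is randomly generated as a stand-in for a real historical record, not an actual production table:

```python
import random

# Illustrative historical record: 25 daily subgroups of 4 measurements each
# (randomly generated here; not real production data).
random.seed(1)
subgroups = [[random.gauss(12.6, 2.0) for _ in range(4)] for _ in range(25)]

x_bars = [sum(s) / len(s) for s in subgroups]   # sample averages (X-bar)
ranges = [max(s) - min(s) for s in subgroups]   # sample ranges (R = high - low)

x_double_bar = sum(x_bars) / len(x_bars)        # grand average: X-bar chart centerline
r_bar = sum(ranges) / len(ranges)               # average range: R chart centerline
```

The grand average and average range become the centerlines of the X̄ and R charts, with the limits computed from them as shown in the next section.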

Variable Control Chart Limits

The control limits for the control charts are calculated using the following formulas and Table 3.1 for control chart factors. The control chart factors are designated with variables such as A2, D3, and D4, which are used to calculate the control limits of the X̄ and R control charts. The factor d2 is important in linking the average range, and hence the standard deviation of the sample averages (s), to the population standard deviation σ. The control chart factors shown in Table 3.1 stop at a subgroup size of 20 observations. Control charts are based on taking samples to approximate a large production output. If the sample becomes large enough, there is no advantage to using samples and their associated normal distributions to generate variable control charts.

Instead, 100% of production could be tested to find out if the parts produced are within specifications.

Control and specification limits

Control chart limits indicate a different set of conditions than the specification limits. Control limits are based on the distribution of sample averages, whereas specification limits are related to the population distributions of parts. It is desirable to have the specification limits as large as possible compared to the process control limits.
The control limits represent the ±3s points, based on a sample of n observations. To determine the standard deviation of the product population, the central limit theorem can be used:

s = σ/√n

where
s = standard deviation of the distribution of sample averages 
σ = population standard deviation 
n = sample size
Multiplying the distance from the centerline of the X̄ chart to one of the control limits by √n will determine the total product population standard deviation. A simpler approximation is to use the formula σ = R̄/d2, with d2 taken from the control chart factors in Table 3.1, to generate the total product standard deviation directly from the control chart data. R̄/d2 is a good estimator for σ when using small numbers of samples and their ranges.
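A minimal sketch of this estimator, using the R̄ = 4.44, n = 4 values from Example 3.1 below and an excerpt of the d2 column from a standard factor table:

```python
import math

# Excerpt of the d2 column of a standard control chart factor table (Table 3.1).
D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326, 9: 2.970}

def sigma_from_r_bar(r_bar, n):
    """Estimate the population standard deviation as R-bar / d2."""
    return r_bar / D2[n]

sigma = sigma_from_r_bar(4.44, 4)   # ~2.156, as in Example 3.1
s = sigma / math.sqrt(4)            # central limit theorem: s = sigma / sqrt(n), ~1.078
```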

X, R variable control chart calculations example

Example 3.1
In this example, a critical dimension for a part is measured as it is being inspected in a machining operation. To set up the control chart, four measurements were taken every day for 25 successive days, to approximate the daily production variability. These measurements were then used to calculate the limits of the control charts. The measurements are shown in Table 3.2.
X, R variable control chart

It should be noted that the value n used in Equation 3.5 is equal to 4, which is the number of observations in each sample. This is not to be confused with the 25 sets of subgroups or samples for the historical record of the process. If the 25 samples are taken daily, they represent approximately a one-month history of production.
During the first day, four samples were taken, measuring 9, 12, 11, and 14 thousandths of an inch. These were recorded at the top of the four columns of sample #1. The average, X̄, was calculated and entered in column 5, and the range R was entered in column 6.
X̄ Sample 1 = (9 + 12 + 11 + 14)/4 = 11.50
The range, or R, is calculated by taking the highest reading (14 in this case) minus the lowest reading (9 in this case).
R Sample 1 = 14 - 9 = 5

The averages of X̄ and R are calculated by dividing the column totals of X̄ and R by the number of subgroups.
X̿ = (sum of X̄s)/number of subgroups
X̿ = 315.50/25 = 12.62
R̄ = (sum of Rs)/number of subgroups
R̄ = 111/25 = 4.44
Using the control chart factors (Table 3.1), the control limits can be calculated using n = 4 as follows:
X̄ control limits
UCLX̄ = X̿ + A2R̄ = 12.62 + 0.73 · 4.44 = 15.86
LCLX̄ = X̿ − A2R̄ = 12.62 − 0.73 · 4.44 = 9.38
R control limits
Upper control limit (UCLR) = D4R̄ = 2.28 · 4.44 = 10.12
Lower control limit (LCLR) = D3R̄ = 0
Since the measurements were recorded in thousandths of an inch, the centerline of the X̄ control chart is 0.01262 and the control limits for X̄ are 0.01586 and 0.00938. For the R chart, the centerline is set at 0.00444 and the limits are 0.01012 and 0.
These numbers form the control limits of the control chart. After the limits have been calculated, the control chart is ready for use in production. Each production day, four readings of the part dimension are to be taken by the responsible operators, with the average of the four readings plotted on the X̄ chart, and the range, or difference between the highest and lowest reading, plotted on the R chart. The daily values of X̄ and R should plot within the control limits. If they plot outside the limits, the production process is not in control, and immediate corrective action should be initiated.
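The limit arithmetic of Example 3.1 can be sketched as follows, with the n = 4 factors taken from Table 3.1:

```python
# Control chart factors for a subgroup size of n = 4 (from Table 3.1).
A2, D3, D4 = 0.73, 0.0, 2.28

x_double_bar = 12.62   # grand average of the 25 sample averages
r_bar = 4.44           # average of the 25 sample ranges

ucl_x = x_double_bar + A2 * r_bar   # upper X-bar limit: 15.86
lcl_x = x_double_bar - A2 * r_bar   # lower X-bar limit: 9.38
ucl_r = D4 * r_bar                  # upper R limit: 10.12
lcl_r = D3 * r_bar                  # lower R limit: 0
```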

Alternate methods for calculating control limits

The control limits are set at three times the standard deviation of the sample average distribution (s). s can be calculated from σ, the population standard deviation, using the factor d2 and the central limit theorem:
σ = R̄/d2 = 4.44/2.059 = 2.156
s = σ/√n = 2.156/2 = 1.078
Control limits:

±3s = 1.078 · 3 = 3.23, which is close to the A2·R̄ value of 3.24 corresponding to the distance from the centerline to one of the control limits in the variable control charts.
It is interesting to note that for the total population of 100 numbers (Table 3.2), the standard deviation is σ = 2.156, which is exactly the one predicted by the R̄ estimator. If the specification limits are given, then the Cp, Cpk, and reject rates can be calculated as in the example in the previous chapter.

Examples of variable control chart calculations and their relationship to six sigma

These examples were developed to show the relationship of variable control charts and six sigma. They can be used as guidelines for communications between an enterprise and its suppliers.

Example 3.2a

A variable control chart for PCB surface resistance was created. There is only one minimum specification for resistance. The grand average X̿ was 20 megaohms (MΩ) and the UCLX̄ was 23 MΩ, with a sample size of 9. A new specification was adopted to keep resistance at a minimum of 16 MΩ. Assuming that the resistance process average = specification nominal (N), calculate Cp, Cpk, and the reject rate, and show the R chart limits.

Example 3.2a solution
Since the process is centered, Cp = Cpk. The distance from X̿ to UCLX̄ is 3s = 3, therefore:
s = 1
σ = s · √n = 3 
LSL = 16 MΩ 
Process average = 20 MΩ
Cp = Cpk = (process average − LSL)/3σ = (20 − 16)/(3 · 3) = 4/9 = 0.444
z = (average − LSL)/σ = (20 − 16)/3 = 1.33, or z = 3 · Cpk = 1.33. Reject rate = f(−z) = 0.0918 = 91,760 PPM (one-sided rejects only, below the LSL)
R̄ = σ · d2 (n = 9) = 3 · 2.97 = 8.91 MΩ 
UCLR = D4 · R̄ = 1.82 · 8.91 = 16.22 MΩ 
LCLR = D3 · R̄ = 0.18 · 8.91 = 1.60 MΩ
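The same arithmetic can be sketched with the standard normal CDF from Python's standard library; the small difference from the quoted reject rate comes from the book rounding z to 1.33:

```python
import math
from statistics import NormalDist

n = 9
s = 1.0                     # distance from the grand average to the UCL is 3 MOhm = 3s
sigma = s * math.sqrt(n)    # population sigma via the central limit theorem: 3
lsl, mean = 16.0, 20.0      # MOhm

cpk = (mean - lsl) / (3 * sigma)   # 4/9 ~ 0.444
z = (mean - lsl) / sigma           # ~1.33
reject = NormalDist().cdf(-z)      # ~0.091, i.e. roughly 91,000 PPM below the LSL
```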

Example 3.2b

A four sigma program was introduced at the company in Example 3.2a. For the surface resistance process, the lower specification limit (LSL) remained at 16 MΩ and the process σ remained the same. Calculate Cp, Cpk, and the reject rate, and show the X̄ and R chart limits, using the same sample size of 9. Repeat for a six sigma program, with a ±1.5σ shift, with the process average and sigma remaining the same.
Example 3.2b solution
The four sigma program implies a specification limit of N ± 4σ = N ± 4 · 3 = N ± 12. The process average, which is equal to the nominal N, is 4σ away from the LSL, and is 16 + 12 = 28 MΩ, given LSL = 16 MΩ. Cp = Cpk = 4σ/3σ = 1.33, and the two-sided reject rate from the z table (Table 2.3) = 64 PPM.
The R chart remains the same as in Example 3.2a, since the process variability σ did not change. The X̄ chart is centered on X̿ = 28 MΩ; LCLX̄ = 28 − 3s = 25 MΩ; UCLX̄ = 31 MΩ.
For six sigma, the same methodology applies, except that there is a ±1.5σ shift. The specification limits are N ± 6σ = N ± 6 · 3 = N ± 18.

Given the LSL = 16 MΩ, the specification nominal N is 16 + 18 = 34 MΩ. Therefore, Cp = 2; Cpk = 1.5; reject rate from previous tables (±1.5σ shift) = 3.4 PPM.
Assuming that the shift is toward the lower specification, the process average could be +4.5σ from the LSL or −1.5σ from the nominal: 34 − 1.5 · 3 = 29.5 MΩ, or 16 + 4.5 · 3 = 29.5 MΩ.
The R chart remains the same as in Example 3.2a, since the process variability σ did not change. If the X̄ chart is centered on X̿ = 29.5, then LCLX̄ = 29.5 − 3s = 26.5 MΩ and UCLX̄ = 32.5 MΩ.
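The nominal placements in this example reduce to a few lines of arithmetic; the sketch below assumes the stated LSL = 16 MΩ and σ = 3:

```python
sigma, lsl = 3.0, 16.0   # MOhm, from Examples 3.2a and 3.2b

# Four sigma program: the nominal sits 4 sigma above the LSL.
nominal_4s = lsl + 4 * sigma             # 28 MOhm

# Six sigma program: the nominal sits 6 sigma above the LSL...
nominal_6s = lsl + 6 * sigma             # 34 MOhm
# ...but the average may shift 1.5 sigma toward the LSL, leaving 4.5 sigma of margin.
shifted_avg = nominal_6s - 1.5 * sigma   # 29.5 MOhm, equal to lsl + 4.5 * sigma
```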

Attribute Charts and Their Relationship with Six Sigma

Attribute charts directly measure the rejects in the production operation, as opposed to measuring a particular value of the quality characteristic as in variable processes. They are more common in manufacturing because of the following:

1. Attribute or pass/fail test data are easier to measure than actual variable measurements. They can be obtained by devices or tools such as go/no-go gauges, calibrated only for the specification measurements, as opposed to measuring the full operating spectrum of parts.
2. Attribute data require much less operator training, since they only have to observe a reject indicator or light, as opposed to making several measurements on gauges or test equipment.
3. Attribute data can be directly collected from the manufacturing equipment, especially if there is a high degree of automation.
4. Storage and dissemination of attribute data is also much easier, since there is only the reject rate to store versus the actual measurements for variable data.
Attribute charts use different probability distributions than the normal distribution used in variable charts, depending on whether the sample size is constant or changing, as shown in Figure 3.1. For C and U charts, the Poisson distribution is used, whereas the P and nP charts use the binomial distribution.

Checking for Normality Using Chi-square Tests

Chi-square (χ²) tests can be used to determine whether a set of data can be adequately modeled by a specified distribution. The chi-square test divides the data into nonoverlapping intervals called boundaries. It compares the number of observations in each boundary to the number expected from the distribution being tested, in this case the normal distribution. This test is sometimes called the “goodness of fit” test.

The boundaries are chosen for convenience, with five being a commonly used number. The boundary limits are used to generate a probability for the expected frequency. In the case of the normal distribution, this is done by calculating the z value based on the boundary limit and the average and standard deviation of the data set, in the following manner:
1. List the data set in ascending order.
2. Determine the number of boundaries (variable k) to be used in this test.
3. Let mᵢ be the number of sample values observed in each boundary.
4. Calculate a z value for each boundary. For the two outermost boundaries, there is one single z value. For inside boundaries, there are two z values.
5. Calculate the expected frequency for each boundary by determining Pᵢ = f(z) and multiplying that number by the total number in the data set.
6. Determine the contribution of each boundary to the total chi-square value through the formula

χ² = Σ (mᵢ − nPᵢ)²/(nPᵢ), with k − 1 DOF

A hypothesis rejection, indicating that the distribution is not normal, occurs when χ² ≥ χ²α, which is obtained from a χ² table for α = 1 − confidence, where k is the number of boundaries and DOF is the degrees of freedom. Selected values of the χ² table are given in Table 5.3.
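The six steps above can be sketched as follows. The boundary edges here are placed at equal normal probabilities 1/k, an illustrative choice rather than the book's prescription, and the caller compares the returned statistic against the χ² table value at k − 1 DOF as the text describes:

```python
from statistics import NormalDist, mean, stdev

def chi_square_normality(data, k=5):
    """Chi-square goodness-of-fit statistic for normality with k boundaries."""
    n = len(data)
    mu, sd = mean(data), stdev(data)
    nd = NormalDist(mu, sd)
    # k - 1 interior edges at equal normal probabilities 1/k (illustrative choice),
    # so each boundary has expected probability P_i = 1/k.
    edges = [nd.inv_cdf(i / k) for i in range(1, k)]
    counts = [0] * k
    for x in data:
        counts[sum(1 for e in edges if x > e)] += 1   # m_i per boundary
    expected = n / k                                  # n * P_i
    return sum((m - expected) ** 2 / expected for m in counts)

# Compare the result against the chi-square table value for the chosen
# confidence level at k - 1 DOF; a larger statistic rejects normality.
```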

Quick visual check for normality in Six Sigma

Using graph paper, spreadsheets, or statistically based software, measurement data from randomly selected samples of parts can be quickly checked for normality as follows: 

1. Randomly select a number of part samples for measurement of the quality characteristic, which is the part attribute of interest to the six sigma effort. Thirty samples are considered statistically significant; however, smaller numbers might be used for a quick look at the distribution. 
2. Rank the data in ascending order, from 1 to n.
3. Generate a normal curve score (NS) corresponding to each data point. Each data point's rank i is reduced by 0.5 and divided by the total number of points n, so that it sits in the middle of a box of ranked points. Each data point probability is based on the rank of point i, with i ranging from 1 to n. The normal score (NS) represents the position of that ranked point versus its equivalent value of the z distribution:
P(z) = (i − 0.5)/n, i = 1, . . . , n (2.14)
NS = z of P(z)
n = total number of parts to be checked for normality
4. Plot each data point value on the Y axis against its normal score. If the data is normal, it should plot as a straight line.
Example for 5 points: 67, 48, 76, 81, and 93
Data   Rank (i)   P(z) = (i − 0.5)/n   z from P(z)
67     2          0.3                  −0.52
48     1          0.1                  −1.28
76     3          0.5                  0
81     4          0.7                  0.52
93     5          0.9                  1.28
A quick graphical check for normality is given in Figure 2.12. It can be visually determined that the data fall close to a straight line.
An even quicker method to determine normality is to use the same procedure with normal probability graph paper, which would eliminate the z calculations in step 3 above.
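The five-point example can be reproduced with the inverse normal CDF in Python's standard library:

```python
from statistics import NormalDist

data = [67, 48, 76, 81, 93]   # the five points from the example above
ranked = sorted(data)         # rank i = 1..n runs over this sorted order
n = len(ranked)

# Normal score for rank i: the z value such that P(z) = (i - 0.5)/n.
scores = [NormalDist().inv_cdf((i - 0.5) / n) for i in range(1, n + 1)]
# scores ~ [-1.28, -0.52, 0.0, 0.52, 1.28], matching the table's z column

# Plotting ranked[i] (Y axis) against scores[i] (X axis) should give
# nearly a straight line when the data are normally distributed.
```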

How to Quickly Improve Your Electronic Assembly Processes

In the electronics industry, if you want to maintain your competitiveness, you need to constantly improve your processes to ensure production quality. This will attract more customers and retain your existing ones. Of course, it also requires that the company have the ability to put specific process improvements into practice.

Best Practices for Improving Your Electronic Assembly Processes

Take Nothing for Granted

Sure, you may believe that your design specs and boards are as efficient as they could possibly be, but why should you settle for the status quo? When we become overly comfortable, we stop changing and improving. In order to avoid getting stuck in a rut, you might want to give some thought to teaming up with a trusted contract manufacturing company that can review your current plans. By taking a look at what you've got, a new set of eyes might be able to pinpoint areas that could actually be improved, enabling you to cut down on the number of components that need to be used, improve efficiency, and lower costs. 

Be Smart About Batch Sizes

Going back and forth between handling a large batch order and smaller-scale batch orders is not efficient. Being required to re-set the assembly for mismatched batch orders will slow you down, increasing production times and spending. Take care to plan ahead and plan strategically so that you are handling all large orders together and all smaller orders together. If you don't have the means of doing this, you might even want to consider subbing out some of your jobs to a PCB manufacturing partner in order to improve efficiency.

Make Use of the Best

In order for you to speed up your electronic assembly processes without compromising quality, you'll need to take advantage of the most advanced machinery, design tools, robotics, and other state-of-the-art technologies. Unfortunately, for many small and midsize electronics companies, this can be very expensive and difficult to manage. If you're unable to keep up with your larger competitors, though, you can still stay in the game. A good PCB manufacturer will keep up with all of the best techniques, practices, and technologies. Your partnership with such a third-party contractor will enable you to enjoy access to this tech without having to purchase, maintain, or house the equipment.

Don't Cut Corners

One of the biggest mistakes that "underdog" electronics companies tend to make is hiring a contract manufacturing company that operates "across the pond". These offshore manufacturing services often offer lower prices than their American competitors, but you should understand that you'll get what you pay for. Many of these overseas companies have a tendency to cut corners, purchasing counterfeit parts in order to reduce their costs. This can completely compromise the quality of the product, resulting in board failures, and can even create unsafe conditions that could get you penalized or force a recall. To preserve your integrity and reputation, be sure to take the time to find a high quality American company to partner with. 

Minimize Warranty Replacements

Replacing a faulty board with a brand new warranty item can be very expensive, and is often a huge waste of money. In many cases, the board is failing because of one faulty component or a simple problem. When you or your contract manufacturer can troubleshoot and make easy repairs, you can save significantly and also improve customer confidence in your product.

Attribute Processes and Reject Analysis for Six Sigma

For attribute processes (those with quality measured in terms of defects in a sample or number defective), an implied Cpk will have to be calculated in the quality assessment of design and manufacturing. It is assumed that defects are occurring because of violation of a particular or a composite specification(s). The composite specification can be one-sided or two-sided, depending on the interpretation of the defects. For example, a wire bond defect could be the result of a one-sided specification, since it is assumed that in specifying the bond, only a minimum value is given. For solder defects, a composite specification can be assumed to be two-sided, since solder defects can be of either kind, as in excessive or insufficient solder. The difference between implied one- or two-sided specifications is that the number of defects representing the f(z) value under the normal curve should be halved for two-sided specifications, or used directly for one-sided specifications, resulting in different implied Cpk interpretations. The decision for one- or two-sided specifications for implied Cpk should be left to the appropriate design and manufacturing engineers.

An example of an attribute process calculation to generate an implied Cpk is for solder defects. They are usually measured in PPM or parts per million of defects obtained in production divided by the total number of solder joints in the product (total number of opportunities for solder defects). Solder defects may result from the combination of several specifications of design parameters such as component pad size, drill hole size, fabrication quality of plated metal surface, and the material and process parameters of the soldering equipment. A 100 PPM solder process (1 solder defect in 10,000 terminations or joints) is calculated to have a Cpk = 1.3 as follows:
1. 100 PPM defects (assuming a two-sided specification), 50 PPM per each tail of the normal curve
2. 50 PPM is f(z) = 0.00005 or z = 3.89, from standard normal curve tables.
3. Implied Cpk =z/3= 1.3
The assumptions are that the defects can occur on either side of the implied specifications, the process is normally distributed, and the process average is equal to the specification nominal. If this example of Cpk was for a wire bond machine, then it could be assumed that the defects occur due to one side of the specification limits of minimum pull strength. In this case, the Cpk can be calculated as follows:
1. 100 PPM defects (assuming a one-sided specification) is 100 PPM per one tail of the normal curve
2. 100 PPM is f(z) = 0.0001 or z = 3.72, from standard normal curve tables
3. Implied Cpk =z/3 = 1.24, which is lower quality than two-sided defects
It can be seen that the method of implied Cpk could lead to various interpretations of one- versus two-sided specifications when the Cpk methodology is used. If the six sigma interpretation of quality is used, the 100 PPM error rate is significant because it is larger than the target of 3.4 PPM. If a quality team has to report on its progress toward six sigma using the 100 PPM current defect rate, then it can present the following arguments:
1. For two-sided specifications, f(z) = 0.00005 or z = 3.89. If a shift of ±1.5σ is assumed, then all of the failures result from one side of the distribution, whereas the other side is much lower in defects and therefore contributes no defects. The design is at 3.89σ + 1.5σ = 5.39σ in the classical six sigma definition.
2. For one-sided specifications, f(z) = 0.0001 or z = 3.72. If a shift of ±1.5σ is assumed, then the design is at 3.72σ + 1.5σ = 5.22σ in the classical six sigma definition.
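Both implied-Cpk interpretations can be captured in one small helper; `implied_cpk` is a hypothetical name for illustration, and the halving for two-sided specifications follows the rule stated above:

```python
from statistics import NormalDist

def implied_cpk(ppm, two_sided=True):
    """Implied Cpk from a defect rate in PPM (assumes a centered normal process)."""
    tail = ppm / 1e6
    if two_sided:
        tail /= 2            # split the defects between the two tails
    z = -NormalDist().inv_cdf(tail)
    return z / 3

implied_cpk(100, two_sided=True)    # z ~ 3.89 -> Cpk ~ 1.30
implied_cpk(100, two_sided=False)   # z ~ 3.72 -> Cpk ~ 1.24
```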

How to Set the Process Capability Index for Six Sigma

Many companies are beginning to think about the process capability index, be it six sigma or Cpk, as a good method for design and manufacturing engineers to achieve quality goals jointly, by having both parties work together. Design engineers should open up the specifications to the maximum possible, while permitting the product to operate within customer expectations. Manufacturing engineers should reduce the process variations by maintenance and calibration of processes and materials, training of operators, and by performing design of experiments (DoE) to optimize materials and processing methods.

Another advantage of using six sigma or Cpk as a quality measure and target is the involvement of the suppliers in the design and development cycle. To achieve the required quality target, the design engineer must know the quality level and specifications being delivered by the suppliers for their materials and components. In some cases, the suppliers do not specify certain parameters, such as rise time on integrated circuits, but provide a range. The design engineers must review several samples from different lots from the approved supplier and measure the process variability based on those specifications. A minimum number of 30 samples is recommended.
Many companies use six sigma or a specific Cpk level to set expected design specifications and process variability targets for each part or assembly. Usually, this number has been used to set a particular defect rate such as 64 PPM, which is a Cpk = 1.33 with a centered distribution and specification limit of ±4σ. The six sigma goal of Cp = 2 results in a defect rate of 3.4 PPM based on a specification limit of ±6σ and an average shift of ±1.5σ.
Six sigma or a high Cpk increases the robustness of design and manufacturing. A temporary process average shift does not significantly affect the defect rate. Six sigma (Cp = 2) implies that a shift of the average by as much as ±1.5σ imparts a defect level of 3.4 PPM to the end product. A comparable shift of the average for a Cp of 1.33 increases the defect rate from 64 PPM to 6210 PPM.
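This robustness comparison can be checked numerically; the sketch below assumes centered specification limits at ±3·Cp·σ and a 1.5σ average shift:

```python
from statistics import NormalDist

def shifted_ppm(cp, shift=1.5):
    """Two-sided defect rate in PPM for centered spec limits at +/- 3*Cp sigma
    after the process average shifts by `shift` sigma toward one limit."""
    nd = NormalDist()
    half_width = 3 * cp   # spec half-width in sigma units
    # One tail moves closer to its limit, the other farther away.
    tails = nd.cdf(-(half_width - shift)) + nd.cdf(-(half_width + shift))
    return tails * 1e6

shifted_ppm(2.0)     # six sigma (Cp = 2): ~3.4 PPM
shifted_ppm(4 / 3)   # Cp = 1.33: ~6210 PPM
```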

Choosing Six Sigma or Cpk in the Manufacturing Process

Although both six sigma and Cpk are excellent measurement systems for quality improvement in design and manufacturing, a consensus has not been reached as to which system should be selected, based on some of the issues discussed in this section. Currently, major industries and companies have either opted for one or the other, or for their own company brand of six sigma. In the latter case, a combination of rules from both systems is developed to clarify some of the issues, especially when dealing with internal manufacturing and the supply chain. This is important, since the requirements for six sigma or Cpk levels are becoming part of the contractual agreements between companies and their supply chain, as well as performance measures for design and manufacturing centers in modern enterprises. 

Some of the issues to be considered when a company plans to launch a quality program based on six sigma or Cpk approaches, and how they can converge, are:
The classical definition of six sigma corresponds to the last line in Table 2.2. Six sigma is equivalent to Cp = 2 or Cpk = 1.5, while allowing a process average shift from the specification nominal of ±1.5σ. However, Cpk = 1.5 does not always equate to six sigma. Many different conditions of specification tolerance and process average shift can result in Cpk = 1.5, as shown in Table 2.2.
The implication of the six sigma average shift of ±1.5σ is that the production process variability will not improve beyond the ±1.5σ shift of the process average. This may be considered a negative, since it does not encourage those in the supply chain to improve their process variability. By specifying a particular Cpk, a company can encourage its suppliers to minimize their variability, since it is apparent from Table 2.2 that the smaller the average shift, the wider the specification tolerance can be.
It is widely recognized that older manufacturing processes are more stable than newer processes, which tend to drift and shift. This has led to specifying a particular Cpk at production start-up and then a different Cpk when the process matures, typically 3 to 6 months after production start-up. In the auto industry, the starting Cpk is set at 1.67 and the mature Cpk at 1.33. This was done to force the supply chain to pay attention to the process in the initial stage of production, a form of learning-curve-based improvement. This issue of improvement over time has long been recognized in the supply chain, with commonly used incentives for cost reduction based on time. The six sigma program maintains a constant ±1.5σ allowable average shift, which is an easier goal to manage irrespective of time. It is the author's opinion that it is better to manage quality with a single number and concept, as opposed to a time-dependent standard. In addition, the reduced life cycle of electronic products and the emphasis on "doing it right the first time" should encourage the supply chain to set a goal for first production quality and then maintain it. This might prove less costly in the long run.
The choice of focusing on correcting the process average shift to equal the specification nominal, on reducing variability, or on both will be discussed in greater detail together with the quality loss function (QLF) in Chapter 6.
Cpk and six sigma can have different interpretations when considering attribute processes. These are processes in production, where only the defect rates are determined and there are no applicable specification limits. Examples of attribute processes are assemblies such as printed circuit boards (PCBs) where rejects could be considered to be the result of implied specifications interacting with production variability of materials and processes. In these cases, the quality methodologies are centered around production defect rates and not specifications, thereby clouding the relationships and negotiations between design and manufacturing. Different levels of defect rates based on Cpk levels could be allowed for different processes, resulting in an overall product defect goal setting and test strategy based on these defects. Six sigma quality provides the power of the single 3.4 PPM defect rate as a target for all processes.
A similar issue arises when using six sigma or Cpk for determining total system or product quality. This is the case when several six sigma designs and parts are assembled together into a system or product. Six sigma practitioners handle this issue by using the concept of rolled yield, that is, the total yield of the product based on the individual yields of the parts. Those using the Cpk terminology can continue to use Cpk throughout the product life cycle, assigning different Cpk targets as the product is going through the design and manufacturing phases.
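The rolled-yield concept mentioned above can be sketched numerically. Assuming each part or process step contributes an independent defect rate in PPM, the total yield is the product of the individual yields (the step count and PPM values below are hypothetical):

```python
def rolled_yield(defect_ppms):
    """Rolled throughput yield: probability that a unit passes every
    process step, given each step's independent defect rate in PPM."""
    y = 1.0
    for ppm in defect_ppms:
        y *= 1.0 - ppm / 1e6
    return y

# Ten assembly steps, each running at the six sigma level of 3.4 PPM:
steps = [3.4] * 10
print(f"{rolled_yield(steps):.6f}")  # -> 0.999966
```

Even with every step at six sigma, the rolled yield degrades as steps multiply, which is why a single per-process defect target is convenient for setting an overall product defect goal.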

The Quality Measurement Techniques: SQC, Six Sigma, Cp, and Cpk

These quality techniques were developed originally for manufacturing quality and were later used for determining product design quality. Six sigma has been applied with various assumptions about the shift of the manufacturing process average from the design specifications to set the defect rate due to design specifications and manufacturing variability.

1. The statistical quality control (SQC) methods

Control charts have been traditionally used as the method of determining the performance of manufacturing processes over time by the statistical characterization of a measured parameter that is dependent on the process. They have been used effectively to determine if manufacturing is in statistical control. Control exists when the occurrence of events (failures) follows the statistical properties of the distribution of production samples.
Control charts are run charts with a centerline drawn at the manufacturing process average and lines drawn at the tail of the distribution at the 3σ points. If the manufacturing process is under statistical control, 99.73% of all observations are within the limits of the process. Control charts by themselves do not improve quality. They merely indicate that the quality is in statistical "synchronization" or "in control" with the quality level at the time when the charts were created.
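As a rough sketch of how such limits might be computed, the fragment below places the centerline at the average of the subgroup means and the control limits at ±3 standard deviations of those means (classical X-bar/R charts instead derive the limits from the average range and tabulated constants; the temperature readings are hypothetical):

```python
import statistics

def xbar_limits(subgroup_means):
    """Centerline and +/-3 sigma control limits computed directly from
    the distribution of subgroup averages (simplified sketch)."""
    center = statistics.mean(subgroup_means)
    s = statistics.stdev(subgroup_means)
    return center - 3 * s, center, center + 3 * s

# Hypothetical subgroup averages from a soldering-temperature check (deg C):
means = [248.2, 250.1, 249.5, 251.0, 249.8, 250.4, 249.1, 250.7]
lcl, cl, ucl = xbar_limits(means)
print(f"LCL={lcl:.2f}  CL={cl:.2f}  UCL={ucl:.2f}")
```

A future point falling outside [LCL, UCL] would signal an out-of-control condition relative to the period when the chart was characterized.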
A conceptual view of control charts is given in Figure 2.1. The out-of-control conditions indicate that the process is varying with respect to the original period of time when the process was characterized through the control chart, as shown in the bottom two cases. In the bottom case, the process average is shifted to the right, whereas in the next higher case, the process average is shifted to the left. For the two processes shown in control, the current process average is centered on the historical average, and the variability is small, indicating a small standard deviation. It is important to note here that the control charts do not reflect the relation of the process to the specification limits, only the performance of the process against historical standards. Six sigma gives that additional dimension of relating the process performance to the specification tolerance.

2. The relationship of control charts and six sigma

There are two major types of control charts: variable charts, which plot continuous data from the observed parameters, and attribute charts, which are discrete and plot accept or reject data. Variable charts are known as X and R charts. They can be directly related to the six sigma calculations through the product specification. Attribute charts are measures of good or bad parts, and therefore are indirectly related to specifications. The relationship of attribute charts to six sigma is that of an assumed set of specifications that produces the particular defect rate plotted in the charts. More on these charts in the next chapter.
The selection of the parameters to be control charted is an important part of the six sigma process. Plotting too many parameters tends to dilute the beneficial effect of the control charts, since they will all move in the same direction when the process is out of control. It is very important that the parameters selected for control charting be independent of each other and directly related to the overall performance of the product. When a chart shows an out-of-control condition, the process should be investigated and the cause of the problem identified on the chart.
When introducing control charts to a manufacturing operation, it is preferred to use parameters that are universally recognized and easy to collect, such as temperature and relative humidity, or readings taken from a process display monitor, such as the temperature indicator in a soldering system. These initial control charts can be used to introduce and train the operators in data collection and plotting of parameters. The same principles in selecting these elements also apply to six sigma parameter selections.

3. The process capability index (Cp)

Electronic products are manufactured using materials and processes that are inherently variable. Design engineers specify materials and process characteristics to a nominal value, which is the ideal level for use in the product. The maximum range of variation of the product characteristic, when products are in working order (as defined by customer needs), determines the tolerance of that nominal value. This range is expressed as upper and lower specification limits (USL and LSL), as shown in Figure 2.2.
The manufacturing process variability is usually approximated by a normal probability distribution, with an average of μ and a standard deviation of σ. The process capability is defined as the full range of normal manufacturing process variation measured for a chosen characteristic. Assuming a normal distribution, 99.73% of the process output lies between μ − 3σ and μ + 3σ.
A properly controlled manufacturing process should make products whose average output characteristic or target is set to the nominal value of the specifications. This is easily achieved through control charts. If the process average is not equal to the product specification nominal value, corrective actions could be taken, such as recalibrating production machinery, retraining the operators, or inspecting incoming raw material characteristics to fix this problem.

The variation of the manufacturing processes (process capability) should be well within the product tolerance limits. Process capability is commonly depicted by a standard normal distribution. The intersection of the process capability and the specification limits determines the defect level, as shown in Figure 2.3.

Process capability could be monitored using control charts. The manufacturing process variability can be reduced by increased operator training, using optimized equipment calibration and maintenance schedules, increased material inspection and testing, and by using design of experiments (DoE) techniques to determine the best set of process parameters to reduce variability.
The classical design for manufacturing conflict of interests between design and manufacturing engineers is usually about controlling product quality and cost. The design engineers would prefer the narrowest possible process capability, so they can specify the minimum tolerance specification to ensure the proper functioning of their designs. The manufacturing and process engineers would prefer the widest possible tolerance specification, so that production can continue to operate at the largest possible manufacturing variability with a reduced amount of defects. The process capability index and six sigma are good arbiters of the two groups' interests.
A good conceptual view of this argument is the use of the term "capability." A process could be either "in control," or "capable," or both. Obviously, the desired condition is both in control and capable, as shown in Figure 2.4. Six sigma assures that the desired outcomes are processes that are highly capable and always in control. If there is a short-term out-of-control condition in manufacturing, then the robustness of the process, which is its capability versus its specifications, is good enough to withstand that deviation and continue to produce parts with low defects.

There are two methods used to increase the quality level and hence approach six sigma for new product designs: either increase the product specification limits and allow manufacturing variability to remain the same, or keep product specification limits constant and reduce manufacturing variability by improving the quality level of materials and processes. The latter can be achieved through inspection, increased maintenance, and performing design of experiments (DoE) to determine variability sources and counteract them. The ratio of these two quantities, the specification tolerance and the manufacturing process variability, is the measure of design for quality, called the process capability index, or Cp. Six sigma is a special condition in which Cp is equal to 2:
Cp = (USL − LSL) / 6σ

where
USL = upper specification limit
LSL = lower specification limit
σ = manufacturing process standard deviation
The Cp value can predict the reject rate of normal probability distribution curves. A high Cp index indicates that the process is capable of faithfully replicating the product characteristics, and therefore will produce products of high quality.
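Given the definitions above, Cp and its companion Cpk (which also penalizes an off-center process average) can be estimated from a sample of measurements. The sketch below is illustrative only; the sample data and specification limits are hypothetical, and in practice σ would be estimated from control-chart subgroups rather than a single pooled sample:

```python
import statistics

def cp_cpk(samples, lsl, usl):
    """Estimate Cp = (USL - LSL) / (6 sigma) and
    Cpk = min(USL - mu, mu - LSL) / (3 sigma) from sample data."""
    mu = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk

# Hypothetical resistor measurements, nominal 100 ohm, spec 100 +/- 6 ohm:
data = [99.2, 100.5, 100.1, 99.8, 100.9, 99.5, 100.3, 99.9]
cp, cpk = cp_cpk(data, lsl=94.0, usl=106.0)
print(f"Cp={cp:.2f}  Cpk={cpk:.2f}")  # Cp ~= 3.66, Cpk ~= 3.64
```

Cpk equals Cp only when the process average sits exactly at the specification nominal; any shift lowers Cpk while leaving Cp unchanged.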
The utility of the Cp index is that it shows the balance of the quality responsibility between the design and manufacturing engineers. The quality level is set by the ratio of the efforts of both. The design engineers should increase the allowable tolerance to the maximum value that still permits the successful functioning of the product. The manufacturing engineers should minimize the variability of the manufacturing process by proper material and process selection, equipment calibration and control, operator training, and by performing design of experiments (DoE).
An example of design and manufacturing process interaction in the electronics industry is the physical implementation of electronic designs in printed circuit board (PCB) layout. The design engineer might select a higher number of layers in a multilayer PCB, which will speed up the layout process because each additional layer increases the PCB surface available for making electrical connections. Speedier layout time could result in a faster new product introduction, bringing new revenues into the company sooner. Minimizing the number of layers requires more layout time, but would produce lower-cost PCBs with fewer defects, because there are fewer process steps. This is a classical case of the balance between new product design and development expediency versus manufacturing cost and quality.