Internal Correspondence         Portland Technology Development         GFAA Laboratory
 

Subject:  Erratic GFAA Data
 
 

SUMMARY

The fluctuation in GFAA data can be attributed to several variables:

1) samples drawn without a pre-cleaned tube each time a sample is drawn

2) sample bottles that are not recleaned before each use

3) reported negative numbers, which occur when the instrument's background correction values read higher than the sample itself

4) fluctuation between readings of the same sample in different vials, which can be attributed to contaminated micropipettes rather than the vials themselves

5) poor or sloppy technique in preparing samples for analysis
 
 
 

 In response to your earlier concern regarding the erratic fluctuation of GFAA data, this appears to be a multi-variable problem that has been going on for a considerable length of time, and it has taken me a while to get to, or at least near, the root of it.

 Although we would like all of our analytical procedures to be so straightforward that virtually anyone could develop methodology and run the determinations, many techniques require some minimum operator expertise to obtain the best results, especially on the difficult samples we frequently encounter in today's semiconductor laboratory. For GFAA spectroscopy, the conditions, once optimized and stored, can be recalled and used by any operator in the lab. However, someone must take the time to develop, or at least verify, methods for each class of matrix analyzed. Fortunately, GFAA parameters allow some leeway, and optimization can be accomplished easily if approached in a systematic way.

 In my search for answers, I found that samples were being drawn from the same tube, with only a DI water rinse between drawings. This appears to have gone on for months, beginning around mid to late summer. The sample bottles were also being reused, with only DI water rinsing between sample collections. At the part-per-billion range this can lead to additive errors, as the rough illustration below shows.
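 To give a feel for how carryover adds up at the ppb level, the short Python sketch below assumes a hypothetical fixed residual left behind in the container by each prior draw; the numbers are illustrative only, not measured values.

    # Rough illustration of additive carryover error from reused,
    # DI-rinsed containers at the ppb level.  The per-use carryover
    # figure is hypothetical, not a measured value.

    def apparent_concentration(true_ppb, carryover_ppb_per_reuse, reuses):
        """Reported value if each prior use leaves a fixed residual behind."""
        return true_ppb + carryover_ppb_per_reuse * reuses

    if __name__ == "__main__":
        true_fe = 0.5      # ppb actually in the bath (assumed)
        carryover = 0.2    # hypothetical ppb left by each prior draw
        for n in range(5):
            reported = apparent_concentration(true_fe, carryover, n)
            print(f"after {n} reuses: {reported:.1f} ppb reported")

 Even a small residual quickly swamps a sub-ppb signal, which is why a pre-cleaned tube and bottle are needed for every draw.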

 In response to the negative numbers that appear in the database, these are not actually negative concentrations; rather, the instrument's background correction readings are above the actual sample concentration. Since most of the samples are not diluted, this represents a ranging problem, not a dilution problem. Basically, it means that the absorbance value for the sample is in the noise region. The way to eliminate this is to establish minimum reported detection limits, so that any value above the minimum reporting limit is a true number and cannot be misinterpreted as noise. I have a procedure to find these minimum detection limits but have not yet had the time to make up the solutions and run them. I hope to in the very near future; until then we are using the vendor's detection limits.
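 Until those solutions are run, a rough idea of where the noise floor sits can be had from replicate blank readings. The Python sketch below is illustrative only; the replicate values are made up, and the 3-sigma and Student's-t conventions shown are common practice rather than the vendor's or our documented procedure.

    # Detection-limit estimate from replicate blank (or low-level spike)
    # readings in ppb.  Conventions shown are assumptions, not the
    # procedure referenced in this memo.

    import statistics

    def detection_limit_3sigma(replicates):
        """Instrument detection limit as 3x the std dev of blank readings."""
        return 3 * statistics.stdev(replicates)

    def method_detection_limit(replicates, t_99=3.143):
        """EPA-style MDL: Student's t (99%, n-1 df) times the replicate std dev.

        t_99 defaults to the value for 7 replicates (6 degrees of freedom).
        """
        return t_99 * statistics.stdev(replicates)

    if __name__ == "__main__":
        # Hypothetical replicate blank readings, ppb Fe
        blanks = [0.12, 0.08, 0.15, 0.10, 0.09, 0.13, 0.11]
        print(f"3-sigma IDL: {detection_limit_3sigma(blanks):.3f} ppb")
        print(f"MDL (7 reps): {method_detection_limit(blanks):.3f} ppb")
        # Anything reported below these limits is indistinguishable from
        # noise and should be flagged rather than reported as a negative
        # concentration.

 Once we have our own replicates, any value below the computed limit would simply be reported as "less than" the limit instead of as a negative number.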

 Another issue that was brought up was the extreme difference in values reported for the same liquid in two separate sample vials. This seemed to point directly to the sample vials as the potential problem. In looking at the cleaning process, I found that the labware was being soaked in 1:1 HCl for a week, followed by 1:1 HNO3 for a week, and then left soaking in water until time of use. In talking to others who do trace-metals bottle cleans, the difference between our method and the bottle method was temperature, and getting the GFAA labware into the bottle-clean time sequence has been quite a chore. This may all be unnecessary, because the problem may not be the vials but the contaminated micropipettes that inject the sample into the vials. We traced an iron excursion to the metal on a micropipette, and in checking the others I found that ALL of them had been contaminated with acids and the stainless steel plungers had rusted to the point of needing repair. I am in the process of getting return authorization for our pipettes.

 In our actual run method, I sometimes wonder about the difference between the matrix our standards are made up in and the matrix of the samples being analyzed. Our standards are in approximately a 5% nitric acid solution, while the HF samples that are analyzed contain no nitric acid at all. In my past lab experience, the usual sample prep is a digestion using nitric acid and a good oxidizer such as hydrogen peroxide, which puts the sample in the same matrix as the blank and the calibration standards. A rough prep calculation along these lines is sketched below.
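 The Python sketch below shows only the matrix-matching arithmetic. It assumes "5% nitric" means 5 mL of concentrated acid per 100 mL of final volume, and the aliquot and final volumes are hypothetical, not our written procedure.

    # Matrix-matching arithmetic: how much concentrated HNO3 to add so a
    # digested sample ends up in roughly the same acid matrix as the
    # blank and calibration standards.  Volumes are illustrative only.

    def acid_volume_ml(final_volume_ml, target_percent=5.0):
        """Volume of concentrated acid for a v/v percentage of final volume."""
        return final_volume_ml * target_percent / 100.0

    def dilution_factor(sample_aliquot_ml, final_volume_ml):
        """Factor to scale the measured concentration back to the original."""
        return final_volume_ml / sample_aliquot_ml

    if __name__ == "__main__":
        # Hypothetical prep: 5 mL HF sample aliquot digested, brought to 50 mL
        final_ml = 50.0
        aliquot_ml = 5.0
        print(f"Conc. HNO3 to add: {acid_volume_ml(final_ml):.1f} mL")
        print(f"Multiply measured ppb by {dilution_factor(aliquot_ml, final_ml):.0f}")

 The point is simply that sample, blank, and standards should all see the same acid strength, and any dilution introduced by the digestion must be carried back into the reported number.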

 In summary, what I have found is that to get good analytical data, each variable in the process, such as sample bottles, labware, etc., needs to be absolutely free of trace metals, especially the metals you are looking for. Secondly, each operator needs a certain amount of focus when using the GFAA and needs to be sure the furnace workhead is clean and free of particulate and the optical path is free of obstructions. It is unfortunate when one takes the time to dilute a set of samples and calibrate the instrument and none of the data is good.