Q1: What is the difference between comparison and proficiency testing?
Proficiency testing involves comparing test results between a number of laboratories (with or without multiple testers in each laboratory), using sufficient samples to enable rigorous statistical analysis and homogeneity testing. It is usually conducted by proficiency testing providers, but it is becoming more common for corporations or producer associations to run these programs.
Comparison testing is usually conducted within a laboratory between operators, although it can also be conducted between laboratories. The number of participants is usually small, which often makes statistical analysis of the test results difficult.
Q2: What differences might a laboratory consider acceptable between operators for comparison tests?
There is no simple answer to this question, as it depends on the test being performed. If the repeatability of the test is detailed in the test method, this is usually the maximum difference one would expect between operators.
Experienced technical supervisors of laboratories have a very good feel for the actual values, but these people often give slightly different answers.
A discussion of these differences, and of what is practical, is provided in the NATA Technical Assessor News of April 2013.
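Where a method does state a repeatability limit, the check described above is simple arithmetic. The following is a minimal sketch; the function name, the example results and the limit of 2 units are all hypothetical, and the actual limit must always come from the relevant test method.

```python
# Sketch: flag operator comparison results that differ by more than
# a stated repeatability limit r. All numeric values below are
# illustrative only; the real limit comes from the test method.

def exceeds_repeatability(result_a: float, result_b: float, r: float) -> bool:
    """Return True if two single results differ by more than r."""
    return abs(result_a - result_b) > r

# Hypothetical example: two operators test the same sample and the
# method states a repeatability limit of 2 units.
print(exceeds_repeatability(24.0, 25.5, r=2.0))  # within limit -> False
print(exceeds_repeatability(24.0, 27.0, r=2.0))  # exceeds limit -> True
```

A result pair flagged by such a check is a prompt for investigation, not an automatic failure, since repeatability limits are typically stated for single-operator conditions.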
Q3: Are Quality Assurance Activities required for each sub class of test?
Yes. However, some subclasses do not lend themselves to comparison testing, for example sampling and site selection.
In other cases, the laboratory may need to be innovative in developing a quality assurance activity. The different types of these activities are listed in the Construction Materials Testing Application Document.
Q4: Proficiency testing requirements are in various documents - is it possible to use a single document for this?
It is possible, and this will be reviewed when the next editions of the Application Documents are issued. The Construction Materials Testing Application Document will be updated to make it consistent with Policy Circular #2.
Q5: Can visual assessments for sampling, use of questionnaires, quizzes and discussion of methods form a quality assurance activity?
No. These are training and competency matters, not quality assurance activities (QAA).
Q6: Should rulers used for slump tests be calibrated?
Yes. As the ruler is the only device used, its calibration will contribute significantly to the uncertainty of measurement of the test.
However, given the precision of reporting (5 or 10 mm), most CMT technical assessors should be able to assess the procedure for calibrating the ruler against a reference vernier or ruler; an external calibration provider may not be needed to perform the calibration.
Q7: When using pair-difference assessment of concrete cylinders, are we assessing the consistency of the testers making the specimens, whether consistently right or wrong?
Pair differences are usually a good measure of how well test specimens are prepared, and the acceptable differences are listed in AS 1012. Other factors may also contribute to these differences, e.g. capping failures during testing.
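As a sketch of the arithmetic behind a pair-difference check, the difference between the two cylinders of a pair can be expressed as a percentage of the pair mean and compared against a limit. The strengths and the 5% limit below are illustrative assumptions only; the actual acceptable differences are those specified in AS 1012.

```python
# Sketch: pair-difference check for concrete cylinder strengths.
# The limit_pct value is illustrative only; actual acceptable
# differences are specified in AS 1012.

def pair_difference_pct(strength_1: float, strength_2: float) -> float:
    """Absolute difference as a percentage of the pair mean (inputs in MPa)."""
    mean = (strength_1 + strength_2) / 2
    return abs(strength_1 - strength_2) / mean * 100

pairs = [(40.0, 41.5), (38.0, 44.0)]  # hypothetical strengths, MPa
limit_pct = 5.0                        # illustrative limit only

for s1, s2 in pairs:
    d = pair_difference_pct(s1, s2)
    status = "OK" if d <= limit_pct else "INVESTIGATE"
    print(f"{s1} vs {s2}: {d:.1f}% {status}")
```

A pair exceeding the limit points to specimen preparation or testing problems (such as the capping failures mentioned above) rather than directly indicating whether the reported strengths are right or wrong.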