EliA Technology

  1. How does IDM calculate EliA Units from RU?

     µg/l x Dilution factor x Correction factor x Lot-specific factor = U/ml

    RU values are converted into an antibody concentration (µg/l) with the help of the calibration curve (the calibrators have a defined concentration of antibodies in µg/l). These calibrator concentrations are traceable to WHO standards.

     

    Dilution factor:
    This result has to be set in relation to the dilution of the sample. Example: EliA Sm with an instrument dilution of 1:100. The instrument measures 300 RU and "translates" this with the help of the curve into 9 µg/l. Accordingly, the "real" result for the sample is 900 µg/l; the dilution factor is 100. For ANCA it is 50, for dsDNA it is 10. The individual dilutions of all EliA tests can be found in the "EliA at a glance" table on our product page ».

     

    Correction factor:
    We calculate the "real" titre in the original sample even though we report EliA Units in the end. So why not simply include this compensation in the lot-specific factor? The lot-specific factor is given on the barcode, and the barcode cannot encode an unlimited variation of factors, so all lot-specific factors have to lie in roughly the same range. They lie much closer together when the different dilutions are compensated first, and this is what the correction factor does: all µg/l results are multiplied by a correction factor so that they end up on roughly the same level. If you enter another dilution, for example 1:50 for anti-dsDNA, this changes the dilution factor but not the correction factor. Correction factor for dsDNA = 0.1, for Symphony = 0.001, for ANCA = 0.02, others = 0.01.

     

    Lot-specific factor:
    The cut-off of all products shall be 10 EliA Units. If the cut-off is at 40 µg/l (x 100 = 4000 µg/l in the original sample, x 0.01 correction factor = 40), we need a factor for which 40 x factor = 10, i.e. a factor of 0.25. This was, for example, the first lot-specific factor for Sm. If in the next lot the cut-off is at 50, the factor is 0.2.

     

    EliA on Phadia 250 or Phadia 2500/5000:
    For the bigger Phadia instruments, a factor compensates the small difference seen for SOME EliA parameters between the Phadia 100 results and the Phadia 250 or Phadia 2500/5000 results, respectively. This "instrument factor" is included in the correction factor. Thus, the correction factor is not 0.01 but, for example, 0.0112 (for centromere on Phadia 250). You can find the correction factors in the "EliA at a glance" overview.
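    The calculation chain above can be sketched in a few lines of Python. The curve interpolation itself is omitted: the function below starts from the µg/l value already read off the calibration curve, and the factor values are the ones quoted in this answer (the lot-specific factor 0.25 is the first Sm lot mentioned above).

```python
# Sketch of the EliA Units calculation; the instrument's curve lookup
# (RU -> µg/l) is assumed to have happened already.

def elia_units(ug_per_l, dilution_factor, correction_factor, lot_factor):
    """µg/l x dilution factor x correction factor x lot-specific factor = U/ml."""
    return ug_per_l * dilution_factor * correction_factor * lot_factor

# EliA Sm example from the text: 300 RU -> 9 µg/l at a 1:100 dilution,
# correction factor 0.01, first lot-specific factor 0.25.
print(elia_units(9, 100, 0.01, 0.25))   # 2.25 U/ml, below the 10 U/ml cut-off

# The cut-off itself: 40 µg/l x 100 x 0.01 x 0.25 = 10 U/ml.
print(elia_units(40, 100, 0.01, 0.25))  # 10.0
```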


  2. How is the Ratio calculated?

    The instrument measures a fluorescence signal which is given in relative units (RU). RU values are converted into an antibody concentration (µg/l) with the help of the calibration curve. The calibrators have a defined concentration of antibodies in µg/l. These calibrator concentrations are traceable to WHO standards.

    EliA Symphony and EliA CTD Screen are ENA screening tests with all antigens in one well. This does not allow an exact quantification. However, to obtain a rough idea of how positive a patient sample is, IDM calculates the concentration as a Ratio. The value tells you how many times above the cut-off the concentration is: a result of 2.0 Ratio is 2x the cut-off.

    Equivocal zone is 0.7 to 1.0 Ratio (for EliA CTD Screen and EliA Symphony).

    A rough rating to assess positivity: 1.0 to 4.0 is low positive, 4.0 to 10.0 is moderate, and more than 10 is strong positive. If only one antibody is positive, the result will not go as high as when several positivities add up.
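    As an illustration only, the rough rating above can be written as a small helper. The exact assignment of the boundary values (1.0, 4.0, 10.0) to one grade or the other is an assumption, since the ranges in the text overlap:

```python
def rate_ratio(ratio):
    """Rough rating of a Ratio result (EliA Symphony / EliA CTD Screen)."""
    if ratio < 0.7:
        return "negative"
    if ratio < 1.0:
        return "equivocal"          # equivocal zone: 0.7 to 1.0
    if ratio <= 4.0:
        return "low positive"       # 1.0 to 4.0
    if ratio <= 10.0:
        return "moderate positive"  # 4.0 to 10.0
    return "strong positive"        # more than 10

print(rate_ratio(0.8))   # equivocal
print(rate_ratio(2.0))   # low positive (2x the cut-off)
print(rate_ratio(12.5))  # strong positive
```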

    Apart from that difference, the Ratio in IDM is calculated in the same way as EliA Units/ml (see here »). It is not calculated in the same way as the Ratio for Varelisa Screens, where the ratio of the patient OD to the cut-off OD is calculated (see below).

    µg/l x Dilution factor x Correction factor x Lot-specific factor = Ratio

    Dilution factor: This result has to be set in relation to the dilution of the sample. EliA Symphony has a default instrument dilution of 1:100. Example: The instrument measures 300 RU and "translates" this with the help of the curve into 9 µg/l. Accordingly, the "real" result for the sample is 900 µg/l.
    standard dilution EliA Symphony: 1:100
    standard dilution EliA CTD Screen: 1:10

    Correction factor:
    for EliA Symphony: 0.001 (Phadia 100), 0.00112 (Phadia 250)
    for EliA CTD Screen: 0.01 (Phadia 100), 0.01 (Phadia 250)
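    Continuing the EliA Symphony example above (300 RU, read off the curve as 9 µg/l at a 1:100 dilution), the Ratio follows the same chain as the EliA Units calculation. The lot-specific factor of 1.1 used here is hypothetical, purely for illustration:

```python
ug_per_l = 9               # from the calibration curve
dilution_factor = 100      # standard dilution EliA Symphony
correction_factor = 0.001  # EliA Symphony on Phadia 100
lot_factor = 1.1           # hypothetical value, for illustration only

ratio = ug_per_l * dilution_factor * correction_factor * lot_factor
print(round(ratio, 2))     # 0.99 -> inside the equivocal zone (0.7 to 1.0)
```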

    Lot-specific factor / Code:
    This factor allows for the correction of slight signal differences between lots, and it is generally used to convert concentrations in µg/l into U/ml or Ratio. The factor is included as a two-digit code in the barcode.

    Example:
    [Figure: Example of EliA Calculation]

  3. What are the criteria of acceptance for the calibration curve on Phadia 250?


    This is the answer from Peter Boss, Product Specialist in Uppsala:

    When it comes to the acceptance criteria for the calibration curve and the curve controls, it is not easy to give a straight answer with a few numbers. The values differ between the ideal curves of the different methods and even between different calibrator points. The most important thing when checking the curve is to make sure that its shape is good, but several other parameters are controlled as well.

    First, the response of all calibrator replicates is checked against upper and lower response limits. These limits are wide and will only flag the most extreme outliers. In a second step, the curve shape is controlled, i.e. how each calibrator replicate lies compared to the other calibrator replicates. This is done by calculating the natural logarithm of the quotients between the calibrator replicates and an ideal curve, where the ideal curve is the expected response for that method. In a third step, the variation between the two replicates of each calibrator point is checked.

    An established calibration curve is checked using curve controls, where the response of these controls is compared to inner and outer limits. The inner limit is set at 2.2*SD and the outer limit at 3.5*SD.
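    A minimal sketch of the curve-control check, assuming the expected response and SD for each control are known from the method definition. The text does not say how results between the inner and outer limit are handled, so the three-way outcome below is an assumption:

```python
def check_curve_control(response, expected, sd):
    """Compare a curve-control response to the inner (2.2*SD)
    and outer (3.5*SD) limits around its expected response."""
    deviation = abs(response - expected)
    if deviation <= 2.2 * sd:
        return "pass"     # inside the inner limit
    if deviation <= 3.5 * sd:
        return "warning"  # between inner and outer limit (assumed handling)
    return "fail"         # outside the outer limit

# Hypothetical numbers, for illustration only (expected 100 RU, SD 4):
print(check_curve_control(105, expected=100, sd=4))  # pass
print(check_curve_control(112, expected=100, sd=4))  # warning
print(check_curve_control(120, expected=100, sd=4))  # fail
```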

  4. How is the lot specific code or factor for each EliA batch determined?

    The lot specific code for each EliA batch that is used to calculate the final concentration of the test (in IU/ml, U/ml, Ratio etc) is determined at the end of the internal Quality Control procedure for the release of the newly produced EliA Well batch.

    In this QC test, amongst other results, the concentration of several predefined positive and negative Quality Control samples and the EliA Positive Control for this test is determined. For each positive sample a target value and a target range is predefined. For the negative samples a threshold level (maximum concentration) is specified. The samples are not only tested on the new batch but also on a reference batch.

    Within certain limits the factor is adjusted to best fit the combination of these different specifications for the positive and negative samples, in comparison both to the absolute targets and to the reference batch results. If the factor cannot be adjusted to meet one of the specifications (for example, the positive samples cannot be brought inside their concentration ranges without the blood donor samples exceeding their maximum value), the batch cannot be released.

    The factor is needed to minimize the impact of biological, chemical and biochemical variations of the coating process on the concentration results.

    A limitation on the factor selection is that the factor has to stay above a certain minimum. It cannot fall below this minimum because otherwise the upper limit of the measuring range given in the DfU would no longer be valid. This minimum is defined during the test's development & validation by allowing a 20% decrease of the factor from the so-called master factor determined during development. The master factor is the factor used to release the validation batches (for the product validation studies), and the aim for the production of subsequent batches is to always stay as close as possible to this master factor.

  5. Why are the detection limits of our EliA tests so different although it is always the same curve?

    The detection limit of all EliA IgG tests is 600 µg/l, as this is the highest calibration point. However, we do not give results in µg/l but in Units/ml and, thus, every limit is multiplied by the individual lot-specific factor. For how this calculation is done in IDM, please see Calculation of EliA Units ».

    In our DfUs we give a minimum upper limit, specific for each test. This value was calculated with the lowest lot-specific factor possible for this test. In most cases the actual limit is higher (that is why we give the upper limit of the measuring range with a ">" sign). Please find the minimum measuring ranges in the EliA DfUs. (see further explanation on the limit here)

    Of course, the fluorometer is able to measure higher RU values than that of the highest calibration point. The fluorometer measures 4-MUF in concentrations between 6.7 nM and 54 µM. This leads to signals between approximately 0.3 mV and 1900 mV or, expressed in RU, 7.5 RU and 47500 RU.

    However, as the calibration curve is not a straight line but S-shaped, values above the highest calibration point are not valid. If a sample is "above", it could be diluted until it falls within the measuring range. However, for autoimmune sera this is not recommended, because they often do not dilute linearly (see linearity »).

  6. Do we have sensitivity and specificity data for all EliA analytes?

    Sensitivity and specificity data for CCP, Celikey and ANCA on EliA are available. Some are taken from external studies, but for ANCA, for example, we generated them internally before launch.

    Some of the EliA analytes are markers of very rare diseases and are found in only a portion of patients. Obviously, for those markers it is not possible to determine clinical sensitivity and specificity when developing such a test.
    For those analytes, a technical sensitivity and specificity was evaluated by measuring samples with defined antibody specificity, such as AMLI samples, CDC samples and WHO samples.
    Of course, the data for technical sensitivity and specificity are not at all comparable to clinical sensitivity and specificity.

  7. Do we have linearity data for all EliA analytes?

    All EliA assays have very broad measuring ranges. In practice, patient samples will rarely exceed these ranges and, thus, dilution linearity is rarely needed.
    For samples which do exceed the measuring range, the exact quantitation is rarely clinically significant. It is enough to state that the result is "greater than ..."

    In all studies on linearity in EliA the specifications were set very narrowly with O/E= 0.8-1.2. It was found in a separate study (see validation plan UEVPPC03-02) that "a linearity for all tests and all sera is not achievable inside the targets O/E= 0.8-1.2 due to the unique calibrator system in ImmunoCAP".

    Statements on linearity are made under these restrictions.

    Limitations on linearity are stated in the respective DfU: "Please note that due to differing binding characteristics of the antibodies in patient samples, not all sera can be diluted linearly within the measuring range."
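    One common way to express such a dilution-linearity check is the O/E (observed/expected) quotient with the 0.8-1.2 target mentioned above; the exact definition used in the validation plan is not given here, so this is only a sketch with hypothetical numbers:

```python
def dilution_oe(undiluted_result, diluted_result, extra_dilution):
    """O/E for a further-diluted sample: the diluted result corrected
    for the extra dilution, divided by the undiluted result."""
    observed = diluted_result * extra_dilution
    return observed / undiluted_result

# A sample measured at 200 U/ml that reads 85 U/ml after an extra
# 1:2 dilution gives O/E = 0.85, inside the 0.8-1.2 target.
oe = dilution_oe(200, 85, 2)
print(round(oe, 2))      # 0.85
print(0.8 <= oe <= 1.2)  # True
```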

  8. Do we have any interference studies on lipemic, hemolyzed or microbially contaminated samples (possible "poor results" and non-applicability)?

    We perform interference studies each time we develop a new test. We use an interference kit from Japan, measuring bilirubin F and C, hemoglobin, chyle and rheumatoid factor. We normally test 2 positive samples with concentrations around the cut-off and 1 medium to high positive sample. The data are available for all tests. However, we have never seen any interference in any of our tests.

    However, as we can never measure all kinds of samples, we put this warning about possible poor results in all our DfUs.

    "Poor" means bad, and the deviation can go in either direction; it depends on which part of the test reacts with the interfering substance. As we have never seen such poor results, we have no personal experience with them.

  9. Why do some samples not give linear results when diluted (particularly in Celikey IgG)?

     The immune response in the human body is polyclonal. Different B-cells produce slightly different antibodies with different avidity and affinity, which may even react with different epitopes on the same antigen. If you dilute a sample, the different avidities and affinities may affect the antibodies' frequency of binding. The more specific antibodies a serum contains and the more positive it is for the respective parameter, the more pronounced these effects of different antibody types may be. Thus, a relatively small concentration of high-avidity antibodies in a diluted sample may give the same result as a large amount of low-avidity antibodies in an undiluted sample. Because real samples contain a mixture of such antibodies, they will not necessarily behave linearly.

     In some tests, such as tTG IgG assays, there is an additional reason for possible non-linearity: most sera which are positive in this test are also positive for tTG IgA, and quite often these IgA antibodies against the same antigen are present in high amounts. These IgA antibodies compete with the tTG IgG antibodies for binding to the antigen and, thus, occupy most of the available binding sites. The higher the avidity of the IgA antibodies, the more pronounced this effect is.

     If the sample is diluted, the tTG IgA antibodies are diluted to the same degree and more binding sites can be accessed by the tTG IgG antibodies. For some samples, this can even lead to higher results for tTG IgG in diluted samples than in non-diluted ones. In samples with very high antibody concentrations it may also happen that the antigen coated to the wells is not sufficient to bind all tTG antibodies (IgG and IgA); even if the sample is diluted, there may still be enough antibodies to bind all antigen.

     Result in both cases: the sample does not behave linearly.

     EliA Celikey IgG is evaluated for a sample dilution of 1:100. If a sample is diluted further with diluent, the matrix of the antibody-containing sample changes. The more it is diluted, the more it differs from the original, physiological environment of the antibodies and becomes more artificial with every dilution step. This may change the binding behaviour of the tTG antibodies and lead to unexpected results. Result: the sample does not behave linearly.

  10. Why is the EliA upper limit of the measuring range given with ">"?

    The upper limit of the measuring range for EliA is lot-specific!

     The measuring range for EliA is limited at the upper end (usually) by the highest calibrator:

    EliA IgG System: CAL-6 is 600 µg/l
    EliA IgA System: CAL-6 is 80 µg/l
    EliA IgM System: CAL-6 is 1000 µg/l
    EliA Calprotectin System: CAL-6 is 750 ng/ml

    But how does this transfer to the measuring range for a specific product in EliA Units?

    For the calculation of the concentration in EliA Units (full explanation see here), the lot-specific factor is relevant. Thus, the actual upper limit for an EliA test is lot-specific, depending on the value of the factor. But we are required to give a limit in the DfU, and we also want to guarantee a certain measuring range. Therefore, there are limits within which the lot-specific factor can vary: the general rule is that the lot-specific factor can only decrease by 20% from the usual lot-specific factor (the master factor, defined during validation). The limit in the DfU is calculated with this lowest allowed lot factor, because it means that we cannot release any batch for which the upper limit of the measuring range would be lower than the limit in the DfU. That is also why we give the limit in the DfU with ">": the actual upper limit for any batch at the customer will usually be higher than this value.

    Example:

    EliA Ro
    Master factor: 0.50
    Lowest allowed lot-specific factor: 0.40

    detection limit in the DfU: ">240 U/ml"
    (600 µg/l x 0.40 = 240 U/ml)

    detection limit for a batch with factor 0.48: 288 U/ml
    (600 µg/l x 0.48 = 288 U/ml)
    So, results between 240 and 288 will still be given with actual, correct, numeric concentration results. Only results above 288 U/ml will be reported as "ABOVE".
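    The EliA Ro example above as a short calculation, using the IgG system's highest calibrator (600 µg/l):

```python
CAL6_UG_PER_L = 600               # highest calibrator, EliA IgG system
MASTER_FACTOR = 0.50              # EliA Ro master factor (from the example)
MIN_FACTOR = 0.8 * MASTER_FACTOR  # factor may drop at most 20% below master

dfu_limit = CAL6_UG_PER_L * MIN_FACTOR
print(dfu_limit)    # 240.0 -> stated in the DfU as ">240 U/ml"

batch_factor = 0.48
batch_limit = CAL6_UG_PER_L * batch_factor
print(batch_limit)  # 288.0 -> results above this are reported as "ABOVE"
```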

     

As in all diagnostic testing, the diagnosis is made by the physician based on both test results and the patient history.