
Poll: What would you do? (13 members have voted)

  • Report the QA material as screen negative (with an explanation of findings): 5 votes
  • Report the QA material as screen positive and identify antibody: 8 votes


comment_17542

I was asked the following question regarding a hypothetical situation:

What would you do if you were testing an external quality control material and you obtained different results on your 4 analysers, two gave positive screens and two analysers gave negative screens. On further investigation you identified anti-Jka in the QC material.

Would you report the results to the external quality scheme as negative, with an explanation, but knowing that you would incur penalties with the scheme, or Positive even though some of your analysers failed to detect the antibody on first testing?

I look forward to your answer- thanks!!


comment_17544

I would, of course, do the honest thing. I would take the analyzers that did not detect the antibody out of commission and get them checked and the parameters altered so that they did detect the anti-Jka.

comment_17553

If all four analyzers are the same manufacturer & model, I agree with Malcolm; but, if the analyzers are different manufacturer OR model, I would report the positive and make note of which analyzers were positive vs negative. At the time of submission, one does not know which is the correct result. In the US with CLIA regulations, you are not supposed to compare results of one analyzer with the other prior to submission AND are supposed to handle proficiency test material the same as patient material. (But how do you not compare results prior to submission?)

  • Author
comment_17555

Thanks Bill. If the analysers were the same model and testing on the first was negative, would you call this negative for CLIA reporting?

comment_17557

Rashmi

In theory, yes; in real life, no way, but I would never admit that the negative was the first result I looked at.

comment_17558
Rashmi

In theory, yes; in real life, no way, but I would never admit that the negative was the first result I looked at.

Bill - You do sound like an honest man!! I've never thought about this situation before; I've never had more than one piece of automation. In reality, what good would it do (and to whom??) to knowingly report results that you know are incorrect? As Malcolm said, the most important thing is to take the questionable instrument out of service, investigate and correct the problem.

Nevertheless, this is an uncomfortable scenario/question that you pose, Rashmi!

  • Author
comment_17561

If the purpose of external agencies such as CLIA or NEQAS in the UK is to set and maintain standards of testing, how will anyone know to what extent there may be a problem looming if the correct result isn't submitted? I know it's somewhat 'painful' to have to knowingly fail an exercise... but what if that had been an actual patient sample?

Thank you all for your responses, hopefully many more on this site will vote, so we can get a better picture of how things are perceived and dealt with. These are uncomfortable questions, but I feel we need to ask if we are handling these situations correctly.

comment_17562

I don't have multiple instruments, so I had to really think about this one. I thought that you were supposed to report per instrument? It seems most useful to everyone to report the whole scenario, so essentially it would need to be reported both ways with an explanation and, of course, removal of the two instruments that reported "incorrectly." I would think that you need to know the expected result before you decide which is incorrect. Also, it would make a difference whether the analysers were the same type or not, as has already been mentioned. In addition, is the PT material all from the same batch? It has been known in the past for different lots of PT material to be "contaminated" with an unexpected antibody. Lots of parameters to be considered here...

:eek::confuse:

  • Author
comment_17586

I agree we would need to perform corrective actions, be it removing the equipment until fixed or changing reagent batches which may be faulty, and inform the supplier of these problems so they can consider a batch recall of reagents if necessary.

Surely we are obliged to report the true findings of any QA scheme we participate in to maintain the integrity of everything we do in the lab regardless of what test we are performing.

comment_17588
I agree we would need to perform corrective actions, be it removing the equipment until fixed or changing reagent batches which may be faulty, and inform the supplier of these problems so they can consider a batch recall of reagents if necessary.

Surely we are obliged to report the true findings of any QA scheme we participate in to maintain the integrity of everything we do in the lab regardless of what test we are performing.

Rashmi, my earlier post was, to say the least, facetious, for which I apologise (now there's a first)!

Like other posters, as I said earlier, I have no analysers other than, as TimOz says, carbon-based ones, so I cannot answer your question directly (and so will not vote). That having been said, despite the fact that I may be marked down, I would report my findings honestly. There is no future in hiding mistakes (as I said in the "Golden Rules" thread) and, more to the point, it may not be the mistake of the operator. It may be the fault of the material, the reagents, the analyser, the way the individual analyser is set up, or the technology as a whole.

It is of paramount importance that trends are discovered, and if all similar analysers (or, at the very least, a considerable number of them) are missing an antibody, the "powers that be" and, in fairness, the manufacturers of the analyser need to be made aware, so that improvements can be made.

Before there was the Serious Hazards of Transfusion (SHOT) Scheme in the UK, the same mistakes were being made over and over again in many hospitals, endangering patient lives, but, because nobody would admit to these mistakes, the mistakes carried on. Now that errors can be reported in a way that is anonymised, the build-up to the errors can be, and is, published, and others learn about these errors before they are made.

WE MUST BE HONEST ABOUT THESE THINGS.

Indeed, there may be a cogent argument for setting up a scheme equivalent to SHOT for technical/technological errors.

comment_17729

Ah.....the joys of analyzers.......

This is why we do correlations between analyzers every 6 months in Chemistry. The intent is to prove that at any given time you can run a patient on any available analyzer and expect the same results. The 50-50 prospect you share is SCARY!!!! though it can happen more than you think.....anything with a probe plus variables can spell discrepancy, so troubleshooting is the way to go......(speaking as a former chemistry lab supervisor).......BUT>>>>>>>>:confused:

Interestingly, you said you found an anti-Jka......so dosage could come into play here (anti-Jka typically reacts more weakly with red cells from Jk(a+b+) donors than with cells from Jk(a+b-) donors), and it could wreak havoc in real life too.....if all available machines are not operating optimally, this scenario could occur, and more easily because of dosage......just another consideration, and it leads back to the first paragraph: proving consistent performance between analyzers periodically is crucial.:cool:
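
To make the periodic correlation idea above a little more concrete, here is a minimal sketch (in Python) of the kind of cross-analyser comparison being described; the sample IDs, analyser names and result codes are invented for illustration only, not real scheme data:

    # Hypothetical analyser correlation check: the sample IDs, analyser names
    # and result codes below are made up for illustration only.
    results = {
        # sample_id: {analyser_name: antibody screen result}
        "S001": {"A1": "POS", "A2": "POS", "A3": "POS", "A4": "POS"},
        "S002": {"A1": "NEG", "A2": "NEG", "A3": "NEG", "A4": "NEG"},
        "S003": {"A1": "POS", "A2": "NEG", "A3": "POS", "A4": "NEG"},  # discordant
    }

    def find_discordant(results):
        """Return sample IDs where the analysers do not all agree."""
        discordant = {}
        for sample_id, by_analyser in results.items():
            if len(set(by_analyser.values())) > 1:
                discordant[sample_id] = by_analyser
        return discordant

    for sample_id, by_analyser in find_discordant(results).items():
        print(f"Discordant screen for {sample_id}: {by_analyser}")

Anything flagged by a check like this would, of course, still need proper serological investigation rather than a quick fix on the worksheet.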

comment_17734

Rashmi

I agree that the analysers giving the negative results should be taken out of use immediately. I would suggest that discussions with the scheme organisers would be a constructive way forward. All methodologies used within the laboratory should be used for testing any external QC material and, particularly where there are discrepant results, all results should be submitted. I suspect that most labs will not participate in a separate scheme for each analyser or method, and most external QC schemes will not have the facility to allow the recording of the scenario that you have outlined.

At least we have moved away from the days when any external QC sample was tested to death to get the 'right' answer!

John

  • Author
comment_17757

Thanks for your comments John, and I agree with the actions that you and others have stated. I do, however, think that we have not yet moved away from testing external QC samples so that we get the 'right' answer - it's still being done. I have spoken on occasions to folk who have stated that they would not knowingly submit an incorrect first-testing result.

This is what we need to establish with this poll; the fact that not many are voting seems to indicate this is an uncomfortable topic for most of us to confront.

comment_17762

I don't wish to be more controversial than normal, but I just wonder if this thread should have been started after the current NEQAS exercise had run its course, rather than before?

The reason I am saying this is that (fairly obviously) both NEQAS and the manufacturer of the analyser have got to hear that there is something amiss with one of the NEQAS samples containing an anti-K.

NEQAS are upset that telephone calls have been made between sites asking what the others have got, before the exercise has been completed. It has to be remembered also that there are very many UK members of BBT, even if not too many of them post on a regular basis. Over this, I think I can see their point.

Such telephone calls immediately alert people to the fact that there may be a problem. As you say, Rashmi, this may be a "difficult area" for certain people. If they did not know that there was such a problem, they would have put in "negative" results automatically for the screen, because that is what they have found. Now that they have been alerted, however, they are going to "test to competency" (I think that is the phrase you quoted in another thread about the MHRA), and NEQAS will then get "false" answers. This may, on another occasion/exercise mean that a problem is not flagged up by a trend.

Sorry. I know I sound a bit grumpy over this one, but I can see where NEQAS are coming from.

:confused::confused::confused::confused::confused:

Because not everyone is as honest as you, if there is a real problem with one make of analyser, the seriousness of the problem will be "diluted" by the fact that some people may test the exercise plasma to destruction until they get the desired (or "broadcast") answer, and put in the fact that they detected the anti-K, even though, in reality, they did not until they were made aware that this anti-K was present.

  • Author
comment_17767

Hi Malcolm,

This thread is about a 'hypothetical' situation. The antibody concerned is anti-Jka. Over the years similar situations have arisen with both manual and automated testing, with labs phoning each other up etc. Been there, seen it, done it. We just need to improve reporting of these events.

For manual users I would ask the same question: if a member of staff made a mistake on an external QA sample, and this was picked up at retrospective review (unlike a patient sample), would you knowingly report it as a fail?

This poll is trying to establish anonymously how we each perceive these situations and how we have seen them dealt with in real life. How do we know how accurate external (NEQAS/CLIA) results are unless folk are aware that there are inherent problems with the set-up of such exercises anyway?


comment_17776

For a manual sample it would depend upon the mistake. If it was a mistake on something we ordinarily review and correct in patients, I would correct it. Otherwise we would not catch it because we review survey samples the same way we do patient samples. The one exception is clerical review. Because survey samples are not reported the same way patients are (extra form involved with a completely different setup than our internal reports), we do a clerical check to be sure that the survey report has been completed with the correct information from our report. A clerical error between our report and their report would be corrected before it is sent.
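
As a rough illustration of that kind of clerical cross-check (the field names, values and report layout below are hypothetical, not any particular scheme's form), one could compare what was transcribed onto the survey form against the internal report before submission:

    # Hypothetical clerical check: compare the survey submission form against
    # the lab's internal report. All field names and values are invented.
    internal_report = {
        "sample_id": "PT-2023-07",
        "abo_rh": "O Pos",
        "antibody_screen": "Positive",
        "antibody_id": "Anti-Jka",
    }

    survey_form = {
        "sample_id": "PT-2023-07",
        "abo_rh": "O Pos",
        "antibody_screen": "Negative",   # deliberate transcription error for the example
        "antibody_id": "Anti-Jka",
    }

    def clerical_check(internal, submitted):
        """List fields where the survey form does not match the internal report."""
        return [
            (field, internal[field], submitted.get(field))
            for field in internal
            if internal[field] != submitted.get(field)
        ]

    for field, expected, transcribed in clerical_check(internal_report, survey_form):
        print(f"Mismatch in '{field}': report says {expected!r}, form says {transcribed!r}")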

comment_17816

Just a question to complicate things: do you not treat QA like a patient..... would a patient be run on all 4 of your analyzers (do all the staff do a QA sample if you have manual methods??)

  • Author
comment_17821

Some good points raised so far. Performing regular correlations between your analysers as suggested by Linda0623 is something I will definitely be formalising in my lab, together with correlation to our manual methods. This would be an excellent way to prove testing consistency. Even though we run reagent QC through both, this isn't always representative of patient samples, so we will be cross-testing a range of patient antibodies.

Thanks Janet - you are correct in your response, the QC material is not being treated as a patient sample. It would be good to obtain QC material for each analyser and a manual method. If each set was different and had to be registered for each analyser/technique, this would allow a fairer testing regime.

Thank you so far to everyone that has responded, and also to those that have voted. I'm sure clarifying these issues can only help the QC scheme operators to improve their systems to ensure more accurate data is recorded. Our work, after all, impacts hugely on patient safety.
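
To make the cross-testing idea a little more concrete, here is a small sketch of a worksheet for running a set of known patient antibodies through each analyser and a manual method; the antibody list and method names are invented placeholders rather than a recommended panel:

    # Hypothetical cross-testing worksheet: antibody specificities, analyser
    # names and the manual method label are placeholders for illustration.
    import itertools

    antibodies = ["Anti-D", "Anti-K", "Anti-Fya", "Anti-Jka (weak example)"]
    methods = ["Analyser 1", "Analyser 2", "Manual tube IAT"]

    # One worksheet row per antibody/method combination, to be completed at the bench.
    worksheet = [
        {"sample": ab, "method": m, "screen_result": None}
        for ab, m in itertools.product(antibodies, methods)
    ]

    for row in worksheet:
        print(f"{row['sample']:<24} on {row['method']:<15} -> result: ____")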


comment_17863

Hi Rashmi,

Thank you for a thought-provoking post.

I think that mulling over the answer gives a great insight into why you should choose and use QC material that is HARD!

Get on soapbox now - Challenge the test. If your QC material never fails, it is poor material. A "slam dunk" control is worthless. Phew, that feels better.

Now, to answer the question: I think the situation described should not really happen. You should know and monitor the analytical sensitivity of your tests and the variability between instruments and manual systems. If the event happened as described, I would thoroughly investigate the cause - cards, reagents, diluents, fluidic accuracy and carryover, temperature, time, centrifugation variables etc. I would repeat the testing after this and report what I found.
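
As a loose sketch of how such an investigation might be logged before repeat testing (the factor names follow the list above, but the statuses and findings are invented placeholders):

    # Hypothetical troubleshooting log for a missed QC antibody; statuses and
    # findings below are placeholders, not real results.
    investigation = [
        {"factor": "Cards / lot number",             "checked": True,  "finding": "Same lot as the passing analysers"},
        {"factor": "Reagents and diluents",          "checked": True,  "finding": "In date, no issue found"},
        {"factor": "Fluidic accuracy and carryover", "checked": False, "finding": ""},
        {"factor": "Incubation time / temperature",  "checked": False, "finding": ""},
        {"factor": "Centrifugation variables",       "checked": False, "finding": ""},
    ]

    outstanding = [item["factor"] for item in investigation if not item["checked"]]
    print("Still to check before repeat testing:", ", ".join(outstanding))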

The case also raises the question of EQAP penalties. Why should a well-managed lab be penalised for a problem that may be due to a manufacturer or an engineer???

  • Author
comment_17903

Thanks for your comments Tim. I have recently also begun to think that EQA penalty systems are wrong. These just encourage folk to 'pass' their EQA rather than report their findings. I think these schemes should run a year or two without penalties being given; we would then have a better indication of any problems.

comment_17908

Seriously good idea Rashmi.

I can't see it being accepted by the authorities though.

  • Author
comment_17916

Thanks Malcolm,

Maybe we need to challenge the authorities on this; surely they should be more concerned with obtaining accurate data.

comment_17927
Thanks Malcolm,

Maybe we need to challenge the authorities on this; surely they should be more concerned with obtaining accurate data.

Agreed....and we should challenge them on an awful lot of other things (see my comments about Change Control of documents).

:mad::mad:
