Now that Cerner has migrated from major release upgrades to incremental upgrades, how are you managing the Blood Bank validation effort?
We have not yet received any guidance from Cerner, and at this point we feel we will have to determine our validation approach on our own.
How are some of you handling this?
Maureen Slackway, MS, MT(ASCP), CPHIMS, CQA, CAPM
Senior Application Analyst – Laboratory Systems, Universal Health Services, Inc.
One of our smaller sites uses a specimen rocker to keep platelets agitated before they are transfused. They do not routinely stock platelets, which is why purchasing a platelet incubator/agitator is not on the agenda. On our last accreditation inspection we were cited for not performing a validation of the rocker for platelet storage. Has anyone done a validation of this type, and could you share the details of how to set it up?
I am about to venture into performing my first ever validation of a test. I have been tasked with validating Rh+K phenotype testing on the BioRad IH-1000 (two of them). Until now we have been performing the test manually, but as work is getting busier, performing it on the analyser might prove easier (or at least motivate people to perform phenotyping). My senior has given me advice on how to go about it:
1. Select 10 donor red cell units which have had Rh+K phenotyping performed.
2. Perform the phenotype manually.
3. Perform the phenotype on the analyser.
4. Compare the results.
5. Pat myself on the back, provided I don't mess it up.

The issue I have is that donor red cell units don't indicate whether they are K+, and I wanted some K+ as well as K- units. I could keep testing donor units until I come across a K+ one, but I don't want to waste a lot of cards (though that might be my last option). If I choose the NBS or BioRad antibody panel cells instead, the issue is the strength of the test cells: they are, I think, 0.8%, which does not fully represent the way we perform our phenotype manually, where we use around a 5% suspension. I could always try to make the suspension stronger; that is another option (a quick worked calculation is below).
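For my own sanity, here is the C1V1 = C2V2 arithmetic I would be working from to concentrate the panel cells. The 0.8% and 5% figures are the ones mentioned above; the volumes are just example numbers, so treat this as a sketch rather than a procedure:

```python
# Rough C1*V1 = C2*V2 arithmetic for concentrating 0.8% reagent red cells
# toward the ~5% suspension used in the manual method. The 0.8% and 5%
# figures come from the discussion above; the volume is an example only.

start_conc = 0.8    # % red cell suspension (reagent panel cells)
target_conc = 5.0   # % red cell suspension (manual method)
start_vol_ml = 1.0  # starting volume of panel cells, arbitrary example

# Volume to resuspend the spun-down cell button in to reach the target strength
final_vol_ml = start_vol_ml * start_conc / target_conc

print(f"Spin down {start_vol_ml} mL of {start_conc}% cells and resuspend")
print(f"in ~{final_vol_ml:.2f} mL to approximate a {target_conc}% suspension")
# -> ~0.16 mL, i.e. roughly a 6x concentration step
```

That works out to roughly a 6x concentration step, so whether the reagent cells still behave properly at that strength is something I would have to check.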
So this is where I am at. If anyone has a suggestion, or better yet a complete plan, I am happy to hear it. Also, if there is any point I missed or anything that needs clarification, please feel free to ask; I'm glued to this specific thread all night long.
By Janet C.
We recently got a new freezer for the Blood Bank, and I am performing my first validation. I am comparing the NIST-traceable thermometer against the digital display as well as the chart recorder and our own separate thermometer. I cannot find any documentation to support how much difference is acceptable between the NIST reading and the digital readout, our thermometer, etc. Some places I have read say 0.5 degrees, which seems very tight to me. Another facility I work for uses a 1.0 degree difference, which seems more reasonable and achievable, but I have not found any supporting documentation anywhere!
Can someone help me? When validating, what difference is acceptable between the NIST thermometer and the others, be it the probes in the unit or your own internal thermometers?
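To make the comparison concrete, here is roughly how I am tabulating it. The readings are invented examples, and the 1.0 degree tolerance is exactly the number I am trying to find support for:

```python
# Sketch of the comparison: NIST-traceable reference readings vs. the other
# devices, flagged against a chosen tolerance. All readings are made-up
# examples, and the tolerance itself is the open question.

nist_readings = [-30.2, -30.1, -30.3]  # reference thermometer, deg C (example)
device_readings = {
    "digital display":      [-30.6, -30.4, -30.8],
    "chart recorder":       [-29.9, -30.0, -30.5],
    "internal thermometer": [-30.3, -30.2, -30.4],
}
tolerance = 1.0  # deg C; the acceptance limit in question

for name, readings in device_readings.items():
    # Pair each device reading with the NIST reading taken at the same time
    diffs = [abs(d - n) for d, n in zip(readings, nist_readings)]
    status = "PASS" if max(diffs) <= tolerance else "FAIL"
    print(f"{name}: max difference {max(diffs):.1f} C -> {status}")
```

Whatever limit I end up using, I would at least have the worst-case difference per device documented this way.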