CEHRT Certification
Situation
In 2019, I led the research for a project aimed at securing CEHRT 2015 certification for a hospice EMR system. The process required stringent usability testing across multiple certification units, with healthcare providers and clinicians as participants. The certification guidelines mandated testing across numerous units and participants, which posed a significant scalability challenge. I addressed this by streamlining and standardizing our data collection methods to ensure both efficiency and compliance.
To meet the CEHRT 2015 certification criteria, our EMR software needed to pass very specific usability testing standards. These tests had to be conducted with clinicians and providers, as the functionalities being tested were specific to clinical workflows. According to the certification guidelines, all findings were to be submitted to Drummond Group before the end of Q4 2019.
Method
The main challenge was scalability. The certification mandated testing across multiple units with a minimum of 10 participants per unit. The timeline was tight—14 weeks to gather and process all data.
I standardized the testing process across all units to streamline the collection and analysis of data. We adhered to guidelines from NISTIR-7741, which detailed specific requirements on participant numbers and data collection. While the UX industry standard often involves five participants per test, this project required 10 participants per unit. After running this by our proctor, we optimized the testing process by having a single participant session cover tasks for multiple units, pairing up the smaller units of criteria.
Action
I had 14 weeks to complete 70 interviews. With the clock ticking, I devised a strategy to optimize resources and time. By leveraging additional researchers, doubling up units for testing, and carefully planning participant sessions, we managed to complete the project efficiently. The key to success was maintaining strict adherence to the testing requirements while minimizing redundancy and participant fatigue. We ensured that each usability session was comprehensive yet manageable.
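The scheduling trade-off behind this strategy can be sketched in a few lines. The figures are reconstructed from the numbers in this section (70 interviews at 10 participants per unit implies 7 units), and the simple pairing scheme is an illustration of the idea, not the exact session plan we used:

```python
# Hypothetical illustration of the session-planning arithmetic.
# Assumed figures: 7 certification units (70 interviews / 10 participants
# per unit) and a 14-week window, as described above.

UNITS = 7
PARTICIPANTS_PER_UNIT = 10   # per-unit minimum used for this project
WEEKS = 14

interviews = UNITS * PARTICIPANTS_PER_UNIT   # 70 participant-unit tests

# Without combining units, every interview is its own session.
sessions_naive = interviews                  # 70 sessions, 5 per week

# Pairing smaller units so one participant session covers two units
# roughly halves the number of distinct sessions; one unit is left
# unpaired when the unit count is odd.
paired_sessions = (UNITS // 2) * PARTICIPANTS_PER_UNIT   # 30 sessions
solo_sessions = (UNITS % 2) * PARTICIPANTS_PER_UNIT      # 10 sessions
sessions_combined = paired_sessions + solo_sessions      # 40 sessions

print(sessions_naive / WEEKS)      # sessions per week, uncombined
print(sessions_combined / WEEKS)   # sessions per week, combined
```

Under these assumptions, pairing units cuts the weekly session load from five to roughly three, which is what made the 14-week window workable.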
Study Design
Overall, the objective of this test was to uncover areas where the application performed well – that is, effectively, efficiently, and with satisfaction – and areas where the application failed to meet the needs of the participants. The data from this test may serve as a baseline for future tests with an updated version of the same EHR and/or comparison with other EHRs, provided the same tasks are used. In short, this testing serves both to benchmark current usability and to identify areas where improvements must be made.
During the usability test, participants interacted with one EHR. Each participant used the system remotely through screen-share in Zoom and was provided with the same instructions. The system was evaluated for effectiveness, efficiency, and satisfaction as defined by the following measures, collected and analyzed for each participant:
Whether the task was successfully completed within the allotted time without assistance
Time to complete the task
Path deviations
Participant’s verbalizations (comments)
Participant’s satisfaction ratings of the system
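As an illustration of how these per-participant measures roll up into benchmark figures, the sketch below aggregates a few observations. The record fields, task name, rating scale, and numbers are assumptions for the example, not the study's actual data:

```python
# Minimal sketch of aggregating the usability measures listed above.
# All figures here are hypothetical example data.
from dataclasses import dataclass
from statistics import mean

@dataclass
class TaskResult:
    task: str
    completed: bool        # success within allotted time, unassisted
    seconds: float         # time to complete the task
    path_deviations: int   # steps taken off the optimal path
    satisfaction: int      # post-task rating, e.g. on a 1-5 scale

def summarize(results: list[TaskResult]) -> dict:
    """Roll individual observations up into benchmark measures."""
    return {
        "success_rate": mean(r.completed for r in results),
        "mean_time_s": mean(r.seconds for r in results),
        "mean_deviations": mean(r.path_deviations for r in results),
        "mean_satisfaction": mean(r.satisfaction for r in results),
    }

observations = [
    TaskResult("Generate medication order", True, 94.0, 1, 4),
    TaskResult("Generate medication order", True, 118.5, 3, 3),
    TaskResult("Generate medication order", False, 180.0, 5, 2),
]
print(summarize(observations))
```

Recording every session in a shape like this is what made it practical to standardize analysis across all units and researchers.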
Result
The data gathered identified clear trends in user behavior and system usability. For example, while most participants successfully completed their tasks, pain points emerged, particularly around search functionality and floating action buttons (FABs). Here’s a snapshot of some key findings:
Medication Orders: Participants successfully navigated generating and editing medication orders, though improvements were needed in search functions.
Contraindications: Participants handled contraindication warnings well, easily interpreting their severity and understanding the clinical implications.
Patient Demographics: The workflow around patient demographic editing was intuitive, with participants experiencing minimal issues.
Clinical Decision Support (CDS): While participants appreciated the potential of the CDS feature, it required further refinement for usability.
Implantable Devices: Most participants successfully completed tasks related to implantable device information, highlighting the usability of the feature.
The report, which complied with NISTIR-7742 standards, was delivered on time. Feedback from both participants and stakeholders provided valuable insights into areas needing improvement, ensuring the software was not only functional but aligned with the real-world needs of clinicians.
The full report can be found here: https://www.drummondgroup.com/pdfs/NISTIR-7742-Submission-Report.pdf
Conclusion
The CEHRT certification process posed significant challenges in both timeline and complexity, but by standardizing our approach, we were able to complete the necessary usability tests without sacrificing quality. The data-driven insights from this project led to concrete usability improvements, ensuring that the software not only met certification requirements but also enhanced the user experience for clinicians. This experience underscored the importance of strategic thinking and adaptability in UX research, especially in high-stakes, regulatory-driven projects.
In future iterations, leveraging this baseline data will enable continuous improvement, aligning our EMR system more closely with both user needs and WCAG accessibility guidelines. This work reinforces my belief that usability testing is not just a certification checkbox—it's an essential process for creating better healthcare tools that truly support their users.