In our first blog, we discussed the first fundamental goal of peer review – how clinics can more effectively identify problematic plans. Today, we’ll focus on the second fundamental goal – driving engagement during a peer review meeting. In our exploration of this goal, we’ll evaluate what clinics currently do to garner participation, and where struggles exist. We’ll also provide recommendations on how to alleviate these struggles so that peer review becomes a better vehicle to improve patient care.
Many clinics struggle with engagement during their peer review meeting. Often, attending the meeting is a requirement; however, participating is only encouraged. The sources of disengagement in peer review are numerous, and several have already been noted in academic publications. It’s important to review those briefly, but also to dive into some that have not been previously discussed.
First, if you ask most physicians, especially those early in their career or newly hired, they will tell you they crave feedback about their plans. One can therefore assume that a lack of desire for feedback is not the problem. The most obvious culprits are cognitive biases that creep into the meeting, namely bystander apathy, the wallpaper effect, and authority bias. These are extremely difficult to eliminate completely. With that in mind, how do we motivate physicians to engage with their colleagues and provide more, and better, feedback? Are there other factors in the traditional chart rounds meeting today that actually feed disengagement?
Through observing many traditional chart rounds meetings over two years, it quickly became clear to us that physicians are not eager to provide feedback about another physician’s plan. In actuality, a physician will often have to be solicited directly, usually by the moderator. As previously mentioned, those cognitive biases certainly play a role in a quiet room, but what can we learn from the questions being asked of the physicians?
In many meetings, the moderator will ask, “Are there any thoughts about this case?” or, “Does this look good to everyone?” While at least an attempt to invite input, these generalized yes/no questions lack a specific area of focus and do little to encourage discussion. Worse, they are asked as the team is wrapping up discussion of the current review and preparing to move on, which is entirely the wrong time.
An additional problem may lie in what is actually presented to the clinical team during the review. A few screenshots within a report, with only limited plan information available, are among the least helpful bases for clinical decisions. Reviews like these do little to build a physician’s confidence that they have the information necessary to make a judgment call and provide feedback, especially to a more tenured colleague.
Finally, interruptions actively feed disengagement in traditional chart rounds meetings. Most often, these are ad hoc requests from clinicians for plan information that was not readily available, a problem further compounded by slow loading times in the treatment planning system for the requested data.
To improve, one must first recognize the need to do so. Engagement is essential to accomplishing the first goal of peer review, identifying problematic plans, which is a known struggle. Second, the scope of engagement should not be limited to only the physicians in the room.
In a recent publication that examined the effectiveness of the traditional chart rounds meeting [1], the authors stated, “empowering non-physician participation in plan review, beyond a solely supporting role, could be very valuable in improving detection rates going forward.” The authors also suggest that “maximizing the number of ‘eyes on’ a plan could be a means for improving QA and safety in radiation departments.”
Beyond simply increasing the number of participants, there are additional strategies that are easier to implement, especially with limited resources. First, during the review, more context is needed about the plan. Second, more focused questions need to be asked throughout the review.
Today, clinics should not be in a position where they struggle to consistently and efficiently gather all the plan and demographic information needed for a high-quality, three-dimensional review. A peer review system that gathers this information automatically reduces the number of ad hoc requests and, with them, the propensity for disengagement.
Additionally, a peer review system should drive engagement and discussion by systematically changing the presentation of the plan data as focused questions about the plan are asked during the review. A question such as “Are the target volumes accurate?” should be asked while the software automatically displays the target volumes fused with the diagnostic image. This gives the team the necessary context to provide feedback, and the feedback is required before the review can continue. The focused questions not only increase engagement, but also ensure that all critical plan elements, those specifically mentioned in ASTRO’s recommendations for peer review, are addressed.
In conclusion, cognitive biases will always exist in the context of peer review, but that does not mean the lack of engagement experienced during peer review cannot be improved. There are strategies that clinics can adopt today to reduce opportunities for disengagement and actively drive engagement within their peer review meeting.
A peer review system should reduce opportunities for distraction and disengagement by automatically gathering all necessary plan data and patient demographic information for a high-quality, three-dimensional review. By doing so, ad hoc requests will be reduced. It will also increase engagement by systematically displaying the data necessary to provide proper context to the team and by asking focused, required questions to elicit feedback.
Implementing these improvements will make the meeting a more beneficial experience for the participants and also increase the likelihood of problematic plans being identified by peers. Making these changes, as well as a few others we will discuss in the next blog, will ensure the peer review process continuously improves the practice.
[1] Talcott, W.J. et al. “A Blinded, Prospective Study of Problematic Plan Detection During Physician Chart Rounds.” International Journal of Radiation Oncology • Biology • Physics, Vol. 105, Issue 1, 2019, pp. S23–S24.
Jeff Kuhn is a Senior Account Executive at MIM Software. Jeff works hand-in-hand with centers across the country to revamp and improve their current peer review processes to improve patient care. He has been a key contributor to the development of MIM Harmony.