Self-Assessment versus Self-Monitoring


by Sarah A. Pierce, Pharm.D., PGY1 Pharmacy Practice Resident VA Maryland Health Care System

Most pharmacy residents are familiar with the use of ResiTrak™ to complete self-evaluations, an arduous process made more difficult by having to recall performance over a long period of time.  Is this method of self-evaluation effective?

In both pharmacy education and residency training, self-assessment is a commonly used tool intended to encourage learners to evaluate their performance, identify strengths and weaknesses, and note areas for self-directed learning and growth. In its accreditation standards for Doctor of Pharmacy programs, the Accreditation Council for Pharmacy Education (ACPE) discusses the importance of self-assessment for students, faculty, and staff.1 The theme of self-assessment and self-directed learning persists into post-graduate residency training. The American Society of Health-System Pharmacists (ASHP) includes “resident self-assessment of their performance” as a requirement in its accreditation standards for PGY1 pharmacy residency programs.2  The ASHP standards require “summative evaluations” at the end of each learning experience (i.e., “rotation”) and encourage optional, spontaneous “snapshot” self-evaluations as well.


Implicit in these requirements is the assumption that self-assessment is valuable and accurately reflects a person’s strengths and weaknesses. However, according to Eva and Regehr, a substantial body of literature suggests that learners often cannot accurately self-assess their strengths and weaknesses and that self-assessment correlates poorly with actual performance.3  They draw a distinction, however, between two forms of the activity: “self-assessment as a cumulative evaluation of overall performance, and self-assessment as a process of self-monitoring performance in the moment” [emphasis added].3


Eva and Regehr discuss the results of two studies that explored self-monitoring and self-assessment.3,4 In each study, participants answered sixty trivia questions divided into six categories.  Participants were asked to evaluate their performance at different times during the testing. To measure “self-assessment” (that is, a cumulative evaluation of overall performance), the researchers had participants predict their overall score for each category both before and after completing all ten trivia questions in that category. To measure “self-monitoring” (that is, an evaluation of performance in the moment), the researchers had participants rate their confidence in a given answer immediately after answering the question. The results showed that the “self-monitoring” measure correlated more strongly with actual performance than the cumulative “self-assessment” measure did.
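To make the distinction concrete, here is a minimal sketch of how the two measures from this study design could be compared against actual scores. The data, category names, and results below are entirely hypothetical and are not taken from the published studies; only the structure (six categories, a cumulative prediction, and per-item confidence ratings) mirrors the description above.

    # Hypothetical comparison of a cumulative "self-assessment" prediction and a
    # per-item "self-monitoring" confidence measure against actual performance.
    # All numbers are invented for illustration.
    from statistics import correlation  # Pearson's r; requires Python 3.10+

    # One row per trivia category:
    # (category, predicted score out of 10, mean per-item confidence 0-1, actual score out of 10)
    categories = [
        ("geography", 7, 0.62, 6),
        ("history",   8, 0.55, 5),
        ("science",   6, 0.80, 8),
        ("sports",    5, 0.45, 4),
        ("music",     7, 0.70, 7),
        ("film",      6, 0.50, 5),
    ]

    predicted  = [p for _, p, _, _ in categories]
    monitoring = [m for _, _, m, _ in categories]
    actual     = [a for _, _, _, a in categories]

    print("self-assessment vs. actual:", round(correlation(predicted, actual), 2))
    print("self-monitoring vs. actual:", round(correlation(monitoring, actual), 2))

In this invented data set, the per-item confidence ratings track actual scores far more closely than the up-front predictions do, which is the pattern the studies reported.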

Eva and Regehr offer a potential explanation for these findings.  Self-monitoring likely requires a “fundamentally different cognitive process” than self-assessment. With self-monitoring, learners have many inputs and sources of information at their disposal to predict potential success or failure on a moment-to-moment basis. With self-assessment, however, the learner must rely on memory to aggregate information from multiple past events in order to judge overall success or failure.3 The concept of self-monitoring was replicated and expanded in work by McConnell and colleagues.4

What are the potential implications of these findings? In the Educational Theory and Practice course, the idea of self-directed learning was introduced.5  Self-assessment is a tool used to facilitate self-directed learning. However, if self-assessment is not as accurate as one might hope, perhaps it is not the best tool for the job. I would argue that more attention should be directed to self-monitoring on a moment-to-moment basis rather than to cumulative self-assessments.  Through self-monitoring, individuals would develop a more accurate picture of their abilities, which could lead to more focused self-directed learning. Because self-assessment is by far the most common self-evaluation tool used in pharmacy education and residency training today, new, creative approaches for transitioning to self-monitoring are needed.

Self-monitoring could be implemented in pharmacy education and residency training in several ways.  In pharmacy school, an early and consistent emphasis on self-monitoring could prove more effective than intermittent, reflective self-assessments.  Asking students to identify their strengths and weaknesses in real time may motivate them toward focused self-directed learning.  For example, students taking an exam on the pathophysiology of diabetes, the pharmacology of diabetes medications, and diabetes management could be required to rate their confidence on each test question. After the exam, students could receive a report with their self-monitoring responses and a breakdown of their actual performance in each domain (e.g., pathophysiology, pharmacology, and patient management). In this way, students would become better at identifying areas where they struggled, gradually improve the accuracy of their self-monitoring, and note areas that require further study. Regularly repeating this process may help students develop stronger self-monitoring skills and help them become independent practitioners after graduation.
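As a rough illustration of what such a post-exam report might look like, the sketch below tallies a student’s confidence ratings against their actual results by domain. The item data, rating scale, and domain labels are hypothetical; this is not a description of any existing testing system.

    # Hypothetical post-exam self-monitoring report: each item is tagged with a
    # domain, the student's confidence rating (0-100%), and whether the answer
    # was correct. The report compares average confidence with actual percent
    # correct in each domain. All data are invented for illustration.
    from collections import defaultdict

    # (domain, confidence rating 0-100, answered correctly?)
    responses = [
        ("pathophysiology", 90, True),
        ("pathophysiology", 70, False),
        ("pharmacology",    60, True),
        ("pharmacology",    40, False),
        ("pharmacology",    80, True),
        ("management",      50, False),
        ("management",      55, False),
        ("management",      85, True),
    ]

    by_domain = defaultdict(list)
    for domain, confidence, correct in responses:
        by_domain[domain].append((confidence, correct))

    print(f"{'Domain':<18}{'Avg confidence':>16}{'Actual correct':>16}")
    for domain, items in by_domain.items():
        avg_conf = sum(c for c, _ in items) / len(items)
        pct_correct = 100 * sum(ok for _, ok in items) / len(items)
        print(f"{domain:<18}{avg_conf:>15.0f}%{pct_correct:>15.0f}%")

A persistent gap between confidence and correctness in a given domain would flag that domain as an area for further study.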

A similar argument could be made for pharmacy residency programs. I believe “snapshot” evaluations should be used more frequently.  During my internal medicine rotation, I was asked to complete a snapshot evaluation related to my data gathering skills and treatment plan for a specific patient. This was much more focused than the summative self-evaluation done at the end of my rotation, in which I had to assess my overall performance on several different goals and objectives. I believe I gained more insight into my strengths and weaknesses with the snapshot evaluation.  I was able to critically examine my performance on a narrow set of tasks “in the moment” rather than having to search my memory for past events related to my performance.

While self-assessment is certainly a necessary component of pharmacy education and helps facilitate self-directed learning, I believe there should be a greater emphasis on “real-time” self-monitoring.  Self-monitoring should be taught and required in Doctor of Pharmacy programs, and frequent self-monitoring “snapshots” should be a mandatory component of pharmacy residency training.

References
1.  Accreditation Council for Pharmacy Education. Accreditation standards and guidelines for the professional program in pharmacy leading to the doctor of pharmacy degree. Chicago: Accreditation Council for Pharmacy Education; 2011.  97 p.  [cited 2012 Oct 8]
2.  ASHP Commission on Credentialing. ASHP accreditation standard for postgraduate year one (PGY1) pharmacy residency programs. Bethesda (MD): American Society of Health-System Pharmacists; 2005. 23 p.  [cited 2012 Oct 8]
3.  Eva KW, Regehr G. Exploring the divergence between self-assessment and self-monitoring. Adv Health Sci Educ Theory Pract 2010;16(3):311-29. [cited 2012 Oct 8]
4.  McConnell MM, Regehr G, Wood TJ, Eva KW. Self-monitoring and its relationship to medical knowledge. Adv Health Sci Educ Theory Pract 2011;17(3):311-23. [cited 2012 Oct 8]
5.  Kaufman DM. Applying educational theory in practice. BMJ 2003; 326: 213-6. [cited 2012 Oct 8]

Going Around in (Questioning) Circles


by Paul Ortiz, Pharm.D., PGY1 Pharmacy Resident, Johns Hopkins Bayview Medical Center

The art of questioning is a useful tool to engage students and encourage critical thinking.  Instruction that includes well-crafted questions is a way to encourage higher order thinking about a subject.1  Asking and answering questions often reveals knowledge that we never realized we possessed.  Whether in an academic or an informal social setting, questioning can force us to dig deep into our own psyche and reveal our thoughts and feelings.  Educators often use questioning as a tool to teach students, but how does one know which questions to ask? 

A model of questioning by Christenbury and Kelly called Questioning Circles (see figure) provides a structure for educators to develop questions about a topic.2  The Questioning Circle is composed of 3 distinct, overlapping areas of knowledge.  The 3 areas included in this model are Text (knowledge of the text/subject matter), Reader (personal response to the text), and World (knowledge of the world and other texts).2  Christenbury states that instruction using the Questioning Circles technique should include not only questions in the three separate circles (Text, Reader, and World) but, more importantly, questions in the areas where the circles overlap.2,3  The areas where 2 circles overlap (Text/Reader, Text/World, and Reader/World) allow the individual components to collide and enrich each other.  Finally, there is an area of Dense Questions in which all 3 circles must be considered.  These Dense Questions are the most important questions, and their answers provide the deepest consideration of the subject matter.4
  

Christenbury and Kelly use the following example from The Adventures of Huckleberry Finn by Mark Twain to illustrate the Questioning Circles technique in action:

Text: What does Huck say when he decides not to turn Jim in to the authorities?
Reader: When would you support a friend when everyone else thought he/she was wrong?
World: What was the responsibility of persons finding runaway slaves?
Text/Reader: In what situations might someone be less than willing to take the consequences for his or her actions?
Reader/World: Given the social and political circumstances, to what extent would you have done as Huck did?
Text/World: What were the issues during that time which caused both Huck’s and Jim’s actions to be viewed as wrong?
Dense Question: When is it right to go against the social/political structures of the time as Huck did when he refused to turn Jim in to authorities?
(Christenbury, 1983)

Using questions as a means of instruction can be very effective for educators, and Questioning Circles is a particularly useful tool.  Throughout my years in pharmacy school and currently during my residency, questioning from professors and preceptors has helped me learn and think critically.  The Socratic method of teaching and questioning has been widely studied and used, and I wanted to further investigate other established methods of questioning.  One aspect of the Questioning Circles method that I found especially useful is the Reader’s perspective.  This elicits the learner’s own thoughts and feelings about the topic and brings meaning and relevance.  In my own experience, learning about a topic that is relevant to me from an educator who makes it relatable to my own worldview is among the most effective ways to learn.  Further, this questioning strategy encourages the teacher to relate the material to the World, placing the subject matter in a larger context than perhaps the learner initially imagined.  Questioning Circles is a useful teaching tool for both new and experienced educators and can be applied to many different learning settings.

References
1.  Ciardiello AV. Did you ask a good question today? Alternative cognitive and metacognitive strategies. J Adolesc Adult Lit. 1998; 42(3): 210-219.
2.  Christenbury L, Kelly P. Questioning: a path to critical thinking. Urbana (IL): ERIC Clearinghouse on Reading and Communication Skills and the National Council of Teachers of English; 1983. (ED 226 372)
3.  McComas WF, Abraham L. Asking better questions. Los Angeles (CA): University of Southern California, Center for Excellence in Teaching; 2005.
4.  Meyers G. Whose inquiry is it anyway? Using student questions in the teaching of literature. Urbana (IL): National Council of Teachers of English; 2002.

Peer Assessment: More Than Busy Work


By Anh Tran, Pharm.D., PGY1 Pharmacy Practice Resident, Medstar Union Memorial Hospital

Take a moment and think about a time when you were in high school or college and you were asked to assess your peers on their work.  Or vice versa.  I remember a time when I had just turned in a paper in an undergraduate English class.  The professor then informed us that we would be grading each other’s papers!  The first thought that went through my mind was, “This is just busy work!”  Actually, peer assessment can be a very effective learning tool. 

Peer assessment is the process whereby students receive a critical evaluation of and feedback on their work from a similarly experienced peer or colleague.  This practice is commonly used in various settings, including pharmacy education.  For example, peer assessment can be used to evaluate a patient counseling session conducted by a student pharmacist or a pharmacotherapy presentation given by a pharmacy resident.  Peer assessment plays a vital role in a pharmacist’s professional development, whether during school, experiential rotations, postgraduate training, or one’s career. Furthermore, the practice of peer assessment promotes active learning, group work, and complex problem solving.

In addition to promoting these great aspects of learning, peer assessment has other distinct advantages.  Peer assessment enables faster and more detailed feedback.1  How many times have you turned in an assignment and waited weeks for the professor to grade it and provide feedback?  Most likely, by then you had forgotten your thought process for that assignment, and the feedback was no longer useful to you.  Having peers grade each other’s assignments instead provides more timely feedback, which is more useful because the assignment and the students’ thinking are still fresh in their minds.  In addition, since assignments are being reviewed simultaneously by multiple graders, there is the potential for more detailed and in-depth feedback.

Peer assessment might have some advantages from a teaching and learning point of view, but what are students’ attitudes towards it?  In a study conducted by Wu and colleagues, 91.9% of PharmD students surveyed believed that peer assessment is a skill that they will use in their pharmacy career.  In terms of student-to-student peer evaluation, 80% of students were comfortable providing an honest assessment to their partner and 95.7% of students were comfortable receiving it. Furthermore, only 34.4% of the students believed that the assessment of students is solely the responsibility of faculty and not students.2  In another study, Basheti and colleagues demonstrated that anonymous peer feedback in a pharmacy course is an effective means of providing constructive feedback on performance.  The study found that 78.1% of students felt that their participation in the peer assessment process helped them to deepen their understanding of the course content and 78% of students would endorse the use of this practice in other courses.3  Thus, students felt comfortable with peer assessment and perceived it as a valuable tool in their education.

Peer assessment is consistent with the principles of andragogy.  In other words, peer assessment takes evaluation from “teacher-driven” to “learner-driven”.  By taking assessment out of the teacher’s hands, students gain yet another learning opportunity.1 Peer assessment can lead to a deeper understanding of a topic through evaluating the work of others.3  For example, when I evaluated the English paper of an undergraduate peer, I was pleasantly surprised by what I learned just from reading it!  We had written on the same topic, but we had different views and opinions.  By practicing peer assessment, students can discover other perspectives on a topic, which can broaden their understanding.

Finally, peer assessment fosters metacognition, which is knowledge or awareness of one’s own learning processes.1  By participating in peer assessment, students are in a better position to understand the grading criteria.  They can then internalize this understanding and apply it to their future work to improve their own performance.  For example, in a practice patient counseling session, a pharmacy student grading a peer would develop a better understanding of best practices and could then apply these criteria to his/her future counseling sessions.

While peer assessment has many great qualities, there are some concerns.  Can peer assessment truly serve as a substitute for the teacher’s assessment?  Are these assessments valid?  Falchikov and colleagues attempted to answer these questions by performing a meta-analysis comparing peer and teacher assessments in higher education.  The meta-analysis found a mean correlation across all studies of r = 0.69, indicating reasonably good agreement between peer and teacher assessments.4  Similarly, Sadler and colleagues conducted a study to determine the agreement between the grades given by a teacher and those given by peers.  This study showed that peer grades were highly correlated with teacher grades (r = 0.905)!1
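For readers less familiar with the statistic, the short sketch below shows how such an agreement figure is computed: it is simply Pearson’s correlation between the peer-assigned and teacher-assigned grades for the same set of assignments. The grades used here are invented for demonstration and are not drawn from either study.

    # Illustration of the agreement statistic reported above: Pearson's r between
    # peer-assigned and teacher-assigned grades for the same assignments.
    # The grades below are invented for demonstration only.
    from statistics import correlation  # Pearson's r; requires Python 3.10+

    teacher_grades = [92, 85, 78, 88, 70, 95, 81, 74]
    peer_grades    = [90, 80, 82, 85, 72, 93, 79, 70]

    r = correlation(teacher_grades, peer_grades)
    print(f"Peer vs. teacher agreement: r = {r:.2f}")  # values near 1 indicate strong agreement

Values in the range reported by these studies (roughly 0.7 to 0.9) suggest that peers tend to rank and score work much as the teacher does.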

Assessment and evaluation are essential components of instructional design, and peer assessment is a good way of engaging students in the classroom.  Studies have identified ways educators can implement peer assessment effectively.  It’s important to provide students with training on the evaluation process and clear criteria for peer feedback in order to avoid superficial comments.  In addition, professors should blind the reviews in order to reduce bias, since friendships may affect the accuracy of peer assessment.1

When educators implement structured, unbiased approaches to peer assessment, it can be exceptionally valuable.  Not only is peer assessment an effective learning tool, it can also foster teamwork, active learning, and metacognition.  Students recognize the importance of peer assessment and are comfortable participating in the process.  So the next time your professor announces that you’ll be grading your peers, embrace it and further your learning!

References
1.   Sadler PM, Good E. The impact of self- and peer-grading on student learning. Educational Assessment. 2006; 11(1):1-31.
3.   Basheti IA, Ryan G, Woulfe J, Bartimote-Aufflick K. Anonymous peer assessment of medication management reviews. Am J Pharm Educ. 2010; 74(5):77.  
4.   Falchikov N, Goldfinch J. Student peer assessment in higher education: a meta-analysis comparing peer and teacher marks. Review of Educational Research. 2000; 70(3):287-322.

360-Degree Feedback


by Andrea Passarelli, Pharm.D., PGY1 Pharmacy Practice Resident, The Johns Hopkins Hospital

Preceptors play an integral role in the development of pharmacists.  Most schools of pharmacy provide formalized training and education to their preceptors to help them master effective precepting techniques, but the literature surrounding this topic is scarce.  Preceptors should use a variety of teaching strategies throughout a rotation, and these should be tailored to the learner’s stage of professional development.1  As learners grow in ability and confidence, they should perform more tasks independently, such as medication reconciliation, discharge counseling, and "rounding" with the medical team.  In these situations, the preceptor might use an assessment technique known as 360-degree feedback to evaluate the student’s performance.  The 360-degree feedback technique generally involves the preceptor asking other healthcare professionals as well as patients about the student’s performance.  This evaluation technique is gaining popularity in healthcare and other sectors.  I wanted to research this technique further so that I could use this assessment strategy during my career.

One paper, published by Joshi et al. in Academic Medicine, described the successful implementation of 360-degree feedback for obstetrics and gynecology residents at Monmouth Medical Center.2 Residents were assessed on their interpersonal and communication skills by nurses, faculty members, allied health professional staff, medical students, patients, and co-residents.  In addition, each resident completed a self-assessment.  The researchers found good correlation among evaluations within each group of evaluators, as well as reasonably strong agreement among evaluators regarding each resident's rank within the peer group.  Interestingly, there was a negative correlation between the rankings given by faculty, staff, and medical students and the rankings given by peers.  The highest-rated residents (based on faculty, staff, and student evaluations) received low marks from their co-residents, and vice versa.  This may be due to perceived competition or a desire to “get ahead” by rating high-achieving residents poorly.  On self-assessment, junior residents typically rated themselves highly, while senior residents rated themselves average or low.  This may be because senior residents set higher standards for themselves or have greater self-awareness later in the curriculum.  One potential advantage of the 360-degree feedback process employed at Monmouth Medical Center was that evaluators participated eagerly because their feedback was anonymous.  The 360-degree feedback technique has been widely described in medical education journals but has most often been used in residency training rather than student education.3-6  360-degree feedback can be an extremely effective tool for a preceptor, as learners often communicate differently in the presence of a preceptor than when independently communicating with peers, staff (who may be perceived to occupy different social ranks within the organization), and patients.

When I was a student, I had one experience with 360-degree feedback, during my first acute care rotation.  On the last day of the rotation, my preceptor allowed me to round independently with the team.  After the team finished rounding, my preceptor asked the medical interns, attending physicians, and nurse case manager about my performance.  In subsequent advanced practice experiences, both as a student and as a resident, my preceptors have occasionally sought feedback from physicians regarding my performance, but most have based my evaluations solely on their direct observations.

While my experience with the 360-degree technique from the perspective of a learner is limited, I believe there are some important points that should be considered prior to implementation.  First, it is important to inform the learner that this technique will be used.  Although the goal is to assess the learner’s performance without influence from the preceptor, learners might be taken aback if their performance is discussed with other individuals without their knowledge.  Further, it is important for preceptors to remember that, in order to implement this technique effectively, they must ask a breadth of individuals with different roles to participate in the process.  This was done during my first rotation but not in subsequent rotations, when preceptors asked only physicians for input.  I have never had a preceptor ask one of my patients about their perception of my abilities.  And with the notable exception of my first rotation, my preceptors have not asked medical interns, medical students, or nurses for feedback.

360-degree feedback appears to be an incredibly valuable assessment tool that can enhance the quality of evaluations provided to both pharmacy students and residents.  This technique allows preceptors to more accurately assess a learner’s communication skills, especially in the absence of preceptor supervision.  Research has shown that this is an effective and accurate evaluation technique when used in medical residency training, but its use has not (yet) been described in the pharmacy education literature.  Preceptors utilizing this technique should be familiar with its fundamentals and should ask individuals in multiple roles, including patients, for feedback regarding the learner’s performance.  When used appropriately, 360-degree feedback gives the preceptor a unique opportunity to obtain a complete picture of the learner’s strengths and to identify areas for improvement.

References
1.    McDonough RP, Bennett MS. Improving communication skills of pharmacy students through effective precepting. Am J Pharm Educ. 2006;70(3): Article 58.
5.    Sorg JC, Wilson RD, Perzynski AT, et al. Simplifying the 360-degree peer evaluation in a physical medicine and rehabilitation residency program. Am J Phys Med Rehabil 2012; 91(9):797-803.