GW SON SAIL Center Evaluation Policy

Policy Statement 

The George Washington (GW) Simulation and Innovation Learning (SAIL) Center provides educational simulation experiences that enhance the learning of individuals and groups. To strengthen the SAIL Center's performance, its activities must be evaluated in accordance with this policy. Evaluation of the SAIL Center aligns with the NLN Jeffries Simulation Theory and is guided by the 2021 INACSL Healthcare Simulation Standards of Best Practice™.

Reason for Policy

The George Washington University (GWU) School of Nursing (SON) Evaluation Policy exists to ensure that SAIL Center evaluation processes align with best practice standards and meet the expectations of regulatory and accrediting bodies.

Who is Governed by this Policy 

This policy pertains to participants in simulation-based education events of the GWU SON community, including but not limited to simulation learners, staff, faculty, standardized patients, and visitors. This policy does not pertain to non-GWU SON simulation events that take place in the SAIL Center.

Policy 

Based on the NLN Jeffries Simulation Theory, the SAIL Center requires evaluation of its simulation programs, including facilitators/debriefers, standardized patients/simulated patients, simulation center operations, and simulation-based experiences (Jeffries, 2016). Evaluation outcomes will be collected and shared on an annual basis.

All persons providing peer-review data will receive training on the standards of best practice.

Evaluation data will be collected for each simulation event at both the undergraduate and graduate levels. Evaluations will be collected to inform future simulation-based experiences and ensure operational efficiencies of the SAIL Center.

All evaluations for a simulation-based experience will be developed prior to the simulation event. Separate evaluations may be used for different simulation-based experiences, including undergraduate versus graduate simulation experiences. Faculty and staff may consult with the Director of Simulation and Experiential Learning or the Assistant Director of Simulation to ensure the identified evaluation tools are appropriate for the simulation experience. Examples of evaluation tools that can be used include but are not limited to:

  • DASH (Debriefing Assessment for Simulation in Healthcare; Center for Medical Simulation)
  • Facilitator Competency Rubric (Leighton)
  • SET-M (Leighton)
  • Lasater Clinical Judgment Tool

Evaluation data for simulation experiences should be shared with the faculty and staff involved in the learning event, the Director of Simulation and Experiential Learning, and the Assistant Director of Simulation. Evaluations will be analyzed by the appropriate party, and an action plan will be developed and implemented to maintain and improve SAIL Center programs. Information regarding any personal evaluations will be kept confidential and used for the purpose of SAIL Center program improvement.

Definitions 

Simulation Center Operations: Simulation center operations encompass the infrastructure, people, processes, finances, supplies, and equipment necessary for the implementation of high-quality, high-fidelity simulation-based experiences (INACSL Standards Committee et al., 2021a).

Evaluation:

  • Formative: A determination of learner knowledge and comprehension prior to, during, and/or after instruction that involves the measurement of progression toward stated objectives. Formative assessment includes constructive feedback that should be provided regularly throughout simulation activities (Sando et al., 2013).
  • Summative: A determination of learner knowledge and comprehension after instruction that involves the measurement of established endpoints and/or outcomes of stated objectives (Sando et al., 2013).

Facilitator: A person who assists and supports simulation-based education (SBE) before, during, and after the learning experience. The facilitator "guides the simulation-based learning experience to optimize opportunities for participants to meet expected outcomes" (Boese et al., 2013, p. S23).

Debriefing: An activity in which facilitators guide participant reflection on their performance, feelings, and future practice assimilation after a simulation and at designated points during a simulation activity (INACSL Standards Committee et al., 2021b).

Procedures 

The SAIL Center requires members of the GWU SON community who use the simulation center to use evidence-based evaluation tools. Evidence-based tools that can be used include:

Simulation Area                              Evaluation Standard
Facilitator / Debriefer                      DASH (Center for Medical Simulation); Facilitator Competency Rubric (Leighton)
Standardized Patients / Simulated Patients   Association of Standardized Patient Educators standards
Simulation Center Operations                 INACSL Healthcare Simulation Standards of Best Practice™: Operations
Simulation-based Experiences                 SET-M (Leighton); Lasater Clinical Judgment Tool

When possible, evaluation data will be collected electronically. Evaluations will be conducted of student performance, staff and faculty performance (including standardized patients), simulation center operations, and pre-briefing and/or debriefing.

The Simulation Advisory Committee will review simulation evaluation data on an annual basis and assist with developing an action plan for changes based on the outcomes of the data.

Forms/Related Information

Evidence-based simulation evaluation tools include:

  • DASH (Debriefing Assessment for Simulation in Healthcare): A valid and reliable six-element tool developed to enhance the effectiveness and quality of debriefing in simulation (Brett-Fleegler et al., 2012).
  • SET-M (modified Simulation Effectiveness Tool): A 19-item validated tool used to assess the effectiveness of nursing simulation (Leighton, Ravert, Mudra, & Macintosh, 2015). The original SET tool (13 items) was modified to include areas such as pre-briefing and debriefing, which have been determined to be essential aspects of nursing simulation experiences (Leighton et al., 2015).
  • Lasater Clinical Judgment Tool: Based on Tanner's (2006) Clinical Judgment Model, the Lasater Clinical Judgment Tool is a validated rubric used in simulation to measure nursing clinical judgment in the areas of noticing, interpreting, responding, and reflecting (Lasater, 2007).
  • Facilitator Competency Rubric: Based on Benner's novice-to-expert model, the Facilitator Competency Rubric was developed to assess the competency level of the facilitator within a simulation experience (Leighton, Mudra, & Gilbert, 2018).

References

Boese, T., Cato, M., Gonzalez, L., Jones, A., Kennedy, K., Reese, C., ... Borum, J. C. (2013). Standards of best practice: Simulation standard V: Facilitator. Clinical Simulation in Nursing, 9(6S), S22-S25. http://dx.doi.org/10.1016/j.ecns.2013.04.010

Brett-Fleegler, M., Rudolph, J., Eppich, W., Monuteaux, M., Fleegler, E., Cheng, A., & Simon, R. (2012). Debriefing assessment for simulation in healthcare. Simulation in Healthcare, 7(5), 288-294.

INACSL Standards Committee, Charnetski, M., & Jarvill, M. (2021a). Healthcare simulation standards of best practice™: Operations. Clinical Simulation in Nursing, 58, 33-39. https://doi.org/10.1016/j.ecns.2021.08.012

INACSL Standards Committee, Decker, S., Alinier, G., Crawford, S. B., Gordon, R. M., & Wilson, C. (2021b). Healthcare simulation standards of best practice™: The debriefing process. Clinical Simulation in Nursing, 58, 27-32. https://doi.org/10.1016/j.ecns.2021.08.011

Jeffries, P. R. (2016). The NLN Jeffries simulation theory. Philadelphia, PA: Wolters Kluwer.

Lasater, K. (2007). Clinical judgment development: Using simulation to create an assessment rubric. Journal of Nursing Education, 46(11), 496-503.

Leighton, K., Mudra, V., & Gilbert, G. E. (2018). Development and psychometric evaluation of the Facilitator Competency Rubric. Nursing Education Perspectives, 39(6), E3-E9.

Leighton, K., Ravert, P., Mudra, V., & Macintosh, C. (2015). Updating the Simulation Effectiveness Tool: Item modifications and reevaluation of psychometric properties. Nursing Education Perspectives, 36(5), 317-323.

Sando, C. R., Coggins, R. M., Meakim, C., Franklin, A. E., Gloe, D., Boese, T., ... Borum, J. C. (2013). Standards of best practice: Simulation standard VII: Participant assessment and evaluation. Clinical Simulation in Nursing, 9(6S), S30-S32. http://dx.doi.org/10.1016/j.ecns.2013.04.007

Tanner, C. A. (2006). Thinking like a nurse: A research-based model of clinical judgment in nursing. Journal of Nursing Education, 45(6), 204-211.

Contacts

Contact                                                      Telephone      Email
Director of Simulation and Experiential Learning             571-553-0115   sonsimlab@gwu.edu (cc: cfarina@gwu.edu)
Assistant Director of Simulation and Experiential Learning   571-553-0081   sonsimlab@gwu.edu (cc: anicklas@gwu.edu)
Simulation Operations                                                       sonsimlab@gwu.edu
Sim IT Administrator                                         571-553-0086   sonsimlab@gwu.edu (cc: paulcollins@gwu.edu)

Responsible University Official: 
Responsible Office: SAIL Center
Last Reviewed: March 10, 2021