Khojasteh Laleh, Kafipour Reza, Pakdel Farhad, Mukundan Jayakaran
Department of English Language, School of Paramedical Sciences, Shiraz University of Medical Sciences, Shiraz, Iran.
School of Education, Taylor's University, Lakeside Campus, Malaysia.
BMC Med Educ. 2025 Jan 31;25(1):159. doi: 10.1186/s12909-025-06753-3.
Assessing and improving academic writing skills is a crucial component of higher education. To support students in this endeavor, a comprehensive self-assessment toolkit was developed to provide personalized feedback and guide their writing improvement. The current study aimed to rigorously evaluate the validity and reliability of this academic writing self-assessment toolkit.
The development and validation of the academic writing self-assessment toolkit involved several key steps. First, a thorough review of the literature was conducted to identify the essential criteria for authentic assessment. Next, an analysis of medical students' reflection papers was undertaken to gain insights into their experiences using AI-powered tools for writing feedback. Based on these initial steps, a preliminary version of the self-assessment toolkit was devised. An expert focus group discussion was then convened to refine the questions and content of the toolkit. To assess content validity, the toolkit was evaluated by a panel of 22 medical student participants. They were asked to review each item and provide feedback on the relevance and comprehensiveness of the toolkit for evaluating academic writing skills. Face validity was also examined, with the students assessing the clarity, wording, and appropriateness of the toolkit items.
The content validity evaluation revealed that 95% of the toolkit items were rated as highly relevant, and 88% were deemed comprehensive in assessing key aspects of academic writing. The students suggested minor wording changes to enhance clarity and interpretability. The face validity assessment found that 92% of the items were rated as unambiguous, with 90% considered appropriate and relevant for self-assessment. Student feedback led to the refinement of a few items to improve their clarity in the context of the Persian language. Reliability testing further demonstrated the consistency and stability of the academic writing self-assessment toolkit in measuring students' writing skills over time.
The comprehensive evaluation process has established the academic writing self-assessment toolkit as a robust and credible instrument for supporting students' writing improvement. The toolkit's strong psychometric properties and user-centered design make it a valuable resource for enhancing academic writing skills in higher education.