1. INTRODUCTION
While people’s health needs and the complexity of health care systems continue to increase, health care systems remain fragmented, posing great challenges to the education of health professionals [1]. The World Health Organization recommends interprofessional education (IPE) as an approach to enable a collaborative practice-ready workforce. IPE is defined as “when students from two or more professions learn about, from, and with each other to enable effective collaboration and improve people’s health outcomes” [2]. Ample evidence suggests that IPE has the potential to achieve these goals [2–4].
IPE may improve learners’ attitudes, knowledge, skills [3], and behaviors relevant to collaborative practice [4]. Among these outcomes, enhancing attitudes toward IPE has been a key educational goal of many interventions [3]. While attitude outcomes are classified as level 2 in Kirkpatrick’s evaluation framework [5], they are essential components of collaborative competences [6] and serve as effective outcome measures for program evaluation [3]. For educators and course designers, assessing learners’ attitudes helps gauge readiness [7], inform curriculum design [8], and evaluate programs [3]. The Readiness for Interprofessional Learning Scale (RIPLS) [7,9] is notable as the most commonly used instrument for assessing attitudes in the IPE literature [3,10].
The RIPLS was originally designed by Parsell & Bligh (1999) to assess learners’ readiness for IPE [7]; however, its uses have expanded to assessing attitudes toward IPE in general [10]. McFadyen et al. (2005) recommended a four-factor structure for the RIPLS [9]. The RIPLS in its original English version has shown good validity and reliability [7,9,11]. Consequently, the RIPLS has been adapted into multiple languages, including German [12], Portuguese [13], Spanish [14], and Italian [15]; translated versions have also been validated in Asian countries, including Japanese [16] and Chinese [17] versions.
While IPE is emerging in Vietnam, there has been no validated Vietnamese-language instrument to assess attitudes toward IPE. In 2018, Wibåge & Södersten translated the RIPLS into Vietnamese and assessed medical and nursing students’ attitudes toward IPE at a university in Vietnam [18]. Although forward and backward translation were completed, the authors did not validate the instrument. This translated version of the RIPLS was subsequently adapted and used in a study on Vietnamese health students’ perception of the nursing profession [19]. However, the psychometric characteristics of this Vietnamese translation of the RIPLS remain unknown. Schmitz & Cullen (2015) advised researchers to carefully consider the validity, reliability, and utility of an instrument before fully integrating it into education and research practices [20], especially when the instrument is cross-culturally adapted and used in different populations.
Therefore, we aimed to assess the validity and reliability of the existing Vietnamese translation of the RIPLS and to determine the suitability of the RIPLS for assessing health students’ attitudes toward IPE in Vietnam. The findings of this study could provide evidence for educators to adopt the RIPLS as the first validated Vietnamese-language attitude assessment instrument for course design, program evaluation, and education research in IPE.
2. MATERIALS AND METHODS
In Vietnam, most health professions students are admitted to health science universities directly after graduating from high school. In Vietnamese undergraduate health sciences programs, the medical program lasts six years, the pharmacy program five years, and both the nursing and rehabilitation sciences programs four years. Typically, the first half of each program encompasses the pre-clinical phase, emphasizing basic sciences and clinical skills, while the latter half focuses on the clinical phase.
We conducted this validation study at the University of Medicine and Pharmacy at Ho Chi Minh City (UMP-HCMC), Vietnam. Situated in the southern region of Vietnam, this leading health sciences university plays a crucial role in supplying healthcare professionals not only for the local area but also for various regions across the country. The university has seven faculties: Medicine, Pharmacy, Dentistry, Nursing and Medical Technology, Public Health, Traditional Medicine, and Basic Sciences. In 2019, UMP-HCMC introduced and delivered Vietnam’s first IPE module.
At UMP-HCMC, five sequential IPE courses are offered each academic year. Each course is formatted as a weekly 3.5-hour class over 8 weeks. The general course structure was outlined in a previous study by Nguyen et al. [19]. The course is mandatory for third-year students in nursing and rehabilitation sciences, as well as fourth-year medical and pharmacy students. By design, students who enter the IPE courses are in the early stages of the clinical phases of their respective programs, and are presumed to have limited prior exposure to IPE or shared learning experiences.
The RIPLS is in the public domain, allowing its use without requiring permission from the authors. The RIPLS consists of 19 items [7], with respondents evaluating each item based on a 5-point Likert scale. Each item is rated from 1 to 5, corresponding to “Strongly Disagree,” “Disagree,” “Neutral,” “Agree,” and “Strongly Agree.” According to McFadyen et al. [11], the RIPLS could be structured into four factors: “Teamwork and Collaboration” (Items 1–9, 9 items), “Negative Professional Identity” (Items 10–12, 3 items), “Positive Professional Identity” (Items 13–16, 4 items), and “Roles and Responsibilities” (Items 17–19, 3 items). Parsell & Bligh (1999) did not provide official interpretation guidelines for the RIPLS [7]. As in our previous study [19], we interpreted mean scores as follows: ≥4.0 was high; 3.5 to 3.99 was mid-range; and ≤3.49 was low. These thresholds were applied to item, factor, and total scores. We performed reverse coding for items within the “Negative Professional Identity” and “Roles and Responsibilities” factors when calculating the total score; however, these items were otherwise presented in their original direction.
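For illustration, this scoring scheme can be expressed as a minimal R sketch; the column names (ripls_1 … ripls_19) and function name are assumptions for illustration, not part of the instrument or the original analysis script.

```r
# Illustrative RIPLS scoring: items stored in columns ripls_1 ... ripls_19, rated 1-5.
# Items 10-12 ("Negative Professional Identity") and 17-19 ("Roles and Responsibilities")
# are reverse-coded (1 <-> 5) only when the total score is calculated.
score_ripls <- function(df) {
  reverse <- paste0("ripls_", c(10:12, 17:19))
  recoded <- df[, paste0("ripls_", 1:19)]
  recoded[, reverse] <- 6 - recoded[, reverse]        # reverse a 1-5 Likert item
  data.frame(
    teamwork_collaboration = rowMeans(df[, paste0("ripls_", 1:9)]),
    negative_prof_identity = rowMeans(df[, paste0("ripls_", 10:12)]),
    positive_prof_identity = rowMeans(df[, paste0("ripls_", 13:16)]),
    roles_responsibilities = rowMeans(df[, paste0("ripls_", 17:19)]),
    total                  = rowMeans(recoded)        # mean of all 19 recoded items
  )
}
```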
Wibåge & Södersten (2018) translated the original 19-item RIPLS into Vietnamese for use in a study involving medical and nursing students from UMP-HCMC [18]. They employed the following translation process: one Vietnamese professional interpreter translated the instrument from English into Vietnamese (forward translation), and five nursing faculty members of UMP-HCMC translated it back into English (backward translation). The authors then reviewed the translation and made minor modifications. After obtaining approval from Wibåge & Södersten, we further reviewed their Vietnamese adaptation of the RIPLS through an expert panel assessment and a student comprehension survey. We then incorporated the feedback to refine the version used in this validation study.
We invited a panel of seven expert instructors in IPE to participate in an online content validation process. The experts were explicitly queried regarding the instrument’s relevance to IPE to compute the Content Validity Index (CVI). Additionally, their input was sought on its appropriateness within Vietnamese culture and the clarity of its language. Criteria for panel selection included being native Vietnamese speakers, possessing English language proficiency, having taught at least two IPE courses, and representing the four professions of medicine, nursing, pharmacy, and rehabilitation sciences. The expert panel recommended minor adaptations for four translated items. Revisions were made to items 3, 5, and 8 by reorganizing ideas and adjusting sentence structures to align with Vietnamese grammar, while preserving the original intended meaning. Additionally, an explanatory note in brackets was appended to item 19, providing examples to elucidate the meaning of the term “knowledge and skills”.
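For reference, the Item-CVI is the proportion of experts rating an item as relevant, and the Scale-CVI by average is the mean of the Item-CVIs. The following is a minimal R sketch assuming the commonly used 4-point relevance scale, with ratings of 3 or 4 counted as relevant; the object name and scale are assumptions for illustration, not a description of the exact rating form used.

```r
# `ratings`: a 7 (experts) x 19 (items) numeric matrix of relevance ratings (1-4).
item_cvi  <- colMeans(ratings >= 3)  # proportion of experts rating each item as relevant
scale_cvi <- mean(item_cvi)          # Scale-CVI computed by averaging (S-CVI/Ave)
```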
This refined Vietnamese version of the RIPLS was pre-tested in a small group of students to assess face validity, language comprehension, and test-retest reliability. Thirty students were randomly selected from the prospective student roster of the fourth course in the same academic year and sent invitations. Of the students invited for the pre-testing phase, 24 responded and completed both the test and retest surveys. Almost all students (23 of 24) indicated a thorough understanding of all items. However, one student mentioned a lack of IPE experience, leading to some uncertainty about fully grasping the concepts. We concluded that no additional corrections were necessary; nonetheless, we recommended including an explanatory note about IPE in the survey instructions. The refined Vietnamese version of the RIPLS, named the Viet-RIPLS, was then used in the main phase of this validation study.
We recruited students attending the first three IPE courses of the academic year 2020–2021. The first course ran from August 31, 2020, to October 23, 2020; the second from October 26, 2020, to December 18, 2020; and the third from December 21, 2020, to March 5, 2021. All students who attended one of these courses and consented to participate in the study were included; no exclusion criteria were applied. One week before each course, we invited all students enrolled in these courses via email to participate in an online survey. Participants were given one week to provide their responses.
Based on accepted practice and a standard assumption for estimating the sample size for confirmatory factor analysis (CFA), namely a ratio of sample size to the number of variables of at least 10, we estimated that a 4-factor, 19-item instrument such as the RIPLS would require at least 190 participants for the CFA.
The IPE module at UMP-HCMC aimed to validate another attitude assessment instrument in addition to the RIPLS. Since about 200 students attended each IPE course, we allocated half of the students in the first three IPE courses of the academic year 2020–2021 to this RIPLS validation study. This sampling strategy gave us access to a pool of about 300 students, which satisfied the minimum required sample size. As each course was organized into groups of equivalent size and structure, we employed stratified random sampling. Twelve groups out of a total of 24 were randomly selected using the RAND function in Excel.
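The group selection used Excel’s RAND function; an equivalent stratified draw could be sketched in R as follows. The assumption of eight groups per course and the seed are illustrative only.

```r
# Stratified random selection of 12 of the 24 course groups: draw 4 groups at random
# from each of the three courses (assuming 8 groups per course for illustration).
set.seed(2020)                                   # illustrative seed, not from the study
groups   <- expand.grid(group = 1:8, course = 1:3)
selected <- do.call(rbind, lapply(split(groups, groups$course),
                                  function(g) g[sample(nrow(g), 4), ]))
nrow(selected)   # 12 groups selected in total
```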
The self-administered online survey consisted of three parts: 1) demographic data, including age, gender, ethnicity, major and study year; 2) the Viet-RIPLS; and 3) the Student Stereotypes Rating Questionnaire (SSRQ) [21,22]. The analysis of SSRQ information was beyond the scope of this paper.
Data were managed and analyzed using Microsoft Excel and R version 4.3.2. A p-value < 0.05 was considered statistically significant. Missing data were imputed with the median.
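A minimal R sketch of median imputation for a numeric variable follows; the data frame and column names are assumptions for illustration.

```r
# Replace missing values in a numeric column with the observed median of that column.
df$age[is.na(df$age)] <- median(df$age, na.rm = TRUE)
```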
We computed both Item-CVIs and the Scale-CVI to evaluate content validity. Face validity was examined using the proportion of students who agreed that the instrument was relevant. CFA was conducted to examine structural validity. Goodness of fit was assessed using the χ2 statistic, the root mean square error of approximation (RMSEA), the adjusted goodness-of-fit index (AGFI), and the comparative fit index (CFI). An RMSEA value below 0.08 and a CFI value exceeding 0.90 indicated a good fit; an AGFI value above 0.85 was considered indicative of an adequate model fit [23].
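For illustration, the four-factor CFA could be specified in R with the lavaan package as in the sketch below; this is not the exact analysis script, and the item and factor names are illustrative.

```r
library(lavaan)

# Four-factor measurement model following McFadyen et al.'s structure;
# columns ripls_1 ... ripls_19 are illustrative item names.
model <- '
  teamwork   =~ ripls_1 + ripls_2 + ripls_3 + ripls_4 + ripls_5 +
                ripls_6 + ripls_7 + ripls_8 + ripls_9
  neg_ident  =~ ripls_10 + ripls_11 + ripls_12
  pos_ident  =~ ripls_13 + ripls_14 + ripls_15 + ripls_16
  roles_resp =~ ripls_17 + ripls_18 + ripls_19
'
fit <- cfa(model, data = df, std.lv = TRUE)
fitMeasures(fit, c("chisq", "df", "pvalue", "cfi", "rmsea", "agfi"))
standardizedSolution(fit)   # standardized factor loadings per item
```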
Cronbach’s α coefficients were computed for individual factors to assess the internal consistency of the Viet-RIPLS. We examined test-retest reliability by analyzing intraclass correlation coefficients (ICC) using data from the pre-testing phase. Subgroup analyses by profession were also performed. In reporting this study, we aimed to align with Streiner and Kottner’s (2014) reporting guidelines for instrument and scale development and testing [24].
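Both reliability indices can be obtained in R, for example with the psych and irr packages, as sketched below; the object and column names are illustrative, and the ICC model and type shown are assumptions rather than a record of the exact specification used.

```r
library(psych)   # Cronbach's alpha
library(irr)     # intraclass correlation coefficients

# Internal consistency for one factor, e.g. "Teamwork and Collaboration" (items 1-9).
alpha_teamwork <- psych::alpha(df[, paste0("ripls_", 1:9)])$total$raw_alpha

# Test-retest reliability from the pre-testing phase: each row is a student, with the
# factor score at the first (t1) and second (t2) administration.
irr::icc(cbind(t1_teamwork, t2_teamwork),
         model = "twoway", type = "agreement", unit = "single")
```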
This study was approved by the Ethics Committee of UMP-HCMC (No. 447/HĐĐĐ-ĐHYD). Students gave informed consent before data were collected; the data were then de-identified before analysis. Participation did not affect students’ performance evaluation in the courses.
3. RESULTS
All seven invited experts completed the online content validation form; there was no missing data. The Item-CVI for each item, as well as the Scale-CVI by average, were found to be 1.0.
All 24 students in the pre-testing phase rated the instrument as “relevant” or “very relevant” to IPE. Using the pre-testing phase data to assess test-retest reliability, our analysis revealed ICCs of 0.75, 0.70, 0.73, and 0.74 for the four factors of “Teamwork and Collaboration,” “Negative Professional Identity,” “Positive Professional Identity,” and “Roles and Responsibilities,” respectively. Additionally, the ICC for the total scale score was 0.83.
Of the 302 eligible students invited, 275 (91.06%) completed the online survey. The age variable was missing in 7 responses and was imputed with the median; no other missing data were identified. Socio-demographic characteristics of the study population are reported in Table 1. Due to the course design, the age, major, and study year variables were strongly correlated.
The CFA of the four-factor structure of the Viet-RIPLS against the observed data yielded the following fit indices: χ2=413.588, df=146, p<0.001; CFI=0.907; RMSEA=0.082 [90% CI: 0.072 to 0.091]; AGFI=0.813. Factor loadings of each item within its domain were all above 0.50 (Fig. 1).

Assessing the internal consistency of the Viet-RIPLS, the Cronbach’s α values for the factors “Teamwork and Collaboration,” “Negative Professional Identity,” “Positive Professional Identity,” and “Roles and Responsibilities” were 0.90, 0.92, 0.82, and 0.68, respectively. Table 2 displays the factor items, the mean scores of each item and factor, and the corresponding Cronbach’s α for each factor. Results from the sub-group analysis for the different professions are presented in the Supplement, where nursing and rehabilitation sciences students were grouped together due to their smaller numbers. Notably, the Cronbach’s α for the “Roles and Responsibilities” factor was 0.54 in the medical student subgroup, compared with 0.70 in the pharmacy subgroup and 0.81 in the combined subgroup of nursing and rehabilitation students.
4. DISCUSSION
This study demonstrated that the Viet-RIPLS was an instrument with strong reliability and satisfactory validity for use among Vietnamese healthcare students to assess their attitudes towards IPE.
The fit indices showed that a four-factor structure was appropriate for identifying the components of Vietnamese healthcare students’ attitudes toward IPE. The χ² value of the model was 413.588 with 146 degrees of freedom (df), yielding p<0.001, which indicates a statistically significant difference between the model and the observed data. However, the χ² statistic is highly sensitive to sample size, particularly in larger samples; in this case, the sample size was 275, potentially contributing to the large χ² value. To provide a broader assessment of model fit, we also utilized other fit indices. The CFI value was 0.907, indicating strong model fit (CFI>0.90). The RMSEA value was 0.082, slightly above the 0.08 threshold and therefore within a marginally acceptable range. Additionally, the AGFI value was 0.813, below the 0.85 criterion but above 0.80, suggesting a reasonably adequate fit. These indices collectively indicate that the model achieves an acceptable fit, despite the sensitivity of the χ² value to larger sample sizes.
The Viet-RIPLS demonstrated good content validity and face validity based on the expert panel and the student survey in the pre-testing phase. We employed a slightly more flexible approach to evaluating the fit of the CFA model, considering the substantial sample size and the number of items. The acceptable fit indices suggest that the four-factor model from McFadyen’s version of the RIPLS [9] could be applied to the Viet-RIPLS in Vietnamese settings. Most items demonstrated a strong goodness-of-fit with factor loadings above 0.80, except for item 19 (factor loading=0.68). Our findings also indicate that reverse coding should be used for the factors “Negative Professional Identity” and “Roles and Responsibilities” because they were found to be inversely correlated with the factors “Teamwork and Collaboration” and “Positive Professional Identity.”
The whole instrument demonstrated good internal consistency as well as good test-retest reliability. All four factors had acceptable test-retest reliability in our small pre-testing sample. While the first three factors (“Teamwork and Collaboration,” “Negative Professional Identity” and “Positive Professional Identity”) exhibited good internal consistency (Cronbach’s α ranging from 0.82 to 0.92), the “Roles and Responsibilities” factor showed a Cronbach’s α of 0.68. While this coefficient was near the conventional cut-off of 0.7, the lower result aligns with other reliability studies in China (α=0.216) [17], the UK (α=0.32) [7], and Germany (α=0.65) [12]. Beyond the small number of items within this factor, various explanations have been proposed, yet definitive reasons have not been identified [11,12,17]. We speculate that this may be attributed to the potential ambiguity of Item 19 – “I have to acquire much more knowledge and skills than other health care students”. Learners might have interpreted the item in two distinct ways: either as an encouragement to exert effort in enhancing their knowledge and skills (conveying a positive message), or as an indication of competition with other professions in acquiring more knowledge and skills (possibly conveying negativity). This interpretation should be situated in the specific cultural context of the society and healthcare system. Our sub-group analysis revealed that Cronbach’s α for the “Roles and Responsibilities” factor was considerably lower for medical students than for the other sub-groups (Supplement S1), which might indicate a potential effect of interprofessional cultural differences on the instrument’s reliability. This finding highlights the need to re-examine and possibly modify the “Roles and Responsibilities” factor in future studies, particularly through analysis with different cultural lenses.
There were a few limitations to this study. Despite a thorough cross-cultural translation and adaptation process, we recognize that the chosen terminology may not resonate equally with all potential users. For instance, the term “identity” has various Vietnamese equivalents, such as ‘căn tính,’ ‘bản sắc,’ or ‘bản dạng.’ In this study, ‘căn tính’ was selected for its prevalent use in psychology and education. We advise tailoring the translated tool to fit specific contexts, particularly for users from diverse Vietnamese regions. An expedited expert panel could facilitate this customization while preserving the tool’s validity. While the study sample consisted of four different professions, pharmacy and medical students were the majority, which reflects the varied numbers of students admitted to each profession at UMP-HCMC. This could limit the generalizability of the Viet-RIPLS when used in nursing or rehabilitation sciences, as well as in additional professions. The students in our study were novices to IPE, which could have limited their comprehension of the instrument. Further studies examining different levels of education, varying experiences of IPE, and different professional groups are needed to confirm the stability of the structure of the Viet-RIPLS.
5. CONCLUSION
The Vietnamese adaptation of the RIPLS (Viet-RIPLS) demonstrated acceptable validity and reliability as an instrument for assessing the attitudes of Vietnamese healthcare students towards IPE. The analysis found that the “Negative Professional Identity” and “Roles and Responsibilities” factors were negatively correlated with the other factors, and thus require reverse coding when scores are interpreted. Further examination of the “Roles and Responsibilities” factor is warranted due to its lower internal consistency in the Vietnamese context. In particular, Item 19 requires further scrutiny given its lower factor loading and potential ambiguity in meaning.