
Artificial Intelligence (AI) stands as one of the most transformative technological advancements impacting the education sector. Within the realm of AI, Natural Language Processing (NLP) has facilitated the development of chatbots powered by Large Language Models (LLMs), enabling them to comprehend and generate human-like responses (Albadarin et al., 2024). These AI-driven chatbots have the potential to enrich student learning by offering self-directed study recommendations and personalized guidance (Shoufan, 2023). Among them, Chat Generative Pre-Trained Transformer (ChatGPT), launched by OpenAI in late 2022, has emerged as a particularly influential tool in educational contexts (Onyekwere & Enamul Hoque, 2023; Vázquez-Cano et al., 2023). ChatGPT has gained considerable popularity among students because of its capabilities in providing human-like assistance in idea generation and writing samples (Dai et al., 2023); producing well-structured, informative, and contextually appropriate responses (Farhi et al., 2023; Rudolph et al., 2023; Singh et al., 2025); offering advice on the structural aspects of a text (Imran & Almusharraf, 2023); and efficiently performing tasks related to natural language processing such as translation, guided writing, and summarizing (King, 2023; Kohnke et al., 2023). Because of these impressive abilities, scholars believe that ChatGPT will become an integral part of writing, much as computers and calculators have become essential parts of science and mathematics (McMurtrie, 2023). Hence, some researchers recommend that students should engage with ChatGPT in order to facilitate learning, rather than having its usage prohibited (Sharples, 2022).
Highlighting the learning opportunities provided by ChatGPT for all levels of education, Kasneci et al. (2023) opine that ChatGPT can not only develop reading, writing, thinking, and problem-solving skills in students, but can also empower learners with disabilities. Further, some studies have highlighted the strengths of ChatGPT in knowledge management in higher education. For example, Farrokhnia et al. (2024) argued that ChatGPT serves as a natural language model by generating plausible answers and providing personalized responses in real time. Similarly, Lee (2024) opined that ChatGPT serves as a virtual teaching assistant by providing detailed and relevant information to students. Rasul et al. (2023) highlighted the benefits of using ChatGPT in higher education, which include personalized feedback, facilitating adaptive learning, supporting research and data analysis, helping in automated administrative services, and developing innovative assessments (Padilla-Hernández et al., 2019). Scholars have also indicated that ChatGPT helps in fostering students’ performance, engagement, and motivation (Albadarin et al., 2024). Furthermore, Firat (2023) investigated the implications of ChatGPT by qualitatively analyzing the perceptions of students using thematic content analysis, concluding that ChatGPT enhances students’ learning experiences by providing personalized instruction. Using a mixed-methods intervention study, Mahapatra (2024) showed that ChatGPT has a positive impact on students’ academic writing skills. Shoufan (2023) qualitatively analyzed the perceptions of students and concluded that they perceive ChatGPT to be interesting, motivating, and helpful for their studies. Based on university students’ perspectives, Jo (2024) concluded that knowledge acquisition, knowledge application, personalization, and novelty value contribute to the benefits of using ChatGPT. Ngo (2023) conducted a quantitative survey among university students and identified several benefits of ChatGPT, including time efficiency, access to information across various domains, personalized tutoring, and assistance in idea generation for writing tasks.
To gain a comprehensive understanding of the learning benefits of ChatGPT, this study adopts a quantitative approach to explore students’ perspectives. Although both qualitative and quantitative approaches provide important insights, a quantitative lens enables the systematic measurement and validation of the benefits of ChatGPT as perceived by students. Further, analysing students’ perspectives is crucial for designing user-friendly educational technologies that align with their needs and preferences (Chiu & Churchill, 2016; Ngo, 2023; Veletsianos & Moe, 2017). Initial studies have offered valuable insights into students’ perceptions of ChatGPT (Godsk & Elving, 2024; Hojeij et al., 2024; Ngo, 2023; Shoufan, 2023; Zhang & Tur, 2024), and a growing body of literature highlights the need to systematically investigate the educational benefits and challenges associated with these technologies (Baidoo-Anu & Owusu Ansah, 2023; Kasneci et al., 2023; Rudolph et al., 2023). However, most existing studies are qualitative in nature, relying on thematic analyses, open-ended responses, or simple rating items (Ilić et al., 2024; Wang & Fan, 2025). Several recent studies have developed scales or survey items to assess specific aspects of students’ experiences with ChatGPT, such as its perceived usefulness and academic support (Farhi et al., 2023; Jo, 2024; Mahapatra, 2024; Ngo, 2023; Shoufan, 2023), but there remains a lack of a comprehensive, validated instrument that captures the full range of educational benefits perceived by students across multiple dimensions (Hoffmann et al., 2024). Recent research has also emphasized the importance of understanding student engagement with AI for purposes such as content creation, academic writing support, and personalized learning (Cotton et al., 2024; Dwivedi et al., 2023). Quantitative studies can complement qualitative inquiries by measuring the extent and perceived effectiveness of AI-assisted learning tools across diverse educational dimensions.
Therefore, the present study attempts to quantitatively analyze the perceived benefits of using ChatGPT among students. As ChatGPT continues to evolve and become more integrated into various aspects of education (Van Dis et al., 2023), such an analysis is imperative for maximizing its benefits. To uncover the underlying structure of the perceived benefits of using ChatGPT, the study employs exploratory factor analysis (EFA). Further, confirmatory factor analysis (CFA) is employed to confirm the factor structure and ensure the reliability and validity of the identified categories of benefits. By quantitatively analyzing the benefits of ChatGPT from students’ perspectives, the study seeks to provide an insightful and comprehensive understanding that can enhance its development, implementation, and utilization in the field of education.
The structure of the paper is as follows. Section 2 provides a review of relevant literature and outlines the study’s conceptual framework. Section 3 details the research methodology adopted, while section 4 presents the data analysis and key findings. Finally, section 5 concludes the paper by summarizing the results, discussing their implications, and highlighting the study’s limitations.
To develop a comprehensive scale for measuring the perceived benefits of using ChatGPT for students, we followed a structured process grounded in recommended best practices in scale development (Boateng et al., 2018; DeVellis, 2016). We began with an extensive review of empirical and conceptual literature on the educational use of ChatGPT and other AI-powered tools, focusing on themes such as personalized learning, knowledge acquisition, academic writing support, efficiency, and intelligent tutoring (Cotton et al., 2024; Dwivedi et al., 2023). Based on this review, we extracted commonly cited benefits and translated them into measurable statements aligned with our target population of students. In line with established item-development practices, we ensured clarity, singularity of meaning, and the use of age-appropriate language to suit our target student population (DeVellis, 2016; Hinkin, 1995). This process resulted in an initial pool of 41 items (see Table 1).
To establish face validity and enhance the clarity and relevance of the items, we conducted in-depth interviews with 10 domain experts, including 5 university teachers and 5 researchers, whose areas of specialization were related to the adoption and impact of educational technologies. Although the final survey was administered to students, these experts were consulted because of their practical understanding of how students interact with educational technologies. Their insights ensured that the item phrasing was age-appropriate, clearly interpretable, and aligned with learning contexts. Based on their feedback, 11 items that were observed to be redundant were dropped, and a few items were reworded to enhance understandability. Hence, the initial pool was reduced to 30 items.
To validate the benefits quantitatively, we conducted a survey of students studying in higher educational institutions (HEIs) of Delhi – National Capital Region (NCR). Five HEIs were selected using convenience sampling, and self-administered questionnaires were distributed to the students of those HEIs. The questionnaire comprised 30 items assessing the perceived benefits of using ChatGPT (see Table 1), measured on a 5-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). Additionally, it included demographic questions on students’ gender, level of education, academic discipline, and frequency of ChatGPT usage.
TABLE 1. Items on ChatGPT’s benefits
| Items | Reference |
| | Jo (2024) |
| | Farhi et al. (2023) |
| | Mahapatra (2024) |
| | Shoufan (2023) |
| | Ngo (2023) |
Out of the 600 distributed questionnaires, a total of 528 responses were received, indicating a response rate of 88%. Prior to analysis, a rigorous data-cleaning process was conducted to ensure data quality and reliability. First, responses were checked for missing data. Questionnaires with substantial missing values (e.g., more than 10% unanswered questions) were excluded, as incomplete data can bias results (Hair et al., 2019). Second, responses were examined for vague or inconsistent answers, such as uniform use of a single response option (e.g., marking all 3s or all 5s), which is indicative of straight-lining or inattentive responding (Meade & Craig, 2012). Third, entries were screened for contradictory responses across related items. In total, 13 responses were excluded based on these criteria, resulting in a final valid sample of 515 responses used for further analysis.
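As an illustration, the first two screening rules can be expressed in a few lines of pandas. The file name, the item column labels (I1–I30), and the operationalization of straight-lining as zero variance across all items are assumptions made for this sketch; the study does not report its exact implementation.

```python
import pandas as pd

df = pd.read_csv("responses.csv")              # hypothetical raw survey export
item_cols = [f"I{i}" for i in range(1, 31)]    # the 30 Likert items (assumed labels)

# Rule 1: drop questionnaires with more than 10% unanswered items
missing_share = df[item_cols].isna().mean(axis=1)
df = df[missing_share <= 0.10]

# Rule 2: drop straight-lined responses (the same option chosen for every item)
df = df[df[item_cols].nunique(axis=1) > 1]
```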
Table 2 indicates the profile of the respondents.
TABLE 2. Sample Profile
| Variable | Categories | No. of respondents | Response % |
| Gender | Female | 271 | 52.6 |
| | Male | 244 | 47.4 |
| Level of education | Under-graduate | 312 | 60.6 |
| | Post-graduate | 147 | 28.5 |
| | Doctoral | 56 | 10.9 |
| Educational discipline | Management | 218 | 42.3 |
| | Engineering | 96 | 18.6 |
| | Science | 75 | 14.6 |
| | Commerce | 62 | 12.0 |
| | Arts and humanities | 23 | 4.5 |
| | Media and communication | 41 | 8.0 |
| Frequency of using ChatGPT | Daily | 225 | 43.7 |
| | Weekly | 215 | 41.7 |
| | Occasional | 75 | 14.6 |
We employed EFA and CFA techniques to identify various categories of ChatGPT’s benefits. Following the cross-validation approach recommended in the literature (Murphy, 1984), we randomly divided the sample into two parts: a derivation sample and a calibration sample. The derivation sample, comprising 300 responses, was used to explore the underlying factor structure through EFA. This sample size was deemed adequate as it satisfies the widely accepted guideline of at least 5 to 10 responses per item (Hair et al., 2019) and meets Comrey and Lee’s (1992) classification of 300 as a “good” sample for factor analysis. The calibration sample, comprising 215 responses, was used to confirm the dimensions of the benefits and ensure their reliability and validity through CFA. A sample of over 200 is considered sufficient for CFA, especially for models of moderate complexity with a reasonable number of constructs (Hair et al., 2019; Kline, 2016). Therefore, both sample sizes meet established psychometric standards for robust factor analytic procedures.
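Continuing the sketch above, the random split into derivation and calibration samples can be reproduced with a single pandas call; the seed is arbitrary and included only for reproducibility.

```python
# Random split of the cleaned responses into the two analysis samples
derivation = df.sample(n=300, random_state=42)   # derivation sample for EFA
calibration = df.drop(derivation.index)          # remaining 215 responses for CFA
```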
To assess the presence of common method variance in the self-reported data, Harman’s single-factor test was conducted on the derivation sample (n = 300) prior to EFA. An unrotated principal component analysis was conducted on all observed items. The results revealed that the first factor accounted for only 33.8% of the total variance, which is less than the 50% threshold recommended by Podsakoff et al. (2003). This indicates that common method bias is not a major concern in the present study.
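Operationally, Harman’s single-factor test is an unrotated principal component analysis of all items, checking whether the first component absorbs more than half of the total variance. A minimal sketch with scikit-learn, reusing the assumed column labels from above:

```python
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Unrotated PCA on the standardized derivation-sample items
X = StandardScaler().fit_transform(derivation[item_cols].dropna())
first_share = PCA().fit(X).explained_variance_ratio_[0]
print(f"Variance explained by the first factor: {first_share:.1%}")  # study: 33.8% < 50%
```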
EFA was conducted using the derivation sample with 30 items. To check the feasibility of employing EFA, we checked the results of Bartlett’s test of sphericity, which were found to be significant (chi-square = 7551.105, df = 351, p<0.01). Further, the Kaiser–Meyer–Olkin (KMO) measure was found to be 0.877, which indicated the sampling adequacy of the data (Hair et al., 2013). Factors were extracted using Principal Component Analysis (PCA) with varimax rotation. PCA was chosen as a data reduction technique to identify key components that account for the maximum variance in the observed variables without assuming any specific underlying latent structure (Hair et al., 2019; Tabachnick & Fidell, 2013). Given the exploratory nature of this study and the focus on identifying dimensions of ChatGPT’s perceived benefits, PCA served as a robust method to extract the most interpretable factor solution. Moreover, varimax rotation, which is an orthogonal rotation method, was applied to maximize the variance of squared loadings within each factor and enhance interpretability by minimizing cross-loadings (Field, 2018). This is particularly useful in early-stage scale development, where the assumption is that the extracted dimensions are relatively independent. Unlike methods such as Maximum Likelihood (ML) estimation, which are more suitable for confirmatory purposes (Fabrigar et al., 1999), PCA is widely recommended when the objective is to summarize data and explore its underlying structure. Hence, PCA with varimax rotation was considered appropriate and adequate for identifying the preliminary factor structure prior to model confirmation through CFA.
Following the recommendations of Hair et al. (2013), we deleted 3 items (i.e., I10, I15, and I20) that had factor loadings < 0.5, which resulted in 27 items. The eigenvalue criterion (Hair et al., 2013) was applied to determine the number of factors to retain, with factors having eigenvalues greater than 1 being considered for inclusion. Using this criterion, six factors were extracted, which together explained 78.239% of the variance in the data. Table 3 reports the eigenvalues of the extracted factors.
TABLE 3. Eigenvalues
| Factor | Eigenvalue | % of variance | Cumulative % of variance |
| 1 | 9.148 | 33.882 | 33.882 |
| 2 | 4.019 | 14.886 | 48.768 |
| 3 | 2.887 | 10.691 | 59.459 |
| 4 | 2.509 | 9.292 | 68.752 |
| 5 | 1.431 | 5.299 | 74.050 |
| 6 | 1.131 | 4.188 | 78.239 |
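The EFA procedure described above (Bartlett’s test, KMO, principal component extraction with varimax rotation, eigenvalue-based retention, and screening of loadings below 0.5) can be sketched with the open-source factor_analyzer package. This is an illustrative reconstruction rather than the study’s own workflow, and it shows a single pass of what is in practice an iterative item-screening process.

```python
import pandas as pd
from factor_analyzer import (FactorAnalyzer, calculate_bartlett_sphericity,
                             calculate_kmo)

items = derivation[item_cols].dropna()

chi2, p = calculate_bartlett_sphericity(items)    # feasibility of EFA
_, kmo_total = calculate_kmo(items)               # sampling adequacy (study: 0.877)

fa = FactorAnalyzer(n_factors=6, rotation="varimax", method="principal")
fa.fit(items)

eigenvalues, _ = fa.get_eigenvalues()             # retain factors with eigenvalue > 1
loadings = pd.DataFrame(fa.loadings_, index=items.columns)
weak_items = loadings.abs().max(axis=1) < 0.5     # deletion candidates (loading < 0.5)
```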
Table 4 shows the loadings of items on the extracted factors. As can be observed from the table, the items I21, I22, I23, I24, I29, I8 and I9 loaded together on one factor. As these items encompass the various aspects of how ChatGPT aids in the learning process, we labelled this factor “Learning support” (LS). The next factor was labelled “Content generation” (CG), as the items loaded on this factor (i.e., I11, I12, I13 and I14) indicate how ChatGPT can facilitate content generation by providing writing ideas to students. The factor related to the items I1, I2, I27, and I19 was labelled “Knowledge acquisition” (KA), as these items involve ChatGPT’s usage for expanding existing knowledge and generating new insights. The next factor was named “Personalized learning” (PL), as the items loaded on this factor (i.e., I3, I4, I25 and I30) relate to ChatGPT’s ability to adapt to the unique requirements of students. The next factor was found to be related to the items I16, I17, I18 and I26, which indicate how ChatGPT assists students in refining and structuring their ideas, thereby enhancing the clarity of their written work. Hence, we named this factor “Academic writing” (AW). Finally, the items I5, I6, I7 and I28, which indicate how ChatGPT enables students to complete their tasks more effectively and efficiently, loaded together. Hence, the factor underlying these items was labelled “Efficiency” (EF).
TABLE 4. Factor loadings
| Factor | Item code | Item | Loading |
| Learning support (LS) | I21 | ChatGPT is a helpful and effective tool for learning | 0.762 |
| | I22 | ChatGPT is good as a complementary learning resource | 0.750 |
| | I23 | ChatGPT provides explanations that are effective | 0.747 |
| | I24 | ChatGPT responses are well-structured | 0.733 |
| | I29 | ChatGPT can assist students in understanding theories and concepts more effectively | 0.727 |
| | I8 | ChatGPT is an effective learning and educational tool | 0.709 |
| | I9 | ChatGPT helps me in meeting my learning objectives | 0.686 |
| Content generation (CG) | I11 | ChatGPT assists me to write articles and essays | 0.924 |
| | I12 | ChatGPT provides me with the best possible writing ideas | 0.909 |
| | I13 | ChatGPT provides creative ideas to share with students | 0.889 |
| | I14 | ChatGPT assists in language translation | 0.838 |
| Knowledge acquisition (KA) | I1 | ChatGPT enables me to build new knowledge based on existing knowledge | 0.850 |
| | I2 | ChatGPT helps me gather varied information from various resources | 0.824 |
| | I27 | ChatGPT can provide information in diverse fields | 0.778 |
| | I19 | ChatGPT provides particular and specific information on the topic | 0.777 |
| Personalized learning (PL) | I3 | ChatGPT can give me specific and tailored answers to my questions | 0.894 |
| | I4 | ChatGPT can create personalized answers based on my input | 0.879 |
| | I25 | ChatGPT can provide personalized tutoring and feedback according to my learning needs and progress | 0.878 |
| | I30 | ChatGPT is capable of personalizing learning for students by delivering adaptive and individualized learning experiences | 0.844 |
| Academic writing (AW) | I16 | ChatGPT enhances accuracy in sentence structure | 0.849 |
| | I17 | ChatGPT provides explanations for language errors | 0.817 |
| | I18 | ChatGPT helps in improving accuracy of word choice | 0.787 |
| | I26 | ChatGPT can illuminate ideas in writing | 0.747 |
| Efficiency (EF) | I5 | ChatGPT enhances productivity in task completion | 0.924 |
| | I6 | ChatGPT simplifies task completion | 0.908 |
| | I7 | ChatGPT increases my productivity | 0.872 |
| | I28 | ChatGPT can help students save time | 0.772 |
The internal consistency of the six factors extracted through EFA was assessed using Cronbach’s alpha coefficients (see Table 5). All factors exhibited high internal consistency, with Cronbach’s alpha coefficients exceeding the 0.70 threshold (Nunnally & Bernstein, 1994).
TABLE 5. Internal consistency
| Factor | No. of Items | Cronbach’s alpha coefficient |
| Learning support | 7 | 0.917 |
| Content generation | 4 | 0.941 |
| Knowledge acquisition | 4 | 0.950 |
| Personalized learning | 4 | 0.905 |
| Academic writing | 4 | 0.889 |
| Efficiency | 4 | 0.900 |
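Cronbach’s alpha for any of these factors follows directly from the variance decomposition of its item scores. A minimal implementation, shown as an assumed example for the four Content generation items:

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

cg_alpha = cronbach_alpha(derivation[["I11", "I12", "I13", "I14"]])  # reported: 0.941
```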
To confirm the factorial structure and validate the reliability and validity of the factors that emerged from EFA, we performed CFA using the calibration sample of 215 responses. CFA was conducted using two software packages, namely WarpPLS and AMOS. The dual use of these platforms was adopted to enhance the robustness and credibility of the model evaluation. WarpPLS employs a Partial Least Squares Structural Equation Modeling (PLS-SEM) approach, which is variance-based and primarily used for prediction-oriented analysis with relatively smaller sample sizes (Hair et al., 2019; Kock, 2017). On the other hand, AMOS employs a Covariance-Based Structural Equation Modeling (CB-SEM) approach, which is used for theory testing (Hair et al., 2019; Kline, 2016). The use of both methods ensured a comprehensive validation of the measurement model by addressing both predictive relevance (via PLS-SEM) and model fit adequacy (via CB-SEM).
First, CFA was performed using the WarpPLS software (Kock, 2017). We assessed a measurement model comprising the 27 items related to the 6 factors extracted through EFA. Table 6 reports the fit indices for the measurement model. As can be noted from Table 6, the indices indicated a good fit to the data (Kock, 2015).
TABLE 6. Goodness of fit indices
| Model fit index | Cut-off criterion | Value |
| Average path coefficient (APC) | Significant | 0.203, P<0.001 |
| Average R-squared (ARS) | Significant | 0.302, P<0.001 |
| Average adjusted R-squared (AARS) | Significant | 0.285, P<0.001 |
| Average block VIF (AVIF) | <= 3.3 | 1.306 |
| Average full collinearity VIF (AFVIF) | <= 3.3 | 1.528 |
| Tenenhaus GoF (GoF) | >= 0.36 | 0.492 |
| Simpson’s paradox ratio (SPR) | >= 0.7 | 1 |
| R-squared contribution ratio (RSCR) | >= 0.9 | 1 |
| Statistical suppression ratio (SSR) | >= 0.7 | 0.800 |
| Nonlinear bivariate causality direction ratio (NLBCDR) | >= 0.7 | 0.800 |
Table 7 illustrates the results of CFA. As can be observed from the table, the standardized loadings of all items were significant (p<0.001) and greater than 0.50, thereby providing support for indicator reliability (Hair et al., 2013). Further, the values of composite reliability (CR), as well as Cronbach’s alpha (CA) for all 6 factors were above 0.70, which indicated their reliability (Hair et al., 2013). The values of average variance extracted (AVE) of all factors were greater than 0.50, which provided support for convergent validity (Fornell & Larcker, 1981).
TABLE 7. Confirmatory factor analysis (reliability and convergent validity) - WarpPLS
| Factor | Item code | Item | Item loading | CR | CA | AVE |
| Learning support (LS) | I21 | ChatGPT is a helpful and effective tool for learning | 0.908 | 0.954 | 0.944 | 0.749 |
| | I22 | ChatGPT is good as a complementary learning resource | 0.891 | | | |
| | I23 | ChatGPT provides explanations that are effective | 0.878 | | | |
| | I24 | ChatGPT responses are well-structured | 0.883 | | | |
| | I29 | ChatGPT can assist students in understanding theories and concepts more effectively | 0.814 | | | |
| | I8 | ChatGPT is an effective learning and educational tool | 0.854 | | | |
| | I9 | ChatGPT helps me in meeting my learning objectives | 0.825 | | | |
| Content generation (CG) | I11 | ChatGPT assists me to write articles and essays | 0.907 | 0.963 | 0.948 | 0.866 |
| | I12 | ChatGPT provides me with the best possible writing ideas | 0.910 | | | |
| | I13 | ChatGPT provides creative ideas to share with students | 0.962 | | | |
| | I14 | ChatGPT assists in language translation | 0.942 | | | |
| Knowledge acquisition (KA) | I1 | ChatGPT enables me to build new knowledge based on existing knowledge | 0.957 | 0.971 | 0.960 | 0.893 |
| | I2 | ChatGPT helps me gather varied information from various resources | 0.959 | | | |
| | I27 | ChatGPT can provide information in diverse fields | 0.937 | | | |
| | I19 | ChatGPT provides particular and specific information on the topic | 0.927 | | | |
| Personalized learning (PL) | I3 | ChatGPT can give me specific and tailored answers to my questions | 0.791 | 0.899 | 0.850 | 0.691 |
| | I4 | ChatGPT can create personalized answers based on my input | 0.863 | | | |
| | I25 | ChatGPT can provide personalized tutoring and feedback according to my learning needs and progress | 0.830 | | | |
| | I30 | ChatGPT is capable of personalizing learning for students by delivering adaptive and individualized learning experiences | 0.839 | | | |
| Academic writing (AW) | I16 | ChatGPT enhances accuracy in sentence structure | 0.885 | 0.899 | 0.850 | 0.691 |
| | I17 | ChatGPT provides explanations for language errors | 0.888 | | | |
| | I18 | ChatGPT helps in improving accuracy of word choice | 0.881 | | | |
| | I26 | ChatGPT can illuminate ideas in writing | 0.853 | | | |
| Efficiency (EF) | I5 | ChatGPT enhances productivity in task completion | 0.934 | 0.954 | 0.934 | 0.839 |
| | I6 | ChatGPT simplifies task completion | 0.956 | | | |
| | I7 | ChatGPT increases my productivity | 0.813 | | | |
| | I28 | ChatGPT can help students save time | 0.952 | | | |
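Under the usual assumption of uncorrelated measurement errors, both composite reliability and AVE are simple functions of the standardized loadings. The sketch below reproduces the Content generation values from Table 7.

```python
import numpy as np

def cr_and_ave(loadings):
    lam = np.asarray(loadings, dtype=float)
    error_var = 1 - lam**2                           # standardized error variances
    cr = lam.sum()**2 / (lam.sum()**2 + error_var.sum())
    ave = np.mean(lam**2)
    return cr, ave

cr, ave = cr_and_ave([0.907, 0.910, 0.962, 0.942])   # CG items: CR = 0.963, AVE = 0.866
```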
To establish discriminant validity, the correlation coefficients between constructs were compared with the square roots of the average variance extracted (AVE) (see Table 8). Since all inter-construct correlations (off-diagonal) were lower than the corresponding square roots of AVE (reported on the diagonal), discriminant validity was supported in accordance with Fornell and Larcker (1981). Additionally, the heterotrait–monotrait (HTMT) ratio was employed as a complementary method. As shown in Table 9, all HTMT values were below the recommended threshold of 0.85 (Henseler et al., 2015), further confirming discriminant validity.
TABLE 8. Discriminant validity - WarpPLS
| LS | CG | KA | PL | AW | EF | |
| LS | 0.865 | |||||
| CG | 0.207 | 0.93 | ||||
| KA | 0.631 | 0.128 | 0.945 | |||
| PL | -0.034 | 0.137 | 0.114 | 0.831 | ||
| AW | 0.515 | 0.033 | 0.582 | 0.132 | 0.877 | |
| EF | 0.346 | -0.045 | 0.177 | 0.034 | 0.319 | 0.916 |
TABLE 9. Discriminant validity - WarpPLS (HTMT Ratios)
| LS | CG | KA | PL | AW | EF | |
| LS | ||||||
| CG | 0.219 | |||||
| KA | 0.664 | 0.134 | ||||
| PL | 0.071 | 0.152 | 0.125 | |||
| AW | 0.559 | 0.047 | 0.625 | 0.152 | ||
| EF | 0.370 | 0.050 | 0.189 | 0.075 | 0.349 |
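The HTMT ratio for a pair of constructs is the mean of the between-construct item correlations divided by the geometric mean of the average within-construct item correlations (Henseler et al., 2015). A sketch of the computation on the calibration sample, using the item groupings from Table 4:

```python
import numpy as np
import pandas as pd

def htmt(data: pd.DataFrame, items_a: list, items_b: list) -> float:
    corr = data[items_a + items_b].corr().abs()
    hetero = corr.loc[items_a, items_b].to_numpy().mean()     # between constructs

    def mono(items):                                          # within one construct
        block = corr.loc[items, items].to_numpy()
        return block[np.triu_indices_from(block, k=1)].mean()

    return hetero / np.sqrt(mono(items_a) * mono(items_b))

# e.g., HTMT between Knowledge acquisition and Learning support (reported: 0.664)
ka_ls = htmt(calibration, ["I1", "I2", "I27", "I19"],
             ["I21", "I22", "I23", "I24", "I29", "I8", "I9"])
```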
To further validate the measurement model, CFA was performed with AMOS. During model estimation, three items (LS2, EF4, and PL4) were removed due to low standardized factor loadings (< 0.70). This threshold is commonly used to ensure adequate indicator reliability and convergent validity (Byrne, 2016; Hair et al., 2019). Moreover, based on the modification indices, the error terms of items KA2 and KA4 were allowed to covary. This adjustment was theoretically justified as both items share conceptual similarity (Byrne, 2016); allowing their errors to covary led to a better-fitting model without undermining construct validity. The final model was assessed using well-established fit indices, including the χ²/df ratio, Comparative Fit Index (CFI), Tucker-Lewis Index (TLI), Root Mean Square Error of Approximation (RMSEA), and Standardized Root Mean Square Residual (SRMR). The model demonstrated acceptable fit: χ²/df = 2.27, CFI = 0.944, TLI = 0.935, RMSEA = 0.077, and SRMR = 0.053. These values fall within the commonly recommended thresholds (CFI, TLI > 0.90; RMSEA, SRMR < 0.08), indicating an adequate fit between the data and the hypothesized model (Byrne, 2016; Hu & Bentler, 1999).
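For readers without an AMOS license, a covariance-based model with the same structure can be specified in lavaan-style syntax using the open-source semopy package. This is an illustrative re-specification, not the study’s own script; the renamed item columns (per Table 10) in `calibration_items` are an assumption.

```python
import semopy

model_desc = """
LS =~ LS1 + LS3 + LS4 + LS5 + LS6 + LS7
CG =~ CG1 + CG2 + CG3 + CG4
KA =~ KA1 + KA2 + KA3 + KA4
PL =~ PL1 + PL2 + PL3
AW =~ AW1 + AW2 + AW3 + AW4
EF =~ EF1 + EF2 + EF3
KA2 ~~ KA4    # error covariance suggested by the modification indices
"""

model = semopy.Model(model_desc)
model.fit(calibration_items)               # calibration data with renamed item columns
fit_stats = semopy.calc_stats(model)       # reports chi2, df, CFI, TLI, RMSEA, etc.
```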
To assess the reliability and validity of the model constructs, we examined the standardized item loadings, omega reliability coefficients, and AVE values (see Table 10). All standardized loadings of the retained items exceeded the recommended threshold of 0.70 (Hair et al., 2019), thereby indicating strong indicator reliability. The omega coefficients for all constructs were above 0.80, suggesting high internal consistency (McNeish, 2018). Moreover, AVE values for all constructs exceeded the recommended threshold of 0.50, thereby supporting convergent validity (Fornell & Larcker, 1981).
Discriminant validity was evaluated using the Fornell–Larcker (1981) criterion (see Table 11). For all the constructs, the square roots of the AVEs were greater than the inter-construct correlations. Hence, discriminant validity was ensured for the model constructs (Fornell & Larcker, 1981; Hair et al., 2019).
TABLE 10. Confirmatory factor analysis (reliability and convergent validity) – AMOS
| Construct | Item | Standardized loading | Omega index | AVE |
| AW | AW1 | .835 | 0.900 | 0.692 |
| | AW2 | .863 | | |
| | AW3 | .838 | | |
| | AW4 | .790 | | |
| PL | PL1 | .806 | 0.833 | 0.623 |
| | PL2 | .781 | | |
| | PL3 | .781 | | |
| | PL4 | .699 (dropped) | | |
| CG | CG1 | .990 | 0.945 | 0.813 |
| | CG2 | .958 | | |
| | CG3 | .825 | | |
| | CG4 | .820 | | |
| EF | EF1 | .989 | 0.961 | 0.893 |
| | EF2 | .974 | | |
| | EF3 | .867 | | |
| | EF4 | .676 (dropped) | | |
| KA | KA1 | .984 | 0.952 | 0.833 |
| | KA2 | .884 | | |
| | KA3 | .952 | | |
| | KA4 | .822 | | |
| LS | LS1 | .976 | 0.932 | 0.699 |
| | LS3 | .960 | | |
| | LS4 | .818 | | |
| | LS5 | .719 | | |
| | LS6 | .796 | | |
| | LS7 | .706 | | |
| | LS2 | .669 (dropped) | | |
TABLE 11. Discriminant validity – AMOS
| Construct | AW | PL | CG | EF | KA | LS |
| AW | 0.832 | |||||
| PL | 0.141 | 0.789 | ||||
| CG | 0.044 | 0.173 | 0.902 | |||
| EF | 0.313 | 0.006 | –0.042 | 0.945 | ||
| KA | 0.594 | 0.136 | 0.169 | 0.161 | 0.913 | |
| LS | 0.551 | –0.043 | 0.184 | 0.336 | 0.601 | 0.836 |
| Note: Diagonal elements indicate square roots of AVE | ||||||
The study quantitatively analyzed the benefits of using ChatGPT from students’ perspectives. The results indicate six key categories, namely Learning Support, Content Generation, Knowledge Acquisition, Personalized Learning, Academic Writing, and Efficiency. Each category reflects different aspects of how ChatGPT enhances the educational experience for students.
Learning support encompasses the various ways in which ChatGPT aids students’ learning process. The findings indicate that students consider ChatGPT a highly beneficial tool for their learning. ChatGPT supports learning by providing clear, well-structured explanations and by breaking down complex topics into more digestible segments, thereby reducing cognitive load and enhancing comprehension. Its ability to simplify complex theories and concepts is particularly valuable in fields that require deep analytical thinking, such as science, technology, engineering, and mathematics (STEM). This finding aligns with those of Shoufan (2023) and Ngo (2023), who noted that ChatGPT makes challenging topics more understandable to students by using simpler language and providing relatable examples. This simplification not only aids understanding but also boosts students’ confidence in tackling difficult subjects, which can lead to increased engagement and improved academic performance. Moreover, ChatGPT can serve as an excellent complementary learning resource, filling gaps that may remain in students’ understanding after traditional instruction. By offering diverse resources and supplementary information, ChatGPT enhances students’ overall learning experience, ensuring they have access to a wide array of knowledge (Javaid et al., 2023).
The study also highlighted content generation as one of the key benefits of using ChatGPT. Content generation encompasses the ways in which ChatGPT assists students in creating written content. Our findings indicate that ChatGPT helps students in writing reports, essays, and articles, which is a fundamental part of their academic work. Such support is particularly valuable in the early stages of content creation, where generating ideas and organizing thoughts can be challenging for students. ChatGPT can streamline this process by offering structured outlines and guiding students through the development of their arguments (Haleem et al., 2022; Vázquez-Cano et al., 2023). Moreover, ChatGPT’s ability to provide a wide range of ideas based on specific queries can help students explore different perspectives for their assignments and projects. Our finding is in line with that of Vázquez-Cano et al. (2023), who demonstrated that the summaries obtained through ChatGPT are outstanding in terms of content and style.
Knowledge acquisition emerged as another benefit of using ChatGPT. The findings reveal that students perceive ChatGPT as a valuable tool for expanding their knowledge base. ChatGPT not only enables them to build on their existing knowledge, but also helps them acquire new information across various subjects. This benefit is crucial in today’s educational environment, where the ability to rapidly acquire and integrate new knowledge across various disciplines is highly valued. Such cross-disciplinary knowledge not only gives an impetus to students’ learning process but also enables them to deal with the complexities of real-world problem-solving. ChatGPT’s access to various learning resources allows students to explore topics in depth and from multiple perspectives, thus enriching their overall educational experience. This aligns with research by Jo (2024), who opines that ChatGPT contributes to knowledge acquisition by providing information that is immediate, accurate, and personalized. By expanding students’ knowledge base and providing them access to diverse information, ChatGPT serves as an invaluable tool in modern education, where knowledge acquisition and application are key to academic success.
The findings of our study suggest personalized learning as another impactful benefit of using ChatGPT. This benefit is characterized by ChatGPT’s ability to customize its responses according to the needs of individual students. Such customization allows students to receive relevant information that is tailored to their current understanding and learning goals. Prior studies (e.g., Firat, 2023) indicate that personalized responses from ChatGPT can lead to more effective learning outcomes by removing barriers such as shyness and hesitation in asking questions. ChatGPT also enhances personalized learning by offering adaptive learning experiences. It can adjust the level of complexity and the nature of its responses according to the student’s progress, thereby providing a learning experience that evolves with the student. This adaptability is crucial for keeping students engaged and motivated (Rasul et al., 2023).
The study identified academic writing as another benefit of ChatGPT. Our findings indicate that students perceive that ChatGPT can help improve sentence structure, vocabulary accuracy, and overall writing quality. ChatGPT plays a crucial role in refining and editing text: after drafting their content, students can use ChatGPT to improve grammar, sentence structure, and overall clarity. ChatGPT’s ability to suggest grammatical corrections and alternative phrasings helps students enhance their writing proficiency and produce more professional reports (Vázquez-Cano et al., 2023). Our finding resonates with those of Mahapatra (2024), who observed that ChatGPT has a positive impact on academic writing skills. ChatGPT provides real-time feedback on grammatical errors, which is particularly beneficial for students who may struggle with the nuances of academic writing in the English language (Mahapatra, 2024).
Finally, the study found that ChatGPT is considered a valuable aid for increasing efficiency and productivity by enabling faster task completion. In today’s dynamic educational environment, time management and productivity are key to academic success. ChatGPT serves as a very useful tool for students by reducing the time spent on information retrieval. Instead of spending hours searching for reliable information from various sources, students can use ChatGPT to quickly gather and organize the information they need (Jo, 2024; Ngo, 2023). This efficiency in accessing and utilizing information not only saves students’ time but also enhances the depth and quality of their academic work. ChatGPT also boosts students’ productivity by helping them manage their workload more effectively. By offering tools for idea generation, content organization, and content refinement, ChatGPT enables students to work more efficiently and produce higher-quality work in less time (Albadarin et al., 2024).
On the basis of our findings, it can be concluded that ChatGPT can serve as an important tool in the modern learning ecosystem. By providing learning support and enabling students to acquire knowledge, generate content, improve their academic writing, and work more efficiently, ChatGPT can significantly contribute to their academic success. By presenting empirical evidence of ChatGPT’s benefits, this study contributes to the growing research on AI in education, extending existing work by categorizing these benefits into distinct, empirically derived dimensions. Practically, the study provides valuable insights for educators, academic institutions, and policymakers on how to effectively integrate ChatGPT and similar AI tools into the curriculum. Educators can leverage the identified benefits to improve student engagement and performance; they should explore ways to integrate ChatGPT and other AI tools into their teaching pedagogies and also teach students to use such tools ethically and beneficially. Considering students’ interest in and inclination towards using ChatGPT, academic institutions should consider investing in AI tools such as ChatGPT, which can be integrated into teaching and learning processes to enrich students’ academic experiences. Further, policymakers should support the adoption of AI tools in education through funding, training, and the development of guidelines and policies that ensure the ethical use of such tools. As the educational landscape continues to evolve with rapid technological advancements, understanding the practical benefits of AI tools such as ChatGPT becomes essential for shaping the future of learning.
Although the study highlights several key benefits associated with ChatGPT, it has a few limitations that should be addressed in future studies. First, the study relies on a convenience sample drawn from a single region in India, which may restrict the broader applicability of its findings. Future research could enhance the validity of the proposed framework of ChatGPT’s benefits by employing more diverse and representative samples. Second, the findings of the study are based on students’ perspectives only. Future studies could explore the perspectives of teachers to obtain a more holistic picture of the benefits of ChatGPT. Third, the study has not explored the interrelationships among the identified categories of benefits. Future studies could explore those relationships and could also examine the impact of these benefits on educational outcomes such as student engagement and motivation. Fourth, the study did not consider potential cultural or national differences in how students perceive and use ChatGPT. Future research could investigate cross-cultural perspectives to deepen the relevance and generalizability of the findings. Finally, the study did not perform comparative analyses between the eliminated and retained questionnaires. While responses were excluded based on criteria such as incompleteness or vague answering patterns, the study did not explore whether these excluded participants differed systematically from the retained ones. Future studies are encouraged to test for such differences.
Conception and Design – Author 1.
Acquisition of data – Author 2.
Analysis and interpretation of Data – Author 2.
Drafting the article- Author 1.
Revising it for intellectual content – Author 1, Author 2.
Final approval of the completed article – Author 1, Author 2.
No funding was received.
Albadarin, Y., Saqr, M., Pope, N., & Tukiainen, M. (2024). A systematic literature review of empirical research on ChatGPT in education. Discover Education, 3(1), 60. https://doi.org/10.1007/s44217-024-00138-2
Baidoo-Anu, D., & Ansah, L. O. (2023). Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. Journal of AI, 7(1), 52-62. http://dx.doi.org/10.2139/ssrn.4337484
Boateng, G. O., Neilands, T. B., Frongillo, E. A., Melgar-Quiñonez, H. R., & Young, S. L. (2018). Best practices for developing and validating scales for health, social, and behavioral research: a primer. Frontiers in public health, 6, 149. https://doi.org/10.3389/fpubh.2018.00149
Byrne, B. M. (2016). Structural equation modeling with AMOS: Basic concepts, applications, and programming (3rd ed.). Routledge.
Chiu, T. K., & Churchill, D. (2016). Design of learning objects for concept learning: Effects of multimedia learning principles and an instructional approach. Interactive Learning Environments, 24(6), 1355-1370. https://doi.org/10.1080/10494820.2015.1006237
Cope, A. B., Ramirez, C., DeVellis, R. F., Agans, R., Schoenbach, V. J., & Adimora, A. A. (2016). Measuring concurrency attitudes: Development and validation of a vignette-based scale. PloS One, 11(10), e0163947. https://doi.org/10.1371/journal.pone.0163947
Comrey, A. L., & Lee, H. B. (1992). Interpretation and Application of Factor Analytic Results. In A. L. Comrey, & H. B. Lee (Eds.), A First Course in Factor Analysis (p. 2). Lawrence Eribaum Associates. https://doi.org/10.4324/9781315827506
Cotton, D. R., Cotton, P. A., & Shipway, J. R. (2024). Chatting and cheating: Ensuring academic integrity in the era of ChatGPT. Innovations in education and teaching international, 61(2), 228-239. https://doi.org/10.1080/14703297.2023.2190148
Dai, Y., Liu, A., & Lim, C. P. (2023). Reconceptualizing ChatGPT and generative AI as a student-driven innovation in higher education. Procedia CIRP, 119, 84-90. https://doi.org/10.1016/j.procir.2023.05.002
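DeVellis, R. F. (2016). Scale development: Theory and applications (4th ed.). Sage Publications.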
Dwivedi, Y. K., Kshetri, N., Hughes, L., Slade, E. L., Jeyaraj, A., Kar, A. K., Baabdullah, A. M., Koohang, A., Raghavan, V., Ahuja, M., Albanna, H., Albashrawi, M. A., Al-Busaidi, A. S., Balakrishnan, J., Barlette, Y., Basu, S., Bose, I., Brooks, L., Buhalis, D., ... Wright, R. (2023). “So what if ChatGPT wrote it?” Multidisciplinary perspectives on opportunities, challenges and implications of generative conversational AI for research, practice and policy. International Journal of Information Management, 71, 102642. https://doi.org/10.1016/j.ijinfomgt.2023.102642
Fabrigar, L. R., Wegener, D. T., MacCallum, R. C., & Strahan, E. J. (1999). Evaluating the use of exploratory factor analysis in psychological research. Psychological methods, 4(3), 272. https://doi.org/10.1037//1082-989X.4.3.272
Farhi, F., Jeljeli, R., Aburezeq, I., Dweikat, F. F., Al-shami, S. A., & Slamene, R. (2023). Analyzing the students’ views, concerns, and perceived ethics about chat GPT usage. Computers and Education: Artificial Intelligence, 5, 100180. https://doi.org/10.1016/j.caeai.2023.100180
Farrokhnia, M., Banihashem, S. K., Noroozi, O., & Wals, A. (2024). A SWOT analysis of ChatGPT: Implications for educational practice and research. Innovations in Education and Teaching International, 61(3), 460-474. https://doi.org/10.1080/14703297.2023.2195846
Field, A.P. (2018) Discovering Statistics Using IBM SPSS Statistics. Sage.
Firat, M. (2023). What ChatGPT means for universities: Perceptions of scholars and students. Journal of Applied Learning and Teaching, 6(1), 57-63. https://doi.org/10.37074/jalt.2023.6.1.22
Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of marketing research, 18(1), 39-50. https://doi.org/10.2307/3151312
Godsk, M., & Elving, P. R. (2024). ChatGPT for Learning in Higher Education: Benefits, Downsides, and Implications. Journal of Interactive Learning Research, 35(1), 31-69. https://doi.org/10.70725/583781fusvis
Hair, J. F., Black, W. C., Babin, B. J., & Anderson, R. E. (2013). Multivariate data analysis: Pearson new international edition. Pearson.
Hair, J. F., Risher, J. J., Sarstedt, M., & Ringle, C. M. (2019). When to use and how to report the results of PLS-SEM. European business review, 31(1), 2-24. https://doi.org/10.1108/EBR-11-2018-0203
Haleem, A., Javaid, M., & Singh, R. P. (2022). An era of ChatGPT as a significant futuristic support tool: A study on features, abilities, and challenges. BenchCouncil transactions on benchmarks, standards and evaluations, 2(4), 100089. https://doi.org/10.1016/j.tbench.2023.100089
Henseler, J., Ringle, C. M., & Sarstedt, M. (2015). A new criterion for assessing discriminant validity in variance-based structural equation modeling. Journal of the academy of marketing science, 43, 115-135. https://doi.org/10.1007/s11747-014-0403-8
Hinkin, T. R. (1995). A review of scale development practices in the study of organizations. Journal of Management, 21(5), 967-988.
Hoffmann, S., Lasarov, W., & Dwivedi, Y. K. (2024). AI-empowered scale development: Testing the potential of ChatGPT. Technological Forecasting and Social Change, 205, 123488. https://doi.org/10.1016/j.techfore.2024.123488
Hojeij, Z., Kuhail, M. A., & ElSayary, A. (2024). Investigating in-service teachers’ views on ChatGPT integration. Interactive Technology and Smart Education. https://doi.org/10.1108/ITSE-04-2024-0094
Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural equation modeling: a multidisciplinary journal, 6(1), 1-55. https://doi.org/10.1080/10705519909540118
Ilić, J., Ivanović, M., & Klašnja-Milićević, A. (2024, November). The Impact of ChatGPT on Student Learning Experience in Higher STEM Education: A Systematic Literature Review. In 2024 21st International Conference on Information Technology Based Higher Education and Training (ITHET) (pp. 1-9). IEEE. https://doi.org/10.1109/ITHET61869.2024.10837649
Imran, M., & Almusharraf, N. (2023). Analyzing the role of ChatGPT as a writing assistant at higher education level: A systematic review of the literature. Contemporary Educational Technology, 15(4), ep464. https://doi.org/10.30935/cedtech/13605
Javaid, M., Haleem, A., Singh, R. P., Khan, S., & Khan, I. H. (2023). Unlocking the opportunities through ChatGPT Tool towards ameliorating the education system. BenchCouncil Transactions on Benchmarks, Standards and Evaluations, 3(2), 100115. https://doi.org/10.1016/j.tbench.2023.100115
Jo, H. (2024). From concerns to benefits: a comprehensive study of ChatGPT usage in education. International Journal of Educational Technology in Higher Education, 21(1), 35. https://doi.org/10.1186/s41239-024-00471-4
Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., ... & Kasneci, G. (2023). ChatGPT for good? On opportunities and challenges of large language models for education. Learning and individual differences, 103, 102274. https://doi.org/10.1016/j.lindif.2023.102274
King, M. R. (2023). Outsourcing Your Faculty Application to ChatGPT: Would this Work? Should this Work? Cellular and Molecular Bioengineering, 16(4), 423-426. https://doi.org/10.1007/s12195-023-00777-9
Kline, R. B. (2016). Principles and practice of structural equation modeling (4th ed.). Guilford Press.
Kock, N. (2015). WarpPLS 5.0 user manual. Laredo, TX: Script Warp Systems.
Kock, N. (2017). Going beyond composites: Conducting a factor-based PLS-SEM analysis. Partial least squares path modeling: Basic concepts, methodological issues and applications, 41-53. https://doi.org/10.1007/978-3-319-64069-3_3
Kohnke, L., Moorhouse, B. L., & Zou, D. (2023). ChatGPT for language teaching and learning. Relc Journal, 54(2), 537-550. https://doi.org/10.1177/00336882231162868
Lee, H. (2024). The rise of ChatGPT: Exploring its potential in medical education. Anatomical sciences education, 17(5), 926-931. https://doi.org/10.1002/ase.2270
Mahapatra, S. (2024). Impact of ChatGPT on ESL students’ academic writing skills: a mixed methods intervention study. Smart Learning Environments, 11(1), 9. https://doi.org/10.1186/s40561-024-00295-9
McMurtrie, B. (2023). ChatGPT is everywhere: Love it or hate it, academics can’t ignore the already pervasive technology. The Chronicle of Higher Education, 69(15), 32-38.
McNeish, D. (2018). Thanks coefficient alpha, we’ll take it from here. Psychological Methods, 23(3), 412. http://dx.doi.org/10.1037/met0000144
Meade, A. W., & Craig, S. B. (2012). Identifying careless responses in survey data. Psychological Methods, 17(3), 437. https://doi.org/10.1037/a0028085
Murphy, K. R. (1984). Cost-benefit considerations in choosing among cross-validation methods. Personnel Psychology, 37(1), 15-22. https://doi.org/10.1111/j.1744-6570.1984.tb02244.x
Ngo, T. T. A. (2023). The perception by university students of the use of ChatGPT in education. International Journal of Emerging Technologies in Learning (Online), 18(17), 4-19. https://doi.org/10.3991/ijet.v18i17.39019
Nunnally, J. C., & Bernstein, I. H. (1994). Psychometric theory (3rd ed.). McGraw-Hill.
Nyhan, R. C., & Marlowe, H. A. (1997). Development and psychometric properties of the organizational trust inventory. Evaluation Review, 21(5), 614-635.
Onyekwere, J., & Enamul Hoque, K. (2023). Relationship between technological change, digitization, and students’ attitudes toward distance learning in Lagos Higher Education institutes. Innoeduca. International Journal of Technology and Educational Innovation, 9(1), 126–142. https://doi.org/10.24310/innoeduca.2023.v9i1.15286
Padilla-Hernández, A. L., Gámiz-Sánchez, V. M., & Romero-López, M. A. (2019). Proficiency levels of teachers’ digital competence: A review of recent international frameworks. Innoeduca. International Journal of Technology and Educational Innovation, 5(2), 140–150. https://doi.org/10.24310/innoeduca.2019.v5i2.5600
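Podsakoff, P. M., MacKenzie, S. B., Lee, J. Y., & Podsakoff, N. P. (2003). Common method biases in behavioral research: A critical review of the literature and recommended remedies. Journal of Applied Psychology, 88(5), 879-903. https://doi.org/10.1037/0021-9010.88.5.879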
Rasul, T., Nair, S., Kalendra, D., Robin, M., de Oliveira Santini, F., Ladeira, W. J., ... & Heathcote, L. (2023). The role of ChatGPT in higher education: Benefits, challenges, and future research directions. Journal of Applied Learning and Teaching, 6(1), 41-56. https://doi.org/10.37074/jalt.2023.6.1.29
Rudolph, J., Tan, S., & Tan, S. (2023). War of the chatbots: Bard, Bing Chat, ChatGPT, Ernie and beyond. The new AI gold rush and its impact on higher education. Journal of Applied Learning and Teaching, 6(1), 364-389. https://doi.org/10.37074/jalt.2023.6.1.23
Sharples, M. (2022). Automated essay writing: An AIED opinion. International journal of artificial intelligence in education, 32(4), 1119-1126. https://doi.org/10.1007/s40593-022-00300-7
Shoufan, A. (2023). Exploring students’ perceptions of ChatGPT: Thematic analysis and follow-up survey. IEEE Access, 11, 38805-38818. https://doi.org/10.1109/ACCESS.2023.3268224
Singh, S., Singh, P., & Kaur, V. (2025). Understanding ChatGPT adoption among higher education students in Punjab, India: an application of UTAUT2 model. Innoeduca: international journal of technology and educational innovation, 11(1), 5-28. https://doi.org/10.24310/ijtei.111.2025.20219
Tabachnick, B. G., & Fidell, L. S. (2013). Using multivariate statistics (6th ed.). Pearson.
Van Dis, E. A., Bollen, J., Zuidema, W., Van Rooij, R., & Bockting, C. L. (2023). ChatGPT: five priorities for research. Nature, 614(7947), 224-226. https://doi.org/10.1038/d41586-023-00288-7
Vázquez-Cano, E., Ramirez-Hurtado, J. M., Saez-Lopez, J. M., & Lopez-Meneses, E. (2023). ChatGPT: The brightest student in the class. Thinking Skills and Creativity, 49, 101380. https://doi.org/10.1016/j.tsc.2023.101380
Veletsianos, G., & Moe, R. (2017). The rise of educational technology as a sociocultural and ideological phenomenon. EDUCAUSE Review (Online).
Wang, J., & Fan, W. (2025). The effect of ChatGPT on students’ learning performance, learning perception, and higher-order thinking: insights from a meta-analysis. Humanities and Social Sciences Communications, 12, 621. https://doi.org/10.1057/s41599-025-04787-y
Zhang, P., & Tur, G. (2024). A systematic review of ChatGPT use in K-12 education. European Journal of Education, 59(2), e12599. https://doi.org/10.1111/ejed.12599