Proposal for a new questionnaire for the evaluation of teaching staff at the Universidad del País Vasco. A psychometric, dimensional and differential study

  1. Lizasoain-Hernández, Luis 1
  2. Etxeberria-Murgiondo, Juan 1
  3. Lukas-Mujika, José Francisco 1
  1. 1 Universidad del País Vasco/Euskal Herriko Unibertsitatea, Lejona, Spain. ROR: https://ror.org/000xsnr85

Journal:
Relieve: Revista ELectrónica de Investigación y EValuación Educativa

ISSN: 1134-4032

Year of publication: 2017

Volume: 23

Issue: 2

Type: Article

DOI: 10.7203/RELIEVE.23.2.10436 (open access)

Abstract

The aim of this article is to analyse the new questionnaire proposed by the Universidad del País Vasco (UPV/EHU) for the evaluation of its teaching staff (SET, student evaluation of teaching). The responses of a sample of 941 students are analysed, and the questionnaire's reliability, dimensionality, and construct and criterion validity are examined, concluding with a differential study that takes into account variables such as gender, disciplinary field, perceived level of difficulty, and interest in the subjects. The results support the conclusion that the instrument has high internal consistency and fits the theoretical dimensions used for its design and construction (planning, process and results), which makes a formative use of the information possible.
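
The abstract refers to internal-consistency (reliability) and dimensionality analyses of the questionnaire. As a purely illustrative sketch, and not the authors' actual code or data, the following Python snippet shows the kind of computation involved: Cronbach's alpha for internal consistency and Horn's parallel analysis, the factor-retention rule discussed in several of the references below (e.g. Glorfeld, 1995; Ledesma & Valero-Mora, 2007). The `responses` matrix is a hypothetical placeholder standing in for the students' item ratings.

```python
# Illustrative sketch only: generic reliability and dimensionality checks,
# not the instrument or analysis reported in the article.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical placeholder data: 941 respondents x 12 Likert-type items (1-5).
responses = rng.integers(1, 6, size=(941, 12)).astype(float)

def cronbach_alpha(x):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def parallel_analysis(x, n_sims=100, percentile=95):
    """Horn's parallel analysis (with Glorfeld's percentile correction): retain the
    components whose observed correlation-matrix eigenvalues exceed the chosen
    percentile of eigenvalues obtained from random data of the same shape."""
    n, k = x.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(x, rowvar=False)))[::-1]
    sims = np.empty((n_sims, k))
    for i in range(n_sims):
        r = rng.standard_normal((n, k))
        sims[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(r, rowvar=False)))[::-1]
    threshold = np.percentile(sims, percentile, axis=0)
    return int(np.sum(obs > threshold))

print(f"Cronbach's alpha: {cronbach_alpha(responses):.3f}")
print(f"Dimensions suggested by parallel analysis: {parallel_analysis(responses)}")
```

With the random placeholder data both values will be near zero; the point of the sketch is the comparison rule itself, which is less prone to over-extraction than the Kaiser eigenvalue-greater-than-one criterion.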

Funding information

This work was carried out with the support of the Vice-Rectorate for Quality and Teaching Innovation of the Universidad del País Vasco (UPV/EHU).

References

  • Adams, Meredith J.D. & Umbach, Paul D. (2012). Nonresponse and online student evaluations of teaching: Understanding the influence of salience, fatigue, and academic environments. Research in Higher Education, 53(5), 576-591. doi: https://doi.org/10.1007/s11162-011-9240-5
  • Addison, W. E., Best, J. & Warrington, J. D. (2006). Students’ Perceptions of Course Difficulty and Their Ratings of the Instructor. College Student Journal, 40(2), 409-416.
  • Alvarado, Elías, Morales, Dionicio & Aguayo, Ernesto (2016). Percepción de la calidad educativa: caso aplicado a estudiantes de la Universidad Autónoma de Nuevo León y del Instituto Tecnológico de Estudios Superiores de Monterrey. Revista de la Educación Superior, 45(180), 55-74. doi: https://doi.org/10.1016/j.resu.2016.06.006
  • Apodaca, Pedro & Grad, Héctor (2002). Análisis dimensional de las opiniones de los alumnos universitarios sobre sus profesores: comparación entre técnicas paramétricas y no paramétricas. Revista de Investigación Educativa, 20(2), 385-409.
  • Apodaca, Pedro & Grad, Héctor (2005). The dimensionality of student ratings of teaching: integration of uni- and multidimensional models. Studies in Higher Education, 30(6), 723-748. doi: https://doi.org/10.1080/03075070500340101
  • Basto, Mário & Pereira, José Manuel. (2012). An SPSS R-Menu for ordinal factor analysis. Journal of Statistical Software, 46(4), 1-29. doi: https://doi.org/10.18637/jss.v046.i04
  • Berk, Ronald A. (2013). Should Global Items on Student Rating Scales Be Used for Summative Decisions? The Journal of Faculty Development, 27(1), 63-68.
  • Burdsal, Charles A. & Harrison, Paul D. (2008). Further evidence supporting the validity of both a multidimensional profile and an overall evaluation of teaching effectiveness. Assessment & Evaluation in Higher Education, 33(5), 567-576. doi: https://doi.org/10.1080/02602930701699049
  • Caldera, Juan F., Carranza, María del R., Jiménez, Alma A. & Pérez, Ignacio (2015). Actitudes de los estudiantes universitarios ante la tutoría. Diseño de una escala de medición. Revista de la Educación Superior, 1(173), 103-124. doi: https://doi.org/10.1016/j.resu.2015.04.004
  • Casero, Antonio (2010). Factores moduladores de la percepción de la calidad docente. RELIEVE, 16(2). doi: http://doi.org/10.7203/relieve.16.2.4135
  • Cattell, Raymond B. (1966). The Scree Test For The Number Of Factors. Multivariate Behavioral Research, 1, 245-276. doi: https://doi.org/10.1207/s15327906mbr0102_10
  • Chen, Guo-Hai & Watkins, David (2010). Stability and correlates of student evaluations of teaching at a Chinese university. Assessment & Evaluation in Higher Education, 35(6), 675-685. doi: https://doi.org/10.1080/02602930902977715
  • Choi, Bo-Keum & Kim, Jae-Woong (2014). The Influence of Student and Course Characteristics on Monotonic Response Patterns in Student Evaluation of Teaching in South Korea. Asia Pacific Education Review, May, 1-10. doi: https://doi.org/10.1007/s12564-014-9332-y
  • Darby, Jenny A. (2007). Are course evaluations subject to a halo effect? Research in Education, 77(1), 46-55. doi: https://doi.org/10.7227/RIE.77.4
  • De Juanas Oliva, Angel & Beltrán Llera, Jesús A. (2014). Valoraciones de los estudiantes de ciencias de la educación sobre la calidad de la docencia universitaria. Educación XX1, 17(1), 59-82. doi: https://doi.org/10.5944/educxx1.17.1.10705
  • Fernández Rico, J. Esteban, Fernández Fernández, Samuel, Álvarez Suárez, Alberto & Martínez Camblor, Pablo (2007). Éxito académico y satisfacción de estudiantes con la enseñanza universitaria. RELIEVE, 13(2). doi: http://doi.org/10.7203/relieve.13.2.4207
  • Ginns, Paul, Prosser, Michael & Barrie, Simon (2007). Students’ perceptions of teaching quality in higher education: The perspective of currently enrolled students. Studies in Higher Education, 32(5), 603-615. doi: https://doi.org/10.1080/03075070701573773
  • Glorfeld, Louis W. (1995). An Improvement on Horn's Parallel Analysis Methodology for Selecting the Correct Number of Factors to Retain. Educational & Psychological Measurement, 55, 377-393. doi: https://doi.org/10.1177/0013164495055003002
  • González López, Ignacio (2003). Determinación de los elementos que condicionan la calidad de la universidad: aplicación práctica de un análisis factorial. RELIEVE, 9(1). doi: http://doi.org/10.7203/relieve.9.1.4351
  • Haarala-Muhonen, Anne, Ruohoniemi, Mirja, Katajavuori, Nina & Lindblom-Ylänne, Sari (2011). Comparison of students’ perceptions of their teaching–learning environments in three professional academic disciplines: A valuable tool for quality enhancement. Learning Environments Research, 14(2), 155-169. doi: https://doi.org/10.1007/s10984-011-9087-x
  • Hooper, Daire, Coughlan, Joseph & Mullen, Michael (2008). Structural Equation Modelling: Guidelines for Determining Model Fit. The Electronic Journal of Business Research Methods, 6(1), 53-60.
  • Hoyuelos, F. J., Ibáñez, J., Jerónimo, E., San Martín, S. & Santamaría, M. (2014). Variables definitorias del perfil del profesor/a universitario/a ideal desde la perspectiva de los estudiantes pre-universitarios/as. Educación XX1, 17(2), 193-215. doi: https://doi.org/10.5944/educxx1.17.2.11486
  • Kaplan, David (2009). Structural Equation Modeling: Foundations and Extensions (2nd ed.). Thousand Oaks, USA: Sage.
  • Kember, David & Leung, Doris Y.P. (2011). Disciplinary differences in student ratings of teaching quality. Research in Higher Education, 52(3), 278-299. doi: https://doi.org/10.1007/s11162-010-9194-z
  • Kline, Rex (2011). Principles and practice of structural equation modeling (3rd ed.). New York, London: The Guilford Press.
  • Lance, C., Butts, M. & Michels, L. (2006). The Sources of Four Commonly Reported Cutoff Criteria: What Did They Really Say? Organizational Research Methods, 9(2), 202-220. doi: https://doi.org/10.1177/1094428105284919
  • Ledesma, Rubén Daniel & Valero-Mora, Pedro (2007). Determining the Number of Factors to Retain in EFA: An Easy-to-Use Computer Program for Carrying Out Parallel Analysis. Practical Assessment, Research & Evaluation, 12(2), 1-11.
  • Lemos, M. S., Queirós, C., Teixeira, P.M. & Menezes, I. (2011). Development and validation of a theoretically based, multidimensional questionnaire of student evaluation of university teaching. Assessment & Evaluation in Higher Education, 36(7), 843-864. doi: https://doi.org/10.1080/02602938.2010.493969
  • Lukas, José Francisco, Santiago, Karlos, Etxeberria, Juan & Lizasoain, Luis (2014). Adaptación al Espacio Europeo de Educación Superior de un cuestionario de opinión del alumnado sobre la docencia de su profesorado. RELIEVE, 20(1), art. 2. doi: http://doi.org/10.7203/relieve.20.1.3812
  • Molero López Barajas, David (2007). Rendimiento académico y opinión sobre la docencia del alumnado participante en experiencias piloto de implantación del Espacio Europeo de Educación Superior. RELIEVE, 13(2), art. 2. doi: http://doi.org/10.7203/relieve.13.2.4205
  • Mortelmans, D. & Spooren, P. (2009). A revalidation of the SET37 questionnaire for student evaluations of teaching. Educational Studies, 35(5), 547-552. doi: https://doi.org/10.1080/03055690902880299
  • Muñoz Cantero, J.M., Ríos de Deus, M.P. & Abalde, E. (2002). Evaluación Docente vs. Evaluación de la Calidad. RELIEVE, 8(2), art. 4. doi: http://doi.org/10.7203/relieve.8.2.4362
  • Otani, Koichiro, Kim, B. Joon & Cho, Jeong-Il (2012). Student evaluation of teaching (SET) in higher education: How to use SET more effectively and efficiently in public affairs education. Journal of Public Affairs Education, 18(3), 531-544.
  • Palmer, Stuart (2012). The performance of a student evaluation of teaching system. Assessment & Evaluation in Higher Education, 37(8), 975-985. doi: https://doi.org/10.1080/02602938.2011.592935
  • Pascual Gómez, Isabel (2007). Análisis de la Satisfacción del Alumno con la Docencia Recibida: Un Estudio con Modelos Jerárquicos Lineales. RELIEVE, 13(1), art. 6. doi: http://doi.org/10.7203/relieve.13.1.4216
  • Pepe, Julie W., & Wang, Morgan C. (2012). What Instructor Qualities Do Students Reward. College Student Journal, 46(3), 603-14.
  • Peres-Neto, Pedro R., Jackson, Donald A. & Somers, Keith M. (2005). How Many Principal Components? Stopping Rules for Determining the Number of Non-Trivial Axes Revisited. Computational Statistics & Data Analysis, 49, 974-997. doi: https://doi.org/10.1016/j.csda.2004.06.015
  • Rantanen, Pekka (2013). The number of feedbacks needed for reliable evaluation. A multilevel analysis of the reliability, stability and generalisability of students' evaluation of teaching. Assessment & Evaluation in Higher Education, 38(2), 224-239. doi: https://doi.org/10.1080/02602938.2011.625471
  • Revelle, William & Rocklin, Thomas (1979). Very Simple Structure. Alternative Procedure for Estimating the Optimal Number of Interpretable Factors. Multivariate Behavioral Research, 14(4), 403-414. doi: https://doi.org/10.1207/s15327906mbr1404_2
  • Schreiber, J., Stage, F., King, J., Nora, A. & Barlow, E. (2006). Reporting structural equation modeling and confirmatory factor analysis results: a review. The Journal of Educational Research, 99(6), 323-338. doi: https://doi.org/10.3200/JOER.99.6.323-338
  • Spooren, Pieter, Brockx, Bert & Mortelmans, Dimitri (2013). On the Validity of Student Evaluation of Teaching: The State of the Art. Review of Educational Research, 83(4), 598-642. doi: https://doi.org/10.3102/0034654313496870
  • Stark-Wroblewski, Kimberly, Ahlering, Robert F. & Brill, Flannery M. (2007). Toward a more comprehensive approach to evaluating teaching effectiveness: Supplementing student evaluations of teaching with pre–post learning measures. Assessment & Evaluation in Higher Education, 32(4), 403-415. doi: https://doi.org/10.1080/02602930600898536
  • Stout, William F. (1990). A new item response theory modeling approach with applications to unidimensionality assessment and ability estimation. Psychometrika, 55(2), 293-325. doi: https://doi.org/10.1007/BF02295289
  • Stout, William F. (1987). A nonparametric approach for assessing latent trait unidimensionality. Psychometrika, 52(4), 589-617. doi: https://doi.org/10.1007/BF02294821
  • Tomkiewicz, Joseph & Bass, Kenneth. (2008). Differences between Male Students’ and Female Students’ Perception of Professors. College Student Journal, 42(2), 422-430.
  • Uttl, B., White, C. A. & Gonzalez, D. W. (2016). Meta-analysis of faculty's teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation, 54, 22-42. doi: http://dx.doi.org/10.1016/j.stueduc.2016.08.007
  • Velicer, Wayne F., Eaton, Cheryl A. & Fava, Joseph L. (2000). Construct explication through factor or component analysis: A review and evaluation of alternative procedures for determining the number of factors or components. In Problems and solutions in human assessment, 41-71. Springer. doi: https://doi.org/10.1007/978-1-4615-4397-8_3
  • Wood, James M., Tataryn, Douglas J. & Gorsuch, Richard L. (1996). Effects of under- and overextraction on principal axis factor analysis with varimax rotation. Psychological Methods, 1(4), 354. doi: https://doi.org/10.1037/1082-989X.1.4.354
  • Zerihun, Zenawi, Beishuizen, Jos & Van Os, Willem (2012). Student learning experience as indicator of teaching quality. Educational Assessment, Evaluation and Accountability, 24(2), 99-111. doi: https://doi.org/10.1007/s11092-011-9140-4
  • Zhao, Jing & Gallant, Dorinda J. (2012). Student evaluation of instruction in higher education: Exploring issues of validity and reliability. Assessment & Evaluation in Higher Education, 37(2), 227-235. doi: https://doi.org/10.1080/02602938.2010.523819
  • Zwick, William R. & Velicer, Wayne F. (1986). Comparison of five rules for determining the number of components to retain. Psychological Bulletin, 99(3), 432-442. doi: https://doi.org/10.1037/0033-2909.99.3.432