Current pedagogical theory emphasizes the use of new computer-based instructional technologies for convergence, collaborative and participative learning, and in-class feedback. However, it is necessary to evaluate these technologies, especially to identify any student factors that might foster digital divides or differential outcomes. This study analyzes the influences on student evaluation of a wireless course feedback system in two Master’s classes, using a baseline influence survey, two later evaluation surveys, system data about answering review questions, and ratings and open-ended comments on the final course evaluation. Influences studied include demographics, variety of computer usage, web expertise, computer-email-web fluency, computer-mediated competency, levels of exposure to the system, and use of the system for in-class reviews and discussions. Fluency involved three dimensions, competency involved eight dimensions, and evaluation of the system included four dimensions. Different evaluation dimensions (training, ease of use, validity, fun, overall) were predicted (from 25% to 51%) by different combinations of prior web use, computer classes, exposure to the system, and different dimensions of computer-mediated competency (such as medium factors, interaction management, efficacy, and overall CMC competency).
This study seeks to understand students’ experiences with discussion-enhancing technology in a master’s-level course. The researchers used a sequence of surveys to evaluate and measure students’ experiences throughout the semester. Overall, students had a positive experience with the technology and found its use helpful.
On the course evaluation, the instructor was rated as more effective than the mean score of other instructors that semester; however, given the small course sample size, it is difficult to say whether the increased effectiveness is related to the technology. Students found the technology most helpful when its ease of use was high. Further, students with high technology literacy were more likely to find the technology useful.
Rice, R. E., & Bunz, U. (2006). Evaluating a wireless course feedback system: The role of demographics, expertise, fluency, competency, and usage. Studies in Media & Information Literacy Education, 6(3), 1-32.
| Field | Value |
| --- | --- |
| Links to Article | https://scholar.google.com/scholar?hl=en&as_sdt=0%2C50&q=Evaluating+a+wireless+course+feedback+system%3A+The+role+of+demographics%2C+expertise%2C+fluency%2C+competency%2C+and+usage+&btnG= |
| Mode | Technology-enhanced, Blended or Hybrid, Online |
| Publication Type | Journal Article |
| In Publication | Studies in Media & Information Literacy Education |
| Type of Research | Quantitative |
| Research Design | Survey research (qualitative or quantitative), Text analysis |
| Intervention/Areas of Study | Course design, Course organization, Student motivation |
| Level of Analysis | Student-level, Instructor-level |
| Specific Populations Examined | Graduates |
| Specific Institutional Characteristics of Interest | 4-year Institution, Masters-granting, Doctorate-granting |
| Specific Course or Program Characteristics | |
| Outcome Variables of Interest | Academic achievement or performance, including assessment scores and course grades, Learning effectiveness, Satisfaction |
| Student Sample Size | 0-99 |