Network and Information Technologies

Learning Technologies

Available thesis proposals:

Each entry below lists the thesis proposal, the supervising researchers (with their contact e-mail) and their research group.

Conversational agents and learning analytics for online higher education

Online higher education transcends formal higher education by offering technology-enhanced formats of learning and instruction and by granting access to an audience beyond the students enrolled in a conventional higher education institution. However, although it has been reported to be an efficient and valuable educational context, it faces a number of educational problems: higher dropout rates during a course, little participation, and an overall lack of student motivation and engagement. This may be due to one-size-fits-all instructional approaches and very limited commitment to student-student and teacher-student collaboration.

This thesis aims to enhance the online higher education experience by integrating:

• collaborative settings based on conversational agents (CAs) in both synchronous and asynchronous collaboration conditions; and

• screening methods based on learning analytics (LA) to support both students and teachers during an online higher education course.

CAs guide and support student dialogue using natural language in both individual and collaborative settings. Moreover, LA techniques can support teachers' orchestration and students' learning during an online higher education course by evaluating students' interaction and participation. Integrating CAs and LA into online higher education can both trigger peer interaction in discussion groups and considerably increase the engagement and commitment of online students (and, consequently, lower dropout rates).
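As an illustration of the kind of LA screening involved, the sketch below computes simple participation indicators per student (post count, average post length, recent inactivity) from discussion-forum logs. It is a minimal sketch only: the record layout, field names and inactivity threshold are assumptions made for this example, not part of the proposal.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def participation_report(posts, course_end, inactive_days=14):
    """Summarise forum participation from (student_id, timestamp, n_words) records."""
    counts = defaultdict(int)   # posts per student
    words = defaultdict(int)    # total words written per student
    last_seen = {}              # most recent activity per student
    for student_id, timestamp, n_words in posts:
        counts[student_id] += 1
        words[student_id] += n_words
        if student_id not in last_seen or timestamp > last_seen[student_id]:
            last_seen[student_id] = timestamp

    report = {}
    for student_id in counts:
        inactive = course_end - last_seen[student_id] > timedelta(days=inactive_days)
        report[student_id] = {
            "posts": counts[student_id],
            "avg_words": words[student_id] / counts[student_id],
            "flag_for_followup": inactive,  # candidate for teacher intervention
        }
    return report

# Toy usage with invented data
posts = [
    ("s1", datetime(2024, 3, 1), 120),
    ("s1", datetime(2024, 3, 20), 80),
    ("s2", datetime(2024, 2, 10), 40),
]
print(participation_report(posts, course_end=datetime(2024, 3, 25)))
```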

Dr Santi Caballé

Mail: scaballe@uoc.edu

Dr Jordi Conesa

Mail: jconesac@uoc.edu

SMARTLEARN

Enhancing educational support through an adaptive virtual educational advisor

Nowadays, many systems help students learn. Some of them aid students in finding learning resources or recommend exercises; others help students in the assessment phase by giving feedback; still others monitor students' progress during the instructional process in order to recommend the best learning path to succeed in the course. Depending on the objectives/competencies of the subject, some features are more suitable than others.
 
This research line proposes work on intelligent learning systems based on artificial intelligence techniques, focusing on the following topics:
 
Predictive analytics based on machine learning algorithms
Early warning systems able to detect at-risk students (an illustrative sketch follows this list)
Automatic feedback and nudging based on generative artificial intelligence
Ethical issues (fairness, transparency and explainability)
Data visualization and dashboards
Gamification
Virtual educational advisor (chatbots)
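
As a minimal illustration of the first two topics in this list (predictive analytics and early warning), the sketch below trains a scikit-learn classifier on synthetic activity features and flags students whose predicted risk exceeds a threshold. The features, labels and threshold are invented for the example; a real system would use institutional data and a proper validation protocol.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic feature matrix: [logins_per_week, assignments_submitted, avg_grade]
# Features and labels are assumptions made for illustration only.
rng = np.random.default_rng(0)
X = rng.uniform([0, 0, 0], [20, 10, 10], size=(200, 3))
y = ((X[:, 0] < 5) & (X[:, 2] < 5)).astype(int)  # "at risk" = low activity and low grades

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The predicted probability drives the early warning: flag students above a threshold
risk = model.predict_proba(X_test)[:, 1]
flagged = np.where(risk > 0.7)[0]
print(f"Test accuracy: {model.score(X_test, y_test):.2f}; students flagged: {len(flagged)}")
```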
 

Dr David Bañeres

Mail: dbaneres@uoc.edu

Dr M. Elena Rodríguez

Mail: mrodriguezgo@uoc.edu

SOM Research Lab

Board games for education

In recent years, the board game field has undergone a great expansion in terms of the number of board games available, their variety, the broad range of topics they address and the range of mechanics they provide. As many research studies show, board games have great potential as a tool for learning.

In this research line, we would like to address the latest innovations in the use of board games for learning, and to explore the potential of board games in the e-learning context and the mechanics that appear in them.

Dr Jordi Conesa

Mail: jconesac@uoc.edu

Dr Antoni Pérez Navarro

Mail: aperezn@uoc.edu

 

Generative AI in introductory programming courses

Generative AI (GenAI) tools based on Large Language Models (LLMs) have demonstrated impressive performance on a wide variety of programming tasks. As a result, their impact on introductory programming (CS1) courses should be studied in depth. This research line aims to explore the present realities and future possibilities of how Generative AI is impacting, and may further impact, introductory programming courses, including learning goals/outcomes, assessment, emerging pedagogies, and educational resources. Some research questions may be:

 
* When and how should GenAI tools be introduced in introductory courses?
 
* What types of impact do GenAI-based activities or tools have on CS1 students (e.g. performance, behaviour, understanding, etc.)?
 
* What kinds of GenAI-based assignments could CS1 students do? For example, Kerslake et al. (2024) proposed two activities: (1) in the first, students solved computational tasks by writing prompts to generate code; (2) in the second, students were shown a code fragment and asked to demonstrate their understanding of the code by crafting a prompt that generates equivalent code.
 
* How can GenAI tools support CS1 students? (e.g. instant feedback, high-level problem-solving advice, automated assessment, etc.; see the sketch after this list)
 
* How can GenAI tools support CS1 instructors? (e.g. writing feedback, creation of assignments, etc.)
 
* Might pair programming evolve from two students working together into "me and my AI"?
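
As an illustrative sketch of the "instant feedback" support mentioned in the list above, the code below asks an LLM for formative hints on a submission. It assumes the OpenAI Python client (v1 interface) with an API key in the environment; the model name, prompts and task are placeholders, and any chat-capable model could be substituted.

```python
from openai import OpenAI  # assumes the v1 OpenAI Python client and OPENAI_API_KEY in the environment

client = OpenAI()

def cs1_feedback(task_description: str, student_code: str) -> str:
    """Ask an LLM for formative hints on a CS1 submission without revealing a full solution."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": ("You are a CS1 tutor. Point out errors and give hints, "
                         "but never write the corrected code for the student.")},
            {"role": "user",
             "content": f"Task: {task_description}\n\nStudent code:\n{student_code}"},
        ],
    )
    return response.choices[0].message.content

# Hypothetical task and (buggy) submission
print(cs1_feedback("Return the sum of a list of numbers.", "def total(xs):\n    return xs[0]"))
```
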
Dr David García Solórzano

Mail: dgarciaso@uoc.edu
LAIKA

Assessment in programming education

Many factors have been cited for students' poor performance in programming courses, especially in CS1. One of them is the assessment process. The goal of this research line is to explore different aspects related to assessment in programming courses. Some research questions may be:

* How may assessment mechanisms impact students' performance?
 
* How should exams be designed so that they really assess students' skills and knowledge? This includes analysing aspects such as the delivery mode (face-to-face or online), the format (paper, computer, IDE), the duration, and so on.
 
* What impact do different assessment strategies (e.g. contract grading, mastery learning, second-chance testing, etc.) have on students' performance and satisfaction?
 
* What instruments are best for assessing CS1 students? (e.g. rubrics, tests, etc.)
 
* What aspects should be evaluated in programming courses? (e.g. output/behaviour, code quality, etc.) How should they be evaluated?
 
In addition to the previous questions, the design and development of tools that support the assessment process in programming courses, such as automated graders (e.g. online judges or automatic exam generators), are welcome as well.
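
As a minimal sketch of such a tool, the code below grades a submitted function against a set of test cases, online-judge style, and returns a proportional score. The task, the (deliberately buggy) submission and the test cases are invented for illustration.

```python
def grade_submission(student_fn, test_cases):
    """Run a submitted function against (args, expected) pairs and return a score in [0, 1]."""
    passed = 0
    for args, expected in test_cases:
        try:
            if student_fn(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crashing test case simply scores zero
    return passed / len(test_cases)

# Hypothetical CS1 task: return the largest of three numbers
def student_max3(a, b, c):  # a buggy student submission used as an example
    return a if a > b else c

tests = [((1, 2, 3), 3), ((5, 1, 2), 5), ((2, 9, 4), 9)]
print(f"Score: {grade_submission(student_max3, tests):.0%}")  # 67% (partial credit)
```
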
Dr David García Solórzano

Mail: dgarciaso@uoc.edu
LAIKA

Research on techno-pedagogical models using student-centred personal spaces 

Using humanistic HCI methodologies, and with the possibility of prototyping, developing or analysing real data from a growing network of students using Folio (folio.uoc.edu) and Agoras, this line studies autonomy, identity and collective learning in open, ethical and participatory digital ecosystems that bridge institutional (LMS) and personal (CMS/WordPress) spaces.

Dr Quelic Berga-Carreras

Mail: qberga@uoc.edu
DARTS