Algorithmic Transparency, User Control, and Adolescent Self-Efficacy: A Review and Research Agenda (2020–2025)
DOI: https://doi.org/10.61173/7p375g87

Keywords: Recommendation algorithms, adolescents, self-efficacy, metacognition

Abstract
Personalized recommendation algorithms structure what adolescents see and when they see it, potentially shaping self-regulatory outcomes. This review synthesizes recent empirical work (2020–2025) on how algorithmic transparency and user controls relate to adolescents’ task-specific self-efficacy and metacognitive processes (planning, monitoring, evaluation). Evidence from large surveys links higher general self-efficacy to fewer emotional symptoms, even when negative social media experiences are taken into account. Mixed-methods and experimental interface studies indicate that explanations (e.g., “why am I seeing this?”) and steerable controls (e.g., topic filters, reset/diversify options) are associated with greater perceived transparency, trust, and reflective engagement, while low-transparency, engagement-optimized ranking is associated with diminished perceived control. Converging across educational and open-platform contexts, these patterns suggest that clarity and controllability may help preserve adolescents’ efficacy beliefs and support metacognitive regulation. However, most studies rely on cross-sectional designs or proximal indicators (e.g., trust, connectedness) rather than direct measures of metacognition and efficacy, and field experiments on live platforms remain scarce. This review outlines design implications (plain-language explanations; visible impact views; low-friction steering) and educational directions (algorithmic literacy instruction), and proposes a research agenda emphasizing preregistered field studies, validated instruments for efficacy and metacognition, and transparent analytic reporting. Together, these steps can link algorithmic design choices to measurable benefits for adolescent users across cultural and platform contexts.