By Amelie Wagner
Following Donald Trump’s inauguration in January 2025, Instagram search results for the hashtags “Democrat” and “Democrats” were suddenly no longer visible. Meta, Instagram’s owner, denied accusations of hiding related posts and blamed a technical error instead (Gerken, 2025). On an algorithm-driven platform like Instagram, such an incident points to a broader societal concern: algorithmic opacity. The term describes the lack of transparency in algorithmic processes, which produces decisions that users cannot understand (Eslami et al., 2019, p. 1). Young users are particularly affected: their frequent use of social media, as emphasised by Weber (2022), constantly exposes them to recommender systems that serve limitless information based on their data. Since youths rely on social media as a primary tool for communication (Nissenbaum, 2011, p. 33), this poses significant privacy risks, as their control over their data is jeopardised.
This contribution examines the privacy risks of personalised algorithmic systems for youth on social media platforms in the context of media and digital governance. It argues that algorithmic opacity and data-driven personalisation make it difficult for young users to understand how their data is used and protected. While algorithm-driven platforms are often presented as neutral tools that enhance user experience, this analysis shows that they also produce significant asymmetries of knowledge and power between platforms and young users. This raises broader questions about how communication systems are governed, particularly in relation to platform accountability, user rights, and youth protection.
Privacy risks from youths’ lack of algorithmic literacy
On social media, algorithms have become gatekeepers, structuring which content users see, what they engage with, and ultimately what they believe. Instagram uses complex machine-learning algorithms to curate personalised media for users, collecting large amounts of personal data to target content as accurately as possible (Meta Transparency Centre, 2024). While empirical evidence is currently limited, it points toward youths’ unawareness of how these algorithmic processes shape their digital behaviour (Livingstone et al., 2019, p. 4).
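To make the mechanism concrete, the toy sketch below shows how a data-driven feed ranker might order posts by a user’s inferred interests. It is a minimal illustration under stated assumptions, not Meta’s actual system: the `Post` and `UserProfile` structures, the topic-affinity scores, and the ranking rule are all hypothetical.

```python
# Toy sketch of data-driven feed personalisation -- NOT Meta's real system.
# All structures and scores here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str

@dataclass
class UserProfile:
    # Interest scores a platform might infer from collected behavioural
    # data (watch time, likes, follows); the features are assumptions.
    interests: dict[str, float]  # topic -> affinity in [0, 1]

def rank_feed(user: UserProfile, candidates: list[Post]) -> list[Post]:
    """Order candidate posts by the user's inferred topic affinity.

    The more behavioural data is collected, the sharper these affinity
    scores become: the personalisation/privacy trade-off in miniature.
    """
    return sorted(
        candidates,
        key=lambda p: user.interests.get(p.topic, 0.0),
        reverse=True,
    )

user = UserProfile(interests={"football": 0.9, "politics": 0.2})
feed = rank_feed(user, [Post("1", "politics"), Post("2", "football")])
print([p.post_id for p in feed])  # ['2', '1'] -- football content surfaces first
```

The point of the sketch is that the scores driving the ordering are invisible to the user: nothing in the resulting feed reveals why one post came first, which is precisely the opacity at issue.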
Consequently, disclosing personal data on Instagram heightens their vulnerability to privacy threats such as data breaches, filter bubbles, and manipulation through asymmetrical information. Since youths rely on experience rather than active information seeking to develop algorithmic knowledge (Cotter & Reisdorf, 2020, p. 750), Instagram’s attempts at transparency, like the Instagram Help Centre (2024), remain largely ineffective. Without a better understanding of algorithmic processes, adolescents cannot make informed decisions by weighing the risks and benefits of a digital privacy action, as described by Privacy Calculus Theory (Laufer & Wolfe, 1977).
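The core claim of the Privacy Calculus can be stated very compactly: disclosure happens only when perceived benefits outweigh perceived risks. The sketch below renders that claim as a simple comparison. Laufer and Wolfe’s theory is conceptual rather than computational, so the `would_disclose` function and its numeric values are purely illustrative assumptions.

```python
def would_disclose(perceived_benefit: float, perceived_risk: float) -> bool:
    """Privacy Calculus in one line: share data only if the expected
    benefit of disclosure outweighs the expected risk. An illustrative
    model, not part of Laufer & Wolfe's original formulation."""
    return perceived_benefit > perceived_risk

# Algorithmic opacity distorts the risk side of the comparison: a young
# user who cannot see how their data is processed may underestimate risk.
print(would_disclose(perceived_benefit=0.6, perceived_risk=0.8))  # False: withholds data
print(would_disclose(perceived_benefit=0.6, perceived_risk=0.2))  # True: discloses
```

Seen this way, opacity does not merely complicate the calculus; it corrupts its risk input, so the weighing the theory describes cannot be performed meaningfully.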
Beneficial or threatening: how do youths view algorithms?
Young individuals’ perceptions of algorithms are diverse. While they recognise the usefulness of personalisation in receiving fitting content, concerns emerge about how this accuracy is attained (Nyathi et al., 2024, p. 5). Youths appreciate algorithmic benefits such as saving time when searching for relevant material (Creswick et al., 2019, p. 174), improved social connectedness (Fast & Jago, 2020, p. 45), and being presented with news (Swart, 2021, p. 5).
However, algorithmic uncertainty unnerves them greatly. This results from their limited understanding of how algorithms function and how their data is used, particularly regarding the misuse of personal information by third parties (Creswick et al., 2019, p. 176). A lack of diversity and algorithmic opacity lead to further discouragement (Nyathi et al., 2024, p. 6). The threats youths report underscore the need for improved algorithmic literacy that empowers them to make knowledge-based decisions about disclosing their personal information online.
Potential solutions to combat algorithmic privacy threats to youths
While there have been attempts to develop young users’ knowledge in this field, such as Fouquaert and Mechant’s (2022) “Instawareness”, the struggle against the lack of algorithmic transparency persists. This calls for initiatives that empower youths to critically evaluate algorithm-driven environments and make informed privacy decisions. One approach is integrating algorithmic literacy into school curricula, so that students learn the functions and processes of algorithms, and the associated privacy-protective behaviours, in a professional, structured environment. Including parents in these programmes can build on this, creating safe surroundings for their children.

Additionally, as data increasingly becomes a commodity, policies should regulate social media platforms’ data collection methods. Policymakers must prioritise the security of children over the corporate financial incentives that drive the datafication of youths by demanding transparency from platforms like Instagram. For instance, simplified, user-friendly Terms and Conditions could explain algorithmic processes and help individuals understand the privacy implications of disclosing personal information. Such measures should aim to reduce algorithmic opacity, allowing young users to comprehend how their data is used and ultimately enabling them to act on responsible, informed decisions.
Due to the rapid development of artificial intelligence, protecting youths from the associated privacy risks is as urgent as ever. Teaching algorithmic literacy can reduce knowledge gaps and demystify opaque algorithms on platforms like Instagram, enabling young users to navigate digital environments safely. By introducing regulatory measures for data collection, policymakers can further safeguard youths from invasive practices such as the manipulation of their data or digital surveillance. In an increasingly data-driven society, young users’ privacy is constantly at risk. By implementing the solutions outlined here, however, future generations can improve their knowledge of algorithmic processes and privacy-protective behaviours, decreasing their vulnerability to data exploitation. Only through these changes can youths continue to rely on platforms like Instagram for communication and growth without becoming victims of manipulative data control.
References
Altmann, G. (2021, May 11). Ai Artificial Intelligence Artificial royalty-free stock illustration. Pixabay. https://pixabay.com/illustrations/ai-artificial-intelligence-6767497/
Cotter, K., & Reisdorf, B. C. (2020). Algorithmic Knowledge Gaps: A New Dimension of (Digital) Inequality. International Journal of Communication, 14, 745–766. https://ijoc.org/index.php/ijoc/article/view/12450
Creswick, H., Dowthwaite, L., Koene, A., Vallejos, E. P., Portillo, V., Cano, M., & Woodard, C. (2019). “… They don’t really listen to people”: Young people’s concerns and recommendations for improving online experiences. Journal of Information, Communication and Ethics in Society, 17(2), 167–182. https://doi.org/10.1108/JICES-11-2018-0090
Eslami, M., Vaccaro, K., Lee, M. K., Elazari Bar On, A., Gilbert, E., & Karahalios, K. (2019). User Attitudes towards Algorithmic Opacity and Transparency in Online Reviewing Platforms. Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems, 1–14. https://doi.org/10.1145/3290605.3300724
Fast, N. J., & Jago, A. S. (2020). Privacy matters… or does It? Algorithms, rationalization, and the erosion of concern for privacy. Current Opinion in Psychology, 31, 44–48. https://doi.org/10.1016/j.copsyc.2019.07.011
Fouquaert, T., & Mechant, P. (2022). Making curation algorithms apparent: A case study of ‘Instawareness’ as a means to heighten awareness and understanding of Instagram’s algorithm. Information, Communication & Society, 25(12), 1769–1789. https://doi.org/10.1080/1369118X.2021.1883707
Gerken, T. (2025, January 21). Instagram hides search results for ‘Democrats’. BBC News. https://www.bbc.com/news/articles/c4g32yxpdz0o
Instagram Help Centre. (2024). How posts are chosen for Explore on Instagram. https://help.instagram.com/487224561296752
Laufer, R. S., & Wolfe, M. (1977). Privacy as a Concept and a Social Issue: A Multidimensional Developmental Theory. Journal of Social Issues, 33(3), 22–42. https://doi.org/10.1111/j.1540-4560.1977.tb01880.x
Livingstone, S., Stoilova, M., & Nandagiri, R. (2019). Children’s data and privacy online: Growing up in a digital age: an evidence review (pp. 1–57). London School of Economics and Political Science. http://www.lse.ac.uk/my-privacy-uk
Meta Transparency Centre. (2024, December 11). Instagram Explore AI System. https://transparency.meta.com/features/explaining-ranking/ig-explore/
Nissenbaum, H. (2011). A Contextual Approach to Privacy Online. Daedalus, 140(4), 32–48. https://doi.org/10.1162/DAED_a_00113
Nyathi, R. S., Mckenzie, S., Li, J., Gorur, R., & Mohan Doss, R. R. (2024). User perceptions of algorithmic persuasion in OTT platforms: A scoping review. 2024 IEEE International Symposium on Technology and Society (ISTAS), 1–7. https://doi.org/10.1109/ISTAS61960.2024.10732741
Swart, J. (2021). Experiencing Algorithms: How Young People Understand, Feel About, and Engage With Algorithmic News Selection on Social Media. Social Media + Society, 7(2), 1–11. https://doi.org/10.1177/20563051211008828
Weber, S. (2022, December 13). In the Age of Crises & Fake News: The Potential of Critical Media Literacy. Media Governance and Industries Lab Blog. https://univiennamedialab.wordpress.com/2022/12/13/in-the-age-of-crises-fake-news-the-potential-of-critical-media-literacy/