Social Media’s AI Ethics, Digital Literacy, and AI Trust: Could These Lead to Positive Health Behavior?
Abstract
Aim/Purpose: This study investigated the mediating roles of Artificial Intelligence (AI) ethics and AI trust in the relationship between digital literacy and positive health behavior among Thai working-age individuals. The research sought to address a gap in existing literature by integrating these constructs within the context of social media use for health-related purposes.
Introduction/Background: Social media is now a primary source of health information in Thailand, with AI-driven recommendation algorithms tailoring content to user profiles and behaviors. While such personalization can improve relevance, it also raises concerns about misinformation, selective exposure, and over-reliance on automated systems. Digital literacy, defined as the ability to locate, evaluate, and use digital content effectively, enables users to navigate such environments more critically. Similarly, AI ethics, which includes accountability, transparency, fairness, and security, can influence how individuals engage with AI-mediated platforms. AI trust, which refers to the willingness to rely on AI recommendations, may encourage adoption but could also reduce active health decision-making when it is excessive or uncritical. Despite substantial research on these constructs in other contexts, there is limited empirical evidence in Thailand. This study addresses this gap by examining their direct and indirect relationships with positive health behavior among Thai working-age adults.
Methodology: A quantitative, cross-sectional research design was employed. Data were obtained from 420 Thai working-age individuals through a structured online questionnaire administered via Google Forms. A multi-stage sampling procedure combining cluster and quota sampling was applied. First, Bangkok districts were stratified into three zones: inner, middle, and outer. Four districts were then randomly selected from each zone, followed by the recruitment of 35 participants from each selected district through quota sampling. Inclusion criteria required Thai nationality, current residence in one of the selected districts, and active use of social media.
Four instruments were used for measurement. Digital literacy was assessed with a 14-item scale. Perceptions of AI ethics were measured with a 16-item scale comprising four dimensions: accountability, responsibility, explainability, and security. AI trust was measured as a unidimensional construct with an 11-item scale covering functionality, benefits, and credibility. Positive health behavior was measured with a scale comprising four domains: nutrition, physical activity, relaxation, and preventive behavior. All items were rated on a five-point Likert scale ranging from 1 to 5. Content validity was established through expert evaluation by five domain specialists using the Item–Objective Congruence index. Confirmatory factor analysis was conducted to validate the measurement model for the three latent constructs: AI ethics, AI trust, and positive health behavior. Construct validity was confirmed prior to hypothesis testing. Structural equation modeling was then employed to examine the direct and indirect relationships among digital literacy, AI ethics, AI trust, and positive health behavior.
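The Item–Objective Congruence screening mentioned above is conventionally computed per item as the mean of the experts' congruence ratings; a minimal sketch follows. The ≥ .50 retention cutoff is the conventional threshold and an assumption here, since the article does not state its cutoff.

```latex
% IOC for item i, rated by N experts with r_{ij} in {-1, 0, +1}
% (+1 = congruent with the objective, 0 = unsure, -1 = incongruent):
\[
  \mathrm{IOC}_i \;=\; \frac{1}{N}\sum_{j=1}^{N} r_{ij},
  \qquad \text{retain item } i \text{ if } \mathrm{IOC}_i \ge 0.50 .
\]
% Example with N = 5 experts: ratings (+1, +1, +1, 0, +1) give IOC = 4/5 = .80.
```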
Findings: The structural equation model showed an acceptable fit to the empirical data (RMSEA = .07, SRMR = .06, TLI = .96, CFI = .98, PNFI = .63), satisfactory internal consistency (CR = .92–.95), and convergent validity (AVE = .75–.82). The structural model showed meaningful explanatory power across the endogenous constructs (R² = .64–.99). All hypothesized direct and indirect effects were statistically significant. Digital literacy and AI ethics both exhibited positive, statistically significant direct effects on positive health behavior. In contrast, AI trust had a statistically significant negative direct effect on positive health behavior (β = −.49, p < .01), indicating that excessive reliance on AI systems may discourage proactive health engagement. Digital literacy was positively associated with AI ethics and AI trust; AI ethics was strongly and positively associated with AI trust (β = .84, p < .01). Mediation analysis further revealed that AI trust significantly mediated the relationship between digital literacy and positive health behavior (β = −.40, p < .01), as well as between AI ethics and positive health behavior (β = −.41, p < .01), highlighting a paradoxical role of AI trust in health-related behaviors.
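The reported mediated effects are consistent with the standard product-of-coefficients approach, in which an indirect effect is the product of the two path coefficients it traverses. A minimal sketch of that arithmetic, using only the coefficients reported above (the function name is illustrative, not from the study):

```python
def indirect_effect(a: float, b: float) -> float:
    """Product-of-coefficients indirect effect: X -> M path (a) times M -> Y path (b)."""
    return a * b

# Reported paths: AI ethics -> AI trust (.84) and AI trust -> positive health behavior (-.49).
ethics_via_trust = indirect_effect(0.84, -0.49)
print(round(ethics_via_trust, 2))  # -0.41, matching the reported mediated effect
```

The same logic applied to the digital-literacy result (reported indirect effect of −.40 through a trust path of −.49) implies a digital literacy → AI trust path of roughly .82, which the abstract does not report directly.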
Contribution/Impact on Society: The findings highlight the importance of enhancing digital literacy and fostering perceptions of ethical AI in social media to support healthier behaviors in the workplace and beyond. The study suggests that over-reliance on AI systems, even when perceived as ethical, may lead to reduced active engagement in health-promoting behaviors. This underscores the need for balanced digital engagement and critical evaluation skills.
Recommendations: Governments and private organizations should collaborate to integrate digital literacy and AI ethics education into public health promotion initiatives. Health-related content on social media should be accompanied by transparency measures and user empowerment strategies to ensure informed decision-making.
Research Limitation: The study focused exclusively on Thai working-age individuals in the Bangkok Metropolitan Area, which may limit the generalizability of findings to other regions, age groups, or populations whose social, cultural, and digital environments differ substantially from those examined in this research.
Future Research: Future studies should examine additional mediators and moderators, such as mindfulness, locus of control, and health literacy, to better understand how trust in AI translates into positive health behavior. Expanding the research to diverse populations and contexts would also enhance the applicability of the findings.
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
Copyright: Asia-Pacific International University reserves exclusive rights to publish, reproduce, and distribute the manuscript and all contents therein.