Artificial Intelligence and Its Ethical Implications for Marketing

Ana Rita Gonçalves, Diego Costa Pinto, Paulo Rita, Tamara Pires

Abstract


Despite recent developments in AI, ethical questions arise when consumers consider how their data are treated. This paper develops a conceptual model, building on theories of technology acceptance, risk, trust, and attitudes toward AI, to understand the drivers of consumer acceptance of AI in light of consumers' ethical concerns. The model was empirically tested with 200 consumers of AI marketing services. The findings reveal that perceived risk significantly affects attitudes toward AI, ethical concerns, and perceived trust, and they suggest a significant association between perceived risk, ethical concerns, and social norms. This research offers theoretical and managerial implications for the ethics of AI in marketing by highlighting the ethical and moral questions surrounding AI acceptance.


DOI: 10.28991/ESJ-2023-07-02-01

Full Text: PDF


Keywords


Artificial Intelligence; Risk; Trust; Attitude; Ethical Concerns; Social Norms

References


Ameen, N., Tarhini, A., Reppel, A., & Anand, A. (2021). Customer experiences in the age of artificial intelligence. Computers in Human Behavior, 114, 106548. doi:10.1016/j.chb.2020.106548.

McLeay, F., Osburg, V. S., Yoganathan, V., & Patterson, A. (2021). Replaced by a Robot: Service Implications in the Age of the Machine. Journal of Service Research, 24(1), 104–121. doi:10.1177/1094670520933354.

Chattopadhyay, S., Shankar, S., Gangadhar, R. B., & Kasinathan, K. (2018). Applications of Artificial Intelligence in Assessment for Learning in Schools. Advances in Educational Technologies and Instructional Design, 185–206, IGI Global, Hershey, United States. doi:10.4018/978-1-5225-2953-8.ch010.

Davenport, T., Guha, A., Grewal, D., & Bressgott, T. (2020). How artificial intelligence will change the future of marketing. Journal of the Academy of Marketing Science, 48(1), 24–42. doi:10.1007/s11747-019-00696-0.

Mikalef, P., Lemmer, K., Schaefer, C., Ylinen, M., Fjørtoft, S. O., Torvatn, H. Y., Gupta, M., & Niehaves, B. (2021). Enabling AI capabilities in government agencies: A study of determinants for European municipalities. Government Information Quarterly, 101596. doi:10.1016/j.giq.2021.101596.

Hoyer, W. D., Kroschke, M., Schmitt, B., Kraume, K., & Shankar, V. (2020). Transforming the Customer Experience through New Technologies. Journal of Interactive Marketing, 51(1), 57–71. doi:10.1016/j.intmar.2020.04.001.

Longoni, C., Bonezzi, A., & Morewedge, C. K. (2019). Resistance to Medical Artificial Intelligence. Journal of Consumer Research, 46(4), 629–650. doi:10.1093/jcr/ucz013.

Guha, A., Grewal, D., Kopalle, P. K., Haenlein, M., Schneider, M. J., Jung, H., Moustafa, R., Hegde, D. R., & Hawkins, G. (2021). How artificial intelligence will affect the future of retailing. Journal of Retailing, 97(1), 28–41. doi:10.1016/j.jretai.2021.01.005.

Pelau, C., Dabija, D. C., & Ene, I. (2021). What makes an AI device human-like? The role of interaction quality, empathy and perceived psychological anthropomorphic characteristics in the acceptance of artificial intelligence in the service industry. Computers in Human Behavior, 122, 106855. doi:10.1016/j.chb.2021.106855.

Kumar, V., Rajan, B., Venkatesan, R., & Lecinski, J. (2019). Understanding the role of artificial intelligence in personalized engagement marketing. California Management Review, 61(4), 135–155. doi:10.1177/0008125619859317.

Cave, S., & Dihal, K. (2019). Hopes and fears for intelligent machines in fiction and reality. Nature Machine Intelligence, 1(2), 74–78. doi:10.1038/s42256-019-0020-9.

Granulo, A., Fuchs, C., & Puntoni, S. (2021). Preference for Human (vs. Robotic) Labor is Stronger in Symbolic Consumption Contexts. Journal of Consumer Psychology, 31(1), 72–80. doi:10.1002/jcpy.1181.

Leung, E., Paolacci, G., & Puntoni, S. (2018). Man versus Machine: Resisting Automation in Identity-Based Consumer Behavior. Journal of Marketing Research, 55(6), 818–831. doi:10.1177/0022243718818423.

Wertenbroch, K., Schrift, R. Y., Alba, J. W., Barasch, A., Bhattacharjee, A., Giesler, M., Knobe, J., Lehmann, D. R., Matz, S., Nave, G., Parker, J. R., Puntoni, S., Zheng, Y., & Zwebner, Y. (2020). Autonomy in consumer choice. Marketing Letters, 31(4), 429–439. doi:10.1007/s11002-020-09521-z.

Agrawal, A., Gans, J., & Goldfarb, A. (2017). How AI will change strategy: A thought experiment. Harvard Business Review, Harvard University, Massachusetts, United States.

André, Q., Carmon, Z., Wertenbroch, K., Crum, A., Frank, D., Goldstein, W., Huber, J., van Boven, L., Weber, B., & Yang, H. (2018). Consumer Choice and Autonomy in the Age of Artificial Intelligence and Big Data. Customer Needs and Solutions, 5(1–2), 28–37. doi:10.1007/s40547-017-0085-8.

Puntoni, S., Reczek, R. W., Giesler, M., & Botti, S. (2021). Consumers and Artificial Intelligence: An Experiential Perspective. Journal of Marketing, 85(1), 131–151. doi:10.1177/0022242920953847.

Smuha, N. A. (2021). From a ‘race to AI’ to a ‘race to AI regulation’: regulatory competition for artificial intelligence. Law, Innovation and Technology, 13(1), 57–84. doi:10.1080/17579961.2021.1898300.

Martin, K., Shilton, K., & Smith, J. (2019). Business and the Ethical Implications of Technology: Introduction to the Symposium. Journal of Business Ethics, 160(2), 307–317. doi:10.1007/s10551-019-04213-9.

Momani, A. M., & Jamous, M. (2017). The evolution of technology acceptance theories. International Journal of Contemporary Computer Research (IJCCR), 1(1), 51-58.

Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. MIS Quarterly: Management Information Systems, 27(3), 425–478. doi:10.2307/30036540.

Gursoy, D., Chi, O. H., Lu, L., & Nunkoo, R. (2019). Consumers acceptance of artificially intelligent (AI) device use in service delivery. International Journal of Information Management, 49, 157–169. doi:10.1016/j.ijinfomgt.2019.03.008.

Kim, J., Giroux, M., & Lee, J. C. (2021). When do you trust AI? The effect of number presentation detail on consumer trust and acceptance of AI recommendations. Psychology and Marketing, 38(7), 1140–1155. doi:10.1002/mar.21498.

Müller, V. C. (2016). Risks of artificial intelligence. Chapman and Hall/CRC, New York, United States. doi:10.1201/b19187.

Müller, V. C. (2020). Ethics of Artificial Intelligence and Robotics. Stanford Encyclopedia of Philosophy, 1–30, Department of Philosophy, Stanford University, Stanford, United States.

Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., Schafer, B., Valcke, P., & Vayena, E. (2018). AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Minds and Machines, 28(4), 689–707. doi:10.1007/s11023-018-9482-5.

Colson, E. (2019). What AI-driven decision making looks like. Harvard Business Review, Harvard University, Massachusetts, United States.

Enriquez, J. (2021). Right/wrong: How technology transforms our ethics. MIT Press, Cambridge, Massachusetts, United States. doi:10.56315/pscf6-21enriquez.

Bryson, J. J. (2018). Patiency is not a virtue: the design of intelligent systems and systems of ethics. Ethics and Information Technology, 20(1), 15–26. doi:10.1007/s10676-018-9448-6.

Pavaloiu, A., & Kose, U. (2017). Ethical artificial intelligence-an open question. arXiv Preprint. arXiv:1706.03021. doi:10.48550/arXiv.1706.03021.

Stone, P., Brooks, R., Brynjolfsson, E., Calo, R., Etzioni, O., Hager, G., ... & Teller, A. (2022). Artificial intelligence and life in 2030: the one hundred year study on artificial intelligence. arXiv preprint arXiv:2211.06318. doi:10.48550/arXiv.2211.06318.

Shane, J. (2019). You look like a thing and I love you. Hachette, New York, United States.

Hale, J. L., Householder, B. J., & Greene, K. L. (2012). The Theory of Reasoned Action. The Persuasion Handbook: Developments in Theory and Practice, 259–286, SAGE Publications, London, United Kingdom. doi:10.4135/9781412976046.n14.

Lai, P. (2017). The Literature Review of Technology Adoption Models and Theories for the Novelty Technology. Journal of Information Systems and Technology Management, 14(1), 21-38. doi:10.4301/s1807-17752017000100002.

Cudjoe, D., Nketiah, E., Obuobi, B., Adjei, M., Zhu, B., & Adu-Gyamfi, G. (2022). Predicting waste sorting intention of residents of Jiangsu Province, China. Journal of Cleaner Production, 366, 132838. doi:10.1016/j.jclepro.2022.132838.

Davis, F. D. (1985). A technology acceptance model for empirically testing new end-user information systems: Theory and results. Ph.D. Thesis, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States.

Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a research agenda on interventions. Decision Sciences, 39(2), 273–315. doi:10.1111/j.1540-5915.2008.00192.x.

Cowan, K., Javornik, A., & Jiang, P. (2021). Privacy concerns when using augmented reality face filters? Explaining why and when use avoidance occurs. Psychology and Marketing, 38(10), 1799–1813. doi:10.1002/mar.21576.

Lau, J., Zimmerman, B., & Schaub, F. (2018). Alexa, Are You Listening? Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 1–31. doi:10.1145/3274371.

Price, W. N., & Cohen, I. G. (2019). Privacy in the age of medical big data. Nature Medicine, 25(1), 37–43. doi:10.1038/s41591-018-0272-7.

Armour, J., & Sako, M. (2020). AI-enabled business models in legal services: From traditional law firms to next-generation law companies? Journal of Professions and Organization, 7(1), 27–46. doi:10.1093/jpo/joaa001.

Sohn, K., & Kwon, O. (2020). Technology acceptance theories and factors influencing artificial Intelligence-based intelligent products. Telematics and Informatics, 47. doi:10.1016/j.tele.2019.101324.

Duan, Y., Edwards, J. S., & Dwivedi, Y. K. (2019). Artificial intelligence for decision making in the era of Big Data – evolution, challenges and research agenda. International Journal of Information Management, 48, 63–71. doi:10.1016/j.ijinfomgt.2019.01.021.

Jarrahi, M. H. (2018). Artificial intelligence and the future of work: Human-AI symbiosis in organizational decision making. Business Horizons, 61(4), 577–586. doi:10.1016/j.bushor.2018.03.007.

Metcalf, L., Askay, D. A., & Rosenberg, L. B. (2019). Keeping humans in the loop: Pooling knowledge through artificial swarm intelligence to improve business decision making. California Management Review, 61(4), 84–109. doi:10.1177/0008125619862256.

Hermann, E. (2022). Leveraging Artificial Intelligence in Marketing for Social Good—An Ethical Perspective. Journal of Business Ethics, 179(1), 43–61. doi:10.1007/s10551-021-04843-y.

Du, S., & Xie, C. (2021). Paradoxes of artificial intelligence in consumer markets: Ethical challenges and opportunities. Journal of Business Research, 129, 961–974. doi:10.1016/j.jbusres.2020.08.024.

Fortes, N., Rita, P., & Pagani, M. (2017). The effects of privacy concerns, perceived risk and trust on online purchasing behaviour. International Journal of Internet Marketing and Advertising, 11(4), 307–329. doi:10.1504/IJIMA.2017.087269.

Oliveira, T., Alhinho, M., Rita, P., & Dhillon, G. (2017). Modelling and testing consumer trust dimensions in e-commerce. Computers in Human Behavior, 71, 153–164. doi:10.1016/j.chb.2017.01.050.

Lee, N., Broderick, A. J., & Chamberlain, L. (2007). What is “Neuromarketing”? A discussion and agenda for future research. International Journal of Psychophysiology, 63(2), 199–204. doi:10.1016/j.ijpsycho.2006.03.007.

Struhl, S. (2017). Artificial Intelligence Marketing and Predicting Consumer Choice: An Overview of Tools and Techniques. Kogan Page, London, United Kingdom.

Hunt, S. D., & Vitell, S. J. (2006). The general theory of marketing ethics: A revision and three questions. Journal of Macromarketing, 26(2), 143–153. doi:10.1177/0276146706290923.

Treviño, L. K., Weaver, G. R., & Reynolds, S. J. (2006). Behavioral ethics in organizations: A review. Journal of Management, 32(6), 951–990. doi:10.1177/0149206306294258.

Dignum, V. (2017). Responsible Autonomy. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence. doi:10.24963/ijcai.2017/655.

Hasan, R., Shams, R., & Rahman, M. (2021). Consumer trust and perceived risk for voice-controlled artificial intelligence: The case of Siri. Journal of Business Research, 131, 591–597. doi:10.1016/j.jbusres.2020.12.012.

Schamp, C., Heitmann, M., & Katzenstein, R. (2019). Consideration of ethical attributes along the consumer decision-making journey. Journal of the Academy of Marketing Science, 47(2), 328–348. doi:10.1007/s11747-019-00629-x.

Wilson, H. J., & Daugherty, P. R. (2018). Collaborative intelligence: Humans and AI are joining forces. Harvard Business Review, 96(4), 114-123.

Grafanaki, S. (2017). Autonomy Challenges in the Age of Big Data. Fordham Intellectual Property, Media & Entertainment Law Journal, 27(4), 803–868.

Conn, A. (2016). Benefits and risks of artificial intelligence. Future of Life Institute, Massachusetts, United States. Available online: https://futureoflife.org/ai/benefits-risks-of-artificial-intelligence/?cn-reloaded=1 (accessed on January 2023).

Foxman, E. R., & Kilcoyne, P. (1993). Information Technology, Marketing Practice, and Consumer Privacy: Ethical Issues. Journal of Public Policy & Marketing, 12(1), 106–119. doi:10.1177/074391569501200111.

Wang, X., Tajvidi, M., Lin, X., & Hajli, N. (2020). Towards an Ethical and Trustworthy Social Commerce Community for Brand Value Co-creation: A trust-Commitment Perspective. Journal of Business Ethics, 167(1), 137–152. doi:10.1007/s10551-019-04182-z.

Verma, S., Sharma, R., Deb, S., & Maitra, D. (2021). Artificial intelligence in marketing: Systematic review and future research direction. International Journal of Information Management Data Insights, 1(1), 100002. doi:10.1016/j.jjimei.2020.100002.

Mayer, R. C., Davis, J. H., & Schoorman, F. D. (1995). An Integrative Model of Organizational Trust. Academy of Management Review, 20(3), 709–734. doi:10.5465/amr.1995.9508080335.

Söllner, M., & Leimeister, J. M. (2013). What we really know about antecedents of trust: A critical review of the empirical information systems literature on trust. Psychology of Trust: New Research, Nova Science Publishers, New York, United States.

Fernandes, T., & Oliveira, E. (2021). Understanding consumers’ acceptance of automated technologies in service encounters: Drivers of digital voice assistants adoption. Journal of Business Research, 122, 180–191. doi:10.1016/j.jbusres.2020.08.058.

Venkatesh, V., & Davis, F. D. (2000). A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal Field Studies. Management Science, 46(2), 186–204. doi:10.1287/mnsc.46.2.186.11926.

Rahman, A., Khanam, T., & Pelkonen, P. (2017). People’s knowledge, perceptions, and attitudes towards stump harvesting for bioenergy production in Finland. Renewable and Sustainable Energy Reviews, 70, 107–116. doi:10.1016/j.rser.2016.11.228.

Yang, K., & Jolly, L. D. (2009). The effects of consumer perceived value and subjective norm on mobile data service adoption between American and Korean consumers. Journal of Retailing and Consumer Services, 16(6), 502–508. doi:10.1016/j.jretconser.2009.08.005.

Taylor, S., & Todd, P. (1995). Decomposition and crossover effects in the theory of planned behavior: A study of consumer adoption intentions. International Journal of Research in Marketing, 12(2), 137–155. doi:10.1016/0167-8116(94)00019-K.

Sirdeshmukh, D., Singh, J., & Sabol, B. (2002). Consumer trust, value, and loyalty in relational exchanges. Journal of Marketing, 66(1), 15–37. doi:10.1509/jmkg.66.1.15.18449.

Wong, K. K. K. (2013). Partial least squares structural equation modeling (PLS-SEM) techniques using SmartPLS. Marketing Bulletin, 24(1), 1-32.

Jung, S., & Park, J. (2018). Consistent Partial Least Squares Path Modeling via Regularization. Frontiers in Psychology, 9. doi:10.3389/fpsyg.2018.00174.

Hair Jr, J. F., Hult, G. T. M., Ringle, C. M., & Sarstedt, M. (2021). A primer on partial least squares structural equation modeling (PLS-SEM). Sage Publications, London, United Kingdom.

Middleton, F. (2022). Reliability vs. Validity in Research | Differences, Types and Examples. Scribbr. Available online: https://www.scribbr.com/methodology/reliability-vs-validity/ (accessed on January 2023).

Fornell, C., & Larcker, D. F. (1981). Evaluating Structural Equation Models with Unobservable Variables and Measurement Error. Journal of Marketing Research, 18(1), 39–50. doi:10.2307/3151312.

Alarcón, D., Sánchez, J. A., & De Olavide, U. (2015). Assessing convergent and discriminant validity in the ADHD-R IV rating scale: User-written commands for Average Variance Extracted (AVE), Composite Reliability (CR), and Heterotrait-Monotrait ratio of correlations (HTMT). Spanish Stata Meeting, 22 October 2015, Madrid, Spain.

Petter, S., Straub, D., & Rai, A. (2007). Specifying formative constructs in information systems research. MIS Quarterly: Management Information Systems, 31(4), 623–656. doi:10.2307/25148814.

Ringle, C. M., Wende, S., & Becker, J.M. (2022). SmartPLS 4. Oststeinbek: SmartPLS. Bootstrapping. Available online: https://www.smartpls.com/documentation/algorithms-and-techniques/bootstrapping (accessed on January 2023).

Chin, W. W. (1998). Issues and opinion on structural equation modeling. MIS Quarterly: Management Information Systems, 22(1), 7-16.

Solberg, E., Kaarstad, M., Eitrheim, M. H. R., Bisio, R., Reegård, K., & Bloch, M. (2022). A Conceptual Model of Trust, Perceived Risk, and Reliance on AI Decision Aids. Group & Organization Management, 47(2), 187–222. doi:10.1177/10596011221081238.




Copyright (c) 2023 Paulo Rita, Diego Costa Pinto, Tamara Pires