I&T Governance Framework for Artificial Intelligence in Marketing: A COBIT 2019 Expansion

Authors: Benjamin A. Witt, Sarah J. Wolanske and Jeffrey W. Merhout, Ph.D., CPA
Date Published: 10 March 2021

Information and technology (I&T) governance is a critical component of corporate governance, and it becomes even more crucial when organizations implement emerging technologies. Of the various emerging technologies currently transforming business processes, artificial intelligence (AI) has many tangible and realizable uses. Not only are these uses extremely varied, but they are potentially revolutionary in terms of business value. Marketers have begun to pioneer the possibilities of AI in different ways. One example of utilization in marketing is enabling AI to decide which advertisements to display to a user, an approach currently employed by organizations such as Google and Amazon. These AI-chosen advertisements are displayed across different platforms, including smartphones, computers and now AI assistants such as Amazon Alexa, Google Home and Apple Siri. The use of AI assistants in marketing presents unique opportunities for marketers. As users begin to use their AI assistants to order products and even seek recommendations (e.g., product reviews), enterprises will be able to capitalize on this new avenue of exposure for their products. Another frontier for AI in marketing is sentiment analysis: the use of AI and neural networks to comb through text and images to uncover the underlying consumer sentiment and aggregate it. The ability of marketers to efficiently and effectively gauge consumer sentiment toward products or advertising campaigns presents significant opportunities and greatly improves an organization’s ability to react to the market.
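
To make the sentiment analysis concept more concrete, the following is a minimal, purely illustrative sketch of scoring and aggregating consumer comments. It assumes the open-source Hugging Face transformers library and its default pretrained sentiment model; these are illustrative tooling choices, not components of the proposed framework.

```python
# Minimal sentiment-aggregation sketch (illustrative only).
# Assumes the Hugging Face "transformers" library and its default
# pretrained sentiment model; any comparable classifier could be used.
from transformers import pipeline

def aggregate_sentiment(comments):
    """Score each comment and return the share labeled positive."""
    classifier = pipeline("sentiment-analysis")  # downloads a default model
    results = classifier(comments)               # [{'label': ..., 'score': ...}, ...]
    positive = sum(1 for r in results if r["label"] == "POSITIVE")
    return positive / len(results)

if __name__ == "__main__":
    campaign_comments = [
        "Loved the new ad, it made me want to buy the product.",
        "The commercial felt misleading and pushy.",
        "Great campaign, very memorable jingle.",
    ]
    print(f"Positive share: {aggregate_sentiment(campaign_comments):.0%}")
```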

As with every emerging technology, the numerous potential benefits are accompanied by risk. Some of the biggest concerns stem from the security, privacy and accuracy of information collected and analyzed by AI. Furthermore, AI algorithms learn from the data on which they are trained; thus, if data collected in the past have underlying biases, then those biases will be present in any future predictions. AI systems need to be designed to produce accurate and transparent results while avoiding inequalities and biases. In addition, AI is a data-centered technology, and it collects those data from real people. Protecting the privacy of users and assuring that data collection is compliant with regulations help ensure a sense of security and trust between the organization and its consumers. That assurance will also aid in providing transparent and accurate information.

If the appropriate I&T governance and management frameworks are not employed, generating data about consumers’ specific consumption patterns could lead to significant penalties for privacy violations, especially because AI may have access to personally identifiable information (PII) such as the user’s name, address and credit card information. Utilizing proper controls to protect these data must be a top concern for organizations. That is where frameworks come into play. The proposed I&T governance model intends to illustrate how any organization can extend an existing comprehensive, yet generic and adaptable, framework (i.e., COBIT® 20191) into an organizational structure tailored to address the additional governance goals and risk mitigation strategies when any specific emerging technology is being implemented into an organization.

I&T GOVERNANCE FRAMEWORKS AND THEIR ASSOCIATED VISUAL MODELS ARE INTENDED TO BE ADOPTED AND ADAPTED BY PRACTITIONERS IN A MANNER THAT MAKES SENSE FOR EACH INDIVIDUAL ORGANIZATIONAL SCENARIO.

Development of Framework

I&T governance is an organizational structure of information and technology controls aligned with the business strategy and overall corporate governance used to encourage behavior that mitigates risk, delivers measurable business value and enhances overall performance of the business while maintaining legal and ethical standards. To reap maximum benefits and mitigate risk from I&T, specific frameworks are developed to guide organizations through the design, implementation and analysis of the technology. The proposed I&T governance framework for AI is based on a suggested COBIT 2019 expansion and combines frameworks and new materials that provide essential, effective and efficient controls regarding AI, specifically, AI applications utilized for marketing purposes.

While not currently included in the COBIT 2019 framework, this new and expanded set of objectives and activities is meant to be used in conjunction with already existing COBIT® objectives or perhaps as a stand-alone framework for specific AI initiatives. The existing COBIT 2019 framework clearly includes many of the same or similar principles and ideas discussed here (such as privacy), but this framework aims to explore and then focus those principles on the specific topic of AI and provide users an example of how COBIT can be adopted and adapted (i.e., customized). The new objectives were developed by first acknowledging the critical importance of avoiding biases in AI algorithms, which became a key cornerstone of the core model. Then authoritative ethical frameworks and other reputable resources were searched to find additional objectives that would be pertinent for AI applications.

I&T governance frameworks and their associated visual models are intended to be adopted and adapted by practitioners in a manner that makes sense for each individual organizational scenario. These adaptations and associated metrics should take into consideration key factors, such as business models, risk tolerance, functional experience and technical expertise.2 As described by COBIT 2019, “[A] governance system should be tailored to the enterprise’s needs, using a set of design factors as parameters to customize and prioritize the governance system components.”3 Although COBIT could, arguably, be extended and combined to support a specific emerging technology (i.e., AI) in a specific organizational function, it is clear that I&T governance should be holistic in nature and cover the entire enterprise, involving numerous stakeholders, such as those concerned with corporate governance (e.g., boards, executive management, assurance providers), in addition to business and I&T managers. As noted in COBIT,

A certain level of experience and a thorough understanding of the enterprise … allow users to customize core COBIT guidance—which is generic in nature—into tailored and focused guidance for the enterprise.4

One example of COBIT customization is Textron Inc., an industrial conglomerate made up of numerous subsidiaries in the aerospace, manufacturing and defense industries. Textron’s director of IT audit services explains that there is immense value in establishing a common framework to manage audit practices and procedures. In Textron’s case, he states, “The enterprisewide I&T security function already had a commonly accepted set of policies based on a variety of industry frameworks including COBIT.”5 An important factor in Textron’s decision to adapt COBIT to fit the company’s culture was that Textron could maintain its enterprise-specific policies. The customization of a generic framework was a key component in getting all stakeholders on board in adopting the governance policies. In addition, he states, “By using their framework, which is based not only on COBIT but, more importantly, on IT security professionals’ work at other organizations, it is easier for them to accept the structures put in place.”6

THE ETHICAL DESIGN, DEVELOPMENT, AND IMPLEMENTATION OF THESE [AI] TECHNOLOGIES SHOULD BE GUIDED BY THE FOLLOWING GENERAL PRINCIPLES: HUMAN RIGHTS, WELL-BEING (I.E., PRIORITIZE METRICS OF WELL-BEING IN DESIGN AND USE), ACCOUNTABILITY, TRANSPARENCY, AND AWARENESS (OF MISUSE).

In this example, which shows how a real organization customized COBIT for its overall I&T governance and not specifically for AI, COBIT provides a strong foundation for Textron to build on and integrate ideas from IT security professionals. This enhances the governance framework because, when audit procedures are created by those being audited, it forces a new perspective that allows both parties (the internal audit department and the I&T department) to understand and accept the necessary audit policies. COBIT’s complete customizability speaks to the usability of the framework and the idea that it can be used by any organization and molded to fit that organization’s specific needs. COBIT had to be customized for Textron in this example because there was a desire for security controls from a security professional’s point of view. Based on its comprehensive nature, COBIT is likely able to exceed expectations for most organizations but, in this example, it was beneficial to add new enterprise-specific governance and management objectives to achieve a level of acceptance from all parties by asking those being audited to participate in creating the assurance policies. “The overall goal is to use a shared lexicon for discussing and managing controls.”7 COBIT is the ideal first step in achieving this goal because it provides outlined guidance for those who do not wish to reinvent the wheel, and it also provides guidance for those organizations that need a few stepping stones of encouragement on the way to building their own unique framework.

While researching possible components of this I&T governance framework, it became clear that governance over AI is necessary on multiple levels and that ethical considerations must be a significant factor.8 In parallel, authoritative ethical frameworks were sought to leverage for this I&T governance framework, as more and more literature suggesting the potential for bias in AI applications is released.9, 10, 11 Accordingly, the four cornerstones (figure 1) of the model that were chosen are ethics; alignment, because of the importance of strategy; accuracy, because information integrity is critical in all I&T applications; and understandability, because so much of what occurs in AI is the result of very complex mathematical algorithms.

Figure 1

Thus, the proposed framework includes four main principles and numerous controls meant to support organizations throughout their processes. The mission of this framework is to not only act as support for a strategy of implementing AI in marketing settings but to do so effectively and ethically. Moreover, the goals of this framework can be aligned with the ethics-focused goals of the Institute of Electrical and Electronics Engineers (IEEE) Global Initiative on Ethics of Autonomous and Intelligent Systems. As stated by IEEE,

The ethical design, development, and implementation of these [AI] technologies should be guided by the following general principles: human rights, well-being (i.e., prioritize metrics of well-being in design and use), accountability, transparency, and awareness (of misuse).12

AI in Marketing Framework/Model

These goals can be generalized into the four pillars of the framework: alignment, accuracy, ethics, and understandability. As the foundation of the framework, these concepts are tied to existing COBIT 2019 domains, which are then broken down into a variety of new detailed controls or expanded COBIT controls.

High-Level Illustration

Figure 1 illustrates a high-level model that ties the outlined principles to the existing five COBIT 2019 governance and management domains:

  1. Evaluate, Direct and Monitor (EDM)
  2. Align, Plan and Organize (APO)
  3. Build, Acquire and Implement (BAI)
  4. Deliver, Service and Support (DSS)
  5. Monitor, Evaluate and Assess (MEA)

Alignment
Within the context of marketing, alignment of AI utilization with business strategy, consumer needs and contractual agreements is imperative. This assures that stakeholder transparency is upheld and that the use of AI does not conflict with the trusting relationships that lead to effective outcomes.13

WHEN DEALING WITH THE PERSUASION AND INFLUENCING OF CONSUMERS, ORGANIZATIONS MUST MAKE SURE THAT HARM IS AVOIDED WHEN ALLOWING AI TO ACT AUTONOMOUSLY IN THE MARKETPLACE.

Accuracy
The value of AI in marketing relies almost entirely on the accuracy of assessments, predictions and judgments made by the tool. Ensuring that AI algorithms are properly devised, trained and implemented is essential to maintaining a high standard of accuracy that stakeholders will be able to trust.14
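
As a purely illustrative example of how such an accuracy standard might be monitored, the sketch below compares an AI system’s recent predictions with observed outcomes and flags the system for review when accuracy falls below an agreed threshold. The threshold and data structures are assumptions chosen for the sketch, not prescriptions of the framework.

```python
# Illustrative accuracy-monitoring check (hedged sketch, not a prescription).
# Compares recent AI predictions against observed outcomes and flags
# the system for review when accuracy drops below an agreed threshold.

ACCURACY_THRESHOLD = 0.90  # assumed target; set according to risk appetite

def accuracy(predictions, actuals):
    """Fraction of predictions that matched the observed outcome."""
    correct = sum(1 for p, a in zip(predictions, actuals) if p == a)
    return correct / len(predictions)

def review_needed(predictions, actuals, threshold=ACCURACY_THRESHOLD):
    """Return True when the model's recent accuracy warrants escalation."""
    return accuracy(predictions, actuals) < threshold

if __name__ == "__main__":
    recent_predictions = ["buy", "ignore", "buy", "buy"]
    observed_outcomes = ["buy", "ignore", "ignore", "buy"]
    if review_needed(recent_predictions, observed_outcomes):
        print("Accuracy below threshold: escalate to the AI governance owner.")
```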

Ethics
With many emerging technologies, especially AI, ethical considerations are at the forefront of many stakeholders’ minds. Beyond the implicit ethics related to alignment, an organization must take steps to ensure that the use of AI does not callously override certain concerns. These concerns are especially heightened when it comes to marketing. When dealing with the persuasion and influencing of consumers, organizations must make sure that harm is avoided when allowing AI to act autonomously in the marketplace. Concerns such as privacy, disparate impact, discrimination and even sustainability should be the focus.

Understandability
The complexity of AI is both what enables it to perform independent tasks that historically require human interaction and what leads to the creation of a “black box” effect. A black box is created when one can see what goes into the box and what comes out, but is unable to visualize (much less comprehend) any of the events that occur within the box (i.e., algorithm). The avoidance of this phenomenon is imperative in allowing an enterprise’s executives and other stakeholders to make both proper and informed decisions. By creating controls that assure understandability and AI processes that are easily explainable, there is assurance that an enterprise’s key players can react to changes in the marketplace as well as any issues that may arise through the use of AI.
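
One way to reduce the black box effect is to attach model-agnostic explanations to an AI system’s outputs. The sketch below is a minimal illustration that uses permutation importance from the open-source scikit-learn library (an assumed tooling choice, not part of the framework itself) to rank which input features most influence a trained model, giving nontechnical stakeholders at least a coarse view of what the algorithm relies on.

```python
# Illustrative "black box" inspection sketch using permutation importance.
# scikit-learn is an assumed tooling choice; any model-agnostic
# explanation technique could serve the same governance purpose.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic marketing data: columns = [ad_views, past_purchases, random_noise]
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(int)  # purchase decision driven by first two features

model = RandomForestClassifier(random_state=0).fit(X, y)

# Rank how much each input feature drives the model's predictions
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["ad_views", "past_purchases", "random_noise"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```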

Governance Objectives Model

Figure 2 illustrates a lower level model that ties the existing COBIT domains to new or expanded-on objectives and process controls.

Figure 2

Detailed Overview of Framework

To provide a better understanding of how each of the four pillars relates to existing COBIT 2019 domains, a variety of control processes and activities are listed under each category. In general, these new objectives apply to the whole enterprise, but some of them deserve special attention by the business and I&T teams when working on AI initiatives, where bias can be of concern.

There are a few possible perceived redundancies. Although the new DSS07 Managed Integrity is closely related to the existing APO14 Managed Data control, the accuracy of AI data affects so many critical decisions that its importance is stressed further with this new control. Similarly, although the new DSS08 Managed Communication might seem redundant because communication is already such an important component of overall I&T governance, the possible information asymmetries between I&T and other stakeholders must be bridged by highlighting the key importance of clear communication.

New Proposed Objectives and Activities

APO15 Managed Values:

  • Establish a chief value officer (CVO)15 to provide leadership in the development and implementation of organizational values.
  • Encourage employees to voice ethical concerns or red flags.16
  • Allow various stakeholders to be involved in the maintenance of values.17

APO16 Managed Brand Image and Reputation:18

  • Manage the stakeholders’ perception of the organization.
  • Ensure that communication and interaction facing the public and external stakeholders is consistent with the organization’s mission, values and strategy.19
  • Manage consistency of brand messaging to stakeholders.

DSS07 Managed Integrity:

  • Provide assurance to stakeholders that outputs of AI systems are accurate and reliable.20
  • Address any issues of accuracy in a timely, transparent and effective manner.

DSS08 Managed Communication:

  • Communicate in a thorough, yet practical manner the functions and underlying workings of AI systems in use or being developed for potential use.
  • Implement mechanisms that diminish information asymmetries between developers and other stakeholders.21

MEA05 Managed Societal Impact:

  • Monitor the impact of AI outputs on societal stakeholders. These groups may include but are not limited to minorities, minors, older people, disabled people, low-income households and impoverished people.
  • Avoid disparate impact (a minimal check is sketched after this list).
  • Monitor and evaluate evolving stakeholder expectations regarding social responsibility.22
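
To illustrate one way the disparate impact activity above could be operationalized, the hedged sketch below applies the commonly cited four-fifths (80 percent) rule to an AI system’s favorable outcomes by demographic group. The group labels, data and threshold are assumptions made for the sketch, and applicable legal tests vary by jurisdiction.

```python
# Illustrative disparate-impact check based on the four-fifths (80%) rule.
# Group labels, data and the 0.8 threshold are assumptions for this sketch;
# applicable legal and regulatory tests vary by jurisdiction.

def selection_rates(decisions):
    """decisions: {group: list of booleans (True = favorable AI outcome)}."""
    return {g: sum(d) / len(d) for g, d in decisions.items()}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose favorable-outcome rate is < threshold * best group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

if __name__ == "__main__":
    ad_offer_decisions = {
        "group_a": [True, True, True, False, True],
        "group_b": [True, False, False, False, True],
    }
    for group, flagged in disparate_impact_flags(ad_offer_decisions).items():
        print(f"{group}: {'REVIEW' if flagged else 'ok'}")
```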

BAI12 Managed Black Box:

  • Thoroughly document all aspects of AI systems in use or being developed for potential use.23
  • Manage knowledge of AI algorithms and underlying workings so that multiple individuals are considered subject-matter experts on a particular AI-based or enabled system.

Expanded COBIT 2019 Objectives

EDM03 Ensured Risk Optimization:

  • Ensure that risk associated with AI endeavors is appropriately measured and is within the organization’s risk appetite.

EDM05 Ensured Stakeholder Engagement:

  • Ensure transparency of vendor contracts to maintain alignment of consumer needs and business strategy.24

APO02 Managed Strategy:

  • Manage the alignment of AI system usage with the overall business strategy of the organization.
  • Ensure that AI systems are appropriately implemented in a way that assists the organization in meeting its obligations and commitments to stakeholders.

APO04 Managed Innovation:

  • Manage how the organization is approaching innovation regarding new AI capabilities and opportunities. The goal is to enable the organization to be an early mover25 with AI.

APO13 Managed Security:

  • Manage security challenges and vulnerabilities that accompany the usage of heavily data-dependent AI systems.
  • Maintain data security through defense-in-depth principles and practices such as encryption (a brief sketch follows this list), intrusion detection systems and firewalls.
  • Manage physical security of AI documentation and physical hardware through the use of appropriate security measures.
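
As a brief, hedged illustration of the encryption practice mentioned above, the sketch below encrypts a customer record before it is stored for AI training, using the open-source cryptography package (an assumed tooling choice). Enterprise key management is deliberately out of scope and is noted as an assumption in the code.

```python
# Illustrative encryption-at-rest sketch for AI training data.
# Assumes the open-source "cryptography" package; key management (e.g., an
# enterprise key vault) is out of scope here and is an explicit assumption.
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # in practice, retrieve from a managed key store
cipher = Fernet(key)

record = b"customer_id=123;purchases=shoes,headphones"
token = cipher.encrypt(record)    # store only the ciphertext at rest
restored = cipher.decrypt(token)  # decrypt only within authorized AI pipelines

assert restored == record
print("Encrypted record length:", len(token))
```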

MEA03 Managed Compliance With External Requirements:

  • Monitor the current regulatory landscape surrounding emerging technology (specifically, AI) including anticipated regulation on a state, federal and global level.26
  • Evaluate how the EU General Data Protection Regulation (GDPR) impacts the development and use of heavily data-dependent AI systems.

Conclusion

I&T governance frameworks are designed to provide organizations with a structured approach to designing, implementing and monitoring technologies that will align with organizational strategy to achieve business goals. The proposed framework and models herein are intended to illustrate how any organization can extend an existing comprehensive, yet generic, framework, COBIT 2019, into a framework tailored to address the additional governance goals and risk mitigation strategies when a specific emerging technology is being implemented into an organizational structure. The proposed framework is an example that could be applied to other emerging technologies in other processes and functions.

Endnotes

1 ISACA®, COBIT® 2019 Framework: Introduction and Methodology, USA, 2018, http://b1ti.veosonica.com/resources/cobit
2 Bakshi, S.; “Performance Measurement Metrics for I&T Governance,” ISACA® Journal, vol. 6, 2016, http://b1ti.veosonica.com/archives
3 Op cit ISACA, p. 17
4 Ibid., p. 15
5 Packwood, S., Director of I&T Audit Services at Textron, phone call and email conversation, June 2020
6 Ibid.
7 Ibid.
8 Anonymous, managing director at International Management Consulting Firm, email conversation, 11 June 2020
9 Gow, G.; “How AI Can Go Terribly Wrong: 5 Biases That Create Failure,” Forbes, 9 November 2020, www.forbes.com/sites/glenngow/2020/11/09/how-ai-can-go-terribly-wrong-5-biases-that-create-failure/?sh=5d51b5525b87
10 Smith, C. S.; “Dealing With Bias in Artificial Intelligence,” The New York Times, 19 November 2019, http://www.nytimes.com/2019/11/19/technology/artificial-intelligence-bias.html
11 Uzzi, B.; “A Simple Tactic That Could Help Reduce Bias in AI,” Harvard Business Review, 4 November 2020, http://hbr.org/2020/11/a-simple-tactic-that-could-help-reduce-bias-in-ai
12 Institute of Electrical and Electronics Engineers (IEEE) Global Initiative on Ethics of Autonomous and Intelligent Systems, Ethically Aligned Design: A Vision for Prioritizing Human Well-Being With Autonomous and Intelligent Systems, Version 2, USA, 2017, http://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead_v2.pdf
13 Dawar, N.; N. Bendle; “Marketing in the Age of Alexa,” Harvard Business Review, vol. 93, iss. 3, May–June 2018, p. 85, http://hbr.org/2018/05/marketing-in-the-age-of-alexa
14 Ibid., p. 85
15 Op cit IEEE, p. 61
16 Ibid., p. 63
17 Ibid., p. 65
18 Moore, M.; G. Tucker; “Part Four of a Four-Part Series: The Responsible Technology Firm of the Future: A Rapidly Changing and Unpredictable Landscape,” Protiviti, 2018, http://www.protiviti.com/sites/default/files/united_states/insights/responsible-tech-series-part4-corporate-social-responsibility-protiviti.pdf
19 Ibid., p. 6
20 Op cit Dawar, Bendle, p. 85
21 Gasser, U.; V. Almeida; “A Layered Model for AI Governance,” Institute of Electrical and Electronics Engineers (IEEE) Internet Computing, vol. 21, iss. 6, November 2017, http://dash.harvard.edu/handle/1/34390353
22 Op cit Moore, Tucker, p. 3
23 Op cit IEEE, p. 68
24 Op cit Dawar, Bendle, p. 85
25 Op cit Moore, Tucker, p. 3
26 Ibid., pp. 7–8

Benjamin A. Witt

Is an internal audit analyst for J.P. Morgan Chase & Co., specializing in technology auditing and performing analyses of processes and controls spanning multiple lines of business within the firm, including the consumer and community bank, corporate investment bank, and commercial bank. As emerging technologies are introduced to various industries, he is interested in exploring the obstacles and opportunities for governance that they present.

Sarah J. Wolanske

Works for Textron in the two-year rotational Information Technology Leadership Development Program (LDP). Her first year was spent working at Textron’s world headquarters in Providence, Rhode Island, USA, on the enterprise business solutions team where, among other responsibilities, she acted as project manager on two system implementations and developed in PowerBI, PowerApps and QlikView to monitor and maintain business intelligence for various cross-functional teams. Her current rotation is on the cybersecurity team at Textron Information Services with a focus on incident response.

Jeffrey W. Merhout, Ph.D., CPA

Is an associate professor of information systems in the Farmer School of Business at Miami University, Ohio, USA, and a member of the ISACA® Greater Cincinnati Chapter (USA). He has two years of auditing experience in public accounting and another year as an operational internal auditor in the industry. His main teaching and research interests are IT strategy, IT governance, IT risk and control, cybersecurity, and IT assurance. He has presented and published his research at numerous IS conferences and in journals, including Communications of the ACM, Communications of the AIS, Information Technology and People, Journal of the Midwest Association for Information Systems, Journal of Information Systems Education, Journal of Computer Information Systems, International Journal of Accounting Information Systems, International Journal of Information Management, Information Systems Control Journal, and The Journal of Effective Teaching.