


Recent advances in artificial intelligence will have wide-ranging implications for the investment industry. Individuals and institutions have a part to play in ensuring AI is developed and deployed responsibly.

Since the launch of ChatGPT in November 2022, the surge in enthusiasm for AI has been accompanied by rising concerns over the risks it poses. Even OpenAI, the developer of the wildly popular generative AI chatbot, has struggled to reconcile the ethical debate over the safety of AI and the commercialization of the technology. 

Policymakers have responded with measures to mitigate the potential technical, social, ethical, and security risks, including the convening of an AI Safety Summit in the UK in November 2023 and an executive order on AI from US President Joe Biden. Yet comprehensive regulation remains some way off. The European Union is among the farthest along: it is close to finalizing an AI Act, encompassing wide-ranging measures to protect citizens, that will come into effect in stages from 2026.

A CFA Institute study in 2022 highlighted similar concerns in the investment industry. While the potential of advanced machine learning is exciting for professional investors, there are ethical considerations around how new tools will source, analyze and act on data – and what their use could mean for the financial markets. (See Figure 1.)

Figure 1: Ethical Decision Framework for AI from CFA Institute

Personal Responsibility

Ahead of the enactment of binding AI rules for the industry, data scientists and investment professionals can take it upon themselves to act ethically. 

“People shouldn't wait to be told what to do. They should use a common-sense approach and try and make sure the things they do are for good,” said Sam Livingstone, Head of Data Science at Jupiter Asset Management.

But rather than leaving employees to make these decisions on an ad-hoc basis, it is incumbent on firms to provide clear guidance in comprehensive and relevant AI governance frameworks.

Appropriate rules will vary from industry to industry. “If you’re trying to predict myeloma in children, people are probably going to allow you more leeway in your approach because what you are trying to solve for is so important,” said Livingstone. There could, for example, be a case for taking a more relaxed stance on data privacy.

CFA Institute last year laid out a decision framework suited to the development of responsible AI applications in investment management.

The framework covers three distinct steps: obtaining input data; building, training and evaluating the model; and deploying the model and monitoring it. (See Figure 3.)

Firms also need to formulate and adopt a broader framework to manage the risks and opportunities brought about by AI, encompassing organizational culture, risk management, skills, and competency. This begins with leadership establishing a clear vision for the development and use of AI in the firm’s business model, and entails establishing accountability within a risk management framework and providing relevant business units with sufficient knowledge, skills and capabilities in AI and data science. 

CFA Institute’s research also identified organization-level requisites for AI to be successfully used in a variety of applications, including investment analysis, portfolio management, risk management, trading, automated advice, and client onboarding.

“Instilling an ethical decision framework in AI-driven investment processes is critical to ensure applications serve the best interests of clients. Given the complexity of AI projects, senior leadership must establish a strategic vision and ethical culture for AI development within the organization,” said Rhodri Preece, Senior Head of Research at CFA Institute, at the framework’s launch.

“While the use of AI in investment management is still relatively formative, it is appropriate that we examine the ethical aspects of AI implementation to guide future developments responsibly,” Preece added.  

Fiduciary Safeguards

Individual CFA charterholders are also obliged to ensure their use of AI in the investment process complies with the CFA Institute Code of Ethics and Standards of Professional Conduct. “You're constantly studying and reaffirming those,” said Julia Bonafede, Co-Founder of Rosetta Analytics.

The relevant principles and provisions include individual professionalism; integrity of the capital markets; duties to clients and to employers; investment analysis, recommendations, and actions; and conflicts of interest.

For instance, under Standard II: Integrity of Capital Markets, investment professionals must ensure that the data sourced for and processed by AI tools does not come from material non-public information. Standard II also calls for the periodic testing of AI models to ensure that trading decisions do not lead to market distortions or other outcomes that could be construed as manipulative. And Standard III: Duties to Clients requires respecting the confidentiality of clients’ data as well as disclosing how AI is incorporated into the investment process.

In addition to fulfilling professional responsibility, adhering to these standards can help establish trust among clients in both the technology and the overall efficacy of the investment approach. The alternative is “not making it clear what you stand for,” said Livingstone.

A Hard Path to Tread

Of course, adhering to the principles of data integrity, accuracy, transparency, interpretability, and accountability will be challenging in this fast-evolving area. Any number of subtle missteps could compromise data integrity and accuracy, which could lead to financial losses and magnify risks.

Interpretability is an especially hot topic. “There are many clients who can’t invest without a high level of explainability,” said Richard Fernand, CFA, Head, Certificate Management at CFA Institute. 
The trouble is, “it’s not really possible to pinpoint the decision process for a neural network,” said Bonafede.

Although Shapley values and other linear attribution models can be used to gauge the contributions of various factors to a predicted outcome, “they’re not perfect frameworks,” added Bonafede. (See Figure 2.) In attempting to force an AI model into a more linear and explainable framework, “you potentially water down its success.”
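To make the idea of Shapley-based attribution concrete, here is a minimal sketch of an exact Shapley-value computation on a hypothetical three-factor stock model. The factor names, contribution sizes, and interaction term are illustrative assumptions, not real data or any firm’s actual model; the interaction term is included to show why a factor’s attribution is not simply its standalone contribution.

```python
from itertools import combinations
from math import factorial

# Hypothetical three-factor model of a stock's predicted return.
FEATURES = ["momentum", "value", "quality"]

def model(active):
    """Toy prediction given a set of 'active' features (all numbers illustrative)."""
    score = 0.0
    if "momentum" in active:
        score += 0.020
    if "value" in active:
        score += 0.010
    if "momentum" in active and "value" in active:
        score += 0.005  # interaction between momentum and value
    if "quality" in active:
        score += 0.008
    return score

def shapley_values(features, f):
    """Exact Shapley values: weighted average of each feature's
    marginal contribution over all subsets of the other features."""
    n = len(features)
    phi = {}
    for i in features:
        others = [j for j in features if j != i]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                s = set(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (f(s | {i}) - f(s))
        phi[i] = total
    return phi

phi = shapley_values(FEATURES, model)
# By construction, the attributions sum to f(all features) - f(no features),
# with the 0.005 interaction split between momentum and value.
```

Exact computation requires evaluating the model on every subset of features, which grows exponentially; practical tools therefore rely on approximations, one reason such attributions are, as Bonafede notes, “not perfect frameworks.”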

Figure 2: Hypothetical Shapley Values for a Given Stock

Keeping Finance Safe

The dangers of failing to develop and deploy AI responsibly extend well beyond the liability of individual firms: irresponsible use could magnify market volatility and instability, warned Gary Gensler, chair of the US Securities and Exchange Commission.

Gensler has called on regulators to prevent the accelerating adoption of AI from jeopardizing financial stability, and has proposed rules to govern its use. Much of it comes down to protecting consumers, which, in turn, depends largely on the objectives assigned to a given AI system, according to Gensler. 

For instance, “When AI plays you in chess, do you not think they have an intention to beat you?” he asked. The machine itself has no such intention; rather, the objective to win was assigned by a human. Cognizant of this, Gensler’s proposed rules include stipulating that firms’ use of AI to boost their profits should not come at the expense of their clients.

Of course, acting against a client’s interests runs counter to the fiduciary duty of a money manager – as enshrined in the CFA Institute Code of Ethics and Standards of Professional Conduct. Given the growing trend for regulators to adopt a “same activity, same risk” approach to technologies disrupting the financial sector, a similar stipulation is likely to be incorporated in future laws governing the development and use of AI within the investment industry.

By proactively embracing credible and effective AI governance, firms can not only ensure that they are ready for future regulation but, by building trust and curtailing risk, also gain a considerable competitive advantage.

But it will not be enough to rely on frameworks alone. Because the technology is evolving so quickly, there will inevitably be gaps between new developments and rules to account for them. It is therefore imperative that “every individual should be responsible for their own behavior,” said Livingstone. “And not say ‘I can do what I like until someone tells me I can’t.’ That’s not the sort of world anyone wants to be in.”

Figure 3: CFA Institute Ethical Decision Framework for AI

