Thank you, Chair Gensler. The best thing I can say for this proposal is that it serves, perhaps unintentionally, as a mirror reflecting the Commission’s distorted thinking. In that mirror, you will see the Commission’s attitude toward technology, which is not neutral, but hostile. It reflects this Commission’s loss of faith in one of the pillars of our regulatory infrastructure: the power of disclosure and the corresponding belief that informed investors are able to think for themselves. Another glance through the looking glass will reveal the Commission’s continued degradation of a principles-based regulatory regime, replacing it once again with overly prescriptive rules. And a final look reveals the Commission’s indifference to operational feasibility. I dissent from this proposal and the thinking it embodies.
Despite protestations that “[t]he proposal is intended to be technology neutral” and does “not seek[] to identify which technologies a firm should or should not use,”[i] the proposal reflects a hostility toward technology. That antagonism is trained on predictive data analytics (or “PDA”) technologies, “such as [artificial intelligence (AI)], machine learning, or deep learning algorithms, neural networks, [natural language processing (NLP)], or large language models (including generative pre-trained transformers), as well as other technologies that make use of historical or real-time data, lookup tables, or correlation matrices among others.”[ii] To get at those technologies, the rule would define a “covered technology” as “an analytical, technological, or computational function, algorithm, model, correlation matrix, or similar method or process that optimizes for, predicts, guides, forecasts, or directs investment-related behaviors or outcomes of an investor.”[iii] Given that broad language, spreadsheets,[iv] commonly used software, math formulas, statistical tools, and AI trained on all manner of datasets[v] could fall within the ambit of this rulemaking. Once in this category, a technology would be subject to an intense review for conflicts of interest as specially defined for this rule,[vi] which would then have to be eliminated or neutralized. Requiring firms to subject certain types of technologies to a uniquely onerous review and conflict-remediation process is not technology neutral. Let us be honest about what we are doing here: banning technologies we do not like. As the release admits, one consequence of this initiative is that “a firm might opt not to use an automated investment advice technology because of the costs associated with complying with the proposed rules.”[vii] We risk depriving investors of the benefits of technological advancement.
But this release does more than single out particular technologies for regulatory hazing; it also rejects one of our primary regulatory tools—disclosure. If a firm determines that the use (or potential use) of a covered technology involves a conflict of interest, then the firm has to eliminate or neutralize the conflict. Disclosure is not an option. In many ways, the discussion surrounding the inadequacy of disclosure is the most troubling aspect of the proposal. The long-term ramifications of the Commission’s rationale for dismissing the value of disclosure—namely, that disclosure is of no use to investors[viii]—cannot be exaggerated. The release explains that disclosure cannot work because investors are powerless pawns incapable of resisting psychological manipulation by technologies designed to play to their “proclivities.”[ix] The release even hints that certain investors might be particularly vulnerable because of their sex, age, how educated their parents are, and even their height.[x] Is the next step going to be to make investment decisions for investors we deem incapable of making their own decisions? The whole premise of our disclosure regime is that investors can think for themselves.
In addition to rejecting disclosure, this proposal continues the Commission’s practice of layering obligation upon obligation. While some covered technologies may create unique challenges,[xi] advisers are bound by their obligations as fiduciaries, and broker-dealers are bound by Regulation Best Interest and FINRA rules.[xii] The Commission describes this proposed rulemaking as a “supplement . . . to existing regulatory obligations related to conflicts.”[xiii] As we make clear in the release, “[b]roker-dealers and investment advisers are currently subject to extensive obligations under Federal securities laws and regulations . . . that are designed to promote conduct that, among other things, protects investors . . . from conflicts of interest.”[xiv] Under these overarching standards, firms using covered technologies already have to identify and mitigate conflicts of interest. We already have the ability to pursue bad actors. We should be considering issuing guidance or conducting a roundtable to discuss topics such as adaptive AI, but we do not need standalone rules. Today’s proposal joins a growing list of Commission rulemakings that are unnecessary.[xv]
Given that this rule is designed merely to supplement other rules, the Commission’s utter disregard for operational feasibility is inexplicable. For any covered technology, broker-dealers and investment advisers would have to conduct a conflict-identification process that is itself bereft of discernible borders. Eye-wateringly detailed written policies and procedures would have to cover every aspect of the evaluation and assessment of potential conflicts and of how to handle them. The whole process would be capped off by a “review and written documentation of that review . . . of the adequacy of the [firm’s] policies and procedures” that would have to be conducted at least annually.[xvi] In a Through-the-Looking-Glass kind of way, we present these proposed obligations as principles-based, but that characterization melts against the description of our expectations. When establishing their evaluation methods, firms “may adopt an approach that is appropriate for [their] particular use of covered technology, provided”—there always seems to be a “provided” or a “however”—provided that the firm identifies conflicts “associated with how the technology operated in the past . . . and how it could operate once deployed,” as well as, in most instances, “other scenarios that are reasonably foreseeable.”[xvii]
The release offers a break for “a firm that only uses simpler covered technologies in investor interactions, such as basic financial models contained in spreadsheets or simple investment algorithms.” Such a firm “could take simpler steps to evaluate the technology and identify any conflicts of interest.”[xviii] Firms thinking of using “more advanced covered technologies” might have to “build ‘explainability’ features into the technology” to describe why the program reached “a particular outcome, recommendation, or prediction.”[xix] If explainability features are not available, a firm might have to forgo using the technology or modify it to include explainability features and back-end controls.[xx] What firm, large or small, would feel confident that it has a handle on what to expect when Examinations or Enforcement comes knocking?[xxi] Will any but the largest firms have the personnel and resources needed to comply with the proposed evaluation and testing standards? Small firms will have to abandon worthwhile technologies that benefit investors and firms.[xxii] Get out your abacuses, I guess.
I hope that, just as Alice did, we will wake up from this dream and find ourselves back on the other side of the looking glass. In the meantime, however, I am eager to hear what commenters say about the proposal and to see our reevaluation of the rule in light of those comments. As is always the case, my inability to vote in favor of a rulemaking should not be taken as a reflection of my views of the Commission staff. I maintain a deep appreciation for how hard they work under vexing conditions. Deadlines are more demanding than ever, and the marching orders are even more challenging to implement, but the staff’s dedication and talent continue to shine. A special shoutout to Sirimal Mukerjee and Blair Burnett. I do have some questions:
- The definition of “covered technology” is quite broad. Do you mean to encompass Excel spreadsheets, for example, and mathematical formulas used to price securities?
- The rule claims to be technology neutral—and maybe it is, because the definition of “covered technology” is so broad—but tell me how I am wrong to think that we are creating an especially harsh rule for particular types of technology.
- The release suggests that a non-disclosure approach is warranted here, saying that “due to the scalability of these technologies and the potential for firms to reach a broad audience at a rapid speed … any resulting conflicts of interest could cause harm to investors in a more pronounced fashion and on a broader scale than previously possible.”[xxiii] Other technologies have likewise facilitated firms’ rapid expansion. Why are covered technologies being singled out?
- The release posits a situation in which “one conflicted factor among thousands in the algorithm or data set upon which a technology is based may, or may not, cause the covered technology to produce a result that places the interests of the firm ahead of the interests of investors, and the effect of considering that factor may not be immediately apparent without testing.”[xxiv] How could a firm get comfortable that it had done enough testing to spot that one conflicted factor in an algorithm or data set?
- The release seems to reject disclosure as an ineffective tool given people’s inability to resist technological prompts designed to play into their unique psychological make-up. What are the limits of the argument that people, when faced with bespoke technological prompts, cannot think for themselves? In what other areas will regulation have to change to accommodate people’s inability to withstand technological nudges?
- Given the application of this rule to investor interactions, rather than merely recommendations, do we have the authority to apply it to broker-dealers? Is it a backdoor attempt to expand Regulation Best Interest?
- The economic analysis says that the proposed rules “could … act as barriers to entry or create economies of scale, potentially making it challenging for smaller firms to compete.” Why isn’t that “could” a “would”? It seems inevitable that a rule like this will prevent small firms from using technology that would enable them to serve their clients and compete with larger rivals.
- What length would the compliance period be for the rule if it were to be adopted?
- The release includes a helpful table identifying the direct costs of the proposed rules. The estimate for firms with simple covered technologies is 25 hours initially and 12.5 hours annually thereafter; for firms with complex covered technologies, it is 350 hours initially and 175 hours annually thereafter. I found it hard to reconcile those estimates with the complexity of the processes the release describes. Can you provide me a window into how you arrived at those numbers by describing what a firm using only simple covered technologies would look like and what it would have to do upon adoption of the rule as proposed?
- The rule appears to assume that AI is so complex that it needs special rules. Aren’t humans even more complex?
[i] Proposing Release at 43.
[ii] Proposing Release at 47.
[iii] Proposed Rule § 275.211(h)(2)-4(a) (“Covered technology means an analytical, technological, or computational function, algorithm, model, correlation matrix, or similar method or process that optimizes for, predicts, guides, forecasts, or directs investment-related behaviors or outcomes of an investor.”).
[iv] Proposing Release at 48-9 (“Similarly, if a firm utilizes a spreadsheet that implements financial modeling tools or calculations, such as correlation matrices, algorithms, or other computational functions, to reflect historical correlations between economic business cycles and the market returns of certain asset classes in order to optimize asset allocation recommendations to investors, the model contained in that spreadsheet would be a covered technology because the use of such financial modeling tool is directly intended to guide investment-related behavior.”).
[v] See, e.g., Proposing Release at Question 36 (discussing whether technologies “trained on all books in the English language” should be excluded from the rule’s coverage).
[vi] Proposed Rule § 275.211(h)(2)-4(a) (“Conflict of interest exists when an investment adviser uses a covered technology that takes into consideration an interest of the investment adviser, or a natural person who is a person associated with the investment adviser.”).
[vii] Proposing Release at 203 (“For example, a firm might opt not to use an automated investment advice technology because of the costs associated with complying with the proposed rules. In these types of situations, firms would lose the potential revenues that these technologies could have generated, and investors would lose the potential benefits of the use of these technologies. In addition, in the absence of these technologies, firms might raise the costs of their services, thus increasing the costs to investors.”).
[viii] See, e.g., Proposing Release at 161-2 (“A single, large disclosure at the beginning of the firm’s relationship with the investor might be too lengthy to be meaningful or actionable, or not specific enough to be effective, because it would have to capture the full set of conflicts of interest that could evolve dynamically, across investors, through the use of PDA-like technologies, especially if the technology rapidly adjusts in response to prior interactions with an investor.”); Proposing Release at 191 (“The scope and frequency of investor interactions with new technologies and the complex, dynamic nature of those technologies may make it difficult for investors to understand or contextualize disclosures of conflicts of interest to the extent that the investors interact with the technologies, with interfaces or communications which feature outputs of the technologies, or with associated persons who make use of outputs of the technologies.”).
[ix] Proposing Release at 189-90 (“[The features and design of covered technologies increase the risk through the constant presence enabled by automation, design practices which encourage habit formation, and the ability to collect data and individually and automatically tailor interventions to the proclivities of each investor. Elimination, or neutralization of the effect of, a conflict of interest could have greater investor protection benefits than disclosure to the extent that it could be difficult for a firm to accurately determine whether it has designed a disclosure that puts investors in a position to be able to understand the conflict of interest despite these psychological factors.”).
[x] Proposing Release at note 241 (“For example, attitudes toward risk and risk-taking behavior have been found to be meaningfully predicted by sex, age, height, and parental educational achievement.”) (citing Thomas Dohmen et al., Individual Risk Attitudes: Measurement, Determinants, and Behavioral Consequences, 9 J. EUR. ECON. ASS’N 522–550 (June 2011)).
[xi] See, e.g., Proposing Release at 10 (“These issues may render a firm’s identification of such conflicts for purposes of the firm’s compliance with applicable Federal securities laws more challenging without specific efforts both to fully understand the PDA-like technology it is using and to oversee conflicts that are created by or transmitted through its use of such technology.”). (Internal citation marking removed.)
[xii] Proposing Release at note 70.
[xiii] Proposing Release at 66 (“The proposed conflicts rules thus supplement, rather than supplant, existing regulatory obligations related to conflicts of interest, laying out particular steps a firm must take to address conflicts of interest arising specifically from the use of covered technologies in investor interactions.”).
[xiv] Proposing Release at 23 (“Broker-dealers and investment advisers are currently subject to extensive obligations under Federal securities laws and regulations, and, in the case of broker-dealers, rules of self-regulatory organizations, that are designed to promote conduct that, among other things, protects investors, including protecting investors from conflicts of interest.”). (Internal citation marking removed.). See, e.g., Comment Letters from Wealthfront at 1 (Oct. 8, 2021) (“The SEC’s regulation should be technology-neutral, and the current regulatory framework adequately serves the SEC’s mission to protect investors.”) https://www.sec.gov/comments/s7-10-21/s71021-9332518-260240.pdf; Morgan Stanley at 1 (Oct. 1, 2021) (“The Commission’s existing regulatory framework provides protections for retail investors against unjust and fraudulent practices, misleading communications and recommendations not in a client’s best interest.”) https://www.sec.gov/comments/s7-10-21/s71021-9315861-260059.pdf; and CATO at 6 (Oct. 1, 2021) (“The [regulatory] framework already in place by the Commission and FINRA is sufficient to address any issues that are presented by digital engagement.”) https://www.sec.gov/comments/s7-10-21/s71021-9315859-260057.pdf.
[xv] See, e.g., Outsourcing by Investment Advisers, SEC Rel. No. IA-6176 (Oct. 26, 2022) (Proposed); Amendments to Form PF to Require Event Reporting for Large Hedge Fund Advisers and Private Equity Fund Advisers and to Amend Reporting Requirements for Large Private Equity Advisers, SEC Rel. No. IA-6297 (May 3, 2023) (Final).
[xvi] Proposing Release at 45.
[xvii] Proposing Release at 67-8 (“The proposed conflicts rules do not mandate a particular means by which a firm is required to evaluate its particular use or potential use of a covered technology or identify a conflict of interest associated with that use or potential use. Instead, the firm may adopt an approach that is appropriate for its particular use of covered technology, provided that its evaluation approach is sufficient for the firm to identify the conflicts of interest that are associated with how the technology has operated in the past (for example, based on the firm’s experience in testing or based on research the firm conducts into other firms’ experience deploying the technology) and how it could operate once deployed by the firm. If a technology could be used in a variety of different scenarios, the firm should consider those scenarios in which it intends that the technology be used (and for which it is conducting the identification and evaluation process). It should also consider other scenarios that are reasonably foreseeable unless the firm has taken reasonable steps to prevent use of the technology in scenarios it has not approved (for example, by limiting the personnel who are able to access the technology).”). (Emphasis added.)
[xviii] See, e.g., Proposing Release at 68 (“For example, a firm that only uses simpler covered technologies in investor interactions, such as basic financial models contained in spreadsheets or simple investment algorithms, could take simpler steps to evaluate the technology and identify any conflicts of interest, such as requiring a review of the covered technology to confirm whether it weights outcomes based on factors that are favorable for the adviser or broker-dealer, such as the revenue generated by a particular course of action.”).
[xix] Proposing Release at 69-70.
[xx] Proposing Release at 72-3 (“The Commission is aware that some more complex covered technologies lack explainability as to how the technology functions in practice, and how it reaches its conclusions (e.g., a “black box” algorithm where it is unclear exactly what inputs the technology is relying on and how it weights them). The proposed conflicts rules would apply to these covered technologies, and firms would only be able to continue using them where all requirements of the proposed conflicts rules are met, including the requirements of the evaluation, identification, testing, determination, and elimination or neutralization sections. For example, as a practical matter, firms that use such covered technologies likely may not meet the requirements of paragraph (b) of the proposed conflicts rules where they are unable to identify all conflicts of interest associated with the use of such covered technology. However, in such cases, firms may be able to modify these technologies, for example by embedding explainability features into their models and adopting back-end controls (such as limiting the personnel who can use a technology or the use cases in which it could be employed) in a manner that will enable firms to satisfy these requirements.”). (Internal citation marking removed.)
[xxi] Proposing Release at 88 (“One factor among three under consideration by the technology may be highly likely to cause the technology to place the interests of the firm ahead of investors, and the effect of considering that factor may be readily apparent. On the other hand, one conflicted factor among thousands in the algorithm or data set upon which a technology is based may, or may not, cause the covered technology to produce a result that places the interests of the firm ahead of the interests of investors, and the effect of considering that factor may not be immediately apparent without testing (as discussed above).”).
[xxii] See, e.g., Proposing Release at 203-4 (“For example, a firm might opt not to use an automated investment advice technology because of the costs associated with complying with the proposed rules. In these types of situations, firms would lose the potential revenues that these technologies could have generated, and investors would lose the potential benefits of the use of these technologies. In addition, in the absence of these technologies, firms might raise the costs of their services, thus increasing the costs to investors.”). (Emphasis added.)
[xxiii] Proposing Release.
[xxiv] Proposing Release at 88.