When AI trading bots go rogue, think outside the (black) box


This article was first published in the Butterworths Journal of International Banking and Financial Law (July 2024).

ABSTRACT

Chatbots have been dominating the headlines with some wildly entertaining reminders of the importance of understanding AI’s limitations. Trading bots deserve some attention too, not least because a lot of trading in financial markets is automated through software programs which could (and in some cases already do) use AI. We’ve seen cases where deterministic bots have concluded trades in the middle of the night at bizarre prices, or have gone shopping on the dark web and been arrested (confiscated) for doing so. This article swaps out the deterministic bots for AI bots and considers whether conventional legal principles still work.

KEY POINTS

  • The Singapore Court of Appeal decision in Quoine considered how knowledge of a mistake is to be ascertained when a smart contract is entered into between deterministic programs.
  • The analysis is far more complex if AI systems are involved instead.
  • If an AI system acts unpredictably should that be treated as a mistake or is that a risk assumed by the person who deployed it?
  • If an AI system takes advantage of the other party’s mistake, should the inquiry focus on the knowledge of its creators or its own knowledge or the unfair outcome?

INTRODUCTION

Lawyers interested in emerging technologies may all agree on one thing: the Singaporean case Quoine v B2C2 [2020] SGCA(I) 02 almost had it all. Is cryptocurrency property? How is knowledge of a mistake to be ascertained when a smart contract is entered into between deterministic algorithms?

But what if that contract was instead concluded by artificial intelligence (AI) systems? Whilst that question did not arise in Quoine, the court at first instance acknowledged the challenge:

“… the law in relation to the way in which ascertainment of knowledge in cases where computers have replaced human actions is to be determined will, no doubt, develop as legal disputes arise as a result of such actions. This will particularly be the case where the computer in question is creating artificial intelligence and could therefore be said to have a mind of its own.”[1]

The algorithms in Quoine were deterministic: they followed pre-determined rules and therefore did not have minds of their own. This article will consider a counterfactual of Quoine in which AI systems were in play. It will attempt to answer the following question: do conventional legal principles work, or may they need to adapt, when traders hand their affairs to AI systems?

QUOINE: A RECAP

1) The Quoine Deterministic Scenario

The facts in Quoine were complex. This article will use a simplified scenario based on the Law Commission’s version[2] (an illustrative code sketch of the two programs follows the list):

  • Alice deploys a deterministic program on a cryptocurrency exchange platform. The program is coded to purchase the cryptocurrency ETH at the best available price on the platform.
  • Bob also deploys a deterministic program on the platform, which is designed to sell ETH. Bob’s program is coded so that (i) it looks at other offers to sell ETH on the platform to determine the price at which it sells ETH, and (ii) if there are no other offers to sell ETH, then it will offer ETH at an extremely inflated price.
  • A major system error occurs on the platform in the middle of the night, which dramatically reduces the number of offers to sell ETH on the platform. As a result, Bob’s program automatically offers ETH at the extremely inflated price, and Alice’s program automatically accepts that offer, it being the best available price for ETH on the platform.
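The deterministic logic in this scenario can be captured in a minimal, hypothetical sketch (illustrative Python only; the function names, prices and fallback figure are assumptions, not the actual code at issue in Quoine):

```python
# Illustrative sketch only: hypothetical, simplified logic of the two
# deterministic programs (not the actual code at issue in Quoine).

INFLATED_PRICE = 250_000.0  # Bob's hard-coded fallback price (assumed figure)

def bob_offer(other_sell_offers: list[float]) -> float:
    """Bob's program: price its ETH by reference to the other sell offers,
    or fall back to an extremely inflated price if there are none."""
    if other_sell_offers:
        return min(other_sell_offers)  # simplified reference-pricing rule
    return INFLATED_PRICE

def alice_buy(sell_offers: list[float]) -> float:
    """Alice's program: accept the best (lowest) available sell offer."""
    return min(sell_offers)

# Normal conditions: plenty of offers, so Bob prices off the market.
normal_book = [2_000.0, 2_010.0, 2_025.0]
print(alice_buy(normal_book + [bob_offer(normal_book)]))  # 2000.0

# After the platform error: Bob's is effectively the only offer left, so his
# fallback price is also the "best available" price Alice's program accepts.
empty_book: list[float] = []
print(alice_buy(empty_book + [bob_offer(empty_book)]))    # 250000.0
```

The point of the sketch is that neither program malfunctions: each follows its fixed rules, and the extreme price results from those rules interacting with the platform error.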

Many people’s instinctive reaction is that something has clearly gone wrong, and it would be unfair to hold Alice to the contract. The relevant legal doctrine here is unilateral mistake. There are differences between Singaporean and English law but at a general level: if, at the time of entry into a contract, a party was mistaken as to a term of the contract, and the other party knew of this mistake, then the contract is void.[3] A classic example is a seller of goods who mistakenly misquotes the price to a buyer who knows that the seller is mistaken.

However, the Quoine Deterministic Scenario raises two issues for the conventional doctrine of unilateral mistake. First, is Alice’s mistake the right kind of mistake to engage the doctrine? Second, since the contract was entered into automatically in the middle of the night, how can it be said that Bob knew of the mistake at the time of entry into the contract?

2) What is the right kind of mistake?

Under Singaporean law: (i) only a mistake as to a term of the contract will engage the common law doctrine of unilateral mistake (under which the contract will be void) but (ii) there is also an equitable doctrine of unilateral mistake (under which the contract will be voidable) which is not necessarily restricted to mistakes as to terms but may also extend to ‘fundamental mistakes’. Under English law, there is only a common law doctrine of unilateral mistake and it is only engaged when there is a mistake as to a term.

The Singapore Court of Appeal majority in Quoine considered the relevant mistake to be a mistaken belief by Alice that she was buying ETH at a price which did not deviate significantly from its true market price. That was not a mistake as to a term because the price was arrived at by operation of the parties’ algorithms, and the algorithms had operated exactly as they had been programmed to. Instead, Alice had made a mistake about the circumstances under which the contract was concluded. The Court of Appeal majority said it was willing to proceed on the assumption that this type of mistake was sufficient to engage the equitable doctrine of unilateral mistake [paras 115-116].

Mance IJ (dissenting in Quoine) agreed that Alice’s mistake was not a mistake as to a term and so only the equitable doctrine could be relevant. However, he characterised the mistake as follows:  there was a fundamental mistake, in that the exchange platform’s system operated (and led to the purchase of ETH on terms) in a way that was not conceived as possible and would never have been accepted by Alice in the prevailing circumstances [para 183]. Nik Yeo observes that this characterisation of the mistake appears to be referring to a fundamental computer error rather than a mistake actually made by Alice. Mance IJ’s approach therefore focuses more on the parties’ expectations about the functioning of the system than on mistakes as they have conventionally been understood.[4]

Matthew Oliver argues that a mistake made by a trading algorithm is not capable of engaging the doctrine of unilateral mistake.[5] Oliver notes the uncertainty as to whether trades concluded automatically by algorithms are valid contracts where the contracting parties are unaware of the trades at the time. Oliver says that this uncertainty can be overcome by regarding a trading algorithm as a tool used to create an undetermined open offer to contract on whatever terms the algorithm agrees. If Alice’s trading algorithm agrees to a bad offer, that does not engage the doctrine of unilateral mistake because the consent of Alice is not undermined. Oliver’s argument is premised on the idea that the basis for the doctrine of unilateral mistake is vitiated consent. Whilst that may be a basis, it is not necessarily the only one: Mance IJ in Quoine said that the “underlying rationale is not a lack of correspondence between offer and acceptance, but a principle of justice” [para 181].

Since the English law doctrine of unilateral mistake only operates when there is a mistake as to a term, the Singapore Court of Appeal majority’s approach and Mance IJ’s broader approach would not currently be available under English law. The Law Commission’s view was that the scope of unilateral mistake for smart contracts under English law should remain confined to mistakes as to terms because smart contracts should not be treated differently to natural language contracts.[6] However, that raises the question whether the category of relevant mistakes for all types of contract should be expanded under English law. This would likely require the creation of an equitable doctrine of unilateral mistake.

3) What knowledge of the non-mistaken party is necessary?

Under Singaporean law: (i) the common law doctrine requires actual knowledge of the mistake; and (ii) the equitable doctrine requires actual or constructive knowledge and also unconscionability. Under English law, actual or constructive knowledge will be sufficient to engage the common law doctrine.   

The Singapore Court of Appeal majority said that since the contract was entered into by deterministic algorithms, the focus of the inquiry should be the programmer of Bob’s algorithm [para 98]. They formulated the inquiry as follows [para 103]: when programming Bob's algorithm, was the programmer doing so with actual or constructive knowledge of the fact that the extremely inflated offer would only ever be accepted by a party operating under a mistake, and was the programmer acting to take advantage of such a mistake? The time frame for assessing the programmer’s state of mind would extend up to the point that the contract was formed [para 99]. It was found that the programmer did not have a sinister motive in including the inflated offers in Bob’s algorithm and so the test for unilateral mistake was not satisfied. The Law Commission considers that the approach of the Singapore Court of Appeal majority serves as a useful reference point for the English courts when dealing with deterministic programs.[7]

Yeo considers that the approach of the majority may be overly stringent. He suggests modifying the relevant inquiry (in a way possible under English law) such that the fact known by the programmer is not (i) that the inflated offer would only ever be accepted by a party operating under a mistake but instead (ii) that there is a real possibility of the inflated offer being accepted by a party operating under a mistake.[8]

Both of the above approaches are process-focussed as they look at how Bob’s algorithm was created. An alternative approach, taken by Mance IJ (dissenting), is to focus on the outcome. Mance IJ formulated his test as follows: whether any reasonable person knowing of the relevant market circumstances would have known that there was a fundamental mistake [para 200]. Applying that test: any reasonable trader (with knowledge of the contract and the true market price of ETH at the time) would have known that this contract could not be anything other than a consequence of a major system error. Mance IJ did not consider that unconscionability should be a separate requirement in this situation but, if it was, then it could be demonstrated by Bob retaining the benefit of the contract after learning of the mistake [para 205]. Since the relief was equitable, it was also important to consider whether third party interests were involved or if there had been a change of position by Bob [para 195].  Mance IJ’s approach would not currently be available under English law since it allows for consideration of Bob’s state of mind after the contract has been entered into.[9]

Before this article considers the AI counterfactual of Quoine, it is necessary to explain what is meant by AI.

AI: A BRIEF EXPLANATION

1) What is AI?

Almost every article about AI will say that it has no universal definition. This article adopts the functional definition proposed by Jacob Turner: AI is the ability of a non-natural entity to make choices by an evaluative process.[10]

This definition distinguishes deterministic algorithms from AI systems. Deterministic algorithms follow logical decision trees made up of fixed rules, and all of their decisions can be traced back to decisions made by the programmer.[11] It is the programmer rather than the deterministic algorithm that engages in the evaluative process. Machine learning systems, by contrast, can be initially taught certain basic principles or objectives and then gradually adapt and refine their approach. The decisions these systems make may be based on information from a variety of sources, including their own experience.[12] These systems do engage in an evaluative process.
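The distinction can be caricatured in code. In the hypothetical sketch below (the threshold and update rule are invented for illustration), the deterministic function applies a rule fixed by the programmer, whereas the toy "learner" adjusts its own parameter in the light of its trading experience, so its later decisions are not directly traceable to any rule the programmer wrote:

```python
# Caricature of the deterministic / machine-learning distinction
# (hypothetical illustration only; thresholds and update rule are invented).

def deterministic_decision(price: float) -> str:
    # Every branch was written by the programmer in advance, so the output
    # is always traceable to a fixed rule.
    return "buy" if price < 2_000.0 else "hold"

class CrudeLearner:
    """A toy 'evaluative' decision-maker: it keeps a threshold that it
    adjusts in the light of its own trading experience, so later decisions
    are not directly traceable to a rule the programmer wrote."""

    def __init__(self, threshold: float = 2_000.0):
        self.threshold = threshold

    def decide(self, price: float) -> str:
        return "buy" if price < self.threshold else "hold"

    def learn(self, price: float, profitable: bool) -> None:
        # Nudge the threshold towards prices that worked out, and away
        # from prices that did not.
        step = 0.1 * (price - self.threshold)
        self.threshold += step if profitable else -step
```

Real machine learning systems are vastly more complex, but the structural point is the same: the evaluative step sits inside the system rather than with the programmer.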

2) What makes AI different?

Turner identifies two unique features of AI that present unprecedented challenges to existing legal rules.[13] First, AI can make independent moral decisions. Second, AI can develop independently from human input. Both features may make it difficult to link (through causation) the decisions of an AI system to the humans who created or operated it.

Furthermore, many AI systems are black boxes. Because of the way these systems develop and become more complex over time, it may be impossible (even for the creators) to explain what led to a particular output.[14]

3) Use of AI in algorithmic trading 

A huge portion of trading in financial markets is done through algorithms. For example, in 2022 the share of trading in the spot FX market that involved algorithms was 75%.[15]  This is what makes decisions like Quoine potentially very important. 

Algorithmic trading can make use of AI. For ‘black box trading systems’, designers set the objectives but the system itself then autonomously determines the best way to achieve them.[16] Certain hedge funds have been making use of AI in algorithmic trading for a long time, with systems that decide not only what to pick but also why to pick it.[17] For example, Castle Ridge Asset Management uses W.A.L.L.A.C.E. (a proprietary AI technology) to “create, maintain and evolve investment portfolios”.[18] A recent case in England (which settled before trial) related to the disappointing performance of an AI trading fund which made use of a supercomputer.[19] There are also crypto trading bots powered by AI which automate transactions, predict market trends and analyse market sentiment.[20]

It is also possible to use generative AI to create trading algorithms. The Commodity Futures Trading Commission in the USA has issued warnings about claims by scammers that AI-created algorithms can generate huge returns.[21]

QUOINE REIMAGINED

1) The Quoine AI Scenario

Imagine a counterfactual of Quoine in which Alice and Bob both use AI systems (also known as AI trading bots). The AI trading bots have not been explicitly programmed to act in a certain way. Instead, they trade based on strategies they have developed and refined over time. A major system error occurs on the platform in the middle of the night, and Bob’s AI bot sells Alice’s AI bot ETH at an extremely inflated price.
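To make the contrast with the deterministic sketch above concrete, the following hypothetical illustration shows a quoting bot whose price comes out of a learned model rather than a programmer-written fallback rule (the feature names and update rule are assumptions, not any real system):

```python
# Hypothetical sketch of an 'AI' quoting bot (illustrative only; the
# feature names and update rule are assumptions, not any real system).

from dataclasses import dataclass, field

@dataclass
class LearnedQuoter:
    # Model parameters shaped by past trading experience, not hand-coded.
    weights: list[float] = field(default_factory=lambda: [1.0, 0.0, 0.0])

    def quote(self, features: list[float]) -> float:
        # Price is a function of market features (e.g. mid-price, order book
        # depth, volatility) weighted by what the bot has learned.
        return sum(w * f for w, f in zip(self.weights, features))

    def update(self, features: list[float], realised_pnl: float) -> None:
        # Crude online update: reinforce whatever combination of features
        # preceded a profitable trade.
        self.weights = [w + 0.01 * realised_pnl * f
                        for w, f in zip(self.weights, features)]
```

In a sketch like this, nothing in the code tells the bot to quote an extreme price when the order book empties; if it does so, that behaviour has emerged from what the model has learned, which is precisely what makes the legal analysis below harder.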

2) What is the right kind of mistake?

In the Quoine Deterministic Scenario, the Singapore Court of Appeal majority held that there was no mistake as to a term of the contract because Alice’s algorithm had operated exactly as it had been programmed to act. However, that approach may not be applicable in the Quoine AI Scenario since AI systems can independently develop and make their own decisions:

  • If Alice’s AI bot operated in accordance with the objectives formally set by Alice, then it could be argued that the reasoning in Quoine applies and there was no mistake as to a term.
  • Conversely, if Alice’s AI bot does not operate in accordance with Alice’s formally set objectives or expectations – e.g. by adopting unexpected methods to achieve Alice’s objectives or by ignoring them altogether – then Alice may have an argument that there is a mistake as to the terms of the contract because she did not consent to her bot acting in that way and entering into those terms.

Lord Hodge in a 2023 lecture suggested a possible answer to the latter scenario: “Should the law say that those who willingly use computers with machine learning to effect their transactions are to be taken as intending to be contractually bound by the deals which those autonomous machines make?”[22] Similarly, Oliver argues that the consent of Alice (who made an undetermined open offer to contract on whatever terms her AI trading bot agrees) is not undermined. Alice therefore assumes the risk of her AI bot operating unpredictably. An AI bot might be treated as an agent instead of an offer-making tool, but there is disagreement as to whether that is possible since AI does not (yet) have legal personality.

Under English law, only a mistake as to a term can engage the unilateral mistake doctrine. However, if an equitable doctrine were to be developed, then other types of mistakes may become relevant.

3) What knowledge of the non-mistaken party is necessary?

Deterministic algorithms have no mind of their own. The same is not necessarily true of AI systems. Some potential approaches to ascertaining knowledge in the Quoine AI Scenario are considered below.

On the approaches of the Singapore Court of Appeal majority and Yeo, the focus is on the mental state of the creators of Bob’s bot. This gives rise to the following difficulties in the Quoine AI Scenario:

  • First, there may be multiple relevant individuals, including the designer(s), trainer(s), deployer(s) and supervisor(s). This problem is not unique to the Quoine AI Scenario since there can also be multiple programmers of a deterministic algorithm. But it would be necessary to refine the test to address this point. This point did not arise in Quoine itself since the deterministic algorithm had been almost exclusively designed by one programmer who was a director of the company that deployed the algorithm.
  • Second, there might not be a human creator. For example, Bob may have asked ChatGPT to assist with the creation of his bot. That then raises the question of how one assesses the state of mind of an AI system.
  • Third, the ability to offer an extremely inflated price may not have been intended by the creators of Bob’s AI bot. Instead, it may have been a result of the independent development of the bot. If one is restricted to only looking at the mental state of creators, then that would be a significant barrier to the operation of unilateral mistake in respect of AI systems.

An alternative (or additional) approach is to investigate the internal processes of Bob’s AI bot to determine whether it had the requisite knowledge. However, this gives rise to further difficulties: 

  • First, as an evidential matter, the black box problem may make it impossible to understand why the AI bot acted in the way it did. This will depend on the AI system that has been used. For example, David Quest KC observes that an AI system might be designed from the outset with various characteristics (including explainability, interpretability, transparency, justifiability and contestability) which might assist a court in a legal dispute about the decision of an AI system.[23]
  • Second, even if the black box problem can be overcome, there is a more fundamental question: what do concepts such as knowledge and unconscionability mean when applied to an AI system? Quest considers that it is almost impossible to see how legal concepts that involve consideration of a subjective state of mind can, on the current state of the law, have any application to a machine.[24] Turner, in considering whether AI could commit a crime, notes that a major difficulty is presented by the general requirement in criminal law that a guilty party must intend to commit the criminal act. Even if AI’s mental state could be measured and ascertained, it may not fall within the human-centric mental states currently recognised by the law; so does that mean defining new mental states for AI?[25]
  • Third, Oliver observes that even if determining the beliefs and intentions of AI systems is theoretically possible, it will be a difficult, unpredictable and expensive process in litigation.[26]

The above difficulties may leave people without legal remedies when AI is in play. Mance IJ’s outcome-focussed approach may offer a solution, i.e. whether a reasonable person with knowledge of what had happened (i.e. the outcome) would have known that there was a fundamental mistake. This approach involves thinking outside the box of established legal doctrine: the focus is not limited to the mental processes of the non-mistaken party leading up to the contract, and it therefore allows for an inquiry outside of the AI black box. This approach may also encourage the creators and operators of AI systems to put in place safeguards to minimise the risk of those systems acting in problematic ways. Whilst Mance IJ’s approach could theoretically be applied to the Quoine AI Scenario and would sidestep the difficulties listed above, it might be objected that expanding the scope of unilateral mistake in this way undermines the certainty of contracts. However, as Mance IJ explained, certainty of contract is important but not everything, and there are cases where justice takes priority [para 184]. The scope of the new doctrine could be limited by providing further clarity on what might amount to a fundamental mistake. Furthermore, the consequences of such a doctrine would also be limited if it were to (as Mance IJ said it must) operate in equity, since the contract would be voidable (not void) [para 183].

CONCLUSION

Yeo described Quoine as heralding “the start of an adaptation and reformulation of those aspects of English civil law which turn on a subjective element …  in light of automatic execution of contracts by algorithms”.[27] AI presents a further and far more fundamental challenge to those aspects of English law. Whilst the answers are far from clear, the importance and urgency of asking ourselves the difficult questions in advance could not be clearer.


[1] B2C2 v Quoine [2019] SGHC(I) 03 at [206].

[2] Law Commission, Smart Legal Contracts Advice to Government (November 2021) at 5.56.

[3] Law Commission (n 2) at 5.49.  

[4] Nik Yeo, ‘Mistakes and knowledge in algorithmic trading: the Singapore Court of Appeal case of Quoine v B2C2’, (2020) 5 JIBFL 300, p 7.

[5] Matthew Oliver, ‘Contracting by artificial intelligence: open offers, unilateral mistakes, and why algorithms are not agents’, (Autumn 2021), Australian National University Journal of Law and Technology.

[6] Law Commission (n 2) at 5.64.

[7] Law Commission (n 2) at 5.74.

[8] Yeo (n 4), p 8.

[9] Yeo (n 4), p 8.

[10] Jacob Turner, Robot Rules (2019), p 16.

[11] Turner (n 10), p 18.

[12] David Quest KC, ‘Robo-advice and artificial intelligence: legal risks and issues’, (2019) 1 JIBFL 6, p 2.  

[13] Turner (n 10), pp 64-78.  

[14] Turner (n 10), p 325.  

[15] https://cib.bnpparibas/algorithmic-trading-in-foreign-exchange-increasingly-sophisticated/

[16] https://www.investopedia.com/terms/a/algorithmictrading.asp

[17] https://www.economist.com/finance-and-economics/2023/03/09/lessons-from-finances-experience-with-artificial-intelligence

[18] https://castleridgemgt.com/meet-w-a-l-l-a-c-e/

[19] CL-2018-000298 

[20] https://www.coinbase.com/en-gb/learn/tips-and-tutorials/how-to-use-ai-for-crypto-trading

[21] https://www.cftc.gov/LearnAndProtect/AdvisoriesAndArticles/AITradingBots.html

[22] Lord Hodge, ‘The Law and AI: where are we going?’, 30 November 2023 (available at  https://www.supremecourt.uk/docs/speech-231130.pdf).

[23] David Quest KC, ‘Artificial intelligence and decision-making in financial services’, (2020) 6 JIBFL 366, p 4.

[24] Quest (n 23), p 5.

[25] Turner (n 10), pp 203-204.

[26] Oliver (n 5), p 79.

[27] Yeo (n 4), p 1.