Occasion

The Cologne Local Court (Amtsgericht Köln) has ruled, in case no. 120 C 137/19 1, on the basis of §§ 280 and 288 BGB:

“The lawyer’s business fee is triggered when a reminder letter generated by an algorithm is issued.”

The installed algorithm “calculates” whether a claim exists and generates a claim letter to the airline. In the court’s opinion, the algorithm has thus provided exactly the same service that a lawyer would provide in an oral consultation and, subsequently, by drafting a letter of claim.
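The judgment does not document how the system at issue actually works. Functionally, however, such a claim check can be pictured as nothing more than a formal rule match followed by template filling. The sketch below is hypothetical: the thresholds are a simplified rendering of Regulation (EC) No 261/2004 on flight delay compensation (the type of claim apparently at stake here), and the data structure and function names are invented for illustration.

```python
from dataclasses import dataclass

# Hypothetical sketch of a rule-based claim check, loosely modelled on
# Regulation (EC) No 261/2004 (flight delay compensation); simplified.

@dataclass
class Flight:
    distance_km: float
    delay_hours: float
    extraordinary_circumstances: bool  # e.g. severe weather, as reported

def compensation_eur(flight: Flight) -> int:
    """Return the compensation amount, or 0 if no claim exists.

    The check is purely syntactic: it compares numbers against fixed
    thresholds. It does not understand what a "delay" or a "claim" is,
    and it cannot ask whether this is the right question to begin with.
    """
    if flight.extraordinary_circumstances or flight.delay_hours < 3:
        return 0
    if flight.distance_km <= 1500:
        return 250
    if flight.distance_km <= 3500:
        return 400
    return 600

def claim_letter(flight: Flight, passenger: str, airline: str) -> str:
    """Fill a fixed text template with the computed amount."""
    amount = compensation_eur(flight)
    if amount == 0:
        return ""
    return (
        f"Dear {airline},\n\n"
        f"on behalf of {passenger} we claim EUR {amount} under "
        f"Regulation (EC) No 261/2004 for a delay of "
        f"{flight.delay_hours:.1f} hours.\n"
    )

if __name__ == "__main__":
    print(claim_letter(Flight(2200, 4.5, False), "Jane Doe", "Example Air"))
```

Everything such a program does is visible in those few lines: matching formal conditions and assembling text. Nothing in it corresponds to a lawyer’s initial assessment of the situation.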

The decisive passage of the judgment reads (emphasis PE):

“The business fee is incurred for the operation of the business, including information, and for participation in the drafting of a contract […]. This undoubtedly includes the legal examination and advice on the existence of claims and the drafting of a letter of claim. _Whether this is done through verbal discussion with the lawyer who checks the claim in his head, or the use of an algorithm previously programmed and tested by a lawyer, is not decisive from the point of view of the court._”

The court thus equates a lawyer’s participation in this work with the functionally and, in its result, equivalent performance of a computer algorithm. There are two errors of reasoning here, which I would like to call the ontological blind spot of legal tech:

An algorithmically driven system (artificial intelligence, machine learning, “legal tech”) works purely syntactically; it lacks any context-related social competence.

A lawyer has the (at least moral) obligation to make a diagnosis 2 at the outset and to check whether s*he is competent in the matter and whether s*he can genuinely help the potential client with his*her expertise. A computer cannot do this in principle: since it has no problems that could endanger its own survival, it lacks any intelligence whatsoever, and it can never be upgraded to achieve this.

So even if the prerequisites for “taking action” by applying the algorithm happen to be met here, this does not change the fact that the algorithm itself cannot check them. For this reason alone it cannot be equated with a human actor, even if its result is correct “just by chance”, or correct in 30% or 75% of cases.

A lawyer who simply processed cases without taking responsibility for them and without being able to engage with their substance would be plainly incompetent. This is exactly what the computer does as a matter of course.

Solving a case

The court argues that, once jurisdiction and the initial examination are given, it does not matter whether an algorithm or a human being solves the case according to the applicable rules.

This, too, is wrong: a machine can only formally imitate the rules; it cannot apply them with regard to the content of the arguments at stake. The formally documented and documentable imitation of steps and rules has nothing in common with a human being’s concrete, content-related execution of thought. Here, too, the result is often the same or similar – but only in the best case and in standard cases.

A computer can only guess.

The problem is that we cannot know in advance whether the result will be correct. It is a coincidence qualified by machine learning, but it remains random: the computer merely guesses; it is incapable of reasoning. The flaw in the court’s argumentation is that it retrospectively justifies the irrelevance of the substrate of thought by the correctness of the solution reached – a correctness that is by no means guaranteed in the individual case.
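The contrast can be made concrete with a second, equally hypothetical sketch. Here a statistical model (scikit-learn’s LogisticRegression, standing in for whatever learning method a provider might use) is trained on synthetic outcomes of past cases; the feature names and data are invented. The model returns a probability that a claim exists – and nothing else.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical sketch (invented features, synthetic data): a model trained
# on the outcomes of past cases estimates whether a claim exists.
rng = np.random.default_rng(0)
n = 200
delay_hours = rng.uniform(0, 10, n)
distance_1000km = rng.uniform(0.1, 4.0, n)
extraordinary = rng.integers(0, 2, n)  # airline cited extraordinary circumstances

X = np.column_stack([delay_hours, distance_1000km, extraordinary])
# Synthetic "ground truth" for training: the claim succeeded if the delay
# was at least 3 hours and no extraordinary circumstances were cited.
y = ((delay_hours >= 3) & (extraordinary == 0)).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# A new case: 4.5 hours delay, 2,200 km, no extraordinary circumstances.
p = model.predict_proba(np.array([[4.5, 2.2, 0]]))[0, 1]

# The model returns only a score. It cites no norm, gives no reasons, and
# cannot be asked to justify the number; whether it is right in this
# individual case remains a statistical matter, not a legal judgment.
print(f"estimated probability that the claim succeeds: {p:.2f}")
```

Even when the score is high and the prediction happens to be right, no reasoning has taken place; the number cannot be interrogated, only accepted or rejected.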

For reasons of the rule of law, legal action must rest on content that is comprehensible and on a thought process that can be justified. The functionalism that shows through here, resting solely on statistical probabilities, can never hold a candle to a legally qualified approach.

Résumé

The court is presumably unaware of what ultimately ideologically deluded ideas (so-called “digitalism” 3) it is promoting here, and of how it is thereby contributing to the self-abolition of jurisprudence.

Independence from the “substrate of thought” was already assumed, wrongly 4, by Turing in his famous Turing test: just because humans create and run machine-learning (AI) software that generates something one might believe to have come from a human does not mean that this software is intelligent, or that it has performed the same achievement a human would have. What is completely missing is the inner side of thought: the comprehension of legal rules of thought according to their propositional content and, connected with this, the ability to justify them at any time. What is completely missing is the intentional capacity to evaluate legal contexts correctly and comprehensibly and to act accordingly.

The judiciary prepares its own abolition with such arguments.

There is nothing to be said against using such solutions to reduce costs under conditions that are transparently defined in advance, as long as their results remain verifiable by humans. However, it is fundamentally wrong to value this performance financially in exactly the same way, as the court does here. The judges are preparing the ground for being replaced by machines and for the law as we have known it to be abolished without further ado. Without a situational diagnosis, without semantic thinking, and without the possibility of justification, nothing remains of the legal system and its basic democratic values. The court thus needlessly ascribes to the algorithm something it cannot achieve (anthropomorphism) and thereby supplies the arguments for its own abolition. This can only be described as creepy 5.

Post Scriptum

References


  1. Amtsgericht Köln, 120 C 137/19 ↩︎

  2. The working definition of “legal diagnosis” used here, as a situational initial examination: an assessment of the legal starting situation of a natural or legal person and of the underlying objectives. Before a course of action is prescribed, a diagnosis should be made; lawyers have the moral obligation to make one before prescribing action and adopting legal acts, and failing to do so would be incompetent. See also self-diagnosis. Such a diagnosis can only be performed by competent people and cannot be automated in the same intentional way. It corresponds to the pragmatic, context-dependent and socially anchored interpretation of situations. ↩︎

  3. Horx, Matthias (2019): Das postdigitale Zeitalter, Zukunftsinstitut ↩︎

  4. Gabriel, Markus (2018): KI als Denkmodell, Petersberger Gespräche - Villa Hammerschmitt ↩︎

  5. Mara, Martina (2016): Die Anthropomorphismus-Falle, Zukunftsinstitut ↩︎