How does the DCM approach relate to generative AI models?

An analogy comparing DCM vs. PL vs. generative AI (ANN)

when posed with the question “Why did a certain behaviour (e.g., an activity or an error) happen?”

Glass Box (DCM)

From a business perspective, DCM is like a glass box thanks to semantic transparency. If you ask “why,” you see the answer directly—in business terms.

These terms correspond closely to how a human would think about the matter, because DCM is grounded in evidence from cognitive sciences that study this very process in humans.

Moreover, since DCM models are multi-paneled—capturing multiple levels of reality—there will be several answers, one per level, much like several Aristotelian causes.

Every step in the behavior of DCM-driven code can be traced back to all of its causes—both retrospectively and prospectively when anticipating future actions.
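As a rough illustration of what such a glass-box trace could look like (the names and structure below are hypothetical, chosen only for this sketch, and are not DCM’s actual constructs), each step can carry its causes with it, one per level, so that “why” is answered by reading the model rather than by reverse-engineering it:

```python
# Illustrative analogy only; hypothetical names, not DCM's actual representation.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Step:
    action: str
    causes: dict[str, str] = field(default_factory=dict)  # level -> answer to "why"

trace = [
    Step("reject_order", causes={
        "immediate": "credit_check returned 'insufficient'",
        "business_rule": "orders above the credit limit must be rejected",
        "purpose": "limit exposure to bad debt",
    }),
]

def why(step: Step) -> dict[str, str]:
    # Retrospective "why": the answers are read off the model itself,
    # one per level, phrased in business terms.
    return step.causes

print(why(trace[-1])["business_rule"])
```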

Black Box (PL)

The traditional approach to coding business applications with programming languages produces a black box. If you ask “why,” you must build special mechanisms to inspect what’s happening inside and report back the status of the technical machinery.

That kind of answer corresponds to the Aristotelian efficient (immediate) cause.

Anything above this low technical layer has to be reconstructed in an ad hoc manner.

Even at this technical level, you can trace the behavior of your code only if that code implements explicit logging of that behavior.
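A minimal sketch of that limitation, assuming an ordinary business function written in a general-purpose language (Python here, purely for illustration): the only “why” available afterwards is whatever the programmer chose to log, expressed in technical terms.

```python
# Illustrative sketch: a conventional "black box" function can answer "why"
# only if the programmer adds explicit logging, and even then the answer is
# phrased in technical terms (values, branches), not business terms.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orders")

def approve_order(amount: float, credit_limit: float) -> bool:
    if amount > credit_limit:
        # Without this hand-written line, the decision would leave no trace at all.
        log.info("rejected: amount %.2f exceeds credit_limit %.2f", amount, credit_limit)
        return False
    log.info("approved: amount %.2f within credit_limit %.2f", amount, credit_limit)
    return True

approve_order(1200.0, 1000.0)
```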

There is no way to trace prospective behavior—you don’t know what will happen until the code is actually executed, since it is a purely reactive system.

Black Hole (ANN)

You cannot meaningfully ask “why,” because the same ANN will treat your question as an independent query and fabricate a plausible answer instead of revealing the actual way the original answer was generated. An ANN is unable to reflectively track its own inner workings and report on them.

Furthermore, even if it could, patterns in the so-called abstract space of ANN weights do not necessarily correspond to those formed by the human brain, and the same answers could have been produced through a very different generative process—one that might appear completely alien or illogical to a human.

If something ever goes wrong, you might not be able to trace it to a root cause: not only is there no single element in the ANN that can be pointed to, but the behavior also cannot be traced back to reveal or simulate why the system decided one way and not another.

DCM is deterministic: it describes effects in terms of their causes (diachronic, deep).

By contrast, LLMs are statistical: they simulate effects without knowing or modeling their actual causes (synchronic, surface-level).
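A toy contrast may make the distinction concrete (neither snippet reflects either system’s real mechanics): a causal rule derives the effect from its cause, while a statistical imitation only reproduces the frequency of observed effects.

```python
# Toy contrast, not either system's real mechanics.
import random

def causal_model(rain: bool) -> str:
    # The effect is derived from an explicit cause; the "why" is the rule itself.
    return "wet_ground" if rain else "dry_ground"

def statistical_model() -> str:
    # Reproduces the observed frequency of effects without representing the
    # cause at all; asking it "why" has no well-defined answer.
    return random.choices(["wet_ground", "dry_ground"], weights=[0.3, 0.7])[0]

print(causal_model(rain=True))  # always "wet_ground", traceable to the rule
print(statistical_model())      # plausible, but cause-free
```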

LLMs draw content from extremely large datasets and are designed to provide a plausible answer to almost any question.

By contrast, DCM is focused on a very small set of constructs and operations with high inferential potential, and the content must be explicitly provided by the users.

Most of the prime elements of DCM are absent from LLMs as such: an explicit space–time mereotopology, forces, the separation between the content pane and the information pane, and so on. All of these are directly understandable by humans because they are modeled after human cognition in the first place.

LLMs, by contrast, operate in their own “abstract space,” which is uninterpretable and has no direct relation to the space–time topology of the world humans inhabit.

For an LLM to solve the same kinds of problems DCM addresses, it must perform orders of magnitude more work (both at design time and at runtime) because it does not start at the right level—it lacks the cognitive layer (the level between natural language and the neural substrate).

A true AGI, if and when it emerges, would need to include these aspects of DCM at the ground level.

Another path would be to build ANN layers on top of the DCM level.

Same language, different understanding

One consequence of the way LLMs are built and trained as artificial neural networks is that you cannot make direct adjustments to the content they generate. Instead, you can only issue instructions to the model, and any changes to the output occur indirectly—through the model’s own interpretation of those instructions as well as its interpretation of its own output. This interpretation is mediated by its internal weight space, which is not directly interpretable by humans.

By contrast, with DCM you work with the very same structures that DCM itself operates on. This shared formal language guarantees full control over adjustments to the generated output at any level of granularity. While some modifications may lead to an inconsistent construal, such inconsistencies are explicitly detected and flagged by the DCM compiler.
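A hedged sketch of the two routes, using hypothetical names that stand in for both the LLM API and the DCM structures (neither is shown here as its real interface): the indirect route passes the change through the model’s opaque interpretation, while the direct route edits the very structure the system operates on and can be checked for consistency.

```python
# Hypothetical names throughout; this is not DCM's API or any particular LLM SDK.

# Indirect route: the change passes through the model's opaque interpretation.
prompt_v2 = "Rewrite the previous answer, but make the deadline 30 days."
# response = some_llm.generate(prompt_v2)  # outcome depends on uninterpretable weights

# Direct route: edit the very structure the system itself operates on.
construal = {"obligation": "deliver_goods", "deadline_days": 14}
construal["deadline_days"] = 30  # the adjustment is exactly what was written

def check_consistency(construal: dict) -> list:
    # Stands in for the compiler-level flagging described above;
    # real consistency checks would be far richer than this.
    errors = []
    if construal["deadline_days"] <= 0:
        errors.append("deadline must be positive")
    return errors

print(check_consistency(construal))  # [] means the adjusted construal is consistent
```

The check here is trivial; the point is only that the adjustment is expressed in the same formal language the system evaluates, so nothing is lost to interpretation along the way.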

It is important to note that LLMs interpret natural language and images in ways that can differ dramatically from human interpretation. Even if both a human and an LLM use the same natural language as a medium of instruction, there is no guarantee that they share the same understanding of its meaning. The mere use of a common syntactic form does not ensure convergence of interpretation at the semantic level.

The crucial point is that LLMs are not living organisms: they lack agency, do not occupy space, have no intrinsic sense of time, cannot act in the world, and do not experience the consequences of their actions. As a result, there is no fundamental layer of shared understanding between humans and LLMs—only a superficial overlap at the level of linguistic syntax.

What is nowadays called “AI” is essentially an LLM built on ANNs that perform statistical “pattern completion on steroids.”

These systems do not act or reason as organisms do. They are not cognitive in this sense. Instead, they rely on brute force: first building a massive set of token-based patterns from extremely large datasets (text or images). These datasets are heavily preprocessed (e.g., data cleaning) and then used for “pattern completion” when handling token-based queries.

This is somewhat like training, in a laboratory, a structurally primitive but enormously large, artificial, disembodied “brain” whose only access to the world is through tokens spoon-fed to it. This brain is then asked questions (in the form of text tokens) and generates replies (again, only in text tokens).
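A deliberately crude stand-in for that “pattern completion on steroids”: a bigram table built from a tiny corpus continues a prompt by following observed token statistics. Real LLMs learn weights over vast corpora, but the principle of continuing the pattern rather than modeling its causes is the same.

```python
# Crude caricature of token-based pattern completion: a bigram table over a
# tiny corpus, continuing a prompt purely from observed co-occurrence counts.
from collections import Counter, defaultdict
import random

corpus = "the order was rejected because the credit limit was exceeded".split()

bigrams = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    bigrams[current][nxt] += 1

def complete(token: str, steps: int = 5) -> str:
    out = [token]
    for _ in range(steps):
        followers = bigrams.get(out[-1])
        if not followers:
            break
        # Continue the observed pattern; there is no model of why these tokens co-occur.
        out.append(random.choices(list(followers), weights=followers.values())[0])
    return " ".join(out)

print(complete("the"))
```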

DCM does none of that.

DCM is not based on ANNs, but on a model of cognition—more traditionally, a model of reasoning. Human reasoning is underpinned by neural networks in the brain, but when writing DCM “code,” we do not need to descend to that level, just as we do not descend to the hardware level of transistor voltages when writing in programming languages.

DCM sits between the ANN level and the natural language level—the very cognitive level that LLMs skip. LLMs take shortcuts: generating outputs directly from ANNs with uninterpretable internals.
LLMs are built to answer just about any query in natural language.
In comparison, DCM targets a very limited domain and does not operate at the level of natural language, but rather on a much smaller underlying (sub-language) model of cognition.