The Transformer artificial intelligence model is among the most accurate models for extracting meaning, or semantics, from sets of symbolic sequences of varying lengths, including long sequences. These models transform language spaces according to long- and short-distance relationships among the units of the language, and thus mimic some aspects of human comprehension of the world. To frame a generalized theory of the identification and generation of meaning in human thought, the transformer model needs to be understood in the context of generalized systems theory, so that other equivalent models can be discovered, compared, and selected to converge on a base model for the meaning-identification and meaning-discovery aspects of the philosophy of knowledge, or epistemology. This paper relates the transformer model, together with its component parts, processes, and phenomena, to critical aspects of generalized systems theory such as cognition, symmetry and equivalence, holons, emergence, identifiability, system spaces and the system universe, reconstructability, equilibria and oscillations, scaling, polystability, ontogeny, algedonic loops, heterarchy, holarchy, homeorhesis, isomorphism, homeostasis, attractors, equifinality, nesting, parallelization, loops, causal structure, transformations, feedbacks, encodings, and information complexity.
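
The mechanism by which transformers relate long- and short-distance units of a sequence is self-attention. The following is a minimal NumPy sketch of single-head scaled dot-product attention under stated assumptions: the projection matrices, dimensions, and input data are illustrative placeholders, not details taken from the paper.

```python
import numpy as np

def scaled_dot_product_attention(X, Wq, Wk, Wv):
    """Single-head self-attention over a sequence of token embeddings X.

    Each output vector is a weighted mix of every position in the
    sequence, so short- and long-distance relationships contribute
    directly, regardless of how far apart the tokens are.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv           # project tokens into query/key/value spaces
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # pairwise relationship strengths
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over positions
    return weights @ V                          # context-aware representation per token

# Toy example: 5 tokens with embedding dimension 8 (illustrative values only)
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = scaled_dot_product_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 8): one transformed vector per token
```

Each row of the attention-weight matrix can be read as a small system of relationships among sequence units, which is what makes the model amenable to the systems-theoretic framing the paper pursues.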
