The SkyTeam Aerospace Foundation



SAF Technical Reports on AI

The SAF Technical Reports series provides structured, scientifically grounded analyses of contemporary machine learning systems and associated computational practices. The objective of the series is to formalize foundational principles that are absent or insufficiently defined in current AI development practice.

These reports examine machine learning technologies through the lens of established scientific disciplines, including information theory, physics, systems engineering, and computational theory. Each document identifies gaps between current industry practice and the underlying scientific requirements necessary for stable, interpretable, and measurable system behavior.

Areas of focus include:

  • Model behavior characterization
    Formal analysis of learning dynamics, stability considerations, failure signatures, and generalization behavior.

  • Evaluation and measurement frameworks
    Development of reproducible metrics, normalization methods, and cross-model comparison standards.

  • Systems and infrastructure analysis
    Examination of compute architectures, software stacks, memory behavior, energy constraints, and operational reliability.

  • Scientific grounding of ML methodologies
    Clarification of where existing approaches lack theoretical support and articulation of the scientific principles necessary to address these deficiencies.

The SAF Technical Reports series is intended for researchers, engineers, standards bodies, and policy stakeholders seeking objective, technically rigorous documentation of the scientific considerations governing modern AI. Each report is independently authored and peer-reviewable, and is designed to contribute to a clearer scientific foundation for machine learning as an engineering discipline.


  • A Formal Theoretical Critique of the Transformer Architecture

    This report presents a scientific evaluation of the transformer architecture using principles from computation theory, information theory, and thermodynamics. The analysis identifies structural limitations inherent to transformers that restrict their ability to achieve stable reasoning, truth preservation, or scalable intelligence. By examining irreversibility in computation, entropy accumulation, benchmark stagnation, and resource-scaling behavior, the report argues that the architecture encounters fundamental physical and informational constraints that cannot be overcome by increased model size or compute.

  • A Formal Psychological Critique of the Transformer Architecture

    This paper analyzes how linguistic fluency in transformer models has been misinterpreted as evidence of intelligence. Drawing on the history of the ELIZA Effect and on foundational principles from computability, information theory, recursion, and thermodynamics, it shows why transformers cannot support understanding or reasoning despite their impressive surface behavior. The work clarifies how decades of empirical emphasis and cognitive bias enabled this misinterpretation and outlines the requirements future architectures must meet to achieve genuine intelligence.