Many AI systems are touted as open or transparent, but closer examination shows that most are closed in critical ways. Developers may label models “open,” yet the underlying code, training data, and internal decision processes often remain inaccessible to the public. This gap between label and reality matters because openness affects accountability, fairness, and public trust in AI technologies.
True openness would mean sharing not just the interface for interacting with an AI model, but also the sources of its training data, the architecture and design decisions, and the ways it was tested and evaluated.
In practice, companies frequently withhold these elements for competitive or legal reasons. As a result, users and researchers are left to guess how systems work, why they make certain decisions, and what biases or limitations they might carry.
The lack of genuine transparency can have real-world consequences. When AI systems influence decisions in areas like hiring, lending, health care, or legal outcomes, stakeholders have little visibility into how those decisions are made. Without access to the inner workings of the models, it is difficult to detect hidden biases, correct errors, or assess risks. Independent researchers, who play a key role in evaluating and improving technology, are often unable to scrutinise systems that claim to be open.
This situation also affects public debate about regulation and ethics. Policymakers and advocates struggle to craft effective safeguards when basic information about how AI systems are built and trained remains undisclosed. As AI becomes more integrated into society, the difference between genuinely open systems and superficially open ones will grow in importance.
Ultimately, the article argues that labels like “open” or “closed” need clearer definitions, and that greater transparency is crucial for building AI that is accountable, trustworthy, and aligned with public values. Without that transparency, claims of openness risk being meaningless, and society may miss opportunities to ensure that powerful technologies serve the broadest possible benefit.
