
The Transparency Crisis in AI: What Stanford's 2025 Study Means for Regulated Industries

Happy New Year!


The artificial intelligence industry stands at a critical juncture. Seven major AI companies now account for more than 35% of the S&P 500 by market capitalization. Billions of people rely on foundation models daily for search, content creation, and decision support. Yet a comprehensive new study from Stanford, Berkeley, Princeton, and MIT reveals a troubling reality: the AI industry is becoming less transparent, not more.


The State of AI Transparency: A Declining Trend


The 2025 Foundation Model Transparency Index assessed 13 major AI companies—including Google, Meta, OpenAI, and Anthropic—on a 100-point scale covering critical areas such as training data disclosure, risk mitigation practices, and societal impact. The results should concern any organization in regulated industries considering AI adoption.


The headline finding: Companies scored just 40 points on average, down from 58 points in 2024.


Source: 2025 Foundation Model Transparency Index

This decline isn't uniform. Three distinct clusters emerged, with roughly these average scores:

  • Top performers (IBM, AI21 Labs): ~75 points

  • Middle tier (Google, Anthropic, Cohere): ~35 points

  • Bottom performers (xAI, Midjourney, DeepSeek): ~15 points


IBM achieved the highest score in the Index's history at 95/100, setting an industry precedent by providing enough detail for external researchers to replicate its training data and by granting access to auditors. Meanwhile, xAI and Midjourney scored just 14/100, sharing virtually no information about training data, model risks, or mitigation strategies.


The Transparency-Accountability Relationship


Before examining the study's findings in detail, it's essential to understand a fundamental principle of AI governance: accountability presupposes transparency. Without adequate information about AI systems, meaningful accountability cannot be established.


This relationship is not merely theoretical. When organizations deploy AI systems without understanding their training data, decision-making processes, or limitations, they cannot:


  • Take responsibility for system outputs and errors

  • Implement effective oversight mechanisms

  • Respond appropriately when issues arise

  • Demonstrate due diligence to regulators and stakeholders


However, transparency must be balanced with legitimate interests including intellectual property (IP) protection, security considerations, and privacy requirements. These are real concerns that companies must navigate.


Critically, though, these interests do not mean that the need for transparency can be waived, whether about the data used in training or the principles used to control bias.


There is a meaningful difference between:


  • Protecting proprietary algorithmic innovations or model architectures

  • Withholding fundamental information about training data sources, bias mitigation approaches, or validation methodologies


The former is a legitimate business interest. The latter undermines the possibility of responsible AI deployment.


IBM's 95/100 score demonstrates that companies can achieve high levels of transparency while maintaining competitive advantages. The path to accountability doesn't require disclosing trade secrets—it requires disclosing the information stakeholders need to assess system reliability, fairness, and appropriate use.


Four Critical Areas of Systemic Opacity


The study identifies four domains where the entire industry remains systemically opaque:


1. Training Data

Most companies provide little to no information about:

  • Data sources and provenance

  • Data acquisition methods

  • Rights and licensing

  • Data quality assurance processes


Only IBM provides sufficient detail for external researchers to replicate its training data, making it the sole company to meet this critical transparency threshold.


Why this matters: Training data fundamentally shapes model behavior, capabilities, and biases. Without transparency about data sources and composition, organizations cannot assess whether a model is appropriate for their use case or compliant with data governance requirements.


2. Computational Resources and Environmental Impact

Ten companies disclosed none of the key information about environmental impact, including energy usage, carbon emissions, or water consumption. This opacity is particularly significant as datacenter expansion strains energy grids and contributes to increased energy prices globally.

The companies disclosing zero environmental information: AI21 Labs, Alibaba, Amazon, Anthropic, DeepSeek, Google, Midjourney, Mistral, OpenAI, and xAI.


Why this matters: Environmental impact is increasingly material to corporate sustainability commitments and ESG reporting requirements. Organizations adopting AI systems may be indirectly responsible for environmental impacts they cannot measure or report.


3. Model Usage and Deployment

Companies provide minimal information about how their models are actually being used in the real world, making it nearly impossible for adopters to understand:

  • Common use cases and applications

  • Known limitations in specific contexts

  • Potential misuse patterns

  • Performance across different domains


Why this matters: Understanding real-world deployment patterns helps organizations identify potential risks and appropriate safeguards for their specific use cases.


4. Societal Impact

Across the board, companies share virtually no information about the broader societal implications of their foundation models, including effects on labor markets, information ecosystems, or social dynamics.


Why this matters: Organizations in regulated industries have stakeholder obligations that extend beyond immediate users to broader societal impacts—particularly in sectors like financial services (market stability) and healthcare (health equity).


The Openness Fallacy: Why Open Source ≠ Transparency


One of the study's most important findings challenges a common assumption in the AI community: that open-source models inherently provide greater transparency.


Openness refers to whether model weights are publicly available. Transparency refers to whether a company discloses meaningful information about its practices, processes, and impacts.


While open developers tend to be more transparent on average, this relationship is far from guaranteed. Three of the most influential open model developers—Meta, DeepSeek, and Alibaba—scored among the bottom half of all companies assessed:


  • Meta: 31/100 (down from 60 in 2024)

  • DeepSeek: 26/100

  • Alibaba: 29/100


Meta's decline is particularly striking. The company did not release a technical report for its flagship Llama 4 model, contributing to a dramatic drop in its transparency score. Similarly, Google faced scrutiny from British lawmakers over significant delays in releasing documentation for Gemini 2.5, despite its public commitments.


The lesson: access to model weights does not confer transparency about training methodologies, risk assessments, validation processes, or downstream impacts.


Implications for Regulated Industries


For organizations in financial services, healthcare, and professional services, this transparency crisis creates significant governance challenges:


Compliance Risk

Regulatory frameworks increasingly mandate transparency:

  • EU AI Act: Requires extensive documentation for high-risk AI systems

  • California legislation: Mandates transparency around frontier AI risks

  • Sector-specific regulations: Financial services (model risk management), healthcare (HIPAA, clinical decision support), and other regulated industries have existing transparency requirements that AI systems must meet

Without adequate vendor transparency, organizations cannot demonstrate compliance with these requirements.


Model Risk Management

Effective AI governance requires understanding:

  • System capabilities and limitations

  • Training data characteristics and potential biases

  • Validation and testing methodologies

  • Known failure modes and edge cases


The current state of industry transparency makes comprehensive model risk management nearly impossible for most AI systems.


Ethical Deployment

Responsible AI deployment requires transparency across multiple dimensions:

  • Explainability: Can the system's decisions be explained to stakeholders?

  • Accountability: Are there clear lines of responsibility when issues arise?

  • Fairness: Has the system been tested for discriminatory outcomes?

  • Impact assessment: What are the environmental and societal consequences?

Most AI vendors currently provide insufficient information to answer these questions.


Stakeholder Trust

In regulated industries, trust is paramount. Clients, regulators, and boards increasingly expect organizations to demonstrate:

  • Due diligence in AI vendor selection

  • Understanding of AI system behavior

  • Mitigation strategies for identified risks

  • Ongoing monitoring and oversight


Vendor opacity undermines all of these objectives.


Balancing Transparency with Legitimate Business Interests


The transparency gap cannot be attributed solely to a lack of capability or will. Companies face genuine challenges in balancing disclosure with legitimate business interests, including intellectual property protection, security considerations, and privacy requirements.


However, these interests do not justify the current levels of opacity. Specifically, they do not excuse withholding fundamental information about training data sources or principles used to control bias.


Companies can and should disclose:
  • General categories and sources of training data without revealing specific proprietary datasets

  • Principles and methodologies used to identify and mitigate bias without exposing exact algorithmic implementations

  • Testing and validation approaches without providing attack vectors

  • Known limitations and failure modes without compromising security

  • Environmental impact metrics without revealing datacenter locations


IBM's 95/100 score demonstrates this is feasible. The company disclosed training data composition sufficient for replication while protecting proprietary innovations in model architecture. The question is not whether perfect transparency is possible—it's whether companies are providing the minimum necessary transparency for responsible deployment. Stanford's study suggests most are falling far short of this standard.


The Path Forward: Practical Steps for Regulated Industries


For organizations adopting AI in regulated industries, the transparency crisis creates both challenges and opportunities:

1. Vendor Due Diligence: Develop a transparency scorecard for AI vendors assessing training data documentation, bias mitigation principles, risk assessment disclosure, and environmental impact reporting. Weight these factors according to your regulatory requirements; a minimal sketch of this scoring approach follows the list below.

2. Contractual Requirements: Include transparency provisions in vendor agreements, such as regular model cards, access to testing data, disclosure of training data categories and known biases, notification of updates, and rights to third-party auditing.

3. Internal Documentation Standards: Maintain comprehensive internal records regardless of vendor transparency, including use case documentation, performance monitoring, incident tracking, and ongoing bias assessments.

4. Tiered Deployment Strategies: Match vendor transparency to use case risk, reserving high-risk applications for high-transparency vendors, adding controls for more opaque systems, and restricting deployment when minimum transparency cannot be met.
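
To make steps 1 and 4 concrete, the following is a minimal Python sketch of how a vendor transparency scorecard and a tiered deployment rule might fit together. Every dimension name, weight, and threshold shown here is an illustrative assumption, not a value taken from the Transparency Index, any regulation, or a specific vendor.

# Illustrative sketch only: the dimensions, weights, and thresholds below are
# hypothetical placeholders, not values from the Transparency Index or any regulation.
from dataclasses import dataclass, field


@dataclass
class VendorAssessment:
    name: str
    scores: dict = field(default_factory=dict)  # 0-100 per transparency dimension, set by your review team


# Example weights reflecting how much each dimension matters in a given regulatory context.
WEIGHTS = {
    "training_data_documentation": 0.35,
    "bias_mitigation_principles": 0.25,
    "risk_assessment_disclosure": 0.25,
    "environmental_reporting": 0.15,
}


def weighted_score(vendor: VendorAssessment) -> float:
    """Weighted average of the vendor's dimension scores; missing dimensions count as zero."""
    return sum(weight * vendor.scores.get(dim, 0) for dim, weight in WEIGHTS.items())


def deployment_tier(score: float) -> str:
    """Map an overall transparency score to a deployment posture (thresholds are illustrative)."""
    if score >= 70:
        return "eligible for high-risk use cases"
    if score >= 40:
        return "low-risk use only, with additional controls and monitoring"
    return "restricted: minimum transparency not met"


if __name__ == "__main__":
    vendor = VendorAssessment(
        name="ExampleVendor",  # hypothetical vendor for illustration
        scores={
            "training_data_documentation": 80,
            "bias_mitigation_principles": 60,
            "risk_assessment_disclosure": 55,
            "environmental_reporting": 20,
        },
    )
    total = weighted_score(vendor)
    print(f"{vendor.name}: {total:.1f}/100 -> {deployment_tier(total)}")

In practice, the weights and tier thresholds would be set jointly with compliance and risk teams and tied to the specific regulatory requirements governing each use case.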


Strategic Positioning

Organizations that prioritize transparency in AI adoption will differentiate themselves through enhanced compliance posture, superior risk management, stakeholder confidence, and competitive advantage in deployment speed and effectiveness.


Policy and Regulatory Outlook


California and the European Union have already enacted transparency requirements for frontier AI. The Foundation Model Transparency Index provides policymakers with baseline measurements and evidence-based targets for intervention.

For organizations in regulated industries, anticipating these mandates is prudent. Companies that establish robust transparency practices now will be better positioned as requirements become universal.


Conclusion: The Imperative for Transparency


The 2025 Foundation Model Transparency Index delivers a clear message: the AI industry's transparency problem is worsening at precisely the moment it should be improving. For organizations in regulated industries, the imperative is straightforward—you cannot be accountable for what you don't understand, and you cannot understand what vendors won't disclose.


While legitimate business interests must be respected, they cannot justify withholding fundamental information about training data and bias mitigation necessary for responsible AI deployment.


At Nestor Global Consulting, we help organizations in financial services, healthcare, and professional services navigate these challenges through vendor assessment frameworks, AI governance structures that prioritize transparency and accountability, and implementation strategies that deliver measurable ROI while maintaining compliance.

The companies building our collective AI future owe us more than black-box systems and vague assurances. They owe us transparency. And organizations adopting these systems owe their stakeholders the diligence to demand it.


About This Research: The 2025 Foundation Model Transparency Index was conducted by researchers from Stanford, Berkeley, Princeton, and MIT. The full report, including detailed company scorecards and methodology, is available at the Foundation Model Transparency Index website.


About Nestor Global Consulting: Nestor Global Consulting provides premier boutique AI consulting for regulated industries. We help financial services, healthcare, and professional services organizations implement responsible AI that delivers measurable results while maintaining compliance. Contact us to discuss your AI governance needs.



 
 
 
