The Transparency Paradox: When AI Ethics Stop at the Corporate Door
- ggstoev

Anthropic just published Claude's new constitution - a remarkable 47-page document mandating "radical honesty" for its AI assistant. Claude must never tell white lies. Never manipulate. Never create false impressions. These are standards "substantially higher than standard visions of human ethics."
Yet according to the 2025 Foundation Model Transparency Index (FMTI), released just last month, Anthropic, the company, is increasingly opaque about how Claude was actually built.
What Anthropic Says Claude Must Do:
The constitution dedicates extensive language to honesty as non-negotiable:
"Claude should basically never directly lie or actively deceive anyone"
"Truthful: Claude only sincerely asserts things it believes to be true"
"Transparent: Claude doesn't pursue hidden agendas or lie about itself"
"Autonomy-preserving: Claude protects the epistemic autonomy and rational agency of the user"
This isn't window dressing. It's positioned as fundamental to Claude's value proposition: "As AIs become more capable than us and more influential in society, people need to be able to trust what AIs like Claude are telling us."
What Anthropic Actually Discloses:
The FMTI 2025 tells a different story. Overall, Anthropic scores 46/100, down from 51 in 2024, perhaps because the company declined to prepare its own transparency report this year (unlike in 2024). Moreover, Anthropic's disclosure on upstream resources, the category covering training data (sources, data acquisition methods, synthetic data, licensed data, and so on), scores just 3/34 in 2025, down from 7/34.
For Context:
IBM scored 95/100 with full upstream transparency
Writer scored 72/100 as an enterprise-focused competitor
Even Amazon (39/100) disclosed more about compute infrastructure
The Strategic Contradiction
Here's what makes this interesting: the constitution itself acknowledges why companies stay opaque. It discusses avoiding "liability harms" - "reputational, legal, political, or financial harms to Anthropic." The document even lists the specific justifications companies use for opacity: data poisoning risks from revealing training data, trade secret and reverse engineering concerns, and exposure from ongoing copyright litigation.
So Anthropic is essentially operating with two ethical frameworks:
Product ethics: Claude demonstrates radical transparency with users in every conversation
Corporate ethics: Anthropic maintains strategic opacity about training data, costs, and methods
If you're evaluating AI vendors, this split creates a trust problem. A company tells you its AI is radically honest but won't disclose what data trained it, whether copyrighted material was used, what environmental costs it incurred, or how much compute it consumed. And the FMTI data shows this isn't unique to Anthropic: the average transparency score declined from 58 in 2024 to 41 in 2025. The industry is becoming less transparent as the stakes increase.
But here's the strategic opportunity the constitution itself articulates:
"People need to be able to trust what AIs like Claude are telling us, both about themselves and about the world. This is partly a function of safety concerns, but it's also core to maintaining a healthy information ecosystem."
This logic doesn't stop at the product boundary.
The companies scoring highest on FMTI 2025 were enterprise-focused B2B providers:
IBM: 95/100
Writer: 72/100
AI21 Labs: 66/100
These companies recognized that their customers - enterprises managing risk, compliance, and vendor relationships - require comprehensive transparency. They disclosed training costs, data sources, environmental impacts, and organizational structures.
The Bottom Line
Anthropic has articulated one of the most sophisticated ethical frameworks in AI. The constitution is genuinely impressive as a statement of values.
But ethical AI cannot be compartmentalized. You can't demand radical honesty from your product while maintaining strategic opacity in your corporate practices.
For executives evaluating AI partnerships: Ask whether your vendors apply their stated ethical principles at every level. Corporate transparency about training data, costs, and methods isn't separate from product ethics - it's the foundation that makes product-level honesty credible.
The window for proactive disclosure is closing. Companies that recognize comprehensive transparency as a competitive advantage - not just conversational honesty - will be positioned to win enterprise trust.
The 2025 Foundation Model Transparency Index evaluated 13 major AI companies across 100 indicators spanning training data, model capabilities, post-deployment monitoring, and accountability mechanisms. Full report: stanford-crfm/fmti
Anthropic's Claude constitution, released January 21, 2026, is available at: anthropic.com/claude-constitution
What's your take? Should we expect different transparency standards for AI products versus the companies building them? Or is comprehensive transparency the only path to sustainable trust? I'd love to hear from you.


