
Getting Smarter, Together

From Voluntary Commitments to Mandatory Governance: What Anthropic's Policy Shift Reveals About AI Safety Architecture
Last month, I analyzed Anthropic's new constitution for Claude, an inspirational 47-page document mandating "radical honesty" and ethical standards Anthropic describes as "substantially higher than standard visions of human ethics." I contrasted this with the company's corporate transparency practices: the 2025 Foundation Model Transparency Index scored Anthropic just 46/100, with particularly poor disclosure on training data and compute resources. I concluded that this split…
ggstoev
Feb 26
US Treasury's New AI Framework for Financial Services: What It Addresses—and What It Doesn't
On February 19, 2026, the U.S. Department of the Treasury released two foundational resources for AI governance in financial services: an AI Lexicon and the Financial Services AI Risk Management Framework (FS AI RMF). These resources establish common terminology and adapt NIST's AI Risk Management Framework to the specific operational and regulatory context of financial services. For institutions deploying traditional predictive AI models, these frameworks provide valuable…
ggstoev
Feb 20


The Transparency Paradox: When AI Ethics Stop at the Corporate Door
Anthropic just published Claude's new constitution, a remarkable 47-page document mandating "radical honesty" for their AI assistant. Claude must never tell white lies. Never manipulate. Never create false impressions. Standards "substantially higher than standard visions of human ethics." Yet according to the 2025 Foundation Model Transparency Index released just last month, Anthropic, the company, is increasingly opaque about how Claude was actually built. What Anthropic…
ggstoev
Jan 23
