Anthropic Under Fire: Department of War Designates Claude Provider Supply Chain Risk
The AI provider landscape took a dramatic turn this week as Anthropic became the first US company designated a supply chain risk by the Department of War, whilst Google's Vertex AI suffered critical outages and the industry grappled with sophisticated model theft attempts. With 36 signals captured this week (7 critical, 17 high), the focus has shifted from capability releases to security, reliability, and regulatory challenges.
The Big Moves
Anthropic's Government Relations Crisis
Anthropic finds itself in an unprecedented position as the Department of War has designated it a supply chain risk following disputes over requested exceptions to Claude's usage restrictions on domestic surveillance and autonomous weapons. This extraordinary move marks the first time a US company has received such a designation, setting a concerning precedent for the AI industry.
The designation stems from Anthropic's refusal to remove safeguards around mass surveillance capabilities and fully autonomous weapons systems in its Claude models. Whilst the company has partnered with the Department of War on AI deployment for intelligence analysis, cyber operations, and operational planning, it is drawing a firm line, citing democratic values and concerns about technological reliability.
For Department of War contractors currently using Claude, this designation could trigger immediate restrictions on platform access, potentially disrupting critical operations. The ripple effects extend beyond government contracts, as this precedent could influence how other federal agencies approach AI procurement. Organisations should assess their dependency on Anthropic's services and prepare contingency plans, particularly those with any government touchpoints.
Google's Vertex AI Reliability Meltdown
Google experienced a particularly rough week with multiple critical incidents affecting Vertex AI Gemini API customers. The most significant outage occurred on 27 February when a configuration change in the safety filtering service triggered widespread 429 and 503 errors, primarily impacting PayGo customers and cascading to services including Google Cloud Support, Dialogflow CX, Agent Assist, and Customer Experience Agent Studio.
Whilst Google's engineering team executed a rapid rollback and added capacity to restore service, the incident highlights concerning reliability patterns. The global endpoint also experienced separate issues on the same day, creating a perfect storm for customers relying on Vertex AI for production workloads.
For organisations using Vertex AI, this week's incidents underscore the critical importance of robust monitoring, alerting, and fallback strategies. Consider implementing circuit breakers and alternative model providers to maintain service continuity during outages. Google's quick response demonstrates their operational maturity, but the frequency of incidents suggests underlying infrastructure challenges that warrant attention.
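The circuit-breaker pattern mentioned above can be sketched in a few lines. This is a minimal, provider-agnostic illustration, not Google's or any vendor's implementation; the `primary` and `fallback` callables stand in for a Vertex AI call and an alternative model provider respectively:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after repeated failures it 'opens' and
    routes calls straight to a fallback until a cooldown has elapsed."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None  # timestamp when the breaker tripped

    def call(self, primary, fallback, *args, **kwargs):
        # While the breaker is open and the cooldown has not elapsed,
        # skip the primary provider entirely.
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback(*args, **kwargs)
            # Cooldown over: half-open, give the primary another try.
            self.opened_at = None
            self.failures = 0
        try:
            result = primary(*args, **kwargs)
            self.failures = 0  # success resets the failure count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return fallback(*args, **kwargs)
```

Production systems would also distinguish retryable errors (429/503) from permanent ones and emit metrics on state transitions, but the core idea is simply to stop hammering a degraded endpoint.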
Industrial-Scale Model Theft Exposed
Anthropic has detected and is actively preventing what it describes as "industrial-scale distillation attacks" by three AI labs: DeepSeek, Moonshot, and MiniMax. These sophisticated operations involve generating millions of exchanges with Claude to train competing models, effectively stealing capabilities whilst circumventing safety measures.
The national security implications are significant, as these distilled models often lack the safety guardrails present in the original systems. When these capabilities are subsequently open-sourced, they create pathways for dangerous AI capabilities to proliferate without adequate controls.
This revelation exposes a critical vulnerability in the current AI ecosystem. Model providers must now balance accessibility with security, implementing detection systems for unusual usage patterns whilst maintaining legitimate research access. For enterprise customers, this highlights the importance of understanding the provenance and safety measures of any AI models deployed in production environments.
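A detection system for unusual usage patterns can start very simply. The sketch below is a hypothetical illustration (the class name, window, and threshold are all assumptions, not anything Anthropic has described): it flags API keys whose request volume in a sliding window far exceeds a baseline, a crude proxy for bulk-extraction behaviour.

```python
from collections import deque

class UsageAnomalyDetector:
    """Flags API keys whose request count within a sliding time window
    exceeds a threshold -- a first-pass signal for distillation-style
    bulk extraction, to be followed by human review."""

    def __init__(self, window_seconds=3600, threshold=10_000):
        self.window = window_seconds
        self.threshold = threshold
        self.events = {}  # api_key -> deque of request timestamps

    def record(self, api_key, now):
        """Record one request at time `now` (seconds); return True if
        this key should be flagged for review."""
        q = self.events.setdefault(api_key, deque())
        q.append(now)
        # Evict timestamps that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.threshold
```

Real systems would look at content signals too (prompt diversity, systematic topic sweeps), since a determined attacker can stay under any pure rate threshold by spreading load across keys.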
Worth Watching
OpenAI's Architecture Evolution with Mixture of Experts
OpenAI's introduction of Mixture of Experts (MoEs) in Transformers represents a significant architectural advancement for compute efficiency and scaling. This technology, already leveraged by models like Qwen 3.5 and MiniMax M2, routes each token to a small subset of specialised "expert" sub-networks rather than activating the full model, dramatically improving parameter efficiency and inference speed. The transformers library updates will provide native MoE support, potentially reducing inference costs across the ecosystem.
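The routing idea is easy to see in miniature. The following is a toy, dependency-free sketch of top-k expert routing for a single token (real MoE layers operate on batched tensors with learned gates and load-balancing losses); all names here are illustrative:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(token, router_weights, experts, top_k=2):
    """Route one token vector through the top_k highest-scoring experts
    and combine their outputs, weighted by router probability.

    token: list[float]; router_weights: one gate vector per expert;
    experts: list of callables (the expert feed-forward networks)."""
    # Router logits: similarity between the token and each expert's gate.
    logits = [sum(t * w for t, w in zip(token, wv)) for wv in router_weights]
    probs = softmax(logits)
    # Only the top_k experts are executed -- the rest are skipped,
    # which is where the compute savings come from.
    chosen = sorted(range(len(experts)), key=lambda i: probs[i], reverse=True)[:top_k]
    norm = sum(probs[i] for i in chosen)
    out = [0.0] * len(token)
    for i in chosen:
        expert_out = experts[i](token)
        gate = probs[i] / norm  # renormalise over the selected experts
        out = [o + gate * e for o, e in zip(out, expert_out)]
    return out, chosen
```

With, say, 64 experts and top_k=2, each token pays the compute cost of only two expert networks while the model's total parameter count covers all 64 — the source of the parameter-efficiency gains described above.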
AWS Bedrock's Comprehensive Expansion
Amazon Bedrock delivered substantial capability expansions this week, introducing batch inference support for DeepSeek V3.1 and Qwen3, fine-tuning for open-weight models, and multilingual document processing. The addition of Anthropic Claude Sonnet 4.5 and Stability AI Image Services significantly broadens the platform's model portfolio. The new server-side tool execution through AgentCore Gateway enhances security and reduces latency for agent-based applications.
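For teams evaluating the new batch support, submitting a batch job follows Bedrock's existing `CreateModelInvocationJob` API shape. The sketch below only assembles the request body (the model identifier, role ARN, and bucket URIs are placeholders, and the DeepSeek/Qwen model IDs on Bedrock should be checked against the console):

```python
def build_batch_job_request(job_name, role_arn, model_id, input_s3, output_s3):
    """Assemble the request for Bedrock's CreateModelInvocationJob API,
    which runs batch inference over a JSONL file of prompts in S3 and
    writes results back to S3."""
    return {
        "jobName": job_name,
        "roleArn": role_arn,  # IAM role Bedrock assumes to read/write S3
        "modelId": model_id,
        "inputDataConfig": {"s3InputDataConfig": {"s3Uri": input_s3}},
        "outputDataConfig": {"s3OutputDataConfig": {"s3Uri": output_s3}},
    }

# With boto3 this would be submitted as:
#   bedrock = boto3.client("bedrock")
#   bedrock.create_model_invocation_job(**request)
```

Batch jobs trade latency for cost, so they suit offline workloads — evaluation sweeps, embedding backfills, document pipelines — rather than interactive traffic.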
Real-Time Voice Models from OpenAI
OpenAI released gpt-realtime-1.5 and gpt-audio-1.5 models specifically designed for voice-first applications, available through Microsoft Foundry's chat completion APIs. These models offer improved instruction following, multi-lingual support, and tool calling capabilities for real-time interactions, including conversation diarisation features. Developers building voice applications should evaluate migration paths to leverage these enhanced capabilities.
Speech Model Street Name Recognition Crisis
Together AI identified a critical gap in state-of-the-art speech models' ability to accurately transcribe street names, particularly affecting diverse speakers. This seemingly niche issue has significant real-world consequences for navigation systems, ride-sharing services, and emergency response. The proposed solution involves synthetic data generation using cross-lingual style transfer, demonstrating how targeted data augmentation can address specific model weaknesses.
Amazon Cognito Security Enhancements
Amazon Cognito introduced secret rotation and custom secrets support, enabling automated credential cycling to mitigate compromised secret risks. This enhancement addresses a critical security practice gap, requiring developers to update configurations to leverage automated rotation capabilities.
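The rotation pattern itself is worth understanding even outside Cognito, which manages it server-side. A minimal, generic sketch of two-secret rotation (nothing here is Cognito's API) keeps the previous secret valid during a grace window so clients can cut over without downtime:

```python
import secrets

class SecretRotator:
    """Generic two-secret rotation: a new secret is issued while the
    previous one remains valid for a grace period, so clients can
    switch over without an outage."""

    def __init__(self):
        self.current = secrets.token_urlsafe(32)
        self.previous = None

    def rotate(self):
        # Old secret stays accepted until explicitly retired.
        self.previous = self.current
        self.current = secrets.token_urlsafe(32)
        return self.current

    def is_valid(self, candidate):
        # Accept the current secret, or the previous one during overlap.
        return candidate in (self.current, self.previous)

    def retire_previous(self):
        self.previous = None
```

The operational point is the overlap window: rotating without one turns every rotation into a coordinated, all-at-once client deployment.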
Quick Hits
- Claude 3 Haiku deprecated by Anthropic effective 23 February 2026, requiring migration to newer models such as Claude 3.5 Haiku
- Elasticsearch releases 8.19.12, 9.3.1, and 9.2.6 with bug fixes and security updates
- AWS pricing changes for VPC Encryption Controls effective 1 March 2026
- Amazon Redshift Serverless introduces 3-year reservation options for cost optimisation
- Google Gemini 3.1 Flash Image enters public preview with improved pricing and latency
- Aurora DSQL adds support for Tortoise, Flyway, and Prisma ORM frameworks
- AWS Elemental Inference reaches general availability with U7i instance support
The Week Ahead
The immediate focus should be on the 1 March 2026 effective date for AWS VPC Encryption Controls pricing changes. Organisations using this capability need to assess budget impacts and adjust encryption strategies accordingly.
Anthropic's supply chain risk designation will likely trigger broader industry discussions about AI governance and government relations. Watch for potential policy responses from other providers and clarification on the scope of restrictions for Department of War contractors.
Google's Vertex AI reliability issues warrant continued monitoring, particularly given the clustering of incidents this week. The company's response to these challenges will indicate whether this represents a temporary infrastructure strain or deeper systemic issues.
The model distillation attack revelations may prompt industry-wide security reviews and potentially new protective measures from major providers. This could influence how AI companies balance model accessibility with security concerns going forward.