AWS Bedrock
cloud_wrapper
506 signals tracked
Anthropic Claude 3.7 Sonnet moving to legacy status — migration to Claude Sonnet 4.5 required by April 2026
The Anthropic Claude 3.7 Sonnet model is now in legacy status on AWS Bedrock and will be discontinued. Users must migrate to Claude Sonnet 4.5 before the April 28, 2026 deadline to avoid service interruption.
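As a minimal sketch of what the migration involves, the snippet below swaps the legacy model ID for its replacement when building a Bedrock Converse request. Both model ID strings and the helper names are assumptions for illustration — verify the exact IDs available in your account and Region before relying on them.

```python
# Hypothetical sketch: swapping the Bedrock model ID ahead of the
# April 28, 2026 cutoff. Both IDs below are assumptions — confirm
# them against the Bedrock console for your account and Region.
LEGACY_MODEL_ID = "anthropic.claude-3-7-sonnet-20250219-v1:0"
TARGET_MODEL_ID = "anthropic.claude-sonnet-4-5-20250929-v1:0"

def migrate_model_id(model_id: str) -> str:
    """Map the legacy Claude 3.7 Sonnet ID to Claude Sonnet 4.5."""
    return TARGET_MODEL_ID if model_id == LEGACY_MODEL_ID else model_id

def build_converse_request(model_id: str, prompt: str) -> dict:
    """Build kwargs for the bedrock-runtime Converse API."""
    return {
        "modelId": migrate_model_id(model_id),
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
    }

# In production this dict would be passed to:
#   boto3.client("bedrock-runtime").converse(**request)
request = build_converse_request(LEGACY_MODEL_ID, "Hello")
```

Centralizing the ID mapping in one helper means the cutover is a single-line change when the deadline arrives.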
28 Apr 2026
Critical · Deprecation

AWS Deadline Cloud Launches AI-Powered Troubleshooting Assistant for Render Jobs
AWS Deadline Cloud introduces an AI-powered troubleshooting assistant for render jobs, helping diagnose and resolve rendering issues automatically.
17 Apr 2026
High · Capability

SageMaker JumpStart optimized deployments for foundation models
SageMaker JumpStart now offers optimized deployments for foundation models, enabling faster and more cost-effective model deployment for AI/ML workloads.
17 Apr 2026
Medium · Capability

Amazon ECR Pull Through Cache Now Supports Referrer Discovery and Sync
Amazon Elastic Container Registry (Amazon ECR) now automatically discovers and syncs OCI referrers, such as image signatures, SBOMs, and attestations, from upstream registries into your Amazon ECR private repositories with its pull through cache feature. Previously, when you listed referrers on a repository with a matching pull through cache rule, Amazon ECR would not return or sync referrers from the upstream repository. This meant that you had to manually list and fetch the upstream referrers. With today's launch, Amazon ECR's pull through cache will now reach upstream during referrers API requests and automatically cache related referrer artifacts in your private repository. This enables end-to-end image signature verification, SBOM discovery, and attestation retrieval workflows to work seamlessly with pull through cache repositories without requiring any client-side workarounds. This feature is available today in all AWS Regions where Amazon ECR pull through cache is supported. To learn more, visit the Amazon ECR documentation .
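As a hedged sketch, the helper below builds the request for ECR's CreatePullThroughCacheRule API — the rule under which referrer discovery and sync now happen automatically. `create_pull_through_cache_rule` is a real boto3 ECR operation, but the prefix and upstream URL are illustrative values; confirm parameter details against the current SDK reference.

```python
# Sketch: configure a pull through cache rule; once it exists, ECR
# reaches upstream during referrers API requests and caches signatures,
# SBOMs, and attestations automatically. Values below are illustrative.
def build_ptc_rule(prefix: str, upstream_url: str) -> dict:
    """Build kwargs for ecr.create_pull_through_cache_rule()."""
    return {
        "ecrRepositoryPrefix": prefix,   # local namespace, e.g. "ecr-public"
        "upstreamRegistryUrl": upstream_url,
    }

rule = build_ptc_rule("ecr-public", "public.ecr.aws")
# In production: boto3.client("ecr").create_pull_through_cache_rule(**rule)
# Referrer listing then works through standard OCI clients against the
# private repository, with no client-side workaround.
```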
17 Apr 2026
Medium · Capability

Amazon Managed Grafana now supports creating Grafana 12.4 workspaces
Amazon Managed Grafana now supports creating new workspaces with Grafana version 12.4. This release includes features launched as part of open source Grafana versions 11.0 to 12.4, including Drilldown apps, Scenes-powered dashboards, variables in transformations, visualization enhancements, and new features in the Amazon CloudWatch plugin. Queryless Drilldown apps enable customers to perform point-and-click exploration of Prometheus metrics, Loki logs, Tempo traces, and Pyroscope profiles. The Scenes-powered rendering engine boosts dashboard performance. Amazon CloudWatch Logs adds support for PPL and SQL queries, cross-account Metrics Insights, and log anomaly detection. The rebuilt table visualization improves performance with CSS cell styling and interactive Actions buttons, while trendline transformations and navigation bookmarks enhance data exploration. Grafana 12.4 is supported in all AWS Regions where Amazon Managed Grafana is generally available. You can create a new Amazon Managed Grafana workspace from the AWS Console, SDK, or CLI. To explore the complete list of new features, refer to the user documentation. Follow the instructions here to create workspaces with version 12.4. To learn more about Amazon Managed Grafana features and pricing, visit the product page and pricing page.
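A minimal sketch of creating a 12.4 workspace with boto3 follows. `create_workspace` is a real Managed Grafana operation, but the exact required parameters (authentication providers, permission type, roles) vary by setup, so the kwargs here are illustrative assumptions.

```python
# Sketch: kwargs for grafana.create_workspace() requesting version 12.4.
# Field values are assumptions for illustration; check the SDK reference
# for the full set of required parameters in your configuration.
def build_workspace_request(name: str, version: str = "12.4") -> dict:
    return {
        "workspaceName": name,
        "grafanaVersion": version,              # opt in to the 12.4 release
        "accountAccessType": "CURRENT_ACCOUNT",
        "authenticationProviders": ["AWS_SSO"],
        "permissionType": "SERVICE_MANAGED",
    }

req = build_workspace_request("observability-v12")
# In production: boto3.client("grafana").create_workspace(**req)
```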
17 Apr 2026
Info · Pricing

SageMaker JumpStart adds optimized deployments for foundation models
SageMaker JumpStart now offers optimized deployments, enabling customers to deploy foundation models with pre-configured settings tailored to specific use cases and performance constraints. SageMaker JumpStart optimized deployments simplify model deployment by offering task-aware configurations that optimize for cost, throughput, or latency based on your workload requirements - whether content generation, summarization, or Q&A. This launch includes support for 30+ popular models from Meta, Microsoft, Mistral AI, Qwen, Google, and TII, with visibility into key performance metrics like P50 latency, time-to-first token (TTFT), and throughput before deployment. With SageMaker JumpStart optimized deployments, customers can select from use case-specific configurations (such as generative writing or chat-style interactions) and choose optimization targets including cost-optimized, throughput-optimized, latency-optimized, or balanced performance. Models deploy to SageMaker AI Managed Inference endpoints or SageMaker HyperPod clusters with pre-set configurations that eliminate guesswork while maintaining full visibility into deployment details. Available models include Meta Llama 3.1 and 3.2 variants, Microsoft Phi-3, Mistral AI models including the new Mistral-Small-24B-Instruct-2501, Qwen 2 and 3 series including multimodal Qwen2-VL, Google Gemma, and TII Falcon3. All deployments leverage SageMaker's VPC deployment capabilities, ensuring data control and production-ready infrastructure with enterprise-grade security. The feature is available in all AWS Regions where SageMaker JumpStart is currently supported. To get started with optimized deployments, navigate to Models in SageMaker Studio, select your desired foundation model in the JumpStart Models tab, choose "Deploy," and select your use case and performance optimization target. For details, visit the SageMaker JumpStart documentation. AWS is actively expanding support to include additional models.
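The selection flow above can be sketched as follows. The config records and the picker are hypothetical stand-ins for what the JumpStart console surfaces (the real flow runs through SageMaker Studio or the SageMaker Python SDK), and the metric numbers are made-up placeholders, not benchmarks.

```python
# Hypothetical sketch of choosing an optimized deployment configuration
# by optimization target. Configs and metric values are illustrative only.
SAMPLE_CONFIGS = [
    {"name": "latency-optimized",    "target": "latency",    "p50_ms": 120, "usd_per_hr": 4.10},
    {"name": "cost-optimized",       "target": "cost",       "p50_ms": 310, "usd_per_hr": 1.20},
    {"name": "throughput-optimized", "target": "throughput", "p50_ms": 240, "usd_per_hr": 3.40},
]

def pick_config(target: str, configs: list) -> dict:
    """Return the pre-configured deployment matching the chosen target."""
    for cfg in configs:
        if cfg["target"] == target:
            return cfg
    raise ValueError(f"no configuration optimizes for {target!r}")

chosen = pick_config("cost", SAMPLE_CONFIGS)
# With the real SDK, the equivalent step is reviewing a JumpStart
# model's deployment configurations and deploying the selected one.
```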
17 Apr 2026
Medium · Capability

Amazon EC2 U7i High Memory Instances Available in Singapore
Amazon EC2 High Memory U7i-8TB instances (u7i-8tb.112xlarge) and U7i-12TB instances (u7i-12tb.224xlarge) are now available in the AWS Asia Pacific (Singapore) Region. U7i instances are part of the AWS 7th generation and are powered by custom fourth generation Intel Xeon Scalable Processors (Sapphire Rapids). U7i-8tb instances offer 8TiB of DDR5 memory, and U7i-12tb instances offer 12TiB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment. U7i-8tb instances deliver 448 vCPUs; U7i-12tb instances deliver 896 vCPUs. Both instances support up to 100 Gbps of Amazon EBS bandwidth for faster data loading and backups, 100 Gbps of network bandwidth, and ENA Express. U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server. To learn more about U7i instances, visit the High Memory instances page.
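For illustration, launching one of these instances is an ordinary RunInstances call with the new instance type. The spec table mirrors the figures above; the AMI ID is a placeholder, and the Region name in the comment assumes Singapore's standard `ap-southeast-1` code.

```python
# Sketch: kwargs for ec2.run_instances() targeting a U7i high memory
# instance. Specs mirror the announcement; the AMI ID is a placeholder.
U7I_SPECS = {
    "u7i-8tb.112xlarge":  {"memory_tib": 8,  "vcpus": 448},
    "u7i-12tb.224xlarge": {"memory_tib": 12, "vcpus": 896},
}

def build_run_request(instance_type: str) -> dict:
    if instance_type not in U7I_SPECS:
        raise ValueError(f"unknown U7i type: {instance_type}")
    return {
        "InstanceType": instance_type,
        "ImageId": "ami-EXAMPLE",   # placeholder AMI
        "MinCount": 1,
        "MaxCount": 1,
    }

req = build_run_request("u7i-12tb.224xlarge")
# In production:
#   boto3.client("ec2", region_name="ap-southeast-1").run_instances(**req)
```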
17 Apr 2026
Medium · Capability

SageMaker JumpStart optimized deployments for foundation models
SageMaker JumpStart now offers optimized deployments for foundation models, enabling faster and more cost-effective model deployment through pre-configured, use case-specific settings.
17 Apr 2026
Medium · Capability

AWS Deadline Cloud adds AI-powered troubleshooting assistant for render jobs
Today, AWS Deadline Cloud announces an AI-powered troubleshooting assistant that helps you quickly diagnose and resolve render job failures. AWS Deadline Cloud is a fully managed service that simplifies render management for computer-generated 2D/3D graphics and visual effects for films, TV shows, commercials, games, and industrial design. Render job failures from missing assets, software errors, configuration mismatches, and resource constraints can stall production pipelines and waste compute resources. Previously, diagnosing these issues required specialized technical staff to manually parse logs and identify root causes — a process that is time-consuming, difficult to scale, and often unavailable to smaller studios. The new Deadline Cloud assistant investigates failed jobs you identify, analyzes logs and metrics, detects common issues, and provides troubleshooting recommendations based on industry best practices and a pre-trained knowledge base covering Deadline Cloud, common render farm issues, and popular digital content creation applications including Autodesk Maya, 3ds Max, VRED, Blender, SideFX Houdini, Maxon Cinema 4D, Foundry Nuke, and Adobe After Effects. The assistant runs within your AWS account using Amazon Bedrock, keeping all data and analysis within your control. The Deadline Cloud assistant is available today in all AWS Regions where AWS Deadline Cloud is supported. Watch a demo on YouTube to see it in action, or visit the AWS Deadline Cloud documentation to learn more.
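As a hypothetical sketch of the first step — identifying failed jobs to point the assistant at — the helper below filters a ListJobs-style response by task run status. The field name and status value are assumptions; check the Deadline Cloud API reference for the actual response shape.

```python
# Hypothetical: pick out failed render jobs from a ListJobs-style
# response. Field names and status values are assumptions.
def failed_jobs(jobs: list) -> list:
    """Return IDs of jobs whose task run status indicates failure."""
    return [j["jobId"] for j in jobs if j.get("taskRunStatus") == "FAILED"]

sample = [
    {"jobId": "job-001", "taskRunStatus": "SUCCEEDED"},
    {"jobId": "job-002", "taskRunStatus": "FAILED"},
]
flagged = failed_jobs(sample)
# The flagged IDs are the jobs you would hand to the Deadline Cloud
# assistant in the console for log and metric analysis.
```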
17 Apr 2026
High · Capability

AWS Deadline Cloud announces AI-powered troubleshooting assistant
AWS Deadline Cloud announces AI-powered troubleshooting assistant for render jobs, automating diagnostics and reducing manual troubleshooting time.
17 Apr 2026
High · Capability

SageMaker HyperPod now supports flexible instance groups
Amazon SageMaker HyperPod now supports flexible instance groups, providing improved resource management and cost optimization for distributed training workloads.
17 Apr 2026
High · Capability

Amazon SageMaker HyperPod now supports flexible instance groups
Amazon SageMaker HyperPod now supports flexible instance groups, enabling customers to specify multiple instance types and multiple subnets within a single instance group. Customers running training and inference workloads on HyperPod often need to span multiple instance types and availability zones for capacity resilience, cost optimization, and subnet utilization, but previously had to create and manage a separate instance group for every instance type and availability zone combination, resulting in operational overhead across cluster configuration, scaling, patching, and monitoring. With flexible instance groups, you can define an ordered list of instance types using the new InstanceRequirements parameter and provide multiple subnets across availability zones in a single instance group. HyperPod provisions instances using the highest-priority type first and automatically falls back to lower-priority types when capacity is unavailable, eliminating the need for customers to manually retry across individual instance groups. Training customers benefit from multi-subnet distribution within an availability zone to avoid subnet exhaustion. Inference customers scaling manually get automatic priority-based fallback across instance types without needing to retry each instance group individually, while those using Karpenter autoscaling can reference a single flexible instance group. Karpenter automatically detects supported instance types from the flexible instance group and provisions the optimal type and availability zone based on pod requirements. You can create flexible instance groups using the CreateCluster and UpdateCluster APIs, the AWS CLI, or the AWS Management Console. Flexible instance groups are available for SageMaker HyperPod clusters using the EKS orchestrator in all AWS Regions where SageMaker HyperPod is supported. To learn more, see Flexible instance groups .
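The announcement names the new InstanceRequirements parameter; the payload below is otherwise an assumption, sketched to show the idea of a single instance group with an ordered type list and multiple subnets. Consult the CreateCluster API reference for the actual schema.

```python
# Sketch of a flexible instance group for sagemaker.create_cluster().
# Only "InstanceRequirements" is named in the announcement; surrounding
# field names, instance types, and subnet IDs are illustrative.
def build_flexible_group(name: str, type_priorities: list, subnets: list) -> dict:
    return {
        "InstanceGroupName": name,
        # Ordered list: HyperPod provisions the first type and falls
        # back to later entries when capacity is unavailable.
        "InstanceRequirements": [{"InstanceType": t} for t in type_priorities],
        "SubnetIds": subnets,   # spread across AZs to avoid subnet exhaustion
        "InstanceCount": 8,
    }

group = build_flexible_group(
    "training-flex",
    ["ml.p5.48xlarge", "ml.p4d.24xlarge"],   # highest priority first
    ["subnet-aaa111", "subnet-bbb222"],      # placeholder subnet IDs
)
```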
17 Apr 2026
Info · Pricing

SageMaker HyperPod now supports flexible instance groups
Amazon SageMaker HyperPod now supports flexible instance groups, allowing better resource allocation and cost optimization for distributed training workloads.
17 Apr 2026
High · Capability

Amazon CloudWatch: Cross-Region Telemetry Auditing & Enablement Rules
Amazon CloudWatch now supports auditing telemetry configuration and enabling telemetry from AWS services such as Amazon EC2, Amazon VPC, and AWS CloudTrail across multiple AWS Regions from a single region. Customers can enable the telemetry auditing feature for their account or organization across all supported regions at once and create enablement rules that automatically apply to selected regions or all available regions. With today's launch, customers can scope enablement rules to specific regions or all supported regions. For example, a central security team can create a single organization-wide enablement rule for VPC Flow Logs that applies across all regions, ensuring consistent telemetry collection for every VPC across every account. Rules configured for all regions automatically expand to include new regions as they become available. CloudWatch's cross-region telemetry configuration and enablement rule is available in all AWS commercial regions. Standard CloudWatch pricing applies for telemetry ingestion. To learn more, visit the Amazon CloudWatch documentation .
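As a hypothetical sketch of the example above — an organization-wide enablement rule for VPC Flow Logs scoped to all Regions — the payload below uses assumed field names, not the documented API schema; the feature itself is configured through CloudWatch telemetry settings in the console or SDK.

```python
# Hypothetical rule payload: enable VPC Flow Logs telemetry for every
# VPC in the organization, across all supported Regions. Field names
# are illustrative assumptions, not the documented schema.
def build_enablement_rule(resource_type, telemetry_type, regions):
    """Build a sketch of a cross-region telemetry enablement rule."""
    return {
        "ruleName": f"org-{telemetry_type.lower()}-all-regions",
        "resourceType": resource_type,
        "telemetryType": telemetry_type,
        # "ALL" mirrors the launch behavior: rules scoped to all Regions
        # expand automatically as new Regions become available.
        "regions": regions,
    }

rule = build_enablement_rule("AWS::EC2::VPC", "VPCFlowLogs", "ALL")
```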
16 Apr 2026
Medium · Capability

AWS Elastic Disaster Recovery now in AWS European Sovereign Cloud (Germany)
AWS Elastic Disaster Recovery (AWS DRS) is now available in the AWS European Sovereign Cloud, enabling organizations with data sovereignty requirements to protect their mission-critical workloads with disaster recovery on AWS. AWS DRS minimizes downtime and data loss with fast, reliable recovery of on-premises and cloud-based applications using affordable storage, minimal compute, and point-in-time recovery, with Recovery Point Objectives (RPOs) measured in seconds and Recovery Time Objectives (RTOs) typically in minutes. With AWS DRS, you can recover applications from physical infrastructure, VMware vSphere, Microsoft Hyper-V, and cloud infrastructure. AWS DRS uses a unified process for testing, recovery, and failback for a wide range of applications, including critical databases such as Oracle, MySQL, and SQL Server, and enterprise applications such as SAP. AWS Elastic Disaster Recovery is available in the AWS European Sovereign Cloud (Germany). See the AWS Regional Services List for the latest availability information. To learn more about AWS Elastic Disaster Recovery, visit our product page or documentation .
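For illustration, a recovery drill in the new Region is the same StartRecovery call as anywhere else — `start_recovery` and its `isDrill` flag are real AWS DRS API elements, but the source server ID is a placeholder and the Region code is left for you to fill in for the European Sovereign Cloud.

```python
# Sketch: kwargs for drs.start_recovery() launching a drill (test)
# recovery rather than an actual failover. The server ID is a placeholder.
def build_drill_request(source_server_ids: list) -> dict:
    return {
        "isDrill": True,   # non-disruptive test recovery
        "sourceServers": [{"sourceServerID": sid} for sid in source_server_ids],
    }

req = build_drill_request(["s-1234567890abcdef0"])
# In production, against the European Sovereign Cloud Region:
#   boto3.client("drs", region_name="<eusc-region-code>").start_recovery(**req)
```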
16 Apr 2026
Medium · Capability