Amazon EC2 X8aedz instances are now available in Europe (Stockholm) region
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) X8aedz instances are available in the Europe (Stockholm) Region. These instances are powered by 5th Gen AMD EPYC processors (formerly code named Turin) and offer the highest maximum CPU frequency in the cloud, at 5 GHz. X8aedz instances are built using the latest sixth-generation AWS Nitro Cards and are ideal for electronic design automation (EDA) workloads such as physical layout and physical verification jobs, and for relational databases that benefit from high single-threaded processor performance and a large memory footprint. The combination of 5 GHz processors and local NVMe storage enables faster processing of memory-intensive backend EDA workloads such as floor planning, logic placement, clock tree synthesis (CTS), routing, and power/signal integrity analysis. X8aedz instances feature a 32:1 ratio of memory to vCPU and are available in 8 sizes ranging from 2 to 96 vCPUs with 64 to 3,072 GiB of memory, including two bare metal variants, and up to 8 TB of local NVMe SSD storage. Customers can purchase X8aedz instances via Savings Plans, On-Demand Instances, and Spot Instances. To get started, sign in to the AWS Management Console. For more information, visit the Amazon EC2 X8aedz instance page.
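The announced 32:1 memory-to-vCPU ratio ties each size's memory directly to its vCPU count; a minimal sanity check, using only the size figures quoted in the announcement:

```python
RATIO_GIB_PER_VCPU = 32  # X8aedz memory-to-vCPU ratio from the announcement

def memory_gib(vcpus: int) -> int:
    """Memory (GiB) implied by the 32:1 ratio for a given vCPU count."""
    return vcpus * RATIO_GIB_PER_VCPU

print(memory_gib(2), memory_gib(96))  # smallest and largest sizes: 64 3072
```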
16 Apr 2026
[Info · Capability] Amazon FSx for Lustre Persistent-2 file systems are now available in four additional AWS Regions
You can now create Amazon FSx for Lustre Persistent-2 file systems in four additional AWS Regions: Asia Pacific (Hyderabad, Jakarta), Europe (Zurich), and South America (São Paulo). Amazon FSx for Lustre Persistent-2 file systems are built on AWS Graviton processors and provide higher throughput per terabyte (up to 1 GB/s per terabyte) and lower cost of throughput compared to previous-generation FSx for Lustre file systems. Using FSx for Lustre Persistent-2 file systems, you can accelerate execution of machine learning, high-performance computing, media & entertainment, and financial simulation workloads while reducing your cost of storage. To get started with Amazon FSx for Lustre Persistent-2 in these new regions, create a file system through the AWS Management Console. To learn more about Amazon FSx for Lustre, visit our product page, and see the AWS Region Table for complete regional availability information.
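The quoted per-terabyte rate implies a simple upper bound on aggregate throughput. A quick sketch, treating the announced "up to 1 GB/s per terabyte" figure as a ceiling rather than a guarantee:

```python
THROUGHPUT_GBPS_PER_TB = 1  # announced "up to 1 GB/s per terabyte" tier

def max_throughput_gbps(storage_tb: float) -> float:
    """Upper-bound aggregate throughput for a Persistent-2 file system."""
    return storage_tb * THROUGHPUT_GBPS_PER_TB

print(max_throughput_gbps(48))  # a 48 TB file system scales up to 48 GB/s
```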
16 Apr 2026
[Info · Pricing] AWS Clean Rooms now supports configurable Spark properties for PySpark
AWS Clean Rooms now supports configurable Spark properties for PySpark jobs, offering customers the ability to optimize their workloads based on their performance and scale requirements. With this launch, customers can customize Spark settings such as memory overhead, task concurrency, and network timeouts for each analysis that uses PySpark, the Python API for Apache Spark. For example, a pharmaceutical research company collaborating with healthcare organizations on real-world clinical trial data can set specific memory tuning for large-scale workloads to improve performance and optimize costs. AWS Clean Rooms helps companies and their partners easily analyze and collaborate on their collective datasets without revealing or copying one another’s underlying data. For more information about the AWS Regions where AWS Clean Rooms is available, see the AWS Regions table. To learn more about collaborating with AWS Clean Rooms, visit AWS Clean Rooms.
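The settings the announcement names (memory overhead, task concurrency, network timeouts) correspond to well-known Apache Spark configuration keys. A sketch of such a property set; how these are attached to a Clean Rooms PySpark analysis (the exact API field) is not shown here, and the values are illustrative:

```python
# Standard Apache Spark configuration keys of the kind the announcement
# mentions; the values below are illustrative tuning choices, not defaults.
spark_properties = {
    "spark.executor.memoryOverhead": "4g",  # extra off-heap memory per executor
    "spark.default.parallelism": "200",     # default task concurrency
    "spark.network.timeout": "600s",        # network timeout for large shuffles
}

for key, value in spark_properties.items():
    print(f"{key} = {value}")
```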
17 Apr 2026
[Medium · Capability] Claude Opus 4.7 now available on Amazon Bedrock: enhanced AI model for production
Amazon Bedrock, the platform for building AI applications and agents at production scale, now offers Claude Opus 4.7, Anthropic's most capable Opus model to date, delivering meaningful improvements across agentic coding, professional work, and long-running tasks for developers and enterprises building production AI applications. Claude Opus 4.7 is an upgrade from Claude Opus 4.6, with stronger performance across the workflows teams run in production. Opus 4.7 works better through ambiguity, is more thorough in its problem solving, and follows instructions more precisely. For coding, the model extends agentic capabilities with improved long-horizon autonomy, systems engineering, and complex code reasoning. For knowledge work, Claude Opus 4.7 advances professional tasks such as slide and document creation, financial analysis, and data visualization. For long-running tasks, the model stays on track over longer horizons with improved reasoning and memory capabilities. Claude Opus 4.7 also advances visual capabilities with high-resolution image support, improving accuracy on charts, dense documents, and screen UIs where fine detail matters. Claude Opus 4.7 is served through Amazon Bedrock's next-generation inference engine, delivering enterprise-grade infrastructure for production workloads. It provides zero operator data access, meaning customer prompts and responses are never visible to Anthropic or AWS operators, keeping sensitive data private. It also enables enhanced availability through dynamic traffic routing with expanded in-region options, along with improved scalability. Claude Opus 4.7 is available in select AWS Regions. To learn more about Claude Opus 4.7 and other Anthropic models available in Amazon Bedrock, visit the Amazon Bedrock page. To get started, see the Amazon Bedrock documentation.
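A minimal sketch of a Bedrock Converse API request for the model. The model identifier below is a placeholder assumption (check the Bedrock console for the exact ID in your Region); the message shape follows the documented Converse API format of role plus content blocks:

```python
import json

# Placeholder model ID; verify the exact identifier in the Bedrock console.
MODEL_ID = "anthropic.claude-opus-4-7-v1:0"

request = {
    "modelId": MODEL_ID,
    "messages": [
        {"role": "user", "content": [{"text": "Summarize this design doc."}]}
    ],
    "inferenceConfig": {"maxTokens": 1024, "temperature": 0.2},
}

# With AWS credentials configured, this would be sent as:
#   boto3.client("bedrock-runtime").converse(**request)
print(json.dumps(request, indent=2))
```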
Date not specified
[Medium · Capability] SageMaker JumpStart adds optimized deployments for foundation models
SageMaker JumpStart now offers optimized deployments, enabling customers to deploy foundation models with pre-configured settings tailored to specific use cases and performance constraints. SageMaker JumpStart optimized deployments simplify model deployment by offering task-aware configurations that optimize for cost, throughput, or latency based on your workload requirements, whether content generation, summarization, or Q&A. This launch includes support for 30+ popular models from Meta, Microsoft, Mistral AI, Qwen, Google, and TII, with visibility into key performance metrics such as P50 latency, time-to-first-token (TTFT), and throughput before deployment. With SageMaker JumpStart optimized deployments, customers can select from use case-specific configurations (such as generative writing or chat-style interactions) and choose optimization targets including cost-optimized, throughput-optimized, latency-optimized, or balanced performance. Models deploy to SageMaker AI Managed Inference endpoints or SageMaker HyperPod clusters with pre-set configurations that eliminate guesswork while maintaining full visibility into deployment details. Available models include Meta Llama 3.1 and 3.2 variants, Microsoft Phi-3, Mistral AI models including the new Mistral-Small-24B-Instruct-2501, Qwen 2 and 3 series including multimodal Qwen2-VL, Google Gemma, and TII Falcon3. All deployments leverage SageMaker's VPC deployment capabilities, ensuring data control and production-ready infrastructure with enterprise-grade security. The feature is available in all AWS Regions where SageMaker JumpStart is currently supported. To get started with optimized deployments, navigate to Models in SageMaker Studio, select your desired foundation model in the JumpStart Models tab, choose "Deploy," and select your use case and performance optimization target. For details, visit the SageMaker JumpStart documentation. AWS is actively expanding support to include additional models.
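The optimization-target names above come from the announcement; the selection logic below is purely illustrative (not the actual JumpStart behavior) and just sketches how a team might map workload constraints onto one of the four targets:

```python
# Target names from the announcement; the mapping is an illustrative sketch.
TARGETS = {"cost-optimized", "throughput-optimized", "latency-optimized", "balanced"}

def pick_target(needs_low_latency: bool, budget_constrained: bool) -> str:
    """Illustrative chooser: latency needs win, then budget, else balanced."""
    if needs_low_latency:
        return "latency-optimized"
    if budget_constrained:
        return "cost-optimized"
    return "balanced"

print(pick_target(needs_low_latency=False, budget_constrained=True))
```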
Date not specified
[Medium · Capability] Amazon Managed Grafana supports Grafana 12.4 workspaces
Amazon Managed Grafana now supports creating new workspaces with Grafana version 12.4. This release includes features that were launched as part of open source Grafana versions 11.0 to 12.4, including Drilldown apps, Scenes-powered dashboards, variables in transformations, visualization enhancements, and new features in the Amazon CloudWatch plugin. Queryless Drilldown apps enable customers to perform point-and-click exploration of Prometheus metrics, Loki logs, Tempo traces, and Pyroscope profiles. The Scenes-powered rendering engine boosts dashboard performance. Amazon CloudWatch Logs adds support for PPL and SQL queries, cross-account Metrics Insights, and log anomaly detection. The rebuilt table visualization improves performance with CSS cell styling and interactive Actions buttons, while trendline transformations and navigation bookmarks enhance data exploration. Grafana 12.4 is supported in all AWS Regions where Amazon Managed Grafana is generally available. You can create a new Amazon Managed Grafana workspace from the AWS Console, SDK, or CLI. To explore the complete list of new features, please refer to the user documentation. Follow the instructions here to create workspaces with version 12.4. To learn more about Amazon Managed Grafana features and pricing, visit the product page and pricing page.
Date not specified
[Medium · Capability] AWS Deadline Cloud adds AI-powered troubleshooting assistant for render jobs
Today, AWS Deadline Cloud announces an AI-powered troubleshooting assistant that helps you quickly diagnose and resolve render job failures. AWS Deadline Cloud is a fully managed service that simplifies render management for computer-generated 2D/3D graphics and visual effects for films, TV shows, commercials, games, and industrial design. Render job failures from missing assets, software errors, configuration mismatches, and resource constraints can stall production pipelines and waste compute resources. Previously, diagnosing these issues required specialized technical staff to manually parse logs and identify root causes — a process that is time-consuming, difficult to scale, and often unavailable to smaller studios. The new Deadline Cloud assistant investigates failed jobs you identify, analyzes logs and metrics, detects common issues, and provides troubleshooting recommendations based on industry best practices and a pre-trained knowledge base covering Deadline Cloud, common render farm issues, and popular digital content creation applications including Autodesk Maya, 3ds Max, VRED, Blender, SideFX Houdini, Maxon Cinema 4D, Foundry Nuke, and Adobe After Effects. The assistant runs within your AWS account using Amazon Bedrock, keeping all data and analysis within your control. The Deadline Cloud assistant is available today in all AWS Regions where AWS Deadline Cloud is supported. Watch a demo on YouTube to see it in action, or visit the AWS Deadline Cloud documentation to learn more.
Date not specified
[High · Capability] Amazon ECR Pull Through Cache Now Supports Referrer Discovery and Sync
Amazon Elastic Container Registry (Amazon ECR) now automatically discovers and syncs OCI referrers, such as image signatures, SBOMs, and attestations, from upstream registries into your Amazon ECR private repositories with its pull through cache feature. Previously, when you listed referrers on a repository with a matching pull through cache rule, Amazon ECR would not return or sync referrers from the upstream repository. This meant that you had to manually list and fetch the upstream referrers. With today's launch, Amazon ECR's pull through cache will now reach upstream during referrers API requests and automatically cache related referrer artifacts in your private repository. This enables end-to-end image signature verification, SBOM discovery, and attestation retrieval workflows to work seamlessly with pull through cache repositories without requiring any client-side workarounds. This feature is available today in all AWS Regions where Amazon ECR pull through cache is supported. To learn more, visit the Amazon ECR documentation.
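The "referrers API" mentioned above is defined by the OCI Distribution Specification. A sketch of the endpoint shape a client would hit against a pull through cache repository; the registry host, repository name, and digest below are illustrative placeholders:

```python
# Builds the OCI Distribution Spec referrers endpoint for an image digest.
def referrers_url(registry: str, repository: str, digest: str) -> str:
    """GET /v2/<name>/referrers/<digest>, per the OCI Distribution Spec."""
    return f"https://{registry}/v2/{repository}/referrers/{digest}"

url = referrers_url(
    "123456789012.dkr.ecr.us-east-1.amazonaws.com",  # placeholder account/Region
    "my-app",
    "sha256:" + "0" * 64,  # placeholder image digest
)
print(url)
```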
Date not specified
[Medium · Capability] Amazon SageMaker HyperPod supports flexible instance groups
Amazon SageMaker HyperPod now supports flexible instance groups, enabling customers to specify multiple instance types and multiple subnets within a single instance group. Customers running training and inference workloads on HyperPod often need to span multiple instance types and availability zones for capacity resilience, cost optimization, and subnet utilization, but previously had to create and manage a separate instance group for every instance type and availability zone combination, resulting in operational overhead across cluster configuration, scaling, patching, and monitoring. With flexible instance groups, you can define an ordered list of instance types using the new InstanceRequirements parameter and provide multiple subnets across availability zones in a single instance group. HyperPod provisions instances using the highest-priority type first and automatically falls back to lower-priority types when capacity is unavailable, eliminating the need for customers to manually retry across individual instance groups. Training customers benefit from multi-subnet distribution within an availability zone to avoid subnet exhaustion. Inference customers scaling manually get automatic priority-based fallback across instance types without needing to retry each instance group individually, while those using Karpenter autoscaling can reference a single flexible instance group. Karpenter automatically detects supported instance types from the flexible instance group and provisions the optimal type and availability zone based on pod requirements. You can create flexible instance groups using the CreateCluster and UpdateCluster APIs, the AWS CLI, or the AWS Management Console. Flexible instance groups are available for SageMaker HyperPod clusters using the EKS orchestrator in all AWS Regions where SageMaker HyperPod is supported. To learn more, see Flexible instance groups.
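A sketch of what an ordered, multi-subnet instance group definition could look like. "InstanceRequirements" is the parameter named in the announcement; the surrounding field names and values are illustrative assumptions, not the exact CreateCluster schema:

```python
# Illustrative flexible instance group: ordered fallback types plus
# multiple subnets. Field names other than InstanceRequirements are assumed.
instance_group = {
    "InstanceGroupName": "training-workers",
    "InstanceRequirements": [  # ordered: highest priority first
        {"InstanceType": "ml.p5.48xlarge"},
        {"InstanceType": "ml.p4d.24xlarge"},  # fallback when p5 capacity is short
    ],
    "Subnets": ["subnet-aaa111", "subnet-bbb222"],  # spans availability zones
}

priorities = [r["InstanceType"] for r in instance_group["InstanceRequirements"]]
print(priorities)
```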
Date not specified
[High · Capability] Amazon EC2 U7i High Memory Instances Available in Singapore
Amazon EC2 High Memory U7i-8TB instances (u7i-8tb.112xlarge) and U7i-12TB instances (u7i-12tb.224xlarge) are now available in the AWS Asia Pacific (Singapore) Region. U7i instances are part of the AWS 7th generation and are powered by custom fourth-generation Intel Xeon Scalable processors (Sapphire Rapids). U7i-8tb instances offer 8 TiB of DDR5 memory, and U7i-12tb instances offer 12 TiB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment. U7i-8tb instances deliver 448 vCPUs; U7i-12tb instances deliver 896 vCPUs. Both instance sizes support up to 100 Gbps of Amazon EBS bandwidth for faster data loading and backups, 100 Gbps of network bandwidth, and ENA Express. U7i instances are ideal for customers using mission-critical in-memory databases such as SAP HANA, Oracle, and SQL Server. To learn more about U7i instances, visit the High Memory instances page.
Date not specified
[Medium · Capability] Amazon Bedrock expands structured outputs to AWS GovCloud (US)
Amazon Bedrock is a fully managed service that provides access to a wide selection of high-performing foundation models from leading AI companies through a single API. Today, Amazon Bedrock expands structured outputs support to AWS GovCloud (US) Regions. Structured outputs enables foundation models to return consistent, schema-compliant, machine-readable responses—making it well-suited for government and regulated workloads that must meet strict compliance and data handling requirements. Structured outputs helps with common production tasks, such as extracting key fields and powering workflows that use APIs or tools, where even minor formatting errors can break downstream systems. By ensuring schema compliance, it reduces the need for custom validation logic and lowers operational overhead by minimizing failed requests and retries—so you can confidently deploy AI applications that require predictable, machine-readable outputs. You can use structured outputs either by defining a JSON schema that describes your desired response format or by using strict tool definitions to ensure a model's tool calls match your specifications. Structured outputs is now generally available in all commercial AWS and AWS GovCloud (US) Regions where Amazon Bedrock is supported. To learn more about structured outputs and the supported models, visit the Amazon Bedrock documentation.
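A sketch of the schema-driven pattern described above: a JSON schema of the kind you can supply to structured outputs, plus a minimal local conformance check. The field names are illustrative, and the hand-rolled check below only covers required keys and one enum (a real pipeline would use a full JSON Schema validator):

```python
import json

# Illustrative schema for a support-triage extraction task.
schema = {
    "type": "object",
    "properties": {
        "case_id": {"type": "string"},
        "priority": {"type": "string", "enum": ["low", "medium", "high"]},
    },
    "required": ["case_id", "priority"],
}

def conforms(doc: dict) -> bool:
    """Minimal check: required keys present and priority within the enum."""
    if not all(k in doc for k in schema["required"]):
        return False
    return doc["priority"] in schema["properties"]["priority"]["enum"]

# A schema-compliant model response parses and validates cleanly.
response = json.loads('{"case_id": "C-102", "priority": "high"}')
print(conforms(response))  # True
```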
Date not specified
[Medium · Capability] Amazon RDS for Oracle expands cross-account snapshot sharing to include additional storage volumes
Amazon RDS for Oracle now supports cross-account snapshot sharing for database instances with additional storage volumes. Additional storage volumes allow customers to scale database storage up to 256 TiB by adding up to three storage volumes, each with up to 64 TiB, in addition to the primary storage volume. With this launch, customers can create, share, and copy a database snapshot across AWS accounts for database instances set up with additional storage volumes. Cross-account snapshots enable customers to set up isolated backup environments in separate accounts for compliance requirements and to perform diagnostics, such as investigating production issues by restoring database snapshots in a separate account for development and testing. Cross-account snapshots for database instances with additional storage volumes preserve the storage layout of the original database instance, including the configuration of additional storage volumes. When a snapshot is shared to a target AWS account, authorized users in the target account can restore it to another database instance, copy the snapshot within the same or a different AWS Region, or create independent backups under different AWS Identity and Access Management (IAM) access permissions for backup and disaster recovery. Cross-account snapshot sharing with additional storage volumes is available in all AWS Regions, including the AWS GovCloud (US) Regions. Customers can start using this feature today through the AWS Management Console, AWS CLI, or AWS SDKs. To learn more, see Sharing a DB snapshot for Amazon RDS, Copying a DB snapshot for Amazon RDS, and Working with storage in RDS for Oracle in the Amazon RDS User Guide.
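Sharing a snapshot cross-account goes through the ModifyDBSnapshotAttribute API, where "restore" is the documented attribute name for granting restore access. A sketch of the call parameters, with placeholder identifiers:

```python
# Parameters for granting another account restore access to a snapshot.
# Snapshot name and account ID below are placeholders.
share_params = {
    "DBSnapshotIdentifier": "prod-oracle-snap-2026-04",
    "AttributeName": "restore",      # documented attribute for cross-account sharing
    "ValuesToAdd": ["210987654321"],  # target AWS account ID (placeholder)
}

# With AWS credentials configured, this would be applied as:
#   boto3.client("rds").modify_db_snapshot_attribute(**share_params)
print(share_params["AttributeName"])
```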
Date not specified
[Medium · Capability] Amazon CloudFront supports SHA-256 for signed URLs and cookies
Amazon CloudFront now supports SHA-256 as a hash algorithm for creating signed URLs and signed cookies. SHA-256 provides an improved security posture with stronger collision detection and alignment with modern cryptographic standards, giving you stronger cryptographic signing when restricting access to content. Previously, CloudFront signed URLs and signed cookies used SHA-1 exclusively for signature generation. This feature helps you meet security and compliance requirements that mandate SHA-256 for digital signatures, while also future-proofing your content delivery workflows. To use SHA-256, include the Hash-Algorithm=SHA256 query parameter in your signed URLs, or the CloudFront-Hash-Algorithm=SHA256 cookie attribute for signed cookies. Existing signed URLs and signed cookies that don't specify a hash algorithm continue to use SHA-1, so this change is fully backwards compatible. This feature is available in all edge locations where Amazon CloudFront is available. There is no additional cost to use SHA-256 signing. To learn more, see Create a signed URL using a canned policy or Set signed cookies using a canned policy in the Amazon CloudFront Developer Guide.
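A sketch of appending the Hash-Algorithm parameter named above to an already-signed URL. The Signature and Key-Pair-Id values are placeholders; real URLs are signed with your CloudFront key pair:

```python
from urllib.parse import parse_qs, urlencode, urlparse

def with_sha256(signed_url: str) -> str:
    """Append Hash-Algorithm=SHA256 so CloudFront validates with SHA-256."""
    sep = "&" if "?" in signed_url else "?"
    return signed_url + sep + urlencode({"Hash-Algorithm": "SHA256"})

url = with_sha256(
    "https://d111111abcdef8.cloudfront.net/video.mp4"
    "?Expires=1767225600&Signature=PLACEHOLDER&Key-Pair-Id=KEXAMPLE"
)
print(parse_qs(urlparse(url).query)["Hash-Algorithm"][0])  # SHA256
```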
Date not specified
[Medium · Capability] AWS VPC Encryption Controls now available in AWS GovCloud (US) Regions
AWS launches VPC Encryption Controls in the AWS GovCloud (US) Regions to make it easy to audit and enforce encryption in transit within and across Amazon Virtual Private Clouds (VPCs), and to demonstrate compliance with encryption standards. You can turn it on for your existing VPCs to monitor the encryption status of traffic flows and identify VPC resources that are unintentionally allowing plaintext traffic. This feature also makes it easy to enforce encryption across different network paths by automatically (and transparently) turning on hardware-based AES-256 encryption on traffic between multiple VPC resources, including AWS Fargate, Network Load Balancers, and Application Load Balancers. To meet stringent compliance standards like HIPAA, PCI DSS, FedRAMP, and FIPS 140-2, government customers rely on both application-layer encryption and the hardware-based encryption that AWS offers across different network paths. AWS provides hardware-based AES-256 encryption transparently between modern EC2 Nitro instances. AWS also encrypts all network traffic between AWS data centers in and across Availability Zones and AWS Regions before the traffic leaves our secure facilities. All inter-region traffic that uses VPC Peering, Transit Gateway Peering, or AWS Cloud WAN receives an additional layer of transparent encryption before leaving AWS data centers. Prior to this release, customers had to track and confirm encryption across all network paths. With VPC Encryption Controls, customers can now monitor, enforce, and demonstrate encryption within and across VPCs in just a few clicks. Your information security team can turn it on centrally to maintain a secure and compliant environment, and generate audit logs for compliance and reporting. With this launch, VPC Encryption Controls is now available in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions. To learn more about this feature and its use cases, please see our documentation.
1 Apr 2026
[High · Policy] Amazon SageMaker Data Agent gains geo-specific inference for Japan and Australia
Amazon SageMaker Data Agent now supports cross-region inference profiles for Japan and Australia through Amazon Bedrock. With this update, inference requests from Data Agent in the Asia Pacific (Tokyo) and Asia Pacific (Sydney) regions are processed within their respective geographies, supporting data sovereignty requirements for customers in Japan and Australia. Data Agent provides an AI-powered conversational experience for data exploration, Python and SQL code generation, troubleshooting, and analytics directly within Amazon SageMaker Unified Studio Notebook and Query Editor. With geo-specific inference through JP-CRIS (Japan Cross-Region Inference) and AU-CRIS (Australia Cross-Region Inference), you can use Data Agent with confidence that your inference requests are routed exclusively within your geography over the AWS Global Network. Customers in regulated industries such as financial services, healthcare, and the public sector can meet data residency requirements while using the full set of Data Agent capabilities. To get started, open a project in SageMaker Unified Studio in a supported region and use Data Agent in notebooks or Query Editor. For more information, see SageMaker Data Agent in the Amazon SageMaker Unified Studio User Guide.
Date not specified
[Medium · Capability] Amazon ECS: Managed Daemons simplify agent deployment and management
Amazon ECS announces Managed Daemons for ECS Managed Instances, enabling organizations to centrally deploy and manage software agents such as security, observability, and networking across their container infrastructure independent of application deployments. By decoupling daemon lifecycle management from application operations, Managed Daemons helps guarantee reliable agent coverage across all workloads, simplifies deployments and version updates, and improves resource utilization by running a single daemon task per managed instance. With Managed Daemons, you can create a daemon for one or more Managed Instances capacity providers in your cluster. ECS places exactly one daemon task per managed instance and guarantees that daemons are running before any application tasks are placed, so cross-cutting functions such as logging, tracing, and metrics collection are always available. ECS orchestrates daemons as independent processes bound to the instance lifecycle rather than individual application tasks, allowing platform administrators to manage them independently from application teams. When you update daemon versions, ECS drains existing instances and provisions new instances with the updated daemon, automatically replacing service tasks with circuit breaker protection and rollback capabilities for reliable coverage across all your workloads. To get started, you can use the AWS Console, CLI, CloudFormation, or AWS SDKs to register a daemon task definition specifying your container image, then create a daemon with associated capacity providers in your clusters. This feature is now available in all AWS Regions. For more details, refer to our documentation and launch blog post. There is no additional cost: you pay only for the standard compute resources consumed by your daemon tasks.
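A sketch of the two-step flow described above: register a task definition for the agent, then create a daemon bound to capacity providers. The task definition fields follow standard ECS syntax, but the daemon-creation field names are illustrative assumptions, not the exact Managed Daemons API schema:

```python
# Step 1: a task definition for the agent container (standard ECS fields;
# the image URI is a placeholder).
task_definition = {
    "family": "observability-agent",
    "containerDefinitions": [
        {
            "name": "agent",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/agent:1.0",
        }
    ],
}

# Step 2: a daemon bound to capacity providers (field names assumed).
daemon = {
    "daemonName": "observability-agent",
    "taskDefinition": task_definition["family"],
    "capacityProviders": ["managed-instances-cp"],  # one daemon task per instance
}

print(daemon["taskDefinition"])
```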
Date not specified
[High · Capability] Amazon SES Mail Manager adds mTLS and Lambda support
Amazon Simple Email Service (SES) Mail Manager now offers enhancements to email security and processing while simplifying email infrastructure migrations. These enhancements include optional TLS and certificate-based authentication (mTLS) support in Ingress Endpoint, and two new rule actions: Invoke Lambda function and Bounce. These enhancements benefit organizations seeking to maintain compatibility with legacy systems while implementing stronger security controls and advanced email routing capabilities. For example, customers can now configure STARTTLS as an optional TLS configuration, enabling legacy systems that don't support STARTTLS to connect to Mail Manager. With Mutual TLS (mTLS) in Ingress Endpoint, customers can now use certificate-based authentication for enhanced security. The Invoke Lambda function rule action allows direct invocation of AWS Lambda functions from rule sets, enabling custom email processing workflows, while the Bounce rule action provides RFC-compliant SMTP responses to sending servers. These new enhancements are available today in all AWS Regions where Amazon SES Mail Manager is offered, except for the Middle East (UAE) and Middle East (Bahrain) Regions. To learn more about Amazon SES Mail Manager and how these features can help streamline your email operations, visit https://aws.amazon.com/ses/.
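A sketch of a Lambda handler that the Invoke Lambda function rule action could target. The event shape below (messageId plus envelope fields) is an illustrative assumption, not the documented Mail Manager payload:

```python
# Hypothetical handler for a Mail Manager rule-set invocation; the event
# fields and the untrusted.example domain are illustrative assumptions.
def handler(event: dict, context=None) -> dict:
    """Flag messages from an untrusted domain; pass everything else."""
    sender = event.get("envelope", {}).get("from", "")
    verdict = "QUARANTINE" if sender.endswith("@untrusted.example") else "PASS"
    return {"messageId": event.get("messageId"), "verdict": verdict}

print(handler({"messageId": "m-1", "envelope": {"from": "bob@untrusted.example"}}))
```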
Date not specified