Red Hat Inc., the world’s leading provider of open source solutions, today announced new certifications and capabilities for Red Hat OpenShift aimed at accelerating the delivery of intelligent applications across the hybrid cloud. These enhancements, including the certification of Red Hat OpenShift with NVIDIA AI Enterprise 2.0, as well as the general availability of Red Hat OpenShift 4.10, are intended to help organizations deploy, manage and scale artificial intelligence (AI) workloads with confidence.
According to Gartner®, worldwide AI software revenue is forecast to total $62.5 billion in 2022, an increase of 21.3% from 2021. As enterprises integrate AI and machine learning (ML) capabilities into cloud-native applications to deliver more insight and customer value, they need an agile, flexible and scalable platform for developing ML models and moving intelligent applications into production more quickly. Red Hat OpenShift is engineered to provide this foundation, and today's updates make it easier for organizations to add AI workloads to the industry's leading enterprise Kubernetes platform.
Streamlining AI innovation
While AI is transforming how enterprises do business, operationalizing an AI infrastructure can be complex, time-consuming and resource-intensive. To help accelerate the process, Red Hat OpenShift is now certified and supported with NVIDIA AI Enterprise 2.0, an end-to-end, cloud-native suite of AI and data analytics software that runs on mainstream NVIDIA-Certified Systems. The integrated platform delivers NVIDIA's flagship AI software optimized for Red Hat OpenShift, enabling data scientists and developers to more quickly train models, build them into applications and deploy them at scale.
Customers now have the option to deploy Red Hat OpenShift on NVIDIA-Certified Systems with NVIDIA AI Enterprise software, as well as on previously supported NVIDIA DGX A100 systems, a universal high-performance compute system for AI workloads. This allows organizations to consolidate and accelerate the MLOps lifecycle, including data engineering, analytics, training, software development and inference, into a unified, easier-to-deploy AI infrastructure. Additionally, Red Hat OpenShift's integrated DevOps and GitOps capabilities support MLOps practices that speed up continuous delivery of AI-powered applications.
This complements the planned support for NVIDIA GPUs available with Red Hat OpenShift Data Science, announced previously.
A comprehensive platform to run AI/ML workloads
Red Hat OpenShift 4.10 continues the platform's expansion to support a broad spectrum of cloud-native workloads across the open hybrid cloud, enabling organizations to run AI/ML workloads in even more environments. The latest version of OpenShift adds support for additional public clouds and hardware architectures, giving organizations the flexibility to choose where to run their applications while keeping development as easy and consistent as possible. New features and capabilities designed to accelerate AI/ML workloads include:
- Installer-provisioned infrastructure (IPI) support for Azure Stack Hub, as well as for Alibaba Cloud and IBM Cloud, both available as technology previews. Users can now rely on the IPI process for fully automated, integrated, one-click installation of OpenShift 4.
- Running Red Hat OpenShift on Arm® processors. Arm support will be available in two ways: full stack automation (IPI) for Amazon Web Services (AWS) and user-provisioned infrastructure (UPI) for bare metal on pre-existing infrastructure. This provides users with the same experience they've come to expect from Red Hat OpenShift on AWS, backed by the latest Arm-based instances (a minimal installation configuration sketch follows this list).
- Red Hat OpenShift availability on NVIDIA LaunchPad. NVIDIA LaunchPad provides free access to curated labs for enterprise IT and AI professionals to experience NVIDIA-accelerated systems and software. With Red Hat OpenShift now available on LaunchPad, enterprises can get hands-on lab experience configuring, optimizing and orchestrating resources for AI and data science workloads using NVIDIA AI Enterprise with Red Hat.
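For illustration, here is a minimal sketch of what driving the AWS Arm path with IPI could look like. The field layout follows the standard install-config.yaml format; the cluster name, base domain, region, instance types and credentials below are placeholders, not values from this announcement, and the OpenShift 4.10 installation documentation should be consulted for the required fields.

```python
# Minimal sketch: generate an install-config.yaml for an IPI install of
# OpenShift on AWS using Arm-based (Graviton) instances. All values are
# placeholders for illustration only.
import yaml

install_config = {
    "apiVersion": "v1",
    "baseDomain": "example.com",
    "metadata": {"name": "arm-demo-cluster"},
    "controlPlane": {
        "name": "master",
        "replicas": 3,
        "architecture": "arm64",
        "platform": {"aws": {"type": "m6g.xlarge"}},  # Arm-based instance type
    },
    "compute": [{
        "name": "worker",
        "replicas": 3,
        "architecture": "arm64",
        "platform": {"aws": {"type": "m6g.large"}},
    }],
    "platform": {"aws": {"region": "us-east-1"}},
    "pullSecret": "<your pull secret>",
    "sshKey": "<your public ssh key>",
}

# The installer ("openshift-install create cluster") consumes this file from its
# working directory and provisions the AWS infrastructure automatically (IPI).
with open("install-config.yaml", "w") as f:
    yaml.safe_dump(install_config, f, sort_keys=False)
```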
Better oversight and compliance features for diverse, modern workloads
Managing diverse, modern workloads can frequently require additional oversight and governance. To help users support their regulatory compliance programs, Red Hat OpenShift 4.10 adds three new compliance profiles to the Compliance Operator, enabling users to check their clusters against these standards and remediate identified issues (a sketch of applying one of the profiles follows this list). The new compliance profiles include:
- The Payment Card Industry Data Security Standard (PCI DSS), a set of security standards designed to ensure that companies that accept, process, store or transmit credit card information maintain a secure environment.
- North American Electric Reliability Corporation Critical Infrastructure Protection (NERC CIP), a set of requirements to address the security needs associated with operating North America's bulk electric system.
- FedRAMP Moderate impact level, the standard for cloud computing security for controlled unclassified information across federal government agencies.
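As an illustration of how one of these profiles might be applied, the sketch below uses the Kubernetes Python client to create a ScanSettingBinding that binds a PCI DSS profile to the default scan setting. The resource names used here (the ocp4-pci-dss profile, the default ScanSetting and the openshift-compliance namespace) follow common Compliance Operator conventions but are assumptions, not details from this announcement, and should be verified against the profiles installed in your cluster.

```python
# Sketch: bind a PCI DSS compliance profile to the default scan setting via the
# Compliance Operator's ScanSettingBinding resource. Profile and setting names
# are assumptions; verify them against the openshift-compliance namespace.
from kubernetes import client, config

config.load_kube_config()  # assumes a kubeconfig with sufficient privileges
custom = client.CustomObjectsApi()

binding = {
    "apiVersion": "compliance.openshift.io/v1alpha1",
    "kind": "ScanSettingBinding",
    "metadata": {"name": "pci-dss-scan", "namespace": "openshift-compliance"},
    "profiles": [{
        "apiGroup": "compliance.openshift.io/v1alpha1",
        "kind": "Profile",
        "name": "ocp4-pci-dss",
    }],
    "settingsRef": {
        "apiGroup": "compliance.openshift.io/v1alpha1",
        "kind": "ScanSetting",
        "name": "default",
    },
}

# Creating the binding triggers compliance scans; results appear as
# ComplianceCheckResult objects that can then be reviewed and remediated.
custom.create_namespaced_custom_object(
    group="compliance.openshift.io",
    version="v1alpha1",
    namespace="openshift-compliance",
    plural="scansettingbindings",
    body=binding,
)
```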
Additionally, Red Hat OpenShift 4.10 includes the general availability of sandboxed containers, which provide an optional additional layer of isolation for workloads with stringent application-level security requirements. Improvements have also been made for disconnected or air-gapped environments, simplifying the installation of disconnected OpenShift clusters and making it easier to maintain mirrors of OpenShift images and keep them up to date, much as with a connected cluster.
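To show what opting a workload into this extra isolation layer could look like, here is a minimal sketch that schedules a pod under the kata RuntimeClass provided by OpenShift sandboxed containers; the pod name, namespace and container image are hypothetical placeholders rather than details from this announcement.

```python
# Sketch: run a workload as a sandboxed container by requesting the "kata"
# RuntimeClass that OpenShift sandboxed containers provides. Pod name,
# namespace and image are placeholders for illustration only.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="isolated-inference", namespace="default"),
    spec=client.V1PodSpec(
        runtime_class_name="kata",  # run inside a lightweight VM for extra isolation
        containers=[
            client.V1Container(
                name="app",
                image="registry.example.com/inference:latest",  # placeholder image
            )
        ],
    ),
)

core.create_namespaced_pod(namespace="default", body=pod)
```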