IBM/Red Hat: OpenShift, CUDA, Jim Whitehurst, VMworld and RHELvolution

Filed under
Red Hat
  • Red Hat Launches OpenShift Service Mesh to Accelerate Adoption of Microservices and Cloud-Native Applications

    Red Hat, Inc., the world's leading provider of open source solutions, today announced the general availability of Red Hat OpenShift Service Mesh to connect, observe and simplify service-to-service communication of Kubernetes applications on Red Hat OpenShift 4, the industry’s most comprehensive enterprise Kubernetes platform. Based on the Istio, Kiali and Jaeger projects and enhanced with Kubernetes Operators, OpenShift Service Mesh is designed to deliver a more efficient, end-to-end developer experience around microservices-based application architectures. This helps to free developer teams from the complex tasks of having to implement bespoke networking services for their applications and business logic.

  • CUDA 10.1 U2 Adds RHEL8 Support, Nsight Compute Tools For POWER

    NVIDIA last week quietly released a second update to CUDA 10.1.

    CUDA 10.1 Update 2 brings Red Hat Enterprise Linux 8.0 support, continued POWER architecture support improvements, and other additions.

  • IBM Stock and Jim Whitehurst’s Toughest Test

    What analysts say they want from IBM stock is Red Hat CEO Jim Whitehurst in current CEO Virginia Rometty’s chair. They want Red Hat running IBM.

    That wasn’t the promise when this deal was put together. The promise was that Red Hat would get autonomy from IBM, not that IBM would lose its autonomy to Red Hat. But Whitehurst’s concept of an Open Organization has excited analysts who don’t even know what it is.

    If IBM became an Open Organization, these analysts think, it would replace the top-down structure IBM has used for a century with an organic system in which employees and customers are part of the product design process. Instead of selling gear or even solutions, IBM would become a corporate change agent.

  • Going to VMWorld? Learn to help data scientists and application developers accelerate AI/ML initiatives

    IT experts from around the world are headed to VMworld 2019 in San Francisco to learn how they can leverage emerging technologies from VMware and ecosystem partners (e.g. Red Hat, NVIDIA, etc.) to help achieve digital transformation for their organizations. Artificial Intelligence (AI)/Machine Learning (ML) is a very popular technology trend, with Red Hat OpenShift customers like HCA Healthcare, BMW, Emirates NBD, and several more offering differentiated value to their customers. Investments are ramping up across many industries to develop intelligent digital services that help improve customer satisfaction and gain competitive business advantages. Early deployment trends indicate AI/ML solution architectures are spanning edge, data center, and public clouds.

  • RHELvolution 2: A brief history of Red Hat Enterprise Linux releases from RHEL 6 to today

    In the previous post, we looked at the history of Red Hat Enterprise Linux from pre-RHEL days through the rise of virtualization. In this one we'll take a look at RHEL's evolution from early days of public cloud to the release of RHEL 8 and beyond.

Fedora Switching To The BFQ I/O Scheduler For Better Responsiveness & Throughput

Filed under
Linux
Red Hat

Following Chromebooks switching to BFQ and other distributions weighing this I/O scheduler for better responsiveness while maintaining good throughput, Fedora will use BFQ as well beginning with Fedora 31.

In step with today's systemd 243 RC2 update, the Fedora packages in Rawhide and F31 have switched to using BFQ.
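
For readers who want to check or experiment with BFQ ahead of the Fedora change, the active I/O scheduler is exposed per block device through sysfs. A minimal sketch, assuming a device named sda and a kernel that ships BFQ as a module (both assumptions, nothing Fedora-specific):

    # Show the schedulers the device supports; the active one is in brackets
    cat /sys/block/sda/queue/scheduler

    # Load the BFQ module if it is not built into the kernel
    sudo modprobe bfq

    # Switch the device to BFQ until the next reboot
    echo bfq | sudo tee /sys/block/sda/queue/scheduler

Fedora 31 will apply the new default on its own, so a manual switch like this is only useful for experimenting on other systems or earlier releases.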

Read more

Red Hat Enterprise Linux 6 and CentOS 6 Receive Important Kernel Security Update

Filed under
Linux
Red Hat
Security

The new Linux kernel security update is marked by the Red Hat Product Security team as having an "Important" security impact due to the fact that it patches several critical flaws, including the Spectre SWAPGS gadget vulnerability (CVE-2019-1125) affecting x86 processors.

Also patched are a security vulnerability (CVE-2019-5489) leading to page cache side-channel attacks, an issue in the Salsa20 encryption algorithm that could allow local attackers to cause a denial of service (CVE-2017-17805), and a flaw (CVE-2018-17972) that let unprivileged users inspect kernel stacks of arbitrary tasks.

Read more

Red Hat/Fedora: Flock’19 Budapest, Cockpit 201 and Systemd 243 RC2

Filed under
Red Hat
  • Flock’19 Budapest

    This was my first time attending the conference. It's an annual Fedora community gathering, hosted in a different European city every year. This time it was in Budapest, the capital of Hungary; last year it was hosted in Dresden. The dates were 8th through 11th August 2019. I also got the opportunity to present my proposal, “Getting Started with Fedora QA”.

    Day 1 started with a keynote by Matthew Miller (mattdm), in which he spoke about where we as a community are and where we need to go next. It was an informative discussion for a first-timer like me, who had always been looking for the vision and mission of the Fedora community. There are people who have been with Fedora since its first release, and you get to meet them here at the annual gathering.

    [...]

    Groups were formed and people decided for themselves where they wanted to go for the evening hangout on Day 1. Seven of us decided to hang out at the Atmosphere Klub near the V. Kerulet and walked over at around 9:00 pm.

    Day 2 started with a keynote by Denise Dumas, Vice President, Operating System Platform, Red Hat. She spoke on “Fedora, Red Hat and IBM”. I woke up late, only 20 minutes before the first session, as I had gone to bed late the night before and had walked around 11 km the previous day.

  • Fedora 30: Set up the Linux Malware Detect.
  • Cockpit 201

    It’s now again possible to stop a service without disabling it. Reloading is now available only when the service allows it.

    Furthermore, disabling or masking a service removes any lingering “failed” state, reducing noise (the equivalent systemctl operations are sketched after this list).

  • Systemd 243 RC2 Released

    The systemd 243 release candidate was published nearly one month ago, while the official release has yet to materialize. It looks, though, like it may be on the horizon, with a second release candidate posted today.

    Red Hat's Zbigniew Jędrzejewski-Szmek has just tagged systemd 243-RC2 as the newest test release for this new version of the de facto Linux init system. Over the past month there have been new hardware database (HWDB) additions, various fixes, new network settings, resolvectl zsh shell completion support, timedated being bumped to always run at the highest priority, and other changes.
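
To make the Cockpit item above concrete, here is a minimal sketch of the underlying systemd operations it describes: stopping, disabling, and masking a unit, and clearing a lingering failed state. The unit name foo.service is hypothetical:

    # Stop the running instance only; the unit can still start at the next boot
    sudo systemctl stop foo.service

    # Disable it so it no longer starts at boot (does not stop a running instance)
    sudo systemctl disable foo.service

    # Mask it so it cannot be started at all, even manually
    sudo systemctl mask foo.service

    # Clear a lingering "failed" state so it stops showing up as noise
    sudo systemctl reset-failed foo.service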

IBM/Red Hat and Intel Leftovers

Filed under
GNU
Linux
Red Hat
Hardware
  • Troubleshooting Red Hat OpenShift applications with throwaway containers

    Imagine this scenario: Your cool microservice works fine from your local machine but fails when deployed into your Red Hat OpenShift cluster. You cannot see anything wrong with the code or anything wrong in your services, configuration maps, secrets, and other resources. But, you know something is not right. How do you look at things from the same perspective as your containerized application? How do you compare the runtime environment from your local application with the one from your container?

    If you performed your due diligence, you wrote unit tests. There are no hard-coded configurations or hidden assumptions about the runtime environment. The cause should be related to the configuration your application receives inside OpenShift. Is it time to run your app under a step-by-step debugger or add tons of logging statements to your code?

    We’ll show how two features of the OpenShift command-line client can help: the oc run and oc debug commands (a minimal sketch of both follows this list).

  • What piece of advice had the greatest impact on your career?

    I love learning the what, why, and how of new open source projects, especially when they gain popularity in the DevOps space. Classification as a "DevOps technology" tends to mean scalable, collaborative systems that go across a broad range of challenges—from message bus to monitoring and back again. There is always something new to explore, install, spin up, and explore.

  • How DevOps is like auto racing

    When I talk about desired outcomes or answer a question about where to get started with any part of a DevOps initiative, I like to mention NASCAR or Formula 1 racing. Crew chiefs for these race teams have a goal: finish in the best place possible with the resources available while overcoming the adversity thrown at you. If the team feels capable, the goal gets moved up a series of levels to holding a trophy at the end of the race.

    To achieve their goals, race teams don’t think from start to finish; they flip the table to look at the race from the end goal to the beginning. They set a goal, a stretch goal, and then work backward from that goal to determine how to get there. Work is delegated to team members to push toward the objectives that will get the team to the desired outcome.

    [...]

    Race teams practice pit stops all week before the race. They do weight training and cardio programs to stay physically ready for the grueling conditions of race day. They are continually collaborating to address any issue that comes up. Software teams should also practice software releases often. If safety systems are in place and practice runs have been going well, they can release to production more frequently. Speed makes things safer in this mindset. It’s not about doing the “right” thing; it’s about addressing as many blockers to the desired outcome (goal) as possible and then collaborating and adjusting based on the real-time feedback that’s observed. Expecting anomalies and working to improve quality and minimize the impact of those anomalies is the expectation of everyone in a DevOps world.

  • Deep Learning Reference Stack v4.0 Now Available

    Artificial Intelligence (AI) continues to represent one of the biggest transformations underway, promising to impact everything from the devices we use to cloud technologies, and reshape infrastructure, even entire industries. Intel is committed to advancing the Deep Learning (DL) workloads that power AI by accelerating enterprise and ecosystem development.

    From our extensive work developing AI solutions, Intel understands how complex it is to create and deploy applications for deep learning workloads. That's why we developed an integrated Deep Learning Reference Stack, optimized for Intel Xeon Scalable processor and released the companion Data Analytics Reference Stack.

    Today, we're proud to announce the next Deep Learning Reference Stack release, incorporating customer feedback and delivering an enhanced user experience with support for expanded use cases.

  • Clear Linux Releases Deep Learning Reference Stack 4.0 For Better AI Performance

    Intel's Clear Linux team on Wednesday announced their Deep Learning Reference Stack 4.0 during the Linux Foundation's Open-Source Summit North America event taking place in San Diego.

    Clear Linux's Deep Learning Reference Stack continues to be engineered for showing off the most features and maximum performance for those interested in AI / deep learning and running on Intel Xeon Scalable CPUs. This optimized stack allows developers to more easily get going with a tuned deep learning stack that should already be offering near optimal performance.
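
As noted in the OpenShift troubleshooting item above, here is a minimal sketch of the two oc commands it mentions. The image and deployment names are hypothetical; the general pattern is that oc run launches a throwaway pod from any image, while oc debug clones an existing workload's pod with an interactive shell so you can inspect the runtime environment it actually sees:

    # Start a throwaway pod from an arbitrary image and open a shell in it
    oc run debug-shell --image=registry.example.com/my-app:latest \
        --restart=Never --rm -it -- /bin/sh

    # Clone a pod from an existing deployment, replacing its command with a shell,
    # to inspect the environment variables, config maps and secrets it really sees
    oc debug deployment/my-app

    # Inside either pod, dump the environment to compare with your local machine
    env | sort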

low-memory-monitor: new project announcement

Filed under
Linux
Red Hat

I'll soon be flying to Greece for GUADEC but wanted to mention one of the things I worked on the past couple of weeks: the low-memory-monitor project is off the ground, though not production-ready.

low-memory-monitor, as its name implies, monitors the amount of free physical memory on the system and will shoot off signals to interested user-space applications, usually session managers or sandboxing helpers, when that memory runs low. This makes it possible for applications to shrink their memory footprints before it's too late, either to recover a usable system or to avoid taking a performance hit.

It's similar to Android's lowmemorykiller daemon, Facebook's oomd, and Endless' psi-monitor, among others.
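
A minimal sketch of watching for those warnings from a shell, assuming they are delivered as D-Bus signals on the system bus under the well-known name org.freedesktop.LowMemoryMonitor (an assumption on our part; the project is explicitly not production-ready, so the interface may still change):

    # Watch for low-memory warnings emitted by the daemon
    # (assumption: it owns org.freedesktop.LowMemoryMonitor on the system bus)
    gdbus monitor --system --dest org.freedesktop.LowMemoryMonitor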

Read more

Also: New Low-Memory-Monitor Project Can Help With Linux's RAM/Responsiveness Problem

IBM: Kubernetes/OpenShift, OpenPOWER, and Red Hat Enterprise Linux for Developers

Filed under
Red Hat
  • Red Hat Integration delivers new Kubernetes Operators and expands data integration capabilities with latest release

    We are pleased to announce the Q3 release of Red Hat Integration, which brings us further in our alignment around Red Hat OpenShift as the platform of choice for developing and deploying cloud-native applications across hybrid cloud environments, as well as helping customers get their integrations up and running easier and faster.

    As modern IT continues its rapid evolution, it becomes important that the cloud-native solutions supporting this transformation keep pace, enabling IT organizations to truly benefit from this constant innovation. To help customers take full advantage of this, we've updated, tested and certified every single component in Red Hat Integration with the latest version of OpenShift: Red Hat OpenShift 4.

  • The Linux Foundation Announces New Open Hardware Technologies and Collaboration

    The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced that the OpenPOWER Foundation will become a project hosted at The Linux Foundation. The project includes IBM’s open POWER Instruction Set Architecture (ISA) and contributed Source Design Implementations required to support data-driven hardware for intensive workloads like Artificial Intelligence (AI).

    OpenPOWER is the open steward for the Power Architecture and has the support of 350 members, including IBM, Google, Inspur Power Systems, Yadro, Hitachi, Wistron, Mellanox, NVIDIA, and Red Hat.

    The governance model within the Linux Foundation gives software developers assurance of compatibility while developing AI and hybrid cloud native applications that take advantage of POWER’s rich feature set and open compute hardware and software ecosystems.

    As the demand rises for more and more compute-intensive workloads like AI and in-memory analytics, commodity systems vendors have struggled with the looming predictions of the end of Moore’s Law. Central processing units (CPUs) may no longer handle the rising demands alone, and data-centric systems are built to maximize the flow of data between CPUs and attached devices for specialized workloads. Hosting OpenPOWER at The Linux Foundation, a cross-project, cross-community collaboration, will accelerate the development of hardware and software to support data-centric systems by making them available to a growing global audience.

    “The OpenPOWER community has been doing critical work to support the increasing demands of enterprises that are using big data for AI and machine learning workloads. The move to bring these efforts together with the worldwide ecosystem of open source developers across projects at The Linux Foundation will unleash a new level of innovation by giving developers everywhere more access to the tools and technologies that will define the next generation of POWER architecture,” said Jim Zemlin, executive director at The Linux Foundation.

  • Raptor Computing Systems Planning To Launch New ATX POWER9 Board With OpenCAPI

    In addition to the news out of the OpenPOWER Summit in San Diego that the POWER ISA is going open-source and the OpenPOWER Foundation becoming part of the Linux Foundation, Raptor Computing Systems shared they plan to launch a new standard ATX motherboard next year that will feature OpenCAPI connectivity.

    Built off the successes of their Talos II high-end server motherboard and lower-cost Blackbird desktop motherboard designs, there is apparently a new motherboard design for POWER9 being worked on that could launch in early 2020.

  • Why you should be developing on Red Hat Enterprise Linux

    With a $0 Red Hat Developer membership, you get access to Red Hat Enterprise Linux (RHEL) at no cost. We have downloads available for RHEL versions starting as far back as 7.2, and as current as RHEL 8.1 Beta. The subscription costs nothing, and there are no additional costs for any of the software or content we make available through the program.

Eclipse is Now a Module on Fedora 30

Filed under
Red Hat

From Fedora 30 onwards, Eclipse will be available as a module for Fedora Modularity.

Eclipse 2019-06 is available to install, with three different profiles from which to choose. Each profile will install the Eclipse IDE and a curated set of plug-ins for accomplishing specific tasks.

java -- This is the default profile and will install everything you need to start developing Java applications.
c -- This profile will install everything you need to start developing C/C++ applications.
everything -- This profile will install all the Eclipse plug-ins currently available in the module, including those that are a part of the above two profiles.
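
For example, installing the module with one of the profiles above could look like the following; the stream name shown (latest) is an assumption, so check the output of dnf module list for the streams Fedora actually publishes:

    # List the Eclipse module's available streams and profiles
    dnf module list eclipse

    # Install the module with the default "java" profile
    sudo dnf module install eclipse:latest/java

    # Or pull in every plug-in the module provides
    sudo dnf module install eclipse:latest/everything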

Read more

Red Hat Satellite 6.6 Beta is now available with enhancements across reporting, automation, and supportability

Filed under
Red Hat

We are pleased to announce that Red Hat Satellite 6.6 is now available in beta to current Satellite customers.

Red Hat Satellite is a scalable platform to manage patching, provisioning, and subscription management of your Red Hat infrastructure, regardless of where it is running. The Satellite 6.6 beta is focused on enhancements across reporting, automation, and supportability.

While Satellite 6.6 Beta supports Red Hat Enterprise Linux 8 hosts, it is important to note that Satellite 6.6 must be installed on a Red Hat Enterprise Linux 7 host. Support for running Satellite itself on a Red Hat Enterprise Linux 8 host is scheduled for a later release.

Read more

Also: Serverless on Kubernetes, diverse automation, and more industry trends

IBM: OpenPOWER Foundation, Savings and the OpenStack Platform

Filed under
Red Hat
Hardware
  • OpenPOWER Foundation | The Next Step in the OpenPOWER Foundation Journey

    Today marks one of the most important days in the life of the OpenPOWER Foundation. With IBM announcing new contributions to the open source community including the POWER Instruction Set Architecture (ISA) and key hardware reference designs at OpenPOWER Summit North America 2019, the future has never looked brighter for the POWER architecture.

    OpenPOWER Foundation Aligns with Linux Foundation

    The OpenPOWER Foundation will now join projects and organizations like OpenBMC, CHIPS Alliance, OpenHPC and so many others within the Linux Foundation. The Linux Foundation is the premier open source group, and we’re excited to be working more closely with them.

    Since our founding in 2013, IEEE-ISTO has been our home, and we owe so much to its team. It’s as a result of IEEE-ISTO’s support and guidance that we’ve been able to expand to more than 350 members and that we’re ready to take the next step in our evolution. On behalf of our membership, our board of directors and myself, we place on record our thanks to the IEEE-ISTO team.

    By moving the POWER ISA under an open model – guided by the OpenPOWER Foundation within the Linux Foundation – and making it available to the growing open technical commons, we’ll enable innovation in the open hardware and software space to grow at an accelerated pace. The possibilities for what organizations and individuals will be able to develop on POWER through its mature ISA and software ecosystem will be nearly limitless.

  • How Red Hat delivers $7B in customer savings

    This spring, Red Hat commissioned IDC to conduct a new study to analyze the contributions of Red Hat Enterprise Linux to the global business economy. While many of the findings were impressive, including immense opportunities for partners, we were especially excited to learn more about how our customers benefit from Red Hat Enterprise Linux.

    According to the study, the world’s leading enterprise Linux platform "touches" more than $10 trillion of business revenues worldwide each year and provides economic benefits of more than $1 trillion each year to customers. Nearly $7 billion of that number comes in the form of IT savings. Even more exciting? As hybrid cloud adoption grows, we expect customers to continue to benefit given the importance of a common, flexible and open operating system to IT deployments that span the many footprints of enterprise computing.

  • The road ahead for the Red Hat OpenStack Platform

    If you didn't have a chance to attend our Road Ahead session at Red Hat Summit 2019 (or you did, but want a refresher) you'll want to read on for a quick update. We'll cover where Red Hat OpenStack Platform is today, where we're planning to go tomorrow, and the longer-term plan for Red Hat OpenStack Platform support all the way to 2025.

    A strategic part of our portfolio

    Red Hat OpenStack Platform is a strategic part of Red Hat's vision for open hybrid cloud. It's the on-prem foundation that can help organizations bridge the gap between today's existing workloads and emerging workloads. In fact, it just earned the 2019 CODiE award for "Best Software Defined Infrastructure."

    One of those emerging workloads, and more on the rest in a moment, is Red Hat OpenShift.

More in Tux Machines

GNU Parallel 20190822 ('Jesper Svarre') released [stable]

GNU Parallel 20190822 ('Jesper Svarre') [stable] has been released. It is available for download at: http://ftpmirror.gnu.org/parallel/. No new functionality was introduced, so this is a good candidate for a stable release. GNU Parallel is 10 years old next year, on 2020-04-22, and you are hereby invited to a reception on Friday 2020-04-17.

Read more

KDE ISO Image Writer – Release Announcement

My GSoC project has come to an end, and I am going to conclude this series of articles by announcing the release of a beta version of KDE ISO Image Writer.

Read more

Also: How I got a project in Labplot KDE

Linux Foundation: Automotive Grade Linux Announcement and Calling Surveillance Operations "Confidential Computing"

  • Automotive Grade Linux Announces New Instrument Cluster Expert Group and UCB 8.0 Code Release

    Automotive Grade Linux (AGL), an open source project developing a shared software platform for in-vehicle technology, today announced a new working group focused on Instrument Cluster solutions, as well as the latest code release of the AGL platform, the UCB 8.0. The AGL Instrument Cluster Expert Group (EG) is working to reduce the footprint of AGL and optimize the platform for use in lower performance processors and low-cost vehicles that do not require an entire infotainment software stack. Formed earlier this year, the group plans to release design specifications later this year with an initial code release in early 2020. “AGL is now supported by nine major automotive manufacturers, including the top three producers by worldwide volume, and is currently being used in production for a range of economy and luxury vehicles” said Dan Cauchy, Executive Director of Automotive Grade Linux at the Linux Foundation. “The new Instrument Cluster Expert Group, supported by several of these automakers, will expand the use cases for AGL by enabling the UCB platform to support solutions for lower-cost vehicles, including motorcycles.”

  • Shhh! Microsoft, Intel, Google and more sign up to the Confidential Computing Consortium

    The Linux Foundation has signed up the likes of Microsoft and Google for its Confidential Computing Consortium, a group with the laudable goal of securing sensitive data. The group – which also includes Alibaba, Arm, Baidu, IBM, Intel, Red Hat, Swisscom and Tencent – will be working on open-source technologies and standards to speed the adoption of confidential computing. The theory goes that while approaches to encrypting data at rest and in transit have supposedly been dealt with, assuming one ignores the depressingly relentless splurts of user information from careless vendors, keeping it safe while in use is quite a bit more challenging. Particularly as workloads spread to the cloud and IoT devices.

  • Tech giants come together to form cloud security watchdog

    Some of the world’s biggest technology companies are joining forces to improve the security of files in the cloud. This includes Google, IBM, Microsoft, Intel, and many others. The news first popped up on the Linux Foundation, where it was said that the Confidential Computing Consortium will work to bring industry standards and identify the proper tools to encrypt data used by apps, devices and online services. At the moment, cloud security solutions focus to protect data that’s either resting, or is in transit. However, when the data is being used is “the third and possibly most challenging step to providing a fully encrypted lifecycle for sensitive data.”

  • Tech firms join forces to boost cloud security

    Founding members of the group – which unites hardware suppliers, cloud providers, developers, open source experts and academics – include Alibaba, Arm, Baidu, Google Cloud, IBM, Intel, Microsoft, Red Hat, Swisscom and Tencent. [...] “The earliest work on technologies that have the ability to transform an industry is often done in collaboration across the industry and with open source technologies,” said Jim Zemlin, executive director at the Linux Foundation. “The Confidential Computing Consortium is a leading indicator of what is to come for security in computing and will help define and build open technologies to support this trust infrastructure for data in use.”

  • Google, Intel and Microsoft form data protection consortium
  • Intel Editorial: Intel Joins Industry Consortium to Accelerate Confidential Computing

    Leaders in information and infrastructure security are well versed in protecting data at-rest or in-flight through a variety of methods. However, data being actively processed in memory is another matter. Whether running on your own servers on-prem, in an edge deployment, or in the heart of a cloud service provider’s data center, this “in-use” data is almost always unencrypted and potentially vulnerable.

  • Confidential Computing: How Big Tech Companies Are Coming Together To Secure Data At All Levels

    Data today moves constantly from on-premises to public cloud and the edge, which is why it is quite challenging to protect. While there are standards available that aim to protect data when it is in rest and transit, standards related to protecting it when in use do not exist. Protecting data while in use is called confidential computing, which the Confidential Computing Consortium is aiming to create across the industry. The Confidential Computing Consortium, created under the Linux Foundation, will work to build up guidelines, systems and tools to ensure data is encrypted when it’s being used by applications, devices and online services. The consortium says that encrypting data when in use is “the third and possibly most challenging step to providing a fully encrypted lifecycle for sensitive data.” Members focused on the undertaking are Alibaba, ARM, Baidu, Google Cloud, IBM, Intel, Microsoft, Red Hat, Swisscom and Tencent.

  • IT giants join forces for full-system data security

    Apple is conspicuously missing from the consortium, despite using both Intel hardware and in-house designed ARM-based processors. Of the first set of commitments, Intel will release its Software Guard Extensions (SGX) software development kit as open source through the CCC.

  • Google, Intel, and Microsoft partner to improve cloud security

    Some of the biggest names in tech have banded together in an effort to promote industry-wide security standards for protecting data in use.

  • Alibaba, Baidu, Google, Microsoft, Others Back Confidential Computing Consortium

    The Confidential Computing Consortium aims to help define and accelerate open-source technology that keeps data in use secure. Data typically gets encrypted by service providers, but not when it’s in use. This consortium will focus on encrypting and processing the data “in memory” to reduce the exposure of the data to the rest of the system. It aims to provide greater control and transparency for users.

  • Microsoft, Intel and others are doubling down on open source Linux security

    In other words, the operating system could be compromised by some kind of malware, but the data being used in a program would still be encrypted, and therefore safe from an attacker.

  • Microsoft, Intel, and Red Hat Back Confidential Computing

    The Linux Foundation’s latest project tackles confidential computing with a group of companies that reads like a who’s who of cloud providers, chipmakers, telecom operators, and other tech giants. Today at the Open Source Summit the Linux Foundation said it will form a new group called the Confidential Computing Consortium. Alibaba, Arm, Baidu, Google Cloud, IBM, Intel, Microsoft, Red Hat, Swisscom, and Tencent all committed to work on the project, which aims to accelerate the adoption of confidential computing.
