IBM/Red Hat and Debian Leftovers

Filed under
Red Hat
Debian
  • With the acquisition closed, IBM goes all in on Red Hat

    IBM’s massive $34 billion acquisition of Red Hat closed a few weeks ago, and today the two companies are announcing the first fruits of this process. For the most part, today’s announcement furthers IBM’s ambitions to bring its products to any public or private cloud. That was very much the reason why IBM acquired Red Hat in the first place, of course, so this doesn’t come as a major surprise, though most industry watchers probably didn’t expect it to happen this fast.

    Specifically, IBM is announcing that it is bringing its software portfolio to Red Hat OpenShift, Red Hat’s Kubernetes-based container platform that is essentially available on any cloud that allows its customers to run Red Hat Enterprise Linux.

  • IBM To Offer Cloud Native Software on Red Hat OpenShift

    Following the completion of the Red Hat acquisition, IBM has started building bridges between the products and services of the two companies. IBM has reengineered its software portfolio to now be "cloud-native and optimized to run on Red Hat OpenShift."

  • Debian Buster Arrives; IBM Acquires Red Hat

    Debian Buster Arrives; IBM Acquires Red Hat; Raspberry Pi 4 Is Here; Ubuntu Takes a U-Turn with 32-Bit Support; OpenSSH Fixes Side Channel Attacks; Firefox Fixes Error that Crashed HTTPS Pages; and Altair Releases HyperWorks 2019

    [...]

    The Debian community has announced the release of Debian 10 "Buster" (https://www.debian.org/News/2019/20190706). Debian is one of the most popular GNU/Linux-based distributions. Buster will be supported for the next five years.

    Buster ships with several desktop environments including Cinnamon 3.8, GNOME 3.30, KDE Plasma 5.14, LXDE 0.99.2, LXQt 0.14, MATE 1.20, and Xfce 4.12. In this release, GNOME will default to using the Wayland display server instead of Xorg. "The Xorg display server is still installed by default and the default display manager allows users to choose Xorg as the display server for their next session," according to a blog post from the Debian project.

  • [Sparky] July 2019 donation report

    Many thanks to all of you for supporting our open-source projects!

  • Goodbye, pgp.gwolf.org

    I started running an SKS keyserver a couple of years ago (don't really remember, but I think it was around 2014). I am, as you probably expect me to be given my lines of work, a believer in the Web-of-Trust model upon which the PGP network is built. I have published a couple of academic papers (Strengthening a Curated Web of Trust in a Geographically Distributed Project, with Gina Gallegos, Cryptologia 2016, and Insights on the large-scale deployment of a curated Web-of-Trust: the Debian project’s cryptographic keyring, with Victor González Quiroga, Journal of Internet Services and Applications, 2018) and presented at several conferences on some aspects of it, mainly in relation to the Debian project.

Red Hat/IBM: EPEL, Ceph, OpenShift and Call for Code Challenge

Filed under
Red Hat
Server
  • Kevin Fenzi: epel8-playground

    We have been working away at getting epel8 ready (short status: we have builds, we are building fedpkg, bodhi, and all the other tools maintainers need to deal with packages, and we hope to have some composes next week), and I would like to introduce a new thing we are trying with epel8: the epel8-playground.

    epel8-playground is another branch for all epel8 packages. By default, when a package is set up for epel8, both branches are made, and when maintainers do builds in the epel8 branch, fedpkg will build for _both_ epel8 and epel8-playground. epel8 will use the bodhi updates system with an updates-testing and stable repo. epel8-playground will compose every night and use only one repo.

  • Red Hat OpenStack Platform with Red Hat Ceph Storage: MySQL Database Performance on Ceph RBD

    In Part 1 of this series, we detailed the hardware and software architecture of our testing lab, as well as benchmarking methodology and Ceph cluster baseline performance. In this post, we’ll take our benchmarking to the next level by drilling down into the performance evaluation of MySQL database workloads running on top of Red Hat OpenStack Platform backed by persistent block storage using Red Hat Ceph Storage.

  • OpenShift Persistent Storage with a Spring Boot Example

    One of the great things about Red Hat OpenShift is the ability to develop both Cloud Native and traditional applications. Often, when thinking about traditional applications, the first thing that comes to mind is the ability to store things on the file system. This could be media, metadata, or any type of content that your application relies on but isn’t stored in a database or other system.

    To illustrate the concept of persistent storage (i.e. storage that will persist even when a container is stopped or recreated), I created a sample application for tracking the electronic books that I have in PDF format. The library of PDF files can be stored on the file system, and the application relies on this media directory to present the titles to the user. The application is written in Java using the Spring Boot framework and scans the media directory for PDF files. Once a suitable title is found, the application generates a thumbnail image of the book and also determines how many pages it contains. The result can be seen in the screenshot in the original post.
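
    To make the idea concrete, here is a minimal, hypothetical sketch of the scanning logic described above (not the application's actual source), assuming Apache PDFBox 2.x is on the classpath; the media directory path and output names are placeholders.

        // Hypothetical sketch of the media-directory scan described above
        // (not the article's code). Assumes Apache PDFBox 2.x is available.
        import org.apache.pdfbox.pdmodel.PDDocument;
        import org.apache.pdfbox.rendering.PDFRenderer;

        import javax.imageio.ImageIO;
        import java.awt.image.BufferedImage;
        import java.io.File;

        public class PdfLibraryScanner {
            public static void main(String[] args) throws Exception {
                File mediaDir = new File("/media/books");   // placeholder path
                File[] pdfs = mediaDir.listFiles((d, n) -> n.toLowerCase().endsWith(".pdf"));
                if (pdfs == null) return;

                for (File pdf : pdfs) {
                    try (PDDocument doc = PDDocument.load(pdf)) {
                        int pages = doc.getNumberOfPages();  // page count shown to the user
                        // Render the first page as a thumbnail image.
                        BufferedImage cover = new PDFRenderer(doc).renderImage(0);
                        File thumb = new File(mediaDir, pdf.getName() + ".png");
                        ImageIO.write(cover, "png", thumb);
                        System.out.printf("%s: %d pages, thumbnail %s%n",
                                pdf.getName(), pages, thumb.getName());
                    }
                }
            }
        }

    In the persistent-storage scenario, the directory backing mediaDir would be a mounted OpenShift persistent volume, so the library and generated thumbnails survive container restarts.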

  • IBM and Linux Foundation Call on Developers to Make Natural Disasters Less Deadly

    On a stormy Tuesday in July, a group of 30 young programmers gathered in New York City to take on natural disasters. The attendees—most of whom were current college students and alumnae of the nonprofit Girls Who Code—had signed up for a six-hour hackathon in the middle of summer break.

    Flash floods broke out across the city, but the atmosphere in the conference room remained upbeat. The hackathon was hosted in the downtown office of IBM as one of the final events in this year’s Call for Code challenge, a global competition sponsored by IBM and the Linux Foundation. The challenge focuses on using technology to assist survivors of catastrophes including tropical storms, fires, and earthquakes.

    Recent satellite hackathon events in the 2019 competition have recruited developers in Cairo to address Egypt’s national water shortage; in Paris to brainstorm AI solutions for rebuilding the Notre Dame cathedral; and in Bayamón, Puerto Rico, to improve resilience in the face of future hurricanes.

    Those whose proposals follow Call for Code’s guidelines are encouraged to submit to the annual international contest for a chance to win IBM membership and Linux tech support, meetings with potential mentors and investors, and a cash prize of US $200,000. But anyone who attends one of these optional satellite events also earns another reward: the chance to poke around inside the most prized software of the Call for Code program’s corporate partners.

SUSE and IBM/Red Hat Leftovers

Filed under
Red Hat
Server
SUSE
  • No More Sleepless Nights and Long Weekends Doing Maintenance

    Datacenter maintenance – you dread it, right? Staying up all night to make sure everything runs smoothly and nothing crashes, or possibly losing an entire weekend to maintenance if something goes wrong. Managing your datacenter can be a real drag. But it doesn’t have to be that way.

    At SUSECON 2019, Raine and Stephen discussed how SUSE can help ease your pain with SUSE Manager, a little Salt and a few best practices for datacenter management and automation.

  • Fedora Has Formed A Minimization Team To Work On Shrinking Packaged Software

    The newest initiative within the Fedora camp is a "Minimization Team" seeking to reduce the size of packaged applications, run-times, and other software available on Fedora Linux.

    The hope of the Fedora Minimization Team is that its work can lead to smaller containers, eliminate unnecessary package dependencies, and reduce the patching footprint.

  • DevNation Live: Easily secure your cloud-native microservices with Keycloak

    DevNation Live tech talks are hosted by the Red Hat technologists who create our products. These sessions include real solutions, code, and sample projects to help you get started. In this talk, you’ll learn about Keycloak from Sébastien Blanc, Principal Software Engineer at Red Hat.

    This tutorial will demonstrate how Keycloak can help you secure your microservices. Regardless of whether it’s a Node.js REST Endpoint, a PHP app, or a Quarkus service, Keycloak is completely agnostic of the technology being used by your services. Learn how to obtain a JWT token and how to propagate this token between your different secured services. We will also explain how to add fine-grained authorizations to these services.
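
    As a rough illustration (not the talk's demo code), the sketch below shows how a plain Java 11 client might obtain a JWT from Keycloak's OpenID Connect token endpoint and propagate it to a downstream service; the realm name, client, credentials, and service URLs are invented placeholders.

        // Hypothetical sketch: fetch a token from Keycloak's OpenID Connect
        // token endpoint and forward it as a Bearer header. Realm, client,
        // credentials and URLs are placeholders; a real application would use
        // a JSON parser and the Keycloak adapter for its framework.
        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;

        public class TokenPropagation {
            public static void main(String[] args) throws Exception {
                HttpClient http = HttpClient.newHttpClient();

                // Password grant against the (placeholder) "demo" realm.
                String form = "grant_type=password&client_id=demo-client&username=alice&password=secret";
                HttpRequest tokenReq = HttpRequest.newBuilder()
                        .uri(URI.create("http://localhost:8080/auth/realms/demo/protocol/openid-connect/token"))
                        .header("Content-Type", "application/x-www-form-urlencoded")
                        .POST(HttpRequest.BodyPublishers.ofString(form))
                        .build();
                String body = http.send(tokenReq, HttpResponse.BodyHandlers.ofString()).body();

                // Crude extraction of access_token; use a JSON library in real code.
                String token = body.replaceAll(".*\"access_token\"\\s*:\\s*\"([^\"]+)\".*", "$1");

                // Propagate the JWT to a downstream secured service.
                HttpRequest svcReq = HttpRequest.newBuilder()
                        .uri(URI.create("http://localhost:8081/api/orders"))  // placeholder service
                        .header("Authorization", "Bearer " + token)
                        .GET()
                        .build();
                System.out.println(http.send(svcReq, HttpResponse.BodyHandlers.ofString()).body());
            }
        }

    The point of the sketch is the flow itself: the token is issued once by Keycloak and then passed along between services, which is what keeps Keycloak agnostic of whether the callee is Node.js, PHP, or Quarkus.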

Fedora Flock Coverage (From Fedora Project)

Filed under
Red Hat
  • Fedora Localization project status and horizons

    L10n (short for “localization”) is the Fedora sub-project dedicated to translation. It is unique in its form and organization because under this label sits a set of autonomous teams of speakers. Some statistics will show you how our community has shrunk, and we invite you to come discuss it with us at Flock.

    First, the number of unique contributors per week, by time in the project (based on the model of what Matthew Miller does in his “state of Fedora” talk each year at Flock).

  • Flock to Budapest
  • Modularity at Flock 2019

    There are three sessions ready: they will help you decide when to make a module and how to make one, and there will be a discussion about making everything in Modularity work better.

  • Outreachy FHP week 7: Django, Docker, and fedora-messaging

    The main goal for the next half of the internship is deploying the project locally to Minishift and then in production on OpenShift. This will help show the badges for Fedora Happiness Packet in action! I will also be preparing for the project showcase at the annual contributor summit, Flock to Fedora. As a stretch goal, I hope to integrate the filter methods for the search option in the archive.

Servers ('Cloud'), IBM, and Fedora

Filed under
Red Hat
Server
  • Is the cloud right for you?

    Corey Quinn opened his lightning talk at the 17th annual Southern California Linux Expo (SCaLE 17x) with an apology. Corey is a cloud economist at The Duckbill Group, writes Last Week in AWS, and hosts the Screaming in the Cloud podcast. He's also a funny and engaging speaker. Enjoy this video "The cloud is a scam," to learn why he wants to apologize and how to find out if the cloud is right for you.

  • Google Cloud to offer VMware data-center tools natively

    Google this week said it would for the first time natively support VMware workloads in its Cloud service, giving customers more options for deploying enterprise applications.

    The hybrid cloud service, called Google Cloud VMware Solution by CloudSimple, will use VMware software-defined data center (SDDC) technologies including VMware vSphere, NSX, and vSAN software deployed on a platform administered by CloudSimple for GCP.

  • Get started with reactive programming with creative Coderland tutorials

    The Reactica roller coaster is the latest addition to Coderland, our fictitious amusement park for developers. It illustrates the power of reactive computing, an important architecture for working with groups of microservices that use asynchronous data to communicate with each other.

    In this scenario, we need to build a web app to display the constantly updated wait time for the coaster.
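
    The Coderland tutorials ship their own implementation; purely to illustrate the asynchronous-updates idea, here is a tiny, hypothetical sketch using the JDK's built-in Flow API, with the wait-time values made up.

        // Illustrative sketch only (not the Coderland tutorial code): a publisher
        // that pushes constantly updated wait-time estimates to any subscriber,
        // using the JDK's reactive-streams Flow API. Values and timings are made up.
        import java.util.concurrent.Flow;
        import java.util.concurrent.SubmissionPublisher;
        import java.util.concurrent.TimeUnit;

        public class WaitTimeFeed {
            public static void main(String[] args) throws Exception {
                try (SubmissionPublisher<Integer> waitTimes = new SubmissionPublisher<>()) {
                    waitTimes.subscribe(new Flow.Subscriber<>() {
                        private Flow.Subscription subscription;
                        public void onSubscribe(Flow.Subscription s) { subscription = s; s.request(1); }
                        public void onNext(Integer minutes) {
                            // In the web app this value would be pushed to the UI.
                            System.out.println("Current wait: " + minutes + " min");
                            subscription.request(1);
                        }
                        public void onError(Throwable t) { t.printStackTrace(); }
                        public void onComplete() { System.out.println("Ride closed."); }
                    });

                    // Simulate asynchronous updates arriving from upstream microservices.
                    for (int estimate : new int[]{12, 15, 9, 20}) {
                        waitTimes.submit(estimate);
                        TimeUnit.MILLISECONDS.sleep(200);
                    }
                    TimeUnit.MILLISECONDS.sleep(200); // let the last update drain before closing
                }
            }
        }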

  • Fedora Has Deferred Its Decision On Stopping Modular/Everything i686 Repositories

    The recent proposal to drop Fedora's Modular and Everything repositories for the upcoming Fedora 31 release is yet to be decided after it was deferred at this week's Fedora Engineering and Steering Committee (FESCo) meeting.

    The proposal is about ending the i686 Modular and Everything repositories beginning with the Fedora 31 cycle later this year. This isn't about ending multi-lib support, though, so 32-bit packages will continue to work on Fedora x86_64 installations. But as is the trend now, if you are still running pure i686 (32-bit x86) Linux distributions, your days are numbered. Separately, Fedora is already looking to drop its i686 kernels moving forward, and it is not the only Linux distribution pushing for the long overdue retirement of x86 32-bit operating system support.

Servers: Twitter Moves to Kubernetes, Red Hat/IBM News and Tips

Filed under
Red Hat
Server
  • Twitter Announced Switch from Mesos to Kubernetes

    On the 2nd of May at 7:00 PM (PST), Twitter held a technical release conference and meetup at its headquarters in San Francisco. At the conference, David McLaughlin, Product and Technical Head of Twitter Computing Platform, announced that Twitter's infrastructure would completely switch from Mesos to Kubernetes.

    For a bit of background history, Mesos was released in 2009, and Twitter was one of the early companies to support and use Mesos. As one of the most successful social media giants in the world, Twitter has received much attention due to its large production cluster scale (having tens of thousands of nodes). In 2010, Twitter started to develop the Aurora project based on Mesos to make it more convenient to manage both its online and offline business and to gradually adopt Mesos.

  • Linux Ending Support for the Floppy Drive, Unity 2019.2 Launches Today, Purism Unveils Final Librem 5 Smartphone Specs, First Kernel Security Update for Debian 10 "Buster" Is Out, and Twitter Is Switching from Mesos to Kubernetes

    Twitter is switching from Mesos to Kubernetes. Zhang Lei, Senior Technical Expert on Alibaba Cloud Container Platform and Co-maintainer of Kubernetes Project, writes "with the popularity of cloud computing and the rise of cloud-based containerized infrastructure projects like Kubernetes, this traditional Internet infrastructure starts to show its age—being a much less efficient solution compared with that of Kubernetes". See Zhang's post for some background history and more details on the move.

  • Three ways automation can help service providers digitally transform

    As telecommunication service providers (SPs) look to stave off competitive threats from over the top (OTT) providers, they are digitally transforming their operations to greatly enhance customer experience and relevance by automating their networks, applying security, and leveraging infrastructure management. According to EY’s "Digital transformation for 2020 and beyond" study, process automation can help smooth the path for SP IT teams to reach their goals, with 71 percent of respondents citing process automation as "most important to [their] organization’s long-term operational excellence."

    There are thousands of virtual and physical devices that comprise business, consumer, and mobile services in an SP’s environment, and automation can help facilitate and accelerate the delivery of those services.

    [...]

    Some SPs are turning to Ansible and other tools to embark on their automation journey. Red Hat Ansible Automation, including Red Hat Ansible Engine and Red Hat Ansible Tower, simplifies software-defined infrastructure deployment and management, operations, and business processes to help SPs more effectively deliver consumer, business, and mobile services.

    Red Hat Process Automation Manager (formerly Red Hat JBoss BPM Suite) combines business process management, business rules management, business resource optimization, and complex event processing technologies in a platform that also includes tools for creating user interfaces and decision services. 

  • Deploy your API from a Jenkins Pipeline

    In a previous article, 5 principles for deploying your API from a CI/CD pipeline, we discovered the main steps required to deploy your API from a CI/CD pipeline, which can prove to be a tremendous amount of work. Fortunately, the latest release of Red Hat Integration greatly improved this situation by adding new capabilities to the 3scale CLI. In 3scale toolbox: Deploy an API from the CLI, we discovered how the 3scale toolbox strives to automate the delivery of APIs. In this article, we will discuss how the 3scale toolbox can help you deploy your API from a Jenkins pipeline on Red Hat OpenShift/Kubernetes.

  • How to set up Red Hat CodeReady Studio 12: Process automation tooling

    The latest release of the Red Hat developer suite, version 12, included a name change from Red Hat JBoss Developer Studio to Red Hat CodeReady Studio. The focus here is not on Red Hat CodeReady Workspaces, the cloud and container development experience, but on the locally installed developer studio. Given that, you might have questions about how to get started with the various Red Hat integration, data, and process automation product toolsets that are not installed out of the box.

    In this series of articles, we’ll show how to install each set of tools and explain the various products they support. We hope these tips will help you make informed decisions about the tooling you might want to use on your next development project.

SUSE displaces Red Hat @ Istanbul Technical University

Filed under
Red Hat
SUSE

Did you know the third-oldest engineering sciences university in the world is in Turkey? Founded in 1773, Istanbul Technical University (ITU) is one of the oldest universities in Turkey. It trains more than 40,000 students in a wide range of science, technology and engineering disciplines.

The third-oldest engineering sciences university selected the oldest Enterprise Linux company. Awesome match of experience! The university ditched the half-closed/half-open Red Hat products and went for truly open, open source solutions from SUSE.

Read more

Red Hat/IBM Leftovers

Filed under
Red Hat
  • 3scale toolbox: Deploy an API from the CLI

    Deploying your API from a CI/CD pipeline can be a tremendous amount of work. The latest release of Red Hat Integration greatly improved this situation by adding new capabilities to the 3scale CLI. The 3scale CLI is named 3scale toolbox and strives to help API administrators to operate their services as well as automate the delivery of their API through Continuous Delivery pipelines.

    Having a standard CLI is a great advantage for our customers since they can use it in the CI/CD solution of their choice (Jenkins, GitLab CI, Ansible, Tekton, etc.). It is also a means for Red Hat to capture customer needs as much as possible and offer the same feature set to all our customers.

  • Red Hat Universal Base Image: How it works in 3 minutes or less
  • Guidelines for instruction encoding in the NOP space
  • Edge computing: 6 things to know

    As more and more things get smart – from thermostats and toothbrushes to utility grids and industrial machines – data is being created nearly everywhere, making it increasingly urgent for IT leaders to determine how and where that data will be processed.

    Enter the edge. There are perhaps as many ways to define edge computing as there are ways to apply it. At its core, edge computing is the practice of processing data close to where it is generated.

Red Hat and IBM

Filed under
Red Hat
Server
  • 16 essentials for sysadmin superheroes

    You know you're a sysadmin if you are knee-deep in system logs, constantly handling user errors, or carving out time to document it all along the way. Yesterday was Sysadmin Appreciation Day and we want to give a big "thank you" to our favorite IT pros. We've pulled together the ultimate list of tasks, resources, tools, commands, and guides to help you become a sysadmin superhero.

  • Kubernetes by the numbers: 13 compelling stats

    Fast-forward to the dog days of summer 2019 and a fresh look at various stats in and around the Kubernetes ecosystem, and the story’s sequel plays out a lot like the original: Kubernetes is even more popular. It’s tough to find a buzzier platform in the IT world these days. Yet Kubernetes is still quite young; it just celebrated its fifth “birthday,” and version 1.0 of the open source project was released just over four years ago. So there’s plenty of room for additional growth.

  • Vendors not contributing to open source will fall behind says John Allessio, SVP & GM, Red Hat Global Services
  • IBM open-sources AI algorithms to help advance cancer research

    IBM Corp. has open-sourced three artificial intelligence projects focused on cancer research.

  • IBM Just Made its Cancer-Fighting AI Projects Open-Source

    IBM just announced that it was making three of its artificial intelligence projects designed to help doctors and cancer researchers open-source.

  • IBM Makes Its Cancer-Fighting AI Projects Open Source

    IBM launches three new AI projects to help researchers and medical experts study cancer and find better treatments for the disease in the future.

  • New Open-Source AI Machine Learning Tools to Fight Cancer

    In Basel, Switzerland at this week’s 18th European Conference on Computational Biology (ECCB) and 27th Conference on Intelligent Systems for Molecular Biology (ISMB), IBM will share three novel artificial intelligence (AI) machine learning tools called PaccMann, INtERAcT, and PIMKL, that are designed to assist cancer researchers.

    [...]

    “There have been a plethora of works focused on prediction of drug sensitivity in cancer cells, however, the majority of them have focused on the analysis of unimodal datasets such as genomic or transcriptomic profiles of cancer cells,” wrote the IBM researchers in their study. “To the best of our knowledge, there have not been any multi-modal deep learning solutions for anticancer drug sensitivity prediction that combine a molecular structure of compounds, the genetic profile of cells and prior knowledge of protein interactions.”

  • IBM offering cancer researchers 3 open-source AI tools

    Researchers and data scientists at IBM have developed three novel algorithms aimed at uncovering the underlying biological processes that cause tumors to form and grow.

    And the computing behemoth is making all three tools freely available to clinical researchers and AI developers.

    The offerings are summarized in a blog post written by life sciences researcher Matteo Manica and data scientist Joris Cadow, both of whom work at an IBM research lab in Switzerland.

  • Red Hat CTO says no change to OpenShift, conference swag plans after IBM buy

    Red Hat’s CTO took to Reddit this week to reassure fans that the company would stick to its open source knitting after the firm was absorbed by IBM earlier this month, and that their Red Hat swag could be worth a packet in future.

    The first question to hit in Chris Wright’s Reddit AMA regarded the effect on Red Hat’s OpenShift strategy. The short answer was "no effect".

    “First, Red Hat is still Red Hat, and we are focused on delivering the industry’s most comprehensive enterprise Kubernetes platform,” Wright answered. “Second, upstream first development in Kubernetes and community ecosystem development in OKD are part of our product development process. Neither of those change. The IBM acquisition can help accelerate the adoption of OpenShift given the increased scale and reach in sales and services that IBM has.”

IBM, Red Hat, Fedora Leftovers

Filed under
Red Hat
  • 5 principles for deploying your API from a CI/CD pipeline

    With companies generating more and more revenue through their APIs, these APIs also have become even more critical. Quality and reliability are key goals sought by companies looking for large-scale use of their APIs, and those goals are usually supported through well-crafted DevOps processes. Figures from the tech giants make us dizzy: Amazon is deploying code to production every 11.7 seconds, Netflix deploys thousands of times per day, and Fidelity saved $2.3 million per year with their new release framework. So, if you have APIs, you might want to deploy your API from a CI/CD pipeline.

    Deploying your API from a CI/CD pipeline is a key activity of the “Full API Lifecycle Management.” Sitting between the “Implement” and “Secure” phases, the “Deploy” activity encompasses every process needed to bring the API from source code to the production environment. To be more specific, it covers Continuous Integration and Continuous Delivery.

  • DevNation Live: Subatomic reactive systems with Quarkus

    DevNation Live tech talks are hosted by the Red Hat technologists who create our products. These sessions include real solutions, code, and sample projects to help you get started. In this talk, Clement Escoffier, Principal Software Engineer at Red Hat, will dive into the reactive side of Quarkus.

    Quarkus provides a supersonic development experience and a subatomic execution environment thanks to its integration with GraalVM. But that’s not all. Quarkus also unifies the imperative and reactive paradigms.

    This discussion is about the reactive side of Quarkus and how you can use it to implement reactive and data streaming applications. From WebSockets to Kafka integration and reactive streams, you will learn how to build a reactive system with Quarkus.
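
    The talk has its own demos; as a rough, hypothetical illustration of the Kafka-integration style mentioned above, here is a minimal reactive-messaging processor of the kind used in Quarkus (via SmallRye Reactive Messaging), with the channel names invented.

        // Rough illustration only (channel names invented, not the talk's demo):
        // a Quarkus bean using MicroProfile Reactive Messaging to read events
        // from one channel (e.g. a Kafka topic) and publish results to another.
        import javax.enterprise.context.ApplicationScoped;
        import org.eclipse.microprofile.reactive.messaging.Incoming;
        import org.eclipse.microprofile.reactive.messaging.Outgoing;

        @ApplicationScoped
        public class WaitTimeProcessor {

            // Consumes from the "ride-events" channel and produces to "wait-times";
            // the mapping of channels to Kafka topics lives in application.properties,
            // assuming the Kafka reactive-messaging extension is installed.
            @Incoming("ride-events")
            @Outgoing("wait-times")
            public String process(String event) {
                // Trivial transformation standing in for real stream processing.
                return event.toUpperCase();
            }
        }

    Because the processing is declared as a stream transformation rather than a blocking loop, the same bean works whether the data arrives over WebSockets, Kafka, or any other connector wired to those channels.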

  • What does it mean to be a sysadmin hero?

    Sysadmins spend a lot of time preventing and fixing problems. There are certainly times when a sysadmin becomes a hero, whether to their team, department, company, or the general public, though the people they "saved" from trouble may never even know.

    Enjoy these two stories from the community on sysadmin heroics. What does it mean to you?

  • What’s The Future Of Red Hat At IBM

    IBM has a long history of working with the open source community. Way back in 1999, IBM announced a $1 billion investment in Linux. IBM is also credited with creating one of the most innovative advertisements about Linux. But IBM’s acquisition of Red Hat raised some serious and genuine questions about IBM’s commitment to open source and the future of Red Hat at Big Blue.

    Red Hat CTO Chris Wright took it upon himself to address some of these concerns and answer people’s questions in an AMA (Ask Me Anything) on Reddit. Wright has evolved from being a Linux kernel developer to becoming the CTO of the world’s largest open source company. He has his finger on the pulse of both the business and community sides of the open source world.

  • Financial industry leaders talk open source and modernization at Red Hat Summit 2019

    IT leaders at traditional financial institutions seem poised to become the disruptors rather than the disrupted in what has become a dynamic industry. And they’re taking advantage of enterprise open source technology to do it, building applications in exciting and innovative ways, and even adopting the principles and culture of startup technology companies themselves.

  • FPgM report: 2019-30

    Here’s your report of what has happened in Fedora Program Management this week. The mass rebuild is underway.

    I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.


More in Tux Machines

DragonFlyBSD Pulls In AMD Radeon Graphics Code From The Linux 4.7 Kernel

It was just last month that DragonFlyBSD pulled in Radeon's Linux 4.4 kernel driver code as an upgrade from the Linux 3.19 era code they had been using for their open-source AMD graphics support. This week that's now up to a Linux 4.7 era port. François Tigeot, who continues doing amazing work on pulling in updates to DragonFlyBSD's graphics drivers, has now upgraded the Radeon DRM code to match that of the upstream Linux 4.7.10 kernel.

Read more

Android Leftovers

TenFourFox FPR16b1 available

FPR16 got delayed because I really tried very hard to make some progress on our two biggest JavaScript deficiencies, the infamous issues 521 (async and await) and 533 (this is undefined). Unfortunately, not only did I make little progress on either, but the speculative fix I tried for issue 533 turned out to be the patch that unsettled the optimized build and had to be backed out.

There is some partial work on issue 521, though, including a fully working parser patch. The problem is plumbing this into the browser runtime, which is ripe for all kinds of regressions and is not currently implemented (instead, for compatibility, async functions get turned into a bytecode of null throw null return, essentially making any call to an async function throw an exception because it wouldn't have worked in the first place). This wouldn't seem very useful except that, effectively, what the whole shebang does is convert a compile-time error into a runtime warning, such that other functions that previously might not have been able to load because of the error can now be parsed and hopefully run. With luck this should improve the functionality of sites using these functions even if everything still doesn't fully work, as a down payment hopefully on a future implementation. It may not be technically possible but it's a start.

Read more

Simon Steinbeiß of Xfce, Dalton Durst of UBports, KDE Apps 19.08, Huawei – Destination Linux 135

Simon Steinbeiß of Xfce, Dalton Durst of UBports, KDE Applications, CutiePi Open Source Tablet, Huawei To Create Open Source Foundation, Rust Removes Linux Support, Stranded Deep Survival Game Fix

Read more