Fedora Flock Coverage (From Fedora Project)

Filed under
Red Hat
  • Fedora Localization project status and horizons

    L10n (short for “localization”) is the Fedora sub-project dedicated to translation. It is unique in its form and organization because under this label sits a set of autonomous teams of speakers of each language. Some statistics below show the shrinking of our community, and they are an invitation to come discuss it with us at Flock.

    First, here is the number of unique contributors per week, broken down by time in the project (modeled on what Matthew Miller does in his “state of Fedora” talk each year at Flock).

  • Flock to Budapest
  • Modularity at Flock 2019

    Three sessions are ready: one to help you decide when to make a module, one on how to make modules, and a discussion about making everything in Modularity work better.

  • Outreachy FHP week 7: Django, Docker, and fedora-messaging

    The main goal for the next half of the internship is deploying the project locally to Minishift and then in production on OpenShift. This will show the badges for Fedora Happiness Packet in action! I will also be preparing for the project showcase at the annual contributor summit, Flock to Fedora. As a stretch goal, I hope to integrate the filter methods for the search option in the archive.

Servers ('Cloud'), IBM, and Fedora

Filed under
Red Hat
Server
  • Is the cloud right for you?

    Corey Quinn opened his lightning talk at the 17th annual Southern California Linux Expo (SCaLE 17x) with an apology. Corey is a cloud economist at The Duckbill Group, writes Last Week in AWS, and hosts the Screaming in the Cloud podcast. He's also a funny and engaging speaker. Enjoy his video, "The cloud is a scam," to learn why he wants to apologize and how to find out whether the cloud is right for you.

  • Google Cloud to offer VMware data-center tools natively

    Google this week said it would for the first time natively support VMware workloads in its Cloud service, giving customers more options for deploying enterprise applications.

    The hybrid cloud service, called Google Cloud VMware Solution by CloudSimple, will use VMware software-defined data center (SDDC) technologies, including VMware vSphere, NSX, and vSAN, deployed on a platform administered by CloudSimple for GCP.

  • Get started with reactive programming with creative Coderland tutorials

    The Reactica roller coaster is the latest addition to Coderland, our fictitious amusement park for developers. It illustrates the power of reactive computing, an important architecture for groups of microservices that communicate with each other through asynchronous data.

    In this scenario, we need to build a web app to display the constantly updated wait time for the coaster; a minimal sketch of this kind of reactive update loop appears after this list.

  • Fedora Has Deferred Its Decision On Stopping Modular/Everything i686 Repositories

    The recent proposal to drop Fedora's i686 Modular and Everything repositories for the upcoming Fedora 31 release is yet to be decided after it was deferred at this week's Fedora Engineering and Steering Committee (FESCo) meeting.

    The proposal is about ending the i686 Modular and Everything repositories beginning with the Fedora 31 cycle later this year. It isn't about ending multi-lib support, so 32-bit packages will continue to work on Fedora x86_64 installations. But as is the trend now, if you are still running pure i686 (32-bit x86) Linux distributions, your days are numbered. Separately, Fedora is already looking to drop its i686 kernels moving forward, and it is not the only Linux distribution pushing for the long overdue retirement of 32-bit x86 operating system support.
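
    Going back to the Reactica item above, the core idea of reactive computing is to react to asynchronous updates as they arrive rather than polling for them. The snippet below is a minimal, illustrative sketch of that pattern in plain Python with asyncio; it is not the Coderland code, and the update interval and wait-time values are invented for the example.

      import asyncio
      import random

      async def wait_time_updates():
          # Stands in for a microservice that emits new wait-time estimates
          # asynchronously (values and timing are illustrative assumptions).
          while True:
              await asyncio.sleep(1)
              yield random.randint(5, 45)   # estimated wait in minutes

      async def display_board():
          # Reacts to each update as it arrives, e.g. to refresh a web view,
          # instead of repeatedly asking "what is the wait time now?"
          async for minutes in wait_time_updates():
              print(f"Current Reactica wait time: {minutes} min")

      if __name__ == "__main__":
          asyncio.run(display_board())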

Servers: Twitter Moves to Kubernetes, Red Hat/IBM News and Tips

Filed under
Red Hat
Server
  • Twitter Announced Switch from Mesos to Kubernetes

    On the 2nd of May at 7:00 PM (PST), Twitter held a technical release conference and meetup at its headquarters in San Francisco. At the conference, David McLaughlin, Product and Technical Head of Twitter Computing Platform, announced that Twitter's infrastructure would completely switch from Mesos to Kubernetes.

    For a bit of background history, Mesos was released in 2009, and Twitter was one of the early companies to support and use it. As one of the most successful social media giants in the world, Twitter has received much attention due to its large production cluster scale (tens of thousands of nodes). In 2010, Twitter started to develop the Aurora project on top of Mesos to make it more convenient to manage both its online and offline business, and it gradually adopted Mesos.

  • Linux Ending Support for the Floppy Drive, Unity 2019.2 Launches Today, Purism Unveils Final Librem 5 Smartphone Specs, First Kernel Security Update for Debian 10 "Buster" Is Out, and Twitter Is Switching from Mesos to Kubernetes

    Twitter is switching from Mesos to Kubernetes. Zhang Lei, Senior Technical Expert on Alibaba Cloud Container Platform and Co-maintainer of Kubernetes Project, writes "with the popularity of cloud computing and the rise of cloud-based containerized infrastructure projects like Kubernetes, this traditional Internet infrastructure starts to show its age—being a much less efficient solution compared with that of Kubernetes". See Zhang's post for some background history and more details on the move.

  • Three ways automation can help service providers digitally transform

    As telecommunication service providers (SPs) look to stave off competitive threats from over the top (OTT) providers, they are digitally transforming their operations to greatly enhance customer experience and relevance by automating their networks, applying security, and leveraging infrastructure management. According to EY’s "Digital transformation for 2020 and beyond" study, process automation can help smooth the path for SP IT teams to reach their goals, with 71 percent of respondents citing process automation as "most important to [their] organization’s long-term operational excellence."

    There are thousands of virtual and physical devices that comprise business, consumer, and mobile services in an SP’s environment, and automation can help facilitate and accelerate the delivery of those services.

    [...]

    Some SPs are turning to Ansible and other tools to embark on their automation journey. Red Hat Ansible Automation, including Red Hat Ansible Engine and Red Hat Ansible Tower, simplifies software-defined infrastructure deployment and management, operations, and business processes to help SPs more effectively deliver consumer, business, and mobile services.

    Red Hat Process Automation Manager (formerly Red Hat JBoss BPM Suite) combines business process management, business rules management, business resource optimization, and complex event processing technologies in a platform that also includes tools for creating user interfaces and decision services. 

  • Deploy your API from a Jenkins Pipeline

    In a previous article, 5 principles for deploying your API from a CI/CD pipeline, we discovered the main steps required to deploy your API from a CI/CD pipeline, and this can prove to be a tremendous amount of work. Fortunately, the latest release of Red Hat Integration greatly improved this situation by adding new capabilities to the 3scale CLI. In 3scale toolbox: Deploy an API from the CLI, we discovered how the 3scale toolbox strives to automate the delivery of APIs. In this article, we will discuss how the 3scale toolbox can help you deploy your API from a Jenkins pipeline on Red Hat OpenShift/Kubernetes.

  • How to set up Red Hat CodeReady Studio 12: Process automation tooling

    The release of the latest Red Hat developer suite, version 12, included a name change from Red Hat JBoss Developer Studio to Red Hat CodeReady Studio. The focus here is not on Red Hat CodeReady Workspaces, a cloud and container development experience, but on the locally installed developer studio. Given that, you might have questions about how to get started with the various Red Hat integration, data, and process automation product toolsets that are not installed out of the box.

    In this series of articles, we’ll show how to install each set of tools and explain the various products they support. We hope these tips will help you make informed decisions about the tooling you might want to use on your next development project.

SUSE displaces Red Hat @ Istanbul Technical University

Filed under
Red Hat
SUSE

Did you know the third-oldest engineering sciences university in the world is in Turkey? Founded in 1773, Istanbul Technical University (ITU) is one of the oldest universities in Turkey. It trains more than 40,000 students in a wide range of science, technology and engineering disciplines.

The third-oldest engineering sciences university selected the oldest Enterprise Linux company. Awesome match of experience! The university ditched the half-closed/half-open Red Hat products and went for truly open, open source solutions from SUSE.

Read more

Red Hat/IBM Leftovers

Filed under
Red Hat
  • 3scale toolbox: Deploy an API from the CLI

    Deploying your API from a CI/CD pipeline can be a tremendous amount of work. The latest release of Red Hat Integration greatly improved this situation by adding new capabilities to the 3scale CLI. The 3scale CLI is named 3scale toolbox and strives to help API administrators operate their services as well as automate the delivery of their APIs through Continuous Delivery pipelines.

    Having a standard CLI is a great advantage for our customers since they can use it in the CI/CD solution of their choice (Jenkins, GitLab CI, Ansible, Tekton, etc.). It is also a means for Red Hat to capture customer needs as much as possible and offer the same feature set to all our customers.

  • Red Hat Universal Base Image: How it works in 3 minutes or less
  • Guidelines for instruction encoding in the NOP space
  • Edge computing: 6 things to know

    As more and more things get smart – from thermostats and toothbrushes to utility grids and industrial machines – data is being created nearly everywhere, making it increasingly urgent for IT leaders to determine how and where that data will be processed.

    Enter the edge. There are perhaps as many ways to define edge computing as there are ways to apply it. At its core, edge computing is the practice of processing data close to where it is generated.

Red Hat and IBM

Filed under
Red Hat
Server
  • 16 essentials for sysadmin superheroes

    You know you're a sysadmin if you are knee-deep in system logs, constantly handling user errors, or carving out time to document it all along the way. Yesterday was Sysadmin Appreciation Day and we want to give a big "thank you" to our favorite IT pros. We've pulled together the ultimate list of tasks, resources, tools, commands, and guides to help you become a sysadmin superhero.

  • Kubernetes by the numbers: 13 compelling stats

    Fast-forward to the dog days of summer 2019 and a fresh look at various stats in and around the Kubernetes ecosystem, and the story’s sequel plays out a lot like the original: Kubernetes is even more popular. It’s tough to find a buzzier platform in the IT world these days. Yet Kubernetes is still quite young; it just celebrated its fifth “birthday,” and version 1.0 of the open source project was released just over four years ago. So there’s plenty of room for additional growth.

  • Vendors not contributing to open source will fall behind says John Allessio, SVP & GM, Red Hat Global Services
  • IBM open-sources AI algorithms to help advance cancer research

    IBM Corp. has open-sourced three artificial intelligence projects focused on cancer research.

  • IBM Just Made its Cancer-Fighting AI Projects Open-Source

    IBM just announced that it was making three of its artificial intelligence projects designed to help doctors and cancer researchers open-source.

  • IBM Makes Its Cancer-Fighting AI Projects Open Source

    IBM launches three new AI projects to help researchers and medical experts study cancer and find better treatments for the disease in the future.

  • New Open-Source AI Machine Learning Tools to Fight Cancer

    In Basel, Switzerland, at this week’s 18th European Conference on Computational Biology (ECCB) and 27th Conference on Intelligent Systems for Molecular Biology (ISMB), IBM will share three novel artificial intelligence (AI) machine learning tools called PaccMann, INtERAcT, and PIMKL, which are designed to assist cancer researchers.

    [...]

    “There have been a plethora of works focused on prediction of drug sensitivity in cancer cells, however, the majority of them have focused on the analysis of unimodal datasets such as genomic or transcriptomic profiles of cancer cells,” wrote the IBM researchers in their study. “To the best of our knowledge, there have not been any multi-modal deep learning solutions for anticancer drug sensitivity prediction that combine a molecular structure of compounds, the genetic profile of cells and prior knowledge of protein interactions.”

  • IBM offering cancer researchers 3 open-source AI tools

    Researchers and data scientists at IBM have developed three novel algorithms aimed at uncovering the underlying biological processes that cause tumors to form and grow.

    And the computing behemoth is making all three tools freely available to clinical researchers and AI developers.

    The offerings are summarized in a blog post written by life sciences researcher Matteo Manica and data scientist Joris Cadow, both of whom work at an IBM research lab in Switzerland.

  • Red Hat CTO says no change to OpenShift, conference swag plans after IBM buy

    Red Hat’s CTO took to Reddit this week to reassure fans that the company would stick to its open source knitting after the firm was absorbed by IBM earlier this month, and that their Red Hat swag could be worth a packet in the future.

    The first question to hit in Chris Wright’s Reddit AMA regarded the effect on Red Hat’s OpenShift strategy. The short answer was “no effect”.

    “First, Red Hat is still Red Hat, and we are focused on delivering the industry’s most comprehensive enterprise Kubernetes platform,” Wright answered. “Second, upstream first development in Kubernetes and community ecosystem development in OKD are part of our product development process. Neither of those change. The IBM acquisition can help accelerate the adoption of OpenShift given the increased scale and reach in sales and services that IBM has.”

IBM, Red Hat, Fedora Leftovers

Filed under
Red Hat
  • 5 principles for deploying your API from a CI/CD pipeline

    With companies generating more and more revenue through their APIs, these APIs have also become even more critical. Quality and reliability are key goals sought by companies looking for large-scale use of their APIs, and those goals are usually supported through well-crafted DevOps processes. Figures from the tech giants make us dizzy: Amazon is deploying code to production every 11.7 seconds, Netflix deploys thousands of times per day, and Fidelity saved $2.3 million per year with their new release framework. So, if you have APIs, you might want to deploy your API from a CI/CD pipeline.

    Deploying your API from a CI/CD pipeline is a key activity of the “Full API Lifecycle Management.” Sitting between the “Implement” and “Secure” phases, the “Deploy” activity encompasses every process needed to bring the API from source code to the production environment. To be more specific, it covers Continuous Integration and Continuous Delivery.

  • DevNation Live: Subatomic reactive systems with Quarkus

    DevNation Live tech talks are hosted by the Red Hat technologists who create our products. These sessions include real solutions, code, and sample projects to help you get started. In this talk, Clement Escoffier, Principal Software Engineer at Red Hat, will dive into the reactive side of Quarkus.

    Quarkus provides a supersonic development experience and a subatomic execution environment thanks to its integration with GraalVM. But, that’s not all. Quarkus also unifies the imperative and reactive paradigm.

    This discussion is about the reactive side of Quarkus and how you can use it to implement reactive and data streaming applications. From WebSockets to Kafka integration and reactive streams, you will learn how to build a reactive system with Quarkus.

  • What does it mean to be a sysadmin hero?

    Sysadmins spend a lot of time preventing and fixing problems. There are certainly times when a sysadmin becomes a hero, whether to their team, department, company, or the general public, though the people they "saved" from trouble may never even know.

    Enjoy these two stories from the community on sysadmin heroics. What does it mean to you?

  • What’s The Future Of Red Hat At IBM

    IBM has a long history of working with the open source community. Way back in 1999, IBM announced a $1 billion investment in Linux. IBM is also credited with creating one of the most innovative advertisements about Linux. But IBM’s acquisition of Red Hat raised some serious and genuine questions about IBM’s commitment to open source and the future of Red Hat at Big Blue.

    Red Hat CTO Chris Wright took it upon himself to address some of these concerns and answer people’s questions in an AMA (Ask Me Anything) on Reddit. Wright has evolved from being a Linux kernel developer to becoming the CTO of the world’s largest open source company. He has his finger on the pulse of both the business and community sides of the open source world.

  • Financial industry leaders talk open source and modernization at Red Hat Summit 2019

    IT leaders at traditional financial institutions seem poised to become the disruptors rather than the disrupted in what has become a dynamic industry. And they’re taking advantage of enterprise open source technology to do it, building applications in exciting and innovative ways, and even adopting the principles and culture of startup technology companies themselves.

  • FPgM report: 2019-30

    Here’s your report of what has happened in Fedora Program Management this week. The mass rebuild is underway.

    I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.

Fedora's ARM SIG Is Looking At Making An AArch64 Xfce Desktop Spin

Filed under
Red Hat

Another late change proposal being talked about for this autumn's Fedora 31 release is introducing a 64-bit ARM (AArch64) Xfce desktop spin.

Fedora's ARM special interest group already maintains an AArch64 minimal spin, a server spin, and Fedora Workstation complete with the GNOME Shell desktop. This proposed Xfce desktop image for 64-bit Arm SoCs would cater to lighter-weight SBCs/systems that are not capable of, or not interested in, running a full workstation desktop.

Read more

Also: Now available: The user preview release of Fedora CoreOS

Red Hat CTO Chris Wright talks about Red Hat's future with IBM

Filed under
Red Hat

Many people are still waiting for the other shoe to drop now that IBM has acquired Red Hat. In a Reddit Ask Me Anything (AMA), Red Hat CTO and Linux kernel developer Chris Wright reassured everyone that Red Hat would be staying its open-source and product course.

Question number one was what are the plans for Red Hat's Kubernetes offering OpenShift. Kubernetes is vital for the modern-day hybrid cloud. Indeed, one of the big reasons why IBM bought Red Hat was for its hybrid-cloud expertise. That said, IBM has its own native Kubernetes offering, IBM Cloud Kubernetes Service for use on its private cloud offerings.

Read more

IBM and Servers

Filed under
Red Hat
Server
  • Controlling Red Hat OpenShift from an OpenShift pod

    This article explains how to configure a Python application running within an OpenShift pod to communicate with the Red Hat OpenShift cluster via openshift-restclient-python, the OpenShift Python client; a short sketch of this pattern appears after this list.

  • 24 sysadmin job interview questions you should know

    As a geek who always played with computers, I found that a career after my master's in IT was a natural choice. So, I decided the sysadmin path was the right one. Over the course of my career, I have grown quite familiar with the job interview process. Here is a look at what to expect, the general career path, and a set of common questions along with my answers to them.

  • How to transition into a career as a DevOps engineer

    DevOps engineering is a hot career with many rewards. Whether you're looking for your first job after graduating or seeking an opportunity to reskill while leveraging your prior industry experience, this guide should help you take the right steps to become a DevOps engineer.

    [...]

    If you have prior experience working in technology, such as a software developer, systems engineer, systems administrator, network operations engineer, or database administrator, you already have broad insights and useful experience for your future role as a DevOps engineer. If you're just starting your career after finishing your degree in computer science or any other STEM field, you have some of the basic stepping-stones you'll need in this transition.

  • Getting Started with Knative on Ubuntu

    Serverless computing is a style of computing that simplifies software development by separating code development from code packaging and deployment. You can think of serverless computing as synonymous with function as a service (FaaS). 

    Serverless has at least three parts, and consequently it can mean something different depending on your persona and which part you look at: the infrastructure used to run your code, the framework and tools (middleware) that hide the infrastructure, and your code, which might be coupled with the middleware. In practice, serverless computing can provide a quicker, easier path to building microservices. It will handle the complex scaling, monitoring, and availability aspects of cloud native computing; a minimal sketch of the kind of function such a platform runs appears below.
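
    Returning to the "Controlling Red Hat OpenShift from an OpenShift pod" item above, the usual pattern with openshift-restclient-python is to load the in-cluster configuration provided by the pod's service account and then query resources through a dynamic client. The sketch below is only an illustration of that pattern, not the article's code: the namespace name is invented, and it assumes the pod's service account is allowed to list pods.

      from kubernetes import client, config
      from openshift.dynamic import DynamicClient

      def list_pods(namespace="my-project"):   # namespace is an example value
          # Inside a pod, credentials come from the mounted service account token.
          config.load_incluster_config()
          dyn_client = DynamicClient(client.ApiClient())

          # Look up the Pod API dynamically, then list pods in the namespace.
          v1_pods = dyn_client.resources.get(api_version="v1", kind="Pod")
          for pod in v1_pods.get(namespace=namespace).items:
              print(pod.metadata.name, pod.status.phase)

      if __name__ == "__main__":
          list_pods()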
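
    As a companion to the Knative item above, the snippet below sketches the kind of small, stateless HTTP handler a serverless platform can package, deploy, and scale on demand: the developer writes only the function, while the platform handles scaling, monitoring, and availability. It uses only the Python standard library; the port and response text are illustrative assumptions, not Knative specifics.

      from http.server import BaseHTTPRequestHandler, HTTPServer

      class HelloHandler(BaseHTTPRequestHandler):
          def do_GET(self):
              # A stateless request handler: no local state to manage,
              # so the platform is free to scale instances up or down.
              body = b"Hello from a serverless function\n"
              self.send_response(200)
              self.send_header("Content-Type", "text/plain")
              self.send_header("Content-Length", str(len(body)))
              self.end_headers()
              self.wfile.write(body)

      if __name__ == "__main__":
          # Serverless platforms typically inject the port; 8080 is a common default.
          HTTPServer(("", 8080), HelloHandler).serve_forever()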
