Red Hat

Red Hat: Puff Pieces, OpenStack, OpenShift, CodeReady and More

Filed under
Red Hat
  • Red Hat and SAS: Enabling enterprise intelligence across the hybrid cloud

    Every day, 2.5 quintillion bytes of big data are created. This data comes from externally sourced websites, blog posts, tweets, sensors of various types and public data initiatives such as the Human Genome Project, as well as audio and video recordings from smart devices/apps and the Internet of Things (IoT). Many businesses are learning how to look beyond just the volume (storage requirements), velocity (port bandwidth) and variety (voice, video and data) of this data; they are learning how to use the data to make intelligent business decisions.

    Today, every organization, across geographies and industries, can innovate digitally, creating more customer value and differentiation while helping to level the competitive playing field. The ability to capture and analyze big data and turn context-based visibility and control into actionable information is what creates an intelligent enterprise. It entails using data to get real-time insights across the lines of business, which can then drive improved operations, innovation and new areas of growth, and deliver enhanced customer and end-user experiences.

  • Working together to raise mental health awareness: How Red Hat observed World Mental Health Day

    Cultivating a diverse and inclusive workspace is an important part of Red Hat’s open culture. That’s why we work to create an environment where associates feel comfortable bringing their whole selves to work every single day. One way we achieve this mission is by making sure that Red Hatters who wish to share their mental health experiences are met with compassion and understanding and, most importantly, without stigma. It is estimated that one in four adults suffers from mental illness every year.

  • Introducing Red Hat OpenShift 4.2: Developers get an expanded and improved toolbox

    Today Red Hat announces Red Hat OpenShift 4.2, extending its commitment to simplifying and automating the cloud and empowering developers to innovate.

    Red Hat OpenShift 4, introduced in May, is the next generation of Red Hat’s trusted enterprise Kubernetes platform, reengineered to address the complexity of managing container-based applications in production systems. It is designed as a self-managing platform with automatic software updates and lifecycle management across hybrid cloud environments, built on the trusted foundation of Red Hat Enterprise Linux and Red Hat Enterprise Linux CoreOS.

    The Red Hat OpenShift 4.2 release focuses on tooling that is designed to deliver a developer-centric user experience. It also helps cluster administrators by easing the management of the platform and applications, with the availability of OpenShift migration tooling from 3.x to 4.x, as well as newly supported disconnected installs.

  • A look at the most exciting features in OpenStack Train

    With all eyes turning towards Shanghai, we’re getting ready for the next Open Infrastructure Summit in November with great excitement. But before we hit the road, I wanted to draw attention to the latest OpenStack upstream release. The Train release continues to showcase the community’s drive toward offering innovations in OpenStack. Red Hat has been part of developing more than 50 new features spanning Nova, Ironic, Cinder, TripleO and many more projects.

    But given all the technology goodies (you can see the release highlights here) that the Train release has to offer, you may be curious about the features that we at Red Hat believe are among the top capabilities that will benefit our telecommunications and enterprise customers and their use cases. Here's an overview of the features we are most excited about in this release.

  • New developer tools in Red Hat OpenShift 4.2

    Today’s announcement of Red Hat OpenShift 4.2 represents a major release for developers working with OpenShift and Kubernetes. There is a new application development-focused user interface, new tools, and plugins for container builds, CI/CD pipelines, and serverless architecture.

  • Red Hat CodeReady Containers overview for Windows and macOS

    Red Hat CodeReady Containers 1.0 is now available with support for Red Hat OpenShift 4.2. CodeReady Containers is “OpenShift on your laptop,” the easiest way to get a local OpenShift environment running on your machine. You can get an overview of CodeReady Containers in the tech preview launch post. You can download CodeReady Containers from the product page.

  • Tour of the Developer Perspective in the Red Hat OpenShift 4.2 web console

    Of all of the new features of the Red Hat OpenShift 4.2 release, what I’ve been looking forward to the most are the developer-focused updates to the web console. If you’ve used OpenShift 4.1, then you’re probably already familiar with the updated Administrator Perspective, which is where you can manage workloads, storage, networking, cluster settings, and more.

    The addition of the new Developer Perspective aims to give developers an optimized experience with the features and workflows they’re most likely to need to be productive. Developers can focus on higher level abstractions like their application and components, and then drill down deeper to get to the OpenShift and Kubernetes resources that make up their application.

    Let’s take a tour of the Developer Perspective and explore some of the key features.

Fedora at 15: Why Matthew Miller sees a bright future for the Linux distribution

Filed under
Red Hat
Interviews

Fedora—as a Linux distribution—will celebrate the 15th anniversary of its first release in November, though its technical lineage is much older, as Fedora Core 1 was created following the discontinuation of Red Hat Linux 9 in favor of Red Hat Enterprise Linux (RHEL).

That was a turbulent time in Red Hat history, and Fedora has had its own share of turbulence as well. Since becoming project leader in June 2014, Matthew Miller has led the Fedora.next initiative, intended to guide the second decade of the Fedora project. That initiative resulted in the creation of separate Fedora Workstation, Server, and Cloud editions—the latter of which has since been replaced with CoreOS—as well as the addition of an Internet of Things (IoT) edition.

Read more

Red Hat and Fedora: syslog-ng, Ansible, Libinput and Fedora Community

Filed under
Red Hat
  • syslog-ng in two words at One Identity UNITE: reduce and simplify

    UNITE is the partner and user conference of One Identity, the company behind syslog-ng. This time the conference took place in Phoenix, Arizona, where I talked to a number of American business customers and partners about syslog-ng. They were really enthusiastic about syslog-ng and emphasized two major reasons why they use it or plan to introduce it to their infrastructure: syslog-ng allows them to reduce the log data volume and to greatly simplify their infrastructure by introducing a separate log management layer.

    [...]

    When you collect log messages to a central location using syslog-ng, you can archive all of the messages there. If you add a new log analysis application to your infrastructure, you can just point syslog-ng at it and forward the necessary subset of log data there.

    Life becomes easier for both security and operations in your environment, as there is only a single piece of software to check for security problems and to distribute to your systems instead of many.

  • Ansible vs Terraform vs Juju: Fight or cooperation?

    Ansible vs Terraform vs Juju vs Chef vs SaltStack vs Puppet vs CloudFormation – there are so many tools available out there. What are these tools? Do I need all of them? Are they fighting with each other or cooperating?

    The answer is not really straightforward. It usually depends on your needs and the particular use case. While some of these tools (Ansible, Chef, SaltStack, Puppet) are pure configuration management solutions, the others (Juju, Terraform, CloudFormation) focus more on services orchestration. For the purpose of this blog, we’re going to focus on the Ansible vs Terraform vs Juju comparison – the three major players which have dominated the market.

    [...]

    Contrary to both Ansible and Terraform, Juju is an application modelling tool, developed and maintained by Canonical. You can use it to model and automate deployments of even very complex environments consisting of various interconnected applications. Examples of such environments include OpenStack, Kubernetes or Ceph clusters. Apart from the initial deployment, you can also use Juju to orchestrate the deployed services. Thanks to Juju you can back up, upgrade or scale out your applications as easily as executing a single command.

    Like Terraform, Juju uses a declarative approach, but it brings it beyond the providers up to the applications layer. You can declare not only the number of machines to be deployed or the number of application units, but also configuration options for deployed applications, relations between them, etc. Juju takes care of the rest of the job. This allows you to focus on shaping your applications instead of struggling with the exact routines and recipes for deploying them. Forget the “How?” and focus on the “What?”.

  • libinput's bus factor is 1

    Let's arbitrarily pick the 1.9.0 release (roughly 2 years ago) and look at the numbers: of the ~1200 commits since 1.9.0, just under 990 were done by me. In those 2 years we had 76 contributors in total, but only 24 of them have more than one commit and only 6 contributors have more than 5 commits. The numbers don't really change much even if we go all the way back to 1.0.0 in 2015. These numbers do not include the non-development work: release maintenance for new releases and point releases, reviewing CI failures [1], writing documentation (including the stuff on this blog), testing and bug triage. Right now, this is effectively all done by one person.

    This is... less than ideal. At this point libinput is more-or-less the only input stack we have [2] and all major distributions rely on it. It drives mice, touchpads, tablets, keyboards, touchscreens, trackballs, etc. so basically everything except joysticks.

  • Contribute to Fedora Magazine

    Do you love Linux and open source? Do you have ideas to share, enjoy writing, or want to help run a blog with over 60k visits every week? Then you’re at the right place! Fedora Magazine is looking for contributors. This article walks you through various options of contributing and guides you through the process of becoming a contributor.

  • Fabiano Fidêncio: Libosinfo (Part Sleepy)

    Libosinfo is the operating system information database. As a project, it consists of three different parts, with the goal to provide a single place containing all the required information about an operating system in order to provision and manage it in a virtualized environment.

  • Τι κάνεις (How are you), FOSSCOMM 2019

    When the students visited our Fedora booth, they were excited to take some Fedora gifts, especially the tattoo sticker. I asked how many of them used Fedora, and most of them were using Ubuntu, Linux Mint, Kali Linux and Elementary OS. It was an opportunity to share the Fedora 30 edition and to hand out the beginner’s guide that the Fedora community wrote as a little book. Most of them enjoyed taking photos with the Linux frame I made in Edinburgh...

    [...]

    I was planning to teach the use of the GTK library with C, Python, and Vala. However, because of the time and the preference of the attendees, we only worked with C. The workshop was supported by Alex Angelo, who also translated some of my expressions into Greek. I was flexible in using different operating systems such as Linux Mint, Ubuntu and Kubuntu, among other distros. Only two attendees used Fedora. Almost half of the audience did not bring a laptop, so I arranged people into groups to work together. I enjoyed seeing young students eager to learn; they took their own notes and asked questions. You can watch the video of the workshop, which was recorded by the organizers.

  • Extending the Minimization objective

    Earlier this summer, the Fedora Council approved the first phase of the Minimization objective. Minimization looks at package dependencies and tries to minimize the footprint for a variety of use cases. The first phase resulted in the development of a feedback pipeline, a better understanding of the problem space, and some initial ideas for policy improvements.

Kubernetes at SUSE and Red Hat

Filed under
Red Hat
SUSE
  • Eirinix: Writing Extensions for Eirini

    At the recent Cloud Foundry Summit EU in the Netherlands, Vlad Iovanov and Ettore Di Giacinto of SUSE presented a talk about Eirini — a project that allows the deployment and management of applications on Kubernetes using the Cloud Foundry Platform. They introduced eirinix — a framework that allows developers to extend Eirini. Eirinix is built from the Quarks codebase, which leverages Kubernetes Mutating Webhooks. With the flexibility of Kubernetes and Eirini’s architecture, developers can now build features around Eirini, like Persi support, access to the application via SSH, ASGs via Network Policies and more. In this talk, they explained how this can be done, and how everyone can start contributing to a rich ecosystem of extensions that will improve Eirini and the developer experience of Cloud Foundry.

  • Building an open ML platform with Red Hat OpenShift and Open Data Hub Project

    Unaddressed, these challenges impact the speed, efficiency and productivity of highly valuable data science teams. This leads to frustration and a lack of job satisfaction, and ultimately the promise of AI/ML to the business goes unfulfilled.

    IT departments are being challenged to address the above. IT has to deliver a cloud-like experience to data scientists. That means a platform that offers freedom of choice, is easy to access, is fast and agile, scales on demand and is resilient. The use of open source technologies will prevent lock-in and maintain long-term strategic leverage over cost.

    In many ways, a similar dynamic has played out in the world of application development over the past few years, leading to microservices, the hybrid cloud, automation and agile processes. IT has addressed this with containers, Kubernetes and the open hybrid cloud.

    So how does IT address this challenge in the world of AI? By learning from its own experience in the world of application development and applying it to the world of AI/ML. IT addresses the challenge by building an AI platform that is container-based, helps build AI/ML services with agile processes that accelerate innovation, and is built with the hybrid cloud in mind.

  • Launching OpenShift/Kubernetes Support for Solarflare Cloud Onload

    This is a guest post co-written by Solarflare, a Xilinx company. Miklos Reiter is Software Development Manager at Solarflare and leads the development of Solarflare’s Cloud Onload Operator. Zvonko Kaiser is Team Lead at Red Hat and leads the development of the Node Feature Discovery operator.

Red Hat: Red Hat Summit 2019, CFO Fired, Fedora 32, Dependency Analytics and Awards

Filed under
Red Hat
  • Top 10 highlights at Red Hat Summit 2019

    As we careen into Fall, we at Red Hat have had a few months to catch our breath after another fantastic Red Hat Summit. Which means… we're busy planning for next year's Red Hat Summit. As we get everything lined up for next year, let's take a look back at some of the highlights from our time in Boston.

    [...]

    Every year the Red Hat Innovation Awards recognize the technological achievements of Red Hat customers around the world who demonstrate creative thinking, determined problem-solving and transformative uses of Red Hat technology.

    The 2019 winners were: BP, Deutsche Bank, Emirates NBD, HCA Healthcare and Kohl's. In addition, HCA Healthcare was voted the 2019 Red Hat Innovator of the Year for its efforts to use data and technology to support modern healthcare. A cross-functional team of clinicians, data scientists and technology professionals at HCA Healthcare used Red Hat solutions to create a real-time predictive analytics product system to more accurately and rapidly detect sepsis, a potentially life-threatening condition.

  • Red Hat Sacks CFO Over Alleged Workplace Standards Violation

    Red Hat's CFO has been shown the door over an alleged workplace standards violation.

  • Red Hat Developers Eyeing CPU Thermal Management Improvements For Fedora 32

    Several Red Hat developers are looking at improving the CPU thermal management capabilities for Fedora Workstation 32 and in turn possibly helping Intel CPUs reach better performance.

    The change being sought for Fedora Workstation 32 would be shipping Intel's thermal daemon (thermald) by default with Fedora 32, and with that carrying various hardware-specific configuration data to help CPUs reach their optimal thermal/power limits. Intel's open-source thermal daemon can already be installed on most Linux distributions as a separate package but isn't normally shipped by default. With Fedora Workstation 32 it could be shipped by default, with the goal of keeping CPUs operating in the correct temperature envelope and reaching maximum performance.

  • What’s new in Red Hat Dependency Analytics

    We are excited to announce a new release of Red Hat Dependency Analytics, a solution that enables developers to create better applications by evaluating and adding high-quality open source components, directly from their IDE.

    Red Hat Dependency Analytics helps your development team avoid security and licensing issues when building your applications. It plugs into the developer’s IDE, automatically analyzes your software composition, and provides recommendations to address security holes and licensing problems that your team may be missing.

    Without further ado, let’s jump into the new capabilities offered in this release. This release includes a new version of the IDE plugin and the server-side analysis service hosted by Red Hat.

  • Awards roll call: Red Hat awards, July 2019 - October 2019

    As we head into the new season, we'd like to spread the excitement by sharing some of our latest awards and industry recognition. Since our last roundup, Red Hat has been honored with accolades highlighting our unique culture, our creative and design work and our expansive product portfolio.

IBM Fires Red Hat CFO

Filed under
Red Hat
  • Red Hat CFO Loses Out on Retention Bonus Following Standards-Related Ouster

    Red Hat Inc.’s finance chief Eric Shander has been dismissed from the company, forfeiting a $4 million retention award that was agreed to ahead of Red Hat’s acquisition by International Business Machines Corp.

    The Raleigh, N.C.-based software company confirmed late Thursday that Mr. Shander was no longer working at Red Hat. “Eric was dismissed without pay in connection with Red Hat’s workplace standards,” a company spokeswoman said in a statement.

    The company, which said that its accounting and control functions remain healthy, on Friday declined to provide specifics about what led to Mr. Shander’s dismissal.

  • Red Hat CFO 'Dismissed' From Company, Forfeits $4M Retention Award

    "Red Hat Inc.'s finance chief Eric Shander has been dismissed from the company, forfeiting a $4 million retention award that was agreed to ahead of Red Hat's acquisition by IBM," reports the Wall Street Journal...

Hubert Figuiere on Flatpak and Flathub, GLib 2.63.1 Coming Soon

Filed under
Red Hat
GNOME
  • Getting a stack trace out of a Flatpak

    So, the flatpak application you use just crashed.

    How do you report it? If you file a bug just saying it crashed, the developers will probably ask for a stack trace. On Fedora 30, for example, abrt (the crash reporting system) doesn't provide any useful information. Let's see if we can extract that information.

    We are going to have to use the terminal and some command line tools. Flatpak has a tool, flatpak-coredumpctl, to use the core dump from the flatpak sandbox. The core dump is an image of the program's memory at the time it crashed and contains a lot of information about the crash. By default, though, the tool will not be able to provide much useful info; there is some initial setup needed to get better output.

    First you must make sure that you have the right Debug package for the right version of the Flatpak runtime. Well, actually, for the corresponding SDK.

  • Music, Flathub and Qt

    I quickly realised that trying these apps on my Dell XPS 13 was really an adventure, mostly because of HiDPI (the high-DPI screen that the XPS 13 has). A lot of the applications found on Fedora don't support high DPI by default and are thus nearly impossible to use out of the box. Some of this is easily fixable, some takes a bit more effort, and for some we need to try harder.

    Almost all the apps I tried used Qt. With Qt5 the fix is easy, albeit not necessarily user friendly: just set the QT_AUTO_SCREEN_SCALE_FACTOR environment variable to 1, as specified in the Qt HiDPI support documentation. There is also an API to set the corresponding attribute on the QCoreApplication object. There must be a good reason why this is opt-in and not opt-out. (A minimal sketch of both routes appears at the end of this section.)

    [...]

    In the end, I have Hydrogen available on Flathub, the three others in queue for Flathub, and all have had patches submitted (with Muse3 and Rosegarden already merged upstream).

  • g_warning_once() in GLib 2.63.1

    GLib 2.63.1 will be released in the next few weeks, and will contain a fun new API to slightly simplify emitting a warning once, and then shutting up to avoid emitting loads of log spam.
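
    For readers who want to see what that new call looks like in practice, here is a minimal sketch. It is not taken from the GLib post; it only assumes the behaviour described above for g_warning_once() in a GLib build that ships it (the 2.63.1 development snapshot or later): the warning is emitted the first time a given call site is reached and is suppressed on every later hit. Build with something like g++ warn-once-demo.cpp $(pkg-config --cflags --libs glib-2.0).

        #include <glib.h>

        /* Hypothetical request handler, used only for illustration. */
        static void handle_request(int value)
        {
            if (value < 0) {
                /* Logged only the first time this call site is hit; later hits stay
                 * silent, which avoids flooding the journal when the same bad input
                 * keeps arriving. */
                g_warning_once("negative value %d received, clamping to 0", value);
                value = 0;
            }
            g_message("handling value %d", value);
        }

        int main(void)
        {
            for (int i = 0; i < 5; i++)
                handle_request(-1); /* exactly one warning line for all five calls */
            return 0;
        }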
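
    Similarly, for the HiDPI fix mentioned in the “Music, Flathub and Qt” item above, both opt-in routes are one-liners. The sketch below is hypothetical rather than the author's code and assumes stock Qt 5.6 or later: either export QT_AUTO_SCREEN_SCALE_FACTOR=1 in the environment, or set the Qt::AA_EnableHighDpiScaling attribute through QCoreApplication::setAttribute() before the application object is constructed. Either route alone is enough; both are shown only for illustration (build as a regular Qt Widgets application).

        #include <QApplication>
        #include <QLabel>

        int main(int argc, char **argv)
        {
            // Route 1: the environment variable mentioned in the post, set here
            // programmatically; end users would normally just export it in their shell.
            qputenv("QT_AUTO_SCREEN_SCALE_FACTOR", "1");

            // Route 2: the application-side attribute. It must be set *before* the
            // Q(Gui)Application object is created, otherwise it has no effect.
            QCoreApplication::setAttribute(Qt::AA_EnableHighDpiScaling);

            QApplication app(argc, argv);
            QLabel label("This label should be scaled correctly on a HiDPI screen");
            label.show();
            return app.exec();
        }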

Red Hat Leftovers

Filed under
Red Hat
  • Modern continuous integration/continuous delivery (CI/CD) pipeline for traditional provisioning: Your questions answered (Part 2)

    During a recent webinar titled, “Modern continuous integration/continuous delivery (CI/CD) pipeline for traditional provisioning,” we received a lot of interest and many questions regarding the topic. Some of the questions were coming in at a very rapid rate and we were not able to address them all. As a followup to our webinar, we have decided to put the answers to those questions into this blog post. The questions are listed below. This is part two in a series; check out our first blog post here.

    The demo in the webinar showed a combination of CloudForms/Ansible Tower to accomplish lifecycle provisioning. Is CloudForms an alternative or must it be used together with Ansible? Can you elaborate on the integration?

  • Tagging resources for IT and business alignment

    Traditional IT management based on fixed resources stopped making sense with the cloud, an unlimited pool of resources that can be accessed from any point in the world. Companies are moving from a CAPEX-intensive environment to a new OPEX-based cloud. With the new consumption model that favours the cloud, the weight shifts from asset lifecycle management to resource governance. This generates additional requirements for forecasting and budgeting. But the question is still "are we spending our money well?"

    The question is not so simple to answer because comparisons are difficult. The first reaction many organizations have is to believe that lower costs are better costs, but in many cases that is basically wrong.

    For instance, it is easy to reduce costs by purchasing a storage service that is cheaper than the one you are using now. However, that change may be associated with a decrease in performance; can your application support it or would you be losing customers - and revenue - in the process? The same thing can happen if you reduce expenses at the cost of limiting the application availability and not investing enough in load balancers, databases or application workers.

    In order to align business, resources and costs you need to take several steps; in this post we will outline some best practices we have been gathering about the topic.

  • Red Hat: We’re a neutral broker

    Red Hat claims to be a neutral broker that will pave the way for organisations to run the same container application platform across different public cloud services and in a hybrid cloud environment.

    This comes at a time when major public cloud suppliers are all trying to differentiate themselves through platform services – for example, with their own implementations of the open-source Kubernetes container orchestration platform.

    Speaking to Computer Weekly on the sidelines of Red Hat Forum in Singapore, Damien Wong, vice-president and general manager for Asian growth and emerging markets at Red Hat, said the company’s OpenShift platform will let enterprises run containerised applications on the same platform, regardless of cloud deployment model or underlying cloud infrastructure service.

  • [Older] How Red Hat is pioneering a serverless movement

    The old-school "one server/one function" concept has prevailed for veritable decades in the technology realm, whereby a single server stands duty to perform authentication, file, print, web, messaging, and other services.

    That's the past. The future is moving towards a serverless model whereby functions (e.g. applications) are more important than actual server implementations.

IBM/Red Hat and Fedora: CentOS, Ceph, Mainframes and Fedora Migration/Refresh

Filed under
Red Hat
  • Download CentOS 8 - DVD ISO Image

    CentOS is a Linux operating system, which is a 100% compatible rebuild of the Red Hat Enterprise Linux operating system. A user can download and use this enterprise-level operating system free of cost. CentOS 8 is the latest version available to download.

  • Modern continuous integration/continuous delivery (CI/CD) pipeline for traditional provisioning: Your questions answered (Part 1)

    During a recent webinar titled, “Modern continuous integration/continuous delivery (CI/CD) pipeline for traditional provisioning,” we received a lot of interest and many questions regarding the topic. Some of the questions were coming in at a very rapid rate and we were not able to address them all. As a followup to our webinar, we have decided to put the answers to those questions into this blog post. The questions are listed below.

  • Red Hat Ceph object store on Dell EMC servers (Part 1)

    Organizations are increasingly being tasked with managing billions of files and tens to hundreds of petabytes of data. Object storage is well suited to these challenges, both in the public cloud and on-premise. Organizations need to understand how to best configure and deploy software, hardware, and network components to serve a diverse range of data intensive workloads.

    This blog series details how to build robust object storage infrastructure using a combination of Red Hat Ceph Storage coupled with Dell EMC storage servers and networking. Both large-object and small-object synthetic workloads were applied to the test system and the results subjected to performance analysis. Testing also evaluated the ability of the system under test to scale beyond a billion objects.

  • Why Linux Developers Should Reconsider IBM Mainframes

    When mainframes were mainstream, many software professionals in the industry today were not even born yet. Mainframe computers have an extensive history, which makes it tempting to call them old, but today’s mainframes are extremely mature, fast, reliable and powerful. In fact, they are critical to the modern economy: Top airlines, banks, insurance companies and health care corporations rely on mainframe computing.

    One of the organizations keeping this technology with the times is IBM, with its IBM Z family of mainframe computers. Some of these mainframes—like the 31-bit s390 and, later, the 64-bit s390x architecture—were originally designed and built in the 1960s, and they have continued to evolve and modernize.

    “IBM still sells a lot of these even today,” said Elizabeth K. Joseph, a seasoned open source advocate who recently joined IBM as the developer advocate for its Z architectures. These machines run operating systems including z/OS, z/VM, z/VSE and z/TPF, as well as Linux-based distributions like Red Hat Enterprise Linux and SUSE Linux Enterprise Server.

  • Fedora localization platform migrates to Weblate

    Fedora Project provides an operating system that is used in a wide variety of languages and cultures. To make it easy for non-native English speakers to use Fedora, significant effort is made to translate the user interfaces, websites and other materials.

    Part of this work is done in the Fedora translation platform, which will migrate to Weblate in the coming months.

    This migration was mandatory as development and maintenance of Zanata — the previous translation platform — ceased in 2018.

    There are a number of translation platforms available, but a platform that is open source, that answers the Fedora Project’s needs, and that is likely to be long-lived were key considerations in choosing Weblate. Most other translation platforms are either closed source or lacking features.

  • F30-20191009 updated Live ISOs released

    The Fedora Respins SIG is pleased to announce the latest release of updated F30-20191009 Live ISOs, carrying the 5.2.18-200 kernel.

    This set of updated ISOs will save a considerable amount of updates after a new install (new installs of Workstation have 1.2GB of updates).

    A huge thank you goes out to IRC nicks dowdle and Southern-Gentleman for testing these ISOs.

Red Hat: EPEL8, vDPA and Apache Kafka on OpenShift

Filed under
Red Hat
  • EPEL8 packages

    With the opening up of EPEL8, there are a lot of folks looking around, seeing packages they formerly used in EPEL6/7 not being available, and wondering why. The reason is simple: EPEL is not a fixed, exact list of packages; it's a framework that allows interested parties to build and provide the packages they are interested in providing to the community.

    This means that for a package to be in EPEL8, a maintainer has to step forward and explicitly ask “I’d like to maintain this in EPEL8”, and then build, test and do all the other things needed to provide that package.

    The reason for this is simple: we want a high-quality, maintained collection of packages. Simply building things once and never again doesn't allow for someone fixing bugs, updating the package or adjusting it for other changes. We need an active maintainer there willing and able to do the work.

  • vDPA hands on: The proof is in the pudding

    In this post, we will set up vDPA using its DPDK framework. Since vDPA-compatible HW cards are only now becoming commonly available on the market, we will work around the HW constraint by using a paravirtualized Virtio-net device in a guest as if it were a full Virtio HW offload NIC.

  • Open Banking with Microservices Architectures and Apache Kafka on OpenShift

    Last month, at OpenShift Commons Gathering Milan, Paolo Gigante and Pierluigi Sforza of Poste Italiane showed the audience how they built a microservices-based banking architecture using Apache Kafka and OpenShift. Their slides are available here. For more great in-person events like this, register for the next Commons Gathering near you! San Francisco is coming up before the end of the month, and will focus on AI/ML.

More in Tux Machines

Microsoft admits Android is the best operating system for mobile devices

At an event at Microsoft’s flagship store in London, Panos Panay, the chief product officer for the Microsoft Devices group, admitted that the company is using Android in its upcoming Surface Duo phone because, quite simply, the “best OS for this product is Android”. It’s a noteworthy admission, as Google’s Android mobile operating system is one of Microsoft’s biggest rivals. In the past, the company has tried – and failed – to take on Android with its own operating system for mobile devices: Windows Mobile.

While Windows 10 Mobile is no more, it must have been tempting for Microsoft to revive the OS for its upcoming dual-screen handset, so it’s commendable that it has gone for the much more popular Android operating system – while being so frank about its reasons. On one hand, it seems like Microsoft has acknowledged just how hard it is to compete with Android – which is currently the most-used operating system on the planet – a title Microsoft’s own Windows OS used to have. The failure of Windows 10 Mobile, and the Windows phones that ran the software, was likely a humbling experience that the company is in no rush to repeat.

Read more

Canonical releases Ubuntu Linux 19.10 Eoan Ermine with GNOME 3.34, light theme, and Raspberry Pi 4 support

Thank God for Linux. No, seriously, regardless of your beliefs, you should be thankful that we have the Linux kernel to provide us with a free alternative to Windows 10. Lately, Microsoft's operating system has been plagued by buggy updates, causing some Windows users to lose faith in it. Hell, even Dona Sarkar -- the now-former leader of the Windows Insider program -- has been relieved of her duties and transitioned to a new role within the company (read into that what you will).

While these are indeed dark times for Windows, Linux remains that shining beacon of light. When Windows becomes unbearable, you can simply use Chrome OS, Android, Fedora, Manjaro, or some other Linux distribution. Today, following the beta period, one of the best and most popular Linux-based desktop operating systems reaches a major milestone -- you can now download Ubuntu 19.10! Code-named "Eoan Ermine" (yes, I know, it's a terrible name), the distro is better and faster than ever.

Read more

Which Raspberry Pi OS should you use?

There is a wide range of different Raspberry Pi OS packages available, and choosing the correct one for your hardware, application or project is not always easy. Here we compiled a list of popular operating systems for the Raspberry Pi range of single board computers, providing a quick insight into what you can expect from each and how you can use it to build a variety of different applications, from games emulators to fully functional desktop replacements using the powerful Raspberry Pi 4 mini PC, as well as a few more specialist Raspberry Pi OSes.

Instructional videos are also included detailing how to install and set up the various OSes, allowing you to quickly choose which Raspberry Pi OS is best for your project. If you are starting out with the Raspberry Pi and class yourself as a beginner, then the NOOBS Raspberry Pi OS is a great place to start. A number of online stores sell affordable SD cards pre-installed with NOOBS, ready to use straight away. Although if you have any spare SD cards lying around, you can also download the NOOBS distribution directly from the Raspberry Pi Foundation website.

Read more

Canonical Outs Linux Kernel Security Update for Ubuntu 19.04 to Patch 9 Flaws

The new security update for Ubuntu 19.04 is here to patch a total of seven security flaws affecting the Linux 5.0 kernel used by the operating system, including an issue (CVE-2019-15902) discovered by Brad Spengler which could allow a local attacker to expose sensitive information, as a Spectre mitigation was improperly implemented in the ptrace subsystem.

It also fixes several flaws (CVE-2019-14814, CVE-2019-14815, CVE-2019-14816) discovered by Wen Huang in the Marvell Wi-Fi device driver, which could allow a local attacker to cause a denial of service or execute arbitrary code, as well as a flaw (CVE-2019-15504) discovered by Hui Peng and Mathias Payer in the 91x Wi-Fi driver, allowing a physically proximate attacker to crash the system.

Read more