Installing Beryl on Mandriva, really easy with screenshots

Filed under: Howtos

Read at Beryl on Mandriva

More in Tux Machines

Programming: LLVM Clang, GCC, 30 Years of Haskell and More

  • Intel GCC Patches + PRM Update Adds SERIALIZE Instruction, Confirm Atom+Core Hybrid CPUs

    Intel has seemingly just updated their public programming reference manual and sent out some new patches to the GCC compiler adding support for new instructions on yet-to-be-released CPUs. Hitting the mailing list early this morning was support for TSXLDTRK. TSXLDTRK is Intel TSX Suspend Load Address Tracking and is confirmed as coming with Sapphire Rapids / Golden Cove. With it come the XSUSLDTRK instruction to suspend load-address tracking and XRESLDTRK to resume it, so that software developers can choose which memory accesses do not need to be tracked by a TSX (Transactional Synchronization Extensions) read set. A hedged sketch of how these intrinsics might be used follows this list.

  • Upstreaming LLVM's Fortran "Flang" Front-End Has Been Flung Back Further

    LLVM's Fortran front-end, developed as "f18" and being upstreamed under the Flang name, was supposed to be merged back in January. Three months later, the developers are still struggling to get the code into shape for integration.

  • LLVM Clang 10.0 Compiler Performance On Intel + AMD CPUs Under Linux

    With last week's release of LLVM/Clang 10.0, here are our first benchmarks looking at the stable release of the Clang 10.0 C/C++ compiler compared to its previous (v9.0.1) release on various Intel and AMD processors under Ubuntu Linux.

  • GCC 11 Will Likely Support Using LLVM's libc++

    While GCC 10 isn't even out for a few more weeks, looking ahead to next year's GCC 11 release there is already one interesting planned change: GCC 11's C++ front-end (G++) will likely offer support for using LLVM's libc++ standard library. There was recently a question asked on the GCC mailing list about the ability to pass -stdlib=libc++ for using LLVM's C++ standard library in conjunction with the GCC C++ compiler. A sketch of what that usage could look like also follows this list.

  • How does kanban relate to DevOps?

    Kanban means "visual signal" and has its roots in manufacturing at Toyota, where it was developed by Taiichi Ohno to improve manufacturing efficiency. Jumping a few decades into the future, kanban now complements agile and lean and is often used with frameworks such as scrum, the Scaled Agile Framework, and Disciplined Agile to visualize and manage work.

  • Joachim Breitner: 30 years of Haskell

    Vitaly Bragilevsky, in a mail to the GHC Steering Committee, reminded me that the first version of the Haskell programming language was released exactly 30 years ago. On April 1st. So that raises the question: Was Haskell just an April fool's joke that was never retracted?

  • Monthly Report - March

    I lost a friend of mine, Jeff Goff (aka DrForr), who passed away on 13th March 2020 while snorkeling with a group in the Bahamas. He will be missed by many of his friends. May his soul rest in peace. Most of the time last month was occupied by COVID-19. Being a type-2 diabetic didn't help the cause either. I have suffered from a persistent cough all my life, which is really scary when you think about it from a COVID-19 point of view. I have survived so far by the grace of ALLAH s.w.t. I have been working from home since the first week of March and have been kind of self-quarantined; the kids, especially the twins (3 years old), are not allowed to play with me. It is really hard to focus on work, but somehow I have managed so far. I am getting used to it now.
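Returning to the Intel TSXLDTRK item above: the short sketch below illustrates roughly how the new instructions are meant to be used from C/C++. It assumes the _xsusldtrk()/_xresldtrk() intrinsic names from the posted GCC patches, a compiler carrying those patches (built with -mrtm -mtsxldtrk), and future hardware that actually implements the instructions; treat it as an illustration of the usage pattern rather than something to run today.

```cpp
// Illustrative only: assumes a GCC build carrying the posted TSXLDTRK patches
// (compiled with -mrtm -mtsxldtrk) and a future CPU (Sapphire Rapids /
// Golden Cove) that implements XSUSLDTRK/XRESLDTRK. Intrinsic names follow
// the patch series and may change before release.
#include <immintrin.h>

static int shared_counter;          // participates in the transaction
static long scratch_stats[64];      // bookkeeping we do not want tracked

int update_counter(int delta)
{
    unsigned status = _xbegin();            // start an RTM transaction
    if (status == _XBEGIN_STARTED) {
        shared_counter += delta;            // tracked read/write set as usual

        _xsusldtrk();                       // suspend load-address tracking
        // Loads in this window are not added to the TSX read set, so
        // unrelated writers touching scratch_stats cannot abort us.
        long observed = scratch_stats[delta & 63];
        (void)observed;
        _xresldtrk();                       // resume normal tracking

        _xend();                            // commit the transaction
        return 0;
    }
    // Fallback path when the transaction aborts (or RTM is unavailable).
    __atomic_add_fetch(&shared_counter, delta, __ATOMIC_SEQ_CST);
    return 1;
}
```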
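And on the GCC 11 libc++ item: if the proposed -stdlib= option lands in G++, switching standard libraries could look roughly like the commands in the comments of this small test program. The g++ flag spelling simply mirrors Clang's existing option and is, so far, only a mailing-list discussion; the program itself is ordinary C++ that reports which standard library it was built against.

```cpp
// stdlib_probe.cpp - trivial program to check which C++ standard library
// is actually linked in.
//
// Today (Clang):        clang++ -std=c++17 -stdlib=libc++ stdlib_probe.cpp
// Proposed (GCC 11+):   g++     -std=c++17 -stdlib=libc++ stdlib_probe.cpp
// The g++ spelling is the option discussed on the GCC mailing list and is
// not available in any released GCC version.
#include <iostream>
#include <vector>

int main()
{
#if defined(_LIBCPP_VERSION)
    std::cout << "Built against LLVM libc++ " << _LIBCPP_VERSION << '\n';
#elif defined(__GLIBCXX__)
    std::cout << "Built against GNU libstdc++ " << __GLIBCXX__ << '\n';
#else
    std::cout << "Unknown C++ standard library\n";
#endif
    std::vector<int> v{1, 2, 3};              // exercise the library a little
    std::cout << "vector size: " << v.size() << '\n';
    return 0;
}
```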

IBM/Red Hat Leftovers

  • World Backup Day: A plan of action

    World Backup Day reminds us all of just how important backups are. Perhaps you don't fully appreciate how important they are until you've experienced an outage that you can't recover from by any troubleshooting method. Backups are a pain, but they are a necessary evil and can save you when things go bad. And things always go bad. This article helps you make a plan.

  • Running an event-driven health management business process through a few scenarios: Part 1

    In the previous series of articles, Designing an event-driven business process at scale: A health management example (which you need to read to fully understand this one), you designed and implemented an event-driven scalable business process for the population health management use case. Now, you will run this process through a few scenarios.

  • Getting to open hybrid cloud

    So, you’ve read our e-book and are convinced that adopting an open hybrid cloud platform is a key part of digital transformation. Great! Now how do you get your applications and associated infrastructure there? There are many aspects that should be considered when digitally transforming and adopting an open hybrid cloud, including people, culture, process, and technology. While these are all important, in this post we will focus on process and technology. A common way of speaking about migrating or modernizing workloads to the cloud was popularized in 2016 by Amazon Web Services in their post, "6 Strategies for Migrating Applications to the Cloud." We will use the categorization popularized in that article to explore how Red Hat is making it quicker and easier to move your applications and their associated infrastructure to the open hybrid cloud.

  • Command and control: The Red Hat Ceph Storage 4 Dashboard changes the game

    Ease of use was a key development theme for Red Hat Ceph Storage 4. In our last post, we covered the role that the new install UI plays in enabling administrators to deploy Ceph Storage 4 in a simple and guided manner, without prior Ceph expertise. Simplifying installation is only the first step—the second step is simplifying day-to-day management. To meet this challenge, Ceph Storage 4 introduces a new graphical user interface called the Dashboard.

  • Red Hat DNF 4.2.21 Package Manager Released Today!

    DNF 4.2.21 was released today. DNF, also known as the Dandified YUM package manager, is developed by Red Hat for RPM-based distributions. The Red Hat development team has announced the release of DNF 4.2.21, which according to the announcement brings a number of essential bug fixes and software tweaks.

  • Three ways our hybrid cloud architecture makes it easy to add AI to fulfillment
  • Gain transparency into fulfillment decisions

    In a previous blog, I introduced IBM Sterling Fulfillment Optimizer With Watson® and provided answers to five frequently asked questions. Once clients have implemented this AI-powered solution to optimize fulfillment, they tend to have another question: Why did Sterling Fulfillment Optimizer make the decisions that it did? In this blog, we’ll look at what’s in Watson’s head.

    When an order is sent to Sterling Fulfillment Optimizer, the order goes through many rules, configurations, constraints, and cost-optimization comparisons to determine the best fulfillment option. Sometimes the recommendation intuitively feels right to you as a user, but other times it may not – particularly if you’re dealing with complex orders and a complex fulfillment network. If an order is placed in Chicago and Sterling Fulfillment Optimizer recommends that different order lines for the order be fulfilled from nodes in Los Angeles and Dallas, you may have difficulty understanding why that was the best choice to maximize profits.

    What isn’t immediately evident is that behind the scenes, Sterling Fulfillment Optimizer is using big data analytics, AI, and machine learning to look for trends and patterns. It analyzes sell-through patterns, rate-of-sale, and probability-of-sale data to determine the risk of stockouts or markdowns for each SKU-node combination, automatically calculating the lowest overall fulfillment cost at that moment. This is critical because that moment in time is always changing as the fulfillment network and sell-through patterns continuously change, and business preferences may change as well. Remember from the last blog that I discussed how you may decide to prioritize one or more factors over the total cost due to promotions or seasonality.

    In this example, where the order is fulfilled from Los Angeles and Dallas, the solution determined — based on visibility into real-time data and balancing multiple factors simultaneously — that if the order had been fulfilled from a single node in Chicago, which at that moment was low on inventory, the risk of stockout would have been high. A toy cost-plus-risk sketch follows this item.
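To make the "lowest overall fulfillment cost at that moment" idea a little more concrete, here is a deliberately tiny sketch. It is not IBM's model or API: the node names, shipping costs, inventory figures, and the stockout-risk weighting are all invented for illustration. The point is only the shape of the decision: each candidate node gets a score combining shipping cost with a risk-of-stockout penalty derived from current inventory versus expected demand, and the recommendation is whichever node scores lowest right now.

```cpp
// Toy illustration only: invented numbers, not IBM Sterling Fulfillment
// Optimizer's actual rules, data, or APIs.
#include <iostream>
#include <limits>
#include <string>
#include <vector>

struct Node {
    std::string name;
    double shipping_cost;   // cost to ship this order line from the node
    int    on_hand;         // current inventory for the SKU at the node
    double expected_demand; // forecast demand for this SKU at the node
};

// Stockout risk penalty: the lower the inventory relative to expected
// demand, the more expensive it is to pull stock from that node.
double stockout_penalty(const Node& n) {
    double cover = n.on_hand / (n.expected_demand + 1.0);
    return cover >= 1.0 ? 0.0 : (1.0 - cover) * 20.0;  // made-up weight
}

int main() {
    std::vector<Node> nodes = {
        {"Chicago",     4.00,  2, 30.0},   // close, but nearly out of stock
        {"Los Angeles", 9.50, 80, 25.0},
        {"Dallas",      7.25, 60, 20.0},
    };

    const Node* best = nullptr;
    double best_score = std::numeric_limits<double>::max();
    for (const Node& n : nodes) {
        double score = n.shipping_cost + stockout_penalty(n);
        std::cout << n.name << ": shipping " << n.shipping_cost
                  << " + risk " << stockout_penalty(n)
                  << " = " << score << '\n';
        if (score < best_score) { best_score = score; best = &n; }
    }
    std::cout << "Recommended node: " << best->name << '\n';
    return 0;
}
```

With these made-up numbers, Chicago's low inventory makes its risk penalty swamp its cheaper shipping, so the sketch picks Dallas, mirroring the article's point that the "obvious" closest node is not always the cheapest choice overall.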

GTK 3.98.2

When we released 3.98.0, we promised more frequent snapshots, as the remaining GTK 4 features are landing. Here we are a few weeks later, and 3.98.1 and 3.98.2 snapshots have quietly made it out. Read more

Also: GTK 3.98.2 Released As Another Step Towards GTK4

Servers: ZFS Tuning for HPC, Rancher 2.4, QEMU, LXD and Kubernetes

  • ZFS Tuning for HPC

    If you manage storage servers, chances are you are already aware of ZFS and some of the features and functions it boasts. In short, ZFS is a combined all-purpose filesystem and volume manager that simplifies data storage management while offering some advanced features, including drive pooling with software RAID support, file snapshots, in-line data compression, data deduplication, built-in data integrity, advanced caching (to DRAM and SSD), and more. ZFS is licensed under the Common Development and Distribution License (CDDL), a weak copyleft license based on the Mozilla Public License (MPL). Although open source, ZFS and anything else under the CDDL was, and supposedly still is, incompatible with the GNU General Public License (GPL). This hasn’t stopped ZFS enthusiasts from porting it over to the Linux kernel, where it remains a side project under the dominion of the ZFS on Linux (ZoL) project.

  • From Web Scale to Edge Scale: Rancher 2.4 Supports 2,000 Clusters on its Way to 1 Million

    Rancher 2.4 is here – with new under-the-hood changes that pave the way to supporting up to 1 million clusters. That’s probably the most exciting capability in the new version. But you might ask: why would anyone want to run thousands of Kubernetes clusters – let alone tens of thousands, hundreds of thousands or more? At Rancher Labs, we believe the future of Kubernetes is multi-cluster and fully heterogeneous. This means ‘breaking the monolith’ into many clusters and running the best Kubernetes distribution for each environment and use case.

  • QEMU 5.0-rc1 Released For Linux Virtualization With The Stable Update Coming This Month

    QEMU 5.0-rc1 was released on Tuesday as the latest development release on the path to QEMU 5.0.0, which is expected later this month.

  • New 4.0 LTS releases for LXD, LXC and LXCFS
    Hello,
    
    The LXD, LXC and LXCFS teams are very proud to announce their 4.0 LTS releases!
    
    LTS versions of all 3 projects are released every 2 years, starting 6 years ago. Those LTS versions benefit from 5 years of security and bugfix support from upstream and are ideal for production environments.
    
    # LXD
    LXD is our system container and virtual machine manager. It's a Go application based on LXC and QEMU. It can run several thousand containers on a single machine and mix in some virtual machines, offers a simple REST API, and can be easily clustered to handle large-scale deployments.
    
    It takes seconds to set up on a laptop or a cloud instance, can run just about any Linux distribution, and supports a variety of resource limits and device passthrough. It's used as the basis for Linux applications on Chromebooks and is behind Travis-CI's recent Arm, IBM Power and IBM Z testing capability.
    
    
  • Building a Three-Node Kubernetes Cluster | Quick Guide

    There are many ways to build a Kubernetes cluster. One of them is using a tool called kubeadm. Kubeadm is the official tool for “first-paths” when creating your first Kubernetes cluster. With the ease of getting up and running, I thought I would put together this quick guide to installing a Kubernetes cluster using kubeadm!

  • Kubernetes Topology Manager Moves to Beta - Align Up!

    This blog post describes the TopologyManager, a beta feature of Kubernetes in release 1.18. The TopologyManager feature enables NUMA alignment of CPUs and peripheral devices (such as SR-IOV VFs and GPUs), allowing your workload to run in an environment optimized for low latency. Prior to the introduction of the TopologyManager, the CPU and Device Manager would make resource allocation decisions independently of each other. This could result in undesirable allocations on multi-socket systems, causing degraded performance in latency-critical applications. With the introduction of the TopologyManager, we now have a way to avoid this.