
Fedora Project and IBM/Red Hat

Filed under
Red Hat
  • GNOME Internet Radio Locator 3.0.1 for Fedora Core 32

    GNOME Internet Radio Locator 3.0.1 features updated language translations and a new, improved map marker palette, and now also includes radio from Washington, United States of America; London, United Kingdom; Berlin, Germany (Radio Eins); and Paris, France (France Inter/Info/Culture), as well as 118 other radio stations from around the world, with audio streaming implemented through GStreamer.

  • Fedora program update: 2020-27

    Here’s your report of what has happened in Fedora this week. I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.

  • Outreachy design internship: budget templates and infographics

    Hey, I’m Smera. I’m one of the Outreachy interns this year, working on creating new designs for the Fedora Project. I work with Marie Nordin (FCAIC) and the Fedora Design team. I started on the 19th of May and this is what I have been up to!

  • Will Red Hat Rule the Supercomputing Industry with Red Hat Enterprise Linux (RHEL)?

    Red Hat Enterprise Linux has achieved a significant milestone after serving as an operating system for the world's fastest supercomputer, according to Top500. This opens up the debate on why Linux is the most preferred operating system for supercomputers.

    Supercomputers process vast datasets and conduct complex simulations much faster than traditional computers. From weather modeling and disease control to energy efficiency, nuclear testing, and quantum mechanics, supercomputers can tackle numerous scientific challenges. Countries like the U.S. and China have long been in a race to develop the most powerful and fastest supercomputers. This year, however, the technological superpower Japan stole the show when its ARM-based Fugaku supercomputer was ranked the No. 1 supercomputer in the world on the Top500 list. The system runs on the Red Hat Enterprise Linux (RHEL) platform. In fact, the June 2020 Top500 list declared that the top three supercomputers in the world, and four out of the top 10, run on the Red Hat Enterprise Linux (RHEL) platform. That is a pretty powerful validation of RHEL’s ability to meet the needs of demanding computing environments.

  • A developer-centered approach to application development

    Do you dream of a local development environment that’s easy to configure and works independently from the software layers that you are currently not working on? I do!

    As a software engineer, I have suffered the pain of starting projects that were not easy to configure. Reading the technical documentation does not help when much of it is outdated, or even worse, missing many steps. I have lost hours of my life trying to understand why my local development environment was not working.

  • Automate workshop setup with Ansible playbooks and CodeReady Workspaces

    At Red Hat, we do many in-person and virtual workshops for customers, partners, and other open source developers. In most cases, the workshops are of the “bring your own device” variety, so we face a range of hardware and software setups and corporate endpoint-protection schemes, as well as different levels of system knowledge.

    In the past few years, we’ve made heavy use of Red Hat CodeReady Workspaces (CRW). Based on Eclipse Che, CodeReady Workspaces is an in-browser IDE that is familiar to most developers and requires no pre-installation or knowledge of system internals. You only need a browser and your brain to get hands-on with this tech.

    We’ve also built a set of playbooks for Red Hat Ansible to automate our Quarkus workshop. While they are useful, the playbooks are especially helpful for automating at-scale deployments of CodeReady Workspaces for Quarkus development on Kubernetes. In this article, I introduce our playbooks and show you how to use them for your own automation efforts.

  • What does a scrum master do?

    Turning a love of open source communities into a career is possible, and there are plenty of directions you can take. The path I'm on these days is as a scrum master.

    Scrum is a framework in which software development teams deliver working software in increments of 30 days or less, called "sprints." There are three roles: scrum master, product owner, and development team. A scrum master is a facilitator, coach, teacher/mentor, and servant/leader who guides the development team through executing the scrum framework correctly.

IBM/Red Hat/Fedora: Fedora 33, Fedora 32 New Builds, Change Data Capture (CDC), OpenPOWER, HPC and More

Filed under
Red Hat
  • Fedora 33 SwapOnZRam Test Day 2020-07-06

    The Workstation Working Group has proposed a change for Fedora 33 to use swap on zram. This would put swap space on a compressed RAM drive instead of a disk partition. The QA team is organizing a test day on Monday, July 06, 2020. Refer to the wiki page for links to the test cases and materials you’ll need to participate. Read below for details.

  • F32-20200701 Updated Live isos released

    The Fedora Respins SIG is pleased to announce the latest release of Updated F32-20200701-Live ISOs, carrying the 5.6.19-300 kernel.

    This set of updated ISOs will save a considerable amount of updates after install for new installs. (New installs of Workstation have about 900+ MB of updates.)

    A huge thank you goes out to IRC nicks dowdle, dbristow, nasirhm, and Southern-Gentleman for testing these ISOs.

  • Build a simple cloud-native change data capture pipeline

    Change data capture (CDC) is a well-established software design pattern for a system that monitors and captures data changes so that other software can respond to those events. Using Kafka Connect along with Debezium connectors and the Apache Camel Kafka Connector, we can build a configuration-driven data pipeline to bridge traditional data stores and new event-driven architectures.

    This article walks through a simple example.
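
    To make the configuration-driven idea concrete, here is a minimal sketch (my own, not code from the article) that registers a hypothetical Debezium PostgreSQL source connector with a Kafka Connect instance through its REST API; every host name, credential, and table name below is a placeholder.

```python
# Minimal sketch: register a Debezium PostgreSQL source connector via the
# Kafka Connect REST API. Hosts, credentials and table names are
# hypothetical placeholders, not values from the article.
import json
import urllib.request

connector = {
    "name": "inventory-source",
    "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "database.hostname": "postgres.example.internal",
        "database.port": "5432",
        "database.user": "debezium",
        "database.password": "secret",
        "database.dbname": "inventory",
        "database.server.name": "inventory-db",  # prefix for change-event topics
        "table.include.list": "public.orders",
    },
}

req = urllib.request.Request(
    "http://kafka-connect.example.internal:8083/connectors",
    data=json.dumps(connector).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.read().decode())
```

    Once the connector is registered, change events for the listed tables flow into Kafka topics, where a Camel Kafka Connector sink (or any other consumer) can pick them up.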

  • OpenPOWER Reboot – New Director, New Silicon Partners, Leveraging Linux Foundation Connections

    Earlier this week the OpenPOWER Foundation announced the contribution of IBM’s A2I Power processor core design to the open source community. Roughly this time last year, IBM announced the open sourcing of its Power instruction set architecture (ISA) along with the Open Coherent Accelerator Processor Interface (OpenCAPI) and Open Memory Interface (OMI). That’s also when IBM said OpenPOWER would become a Linux Foundation entity. Then a few weeks ago, OpenPOWER named a new executive director, James Kulina.

    Change is afoot at the OpenPOWER Foundation. Will it be enough to prompt wider (re)consideration and adoption of the OpenPOWER platform and ecosystem?

  • Red Hat Powers the Future of Supercomputing with Red Hat Enterprise Linux

    Fugaku is the first Arm-based system to take first place on the TOP500 list, highlighting Red Hat’s commitment to the Arm ecosystem from the data center to the high-performance computing laboratory. Sierra, Summit and Marconi-100 all boast IBM POWER9-based infrastructure with NVIDIA GPUs; combined, these four systems produce more than 680 petaflops of processing power to fuel a broad range of scientific research applications.

    In addition to enabling this immense computational power, Red Hat Enterprise Linux also underpins six of the top 10 most power-efficient supercomputers on the planet according to the Green500 list. Systems on the list are measured in terms of both their performance results and the power consumed in achieving those results. When it comes to sustainable supercomputing, the premium is put on finding a balanced approach for the most energy-efficient performance.

  • Red Hat Powers the Future of Supercomputing with Red Hat Enterprise Linux

    Modern supercomputers are no longer purpose-built monoliths constructed from expensive bespoke components. Each supercomputer deployment powered by Red Hat Enterprise Linux uses hardware that can be purchased and integrated into any datacenter, making it feasible for organizations to use enterprise systems that are similar to those breaking scientific barriers. Regardless of the underlying hardware, Red Hat Enterprise Linux provides the common control plane for supercomputers to be run, managed and maintained in the same manner as traditional IT systems.

    Red Hat Enterprise Linux also opens supercomputing applications up to advancements in enterprise IT, including Linux containers. Working closely in open source communities with organizations like the Supercomputing Containers project, Red Hat is helping to drive advancements to make Podman, Skopeo and Buildah, components of Red Hat’s distributed container toolkit, more accessible for building and deploying containerized supercomputing applications.

  • Red Hat Enterprise Linux serves as operating system for supercomputers

    Red Hat announced that Red Hat Enterprise Linux provides the operating system backbone for the top three supercomputers in the world and four out of the top 10, according to the newest TOP500 ranking.

    Already serving as a catalyst for enterprise innovation across the hybrid cloud, the world’s leading enterprise Linux platform can, as these rankings show, also deliver a foundation to meet even the most demanding computing environments.

  • Lessons learned from standing up a front-end development program at IBM

    In 2015, we created the FED@IBM program to support front-end developers and give them the opportunity to learn new skills and teach other devs about their specific areas of expertise. While company programs often die out due to lack of funding, executive backing, interest, or leadership, our community is thriving in spite of losing the funding, executive support, and resources we had at the program’s inception.

    What’s the secret behind the success of this grassroots employee support program? As I have been transitioning leadership of the FED@IBM Program and Community, I have been reflecting on our program’s success and trying to define how we have been able to sustain it.

Between Two Releases of Ubuntu 20.04 and Fedora 32

Filed under
Red Hat
Ubuntu

Both Ubuntu 20.04 "Focal Fossa" and Fedora 32 were released around the same time, in April this year. They are two operating systems from different families, namely Debian and Red Hat. One of the most interesting things they have in common is the arrival of computer companies like Dell and Star Labs (with Lenovo coming) that sell laptops and PCs with these systems preinstalled. I made this summary to remind myself, and to inform you all, of the growth of these great operating systems. Enjoy!

Read more

IBM/Red Hat/Fedora: Systemd, Containers, Ansible, IBM Cloud Pak and More

Filed under
Red Hat
  • Systemd 246 Is On The Way With Many Changes

    With it already having been a few months since systemd 245 debuted with systemd-homed, the systemd developers have begun their release dance for what will be systemd 246.

  • Containers: Understanding the difference between portability, compatibility and supportability

    Portability alone does not offer the entire promise of Linux containers. You also need Compatibility and Supportability.

  • Red Hat Updates Ansible Automation Platform

    Red Hat recently announced key enhancements to the Ansible Automation portfolio, including the latest version of Red Hat Ansible Automation Platform and new Red Hat Certified Ansible Content Collections available on Automation Hub.

  • IBM Cloud Pak for Integration in 2 minutes
  • Introducing modulemd-tools

    A lot of teams are involved in the development of Fedora Modularity, and vastly more people are affected by it as packagers and end users. It is obvious that each group has its own priorities and use cases, and therefore different opinions on what is good or bad about the current state of the project. Personally, I was privileged (or maybe doomed) to represent yet another, often forgotten, group of users - third-party build systems.

    Our team is directly responsible for the development and maintenance of Copr, and a few years ago we decided to support building modules alongside regular packages. We stumbled upon many frustrating pitfalls that I don’t want to discuss right now, but the major one was definitely the lack of tools for working with modules. That was understandable in the early stages of the development process, but it has been years and we still don’t have the right tools for building modules on our own, without relying on the Fedora infrastructure. You may recall me expressing the need for them at the Flock 2019 conference.

  • GSoC 2020 nmstate project update for June

    This blog is about my experience working on the nmstate project during the first month of the GSoC coding period. I was able to start implementing the varlink support in the middle of the community bonding period. This was very helpful because I identified some issues in the Python varlink package that were not mentioned in the documentation, and I had to spend extra time finding the cause of those issues. There have been minor changes to the proposed code structure and project timeline after feedback from the community members. In the beginning it was difficult to identify syntax errors in varlink interface definitions. Progress has been slow because of these new issues; the following are the tasks I have completed so far.

Storage Instantiation Daemon in Fedora, IBM/Spark and Talospace Project/POWER

Filed under
Red Hat
  • Fedora Looks To Introduce The Storage Instantiation Daemon

    One of the last-minute change proposals for Fedora 33 is to introduce the Red Hat-backed Storage Instantiation Daemon (SID), though at least for this first release it would be off by default. The Storage Instantiation Daemon is one of the latest storage efforts being worked on by Red Hat engineers.

    The Storage Instantiation Daemon is intended to help manage Linux storage device state tracking atop udev, reacting to changes via uevents. The daemon can offer an API for various device subsystems and provides insight into the Linux storage stack. More details on this newer open-source effort are available via sid-project.github.io.

  • Explore best practices for Spark performance optimization

    I am a senior software engineer working with IBM’s CODAIT team. We work on open source projects and advocacy activities. I have been working on open source Apache Spark, focused on Spark SQL. I have also been involved with helping customers and clients with optimizing their Spark applications. Apache Spark is a distributed open source computing framework that can be used for large-scale analytic computations. In this blog, I want to share some performance optimization guidelines when programming with Spark. The assumption is that you have some understanding of writing Spark applications. These are guidelines to be aware of when developing Spark applications.

    [...]

    Spark has a number of built-in functions available. For performance, check to see whether you can use one of those built-in functions, since they perform well. Custom UDFs in the Scala API are more performant than Python UDFs. If you have to use the Python API, use the newly introduced pandas UDF in Python, which was released in Spark 2.3. Pandas UDF (vectorized UDF) support in Spark brings significant performance improvements over writing a custom Python UDF. Get more information about writing a pandas UDF.
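
    For illustration, here is a minimal pandas UDF sketch of my own (not code from the IBM post), assuming PySpark 2.3 or later with PyArrow installed:

```python
# Minimal sketch of a vectorized (pandas) UDF; illustrative example only,
# assuming PySpark >= 2.3 with PyArrow installed.
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import DoubleType

spark = SparkSession.builder.appName("pandas-udf-demo").getOrCreate()

@pandas_udf(DoubleType())  # operates on whole pandas Series at once, not row by row
def fahrenheit_to_celsius(f: pd.Series) -> pd.Series:
    return (f - 32.0) * 5.0 / 9.0

df = spark.createDataFrame([(32.0,), (98.6,), (212.0,)], ["temp_f"])
df.withColumn("temp_c", fahrenheit_to_celsius("temp_f")).show()
```

    Because the function receives and returns whole pandas Series, Spark can exchange data with Python via Arrow in batches instead of serializing one row at a time, which is where the speedup over a plain Python UDF comes from.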

  • The Talospace Project: Firefox 78 on POWER

    Firefox 78 is released and is running on this Talos II. This version in particular features an updated RegExp engine but is most notable (notorious) for disabling TLS 1.0/1.1 by default (only 1.2/1.3). Unfortunately, because of craziness at $DAYJOB and the lack of a build waterfall or some sort of continuous integration for ppc64le, a build failure slipped through into release but fortunately only in the (optional) tests. The fix is trivial, another compilation bug in the profiler that periodically plagues unsupported platforms, and I have pushed it upstream in bug 1649653. You can either apply that bug to your tree or add ac_add_options --disable-tests to your .mozconfig. Speaking of, as usual, the .mozconfigs we use for debug and optimized builds have been stable since Firefox 67.

IBM/Red Hat/Fedora Leftovers

Filed under
Red Hat
  • Ask the experts during Red Hat Summit Virtual Experience: Open House

    One of the most popular activities during the Red Hat Summit Virtual Experience was the Ask the Experts sessions, where attendees could engage with Red Hat experts and leadership in real time, so we're bringing it back for our Open House in July.

  • Making open source more inclusive by eradicating problematic language

    Open source has always been about differing voices coming together to share ideas, iterate, challenge the status quo, solve problems, and innovate quickly. That ethos is rooted in inclusion and the opportunity for everyone to meaningfully contribute, and open source technology is better because of the diverse perspectives and experiences that are represented in its communities. Red Hat is fortunate to be able to see the impact of this collaboration daily, and this is why our business has also always been rooted in these values.

    Like so many others, Red Hatters have been coming together the last few weeks to talk about ongoing systemic injustice and racism. I’m personally thankful to Red Hat’s D+I communities for creating awareness and opportunities for Red Hatters to listen in order to learn, and I’m grateful that so many Red Hatters are taking those opportunities to seek understanding.

  • The latest updates to Red Hat Runtimes

    Today, we are happy to announce that the latest release of Red Hat Runtimes is now available. This release includes updates that build upon the work the team has done over the past year for building modern, cloud-native applications.

    Red Hat Runtimes, part of the Red Hat Application Services portfolio, is a set of products, tools and components for developing and maintaining cloud-native applications. It offers lightweight runtimes and frameworks for highly-distributed cloud architectures, such as microservices or serverless applications. We continuously make updates and improvements to meet the changing needs of our customers, and to help developers better build business-critical applications. Read on for the latest.

  • Kourier: A lightweight Knative Serving ingress

    Until recently, Knative Serving used Istio as its default networking component for handling external cluster traffic and service-to-service communication. Istio is a great service mesh solution, but it can add unwanted complexity and resource use to your cluster if you don’t need it.

    That’s why we created Kourier: To simplify the ingress side of Knative Serving. Knative recently adopted Kourier, so it is now a part of the Knative family! This article introduces Kourier and gets you started with using it as a simpler, more lightweight way to expose Knative applications to an external network.

    Let’s begin with a brief overview of Knative and Knative Serving.

  • CodeTheCurve: A blockchain-based supply chain solution to address PPE shortages

    This past April, creative techies from all over the world gathered online for CodeTheCurve, a five-day virtual hackathon organized by the United Nations Educational, Scientific, and Cultural Organization (UNESCO) in partnership with IBM and SAP. Participants all worked toward the goal of creating digital solutions to address the global pandemic.

    Our team focused on the goal of improving the efficiency of the personal protective equipment (PPE) supply chain in order to prevent shortages for health care workers. With the rise of the current global pandemic, supplies of medical equipment have become more critical, particularly PPE for medical workers. In many places, PPE shortages have been a serious problem. To address this challenge, we proposed that a blockchain-based supply chain could help make this process faster and more reliable, thereby connecting health ministries, hospitals, producers, and banks, and making it easier to track and report information on supplies.

  • Analyze your Spark application using explain

    It is important that you have some understanding of Spark execution plan when you are optimizing your Spark applications. Spark provides an explain API to look at the Spark execution plan for your Spark SQL query. In this blog, I will show you how to get the Spark query plan using the explain API so you can debug and analyze your Apache Spark application. The explain API is available on the Dataset API. You can use it to know what execution plan Spark will use for your Spark query without actually running it. Spark also provides a Spark UI where you can view the execution plan and other details when the job is running. For Spark jobs that have finished running, you can view the Spark plan that was used if you have the Spark history server set up and enabled on your cluster. This is useful when tuning your Spark jobs for performance optimizations.
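
    As a small illustration (my own, not from the article), calling explain on a DataFrame prints the plan without running the job; passing the extended flag also shows the parsed, analyzed, and optimized logical plans:

```python
# Illustrative sketch of the explain API; not code from the article.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("explain-demo").getOrCreate()

orders = spark.range(1_000_000).withColumn("amount", F.rand() * 100)
summary = (orders
           .groupBy((F.col("id") % 10).alias("bucket"))
           .agg(F.sum("amount").alias("total")))

summary.explain()      # physical plan only
summary.explain(True)  # parsed, analyzed and optimized logical plans as well
```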

  • What’s new in Apache Spark 3.0

    The Apache Spark community announced the release of Spark 3.0 on June 18; it is the first major release of the 3.x series. The release contains many new features and improvements, and is the result of more than 3,400 fixes and improvements from more than 440 contributors worldwide. The IBM Center for Open Source Data and AI Technologies (CODAIT), which focuses on a number of open source technologies for machine learning, AI workflows, trusted AI, metadata, and big data processing platforms, has delivered hundreds of commits, including a couple of key features in this release.

  • GSoC Progress Report: Dashboard for Packit

    Hi, I am Anchit, a 19 y.o. from Chandigarh, India. I love programming, self-hosting, gaming, reading comic books, and watching comic-book based movies/tv.

    The first version of Fedora I tried was 21 when I came across it during my distro-hopping spree. I used it for a couple of months and then moved on to other distros. I came back to Fedora in 2017 after a couple of people on Telegram recommended it and have been using it ever since. A big reason why I stuck with Fedora this time is the community. Shout out to @fedora on Telegram. They’re nice, wholesome and helpful. They also got me into self-hosting and basic sys-admin stuff.

  • Fedora Looking To Offer Better Upstream Solution For Hiding/Showing GRUB Menu

    For the past few releases, Fedora hasn't shown the GRUB boot-loader menu by default when only Fedora is installed on the system, as it serves little purpose for most users and just interrupts the boot flow. But for those wanting to access the GRUB bootloader menu on reboot, there is integration in GNOME to easily reboot into this menu. The other exception is that the menu will be shown if the previous boot failed. This functionality has relied on downstream patches, but now they are working towards a better upstream solution.

    Hans de Goede of Red Hat who led the original GRUB hidden boot menu functionality is looking to clean up this feature for Fedora 33. The hope is to get the relevant bits upstream into GNOME and systemd for avoiding the downstream patches they have been carrying. This reduces their technical debt and also makes it easier for other distributions to provide similar functionality.

  • Fedora Developers Discussing Possibility Of Dropping Legacy BIOS Support

    Fedora stakeholders are debating the merits of potentially ending legacy BIOS support for the Linux distribution and only supporting UEFI-based installations.

    Given the GRUB changes planned for Fedora 33, the fact that things would be easier if they were to just switch to the UEFI-based systemd sd-boot, Intel's plan to end legacy BIOS support in 2020, and UEFI having been very common on x86_64 systems for many years now, Fedora developers are discussing whether it's a good time yet for their bleeding-edge platform to begin phasing out legacy BIOS support.

IBM/Red Hat: Sysadmins, Success Stories, Apache Kafka and IBM "AI" Marketing/Hype

Filed under
Red Hat
  • Sysadmin stories from the trenches: Funny user mistakes

    I was a noob IT guy in the late 90s. I provided desktop support to a group of users who were, shall we say, not the most technical of users. I sometimes wonder where those users are today, and I silently salute the staff that's had to support them since I left long ago.

    I suffered many indignities during that time. I can chuckle about the situations now.

  • Sneak peek: Podman's new REST API

    This one is just between you and me, don't tell anyone else! Promise? Okay, I have your word, so here goes: There's a brand new REST API that is included with version 2.0 of Podman! That release has just hit testing on the Fedora Project and may have reached stable by the time this post is published. With this new REST API, you can call Podman from platforms such as cURL, Postman, Google's Advanced REST client, and many others. I'm going to describe how to begin using this new API.

    The Podman service only runs on Linux. You must do some setup on Linux to get things going.
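
    The article walks through the setup in detail; as a rough sketch of what calling the API can look like (my own example, with an assumed rootless socket path and endpoint version), you can talk to the service over its Unix socket from Python's standard library:

```python
# Rough sketch of calling Podman's REST API over its Unix socket from
# Python's standard library. The socket path and endpoint below are
# assumptions for a rootless user session, not taken from the article.
# Enable the service first, e.g.:  systemctl --user start podman.socket
import http.client
import json
import socket

SOCKET_PATH = "/run/user/1000/podman/podman.sock"  # assumed rootless socket path


class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP client connection over a Unix domain socket instead of TCP."""

    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock


conn = UnixHTTPConnection(SOCKET_PATH)
conn.request("GET", "/v1.0.0/libpod/containers/json?all=true")  # assumed endpoint
resp = conn.getresponse()
print(json.dumps(json.loads(resp.read()), indent=2))
```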

  • Red Hat Success Stories: Creating a foundation for a containerized future

    Wondering how Red Hat is helping its customers succeed? We regularly publish customer success stories that highlight how we're helping customers gain efficiency, cut costs, and transform the way they deliver software. This month we'll look at how Slovenská sporiteľňa and Bayport Financial Services have worked with Red Hat to improve their business.

  • Apache Kafka and Kubernetes is making real time processing in payments a bit easier

    The introduction of the real time payments network in the United States has presented a unique opportunity for organizations to revisit their messaging infrastructure. The primary goal of real time payments is to support real time processing, but a secondary goal is to reduce the toil of ongoing operations and make real time ubiquitous across the organization.

    Traditional messaging systems have been around for quite some time, but they have been a bit clunky to operate. Many times, tasks such as software upgrades and routine patches meant the messaging infrastructure would be down while the update was performed, causing delays in payment processing. This may have been reasonable in a world where payment processing was not expected outside of normal banking hours, but in our always-on digital world, customers expect their payments to clear and settle in real time. Today, outages and delays disrupt both business processes and customer experience.

  • IBM and LFAI move forward on trustworthy and responsible AI

    For over a century, IBM has created technologies that profoundly changed how humans work and live: the personal computer, ATM, magnetic tape, Fortran Programming Language, floppy disk, scanning tunneling microscope, relational database, and most recently, quantum computing, to name a few. With trust as one of our core principles, we’ve spent the past century creating products our clients can trust and depend on, guiding their responsible adoption and use, and respecting the needs and values of all users and communities we serve.

    Our current work in artificial intelligence (AI) is bringing a transformation of similar scale to the world today. We infuse these guiding principles of trust and transparency into all of our work in AI. Our responsibility is to not only make the technical breakthroughs required to make AI trustworthy and ethical, but to ensure these trusted algorithms work as intended in real-world AI deployments.

  • IBM donates "Trusted AI" projects to Linux Foundation AI

    IBM on Monday announced it's donating a series of open-source toolkits designed to help build trusted AI to a Linux Foundation project, the LF AI Foundation. As real-world AI deployments increase, IBM says the contributions can help ensure they're fair, secure and trustworthy.

    "Donation of these projects to LFAI will further the mission of creating responsible AI-powered technologies and enable the larger community to come forward and co-create these tools under the governance of Linux Foundation," IBM said in a blog post, penned by Todd Moore, Sriram Raghavan and Aleksandra Mojsilovic.

  • IBM donates AI toolkits to Linux Foundation to ‘mitigate bias’ in datasets

    As artificial intelligence (AI) deployments increase around the world, IBM says it’s determined to ensure that they’re fair, secure and trustworthy.

    To that end, it has donated a series of open-source toolkits designed to help build trusted AI to a Linux Foundation project, the LF AI Foundation, as reported in ZDNet.

    “Donation of these projects to LFAI will further the mission of creating responsible AI-powered technologies and enable the larger community to come forward and co-create these tools under the governance of Linux Foundation,” IBM said in a blog post, penned by Todd Moore, Sriram Raghavan and Aleksandra Mojsilovic.

  • PionerasDev wins IBM Open Source Community Grant to increase women’s participation in programming

    Last fall, IBM’s open source community announced a new quarterly grant to award nonprofit organizations that are dedicated to education, inclusiveness, and skill-building for women, underrepresented minorities, and underserved communities in the open source world. The Open Source Community Grant aims to help create new tech opportunities for underrepresented communities and foster the adoption and use of open source.

  • Ansible 101 live streaming series - a retrospective

    That last metric can be broken down further: on average, I spent 3.5 hours prepping for each live stream, 1 hour doing the live stream, and then 1 hour doing post-production (setting chapter markers, reading chat messages, downloading the recording, etc.).

    So each video averaged $30 in ad revenue, and by ad revenue alone, the total hourly wage equivalent based on direct video revenue is... $5.45/hour.

    Subtract the cost of the equipment I use for the streaming (~$1,000, most of it used, though I already owned it), and now I'm a bit in the hole!

What Are Fedora Labs and How Are They Useful to You?

Filed under
Red Hat

Fedora Labs are pre-built images of Fedora 32 Workstation, a Linux distribution known for solid performance and new software packages. What the Labs do is give users with a few common use cases access to an image that comes with all of the software they'd want, so they can hit the ground running after they install the system.

There are eight different labs right now, covering everything from astronomy to gaming to design. They’re all live systems, so there is no need to install anything to your system, which is potentially an attractive option for those users who have a system already up and running. Let’s look at all eight in brief.

1. The Fedora Astronomy Lab

The Astronomy Lab comes with a wide array of tools useful in astronomy, including visualization software, scientific Python tools, and free astronomical image processing software. Also of note is a library designed to support the control of astronomical instruments. This Lab will absolutely be great for both experienced and amateur astronomers.

2. The Fedora Comp-Neuro Lab

The Comp-Neuro Lab is similar in its philosophy to the Astronomy Lab: it comes pre-installed with an array of free neuroscience modelling software to allow you to get to work quickly. This includes SciPy, a scientific Python library, and NEURON, a detailed neuron simulation environment that allows you to work down to the single-neuron level.
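
For a flavour of what that single-neuron workflow looks like (an illustrative sketch of my own, not material from the Lab description), NEURON's Python interface lets you build and stimulate one compartment in a handful of lines:

```python
# Illustrative sketch only (not from the Lab documentation): a single
# Hodgkin-Huxley compartment in NEURON's Python API, driven by a brief
# current pulse. Requires the `neuron` package.
from neuron import h

h.load_file("stdrun.hoc")           # load NEURON's standard run system

soma = h.Section(name="soma")
soma.L = soma.diam = 20             # a 20 um compartment
soma.insert("hh")                   # Hodgkin-Huxley channels

stim = h.IClamp(soma(0.5))          # current clamp at the middle of the soma
stim.delay, stim.dur, stim.amp = 5, 1, 0.1   # ms, ms, nA

v = h.Vector()
v.record(soma(0.5)._ref_v)          # record membrane potential
t = h.Vector()
t.record(h._ref_t)                  # record time

h.finitialize(-65)                  # initialize to -65 mV
h.continuerun(40)                   # simulate 40 ms

print(f"peak membrane potential: {v.max():.1f} mV")
```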

Read more

virt-manager is deprecated in RHEL (but only RHEL)

Filed under
Red Hat

I'm the primary author of virt-manager. virt-manager is deprecated in RHEL8 in favor of cockpit, but ONLY in RHEL8 and future RHEL releases. The upstream project virt-manager is still maintained and is still relevant for other distros.

Google 'virt-manager deprecated' and you'll find some discussions suggesting virt-manager is no longer maintained, Cockpit is replacing virt-manager, virt-manager is going to be removed from every distro, etc. These conclusions are misinformed.

The primary source for this confusion is the section 'virt-manager has been deprecated' from the RHEL8 release notes virtualization deprecation section.

Read more

Also: RHEL Deprecating The Virt-Manager UI In Favor Of The Cockpit Web Console

There's A Proposal To Switch Fedora 33 On The Desktop To Using Btrfs

Filed under
Red Hat

More than a decade ago Fedora was routinely trying to pursue the Btrfs file-system by default, but those hopes were abandoned long ago. Heck, Red Hat Enterprise Linux no longer even supports Btrfs. While all Red Hat / Fedora interest in Btrfs seemed to have been abandoned years ago, especially with Red Hat developing their Stratis storage technology, there is a new (and serious) proposal to move to Btrfs for Fedora 33 desktop variants.

There is a new proposal to use Btrfs as the default file-system for desktop variants starting with Fedora 33. This proposal is being backed by various Fedora developers, Facebook, and other stakeholders who believe Btrfs is more featureful than the current EXT4 default and is now stable enough following years of testing.

Read more

Also: Fedora program update: 2020-26


More in Tux Machines

SolydXK 10.4 Distro Released, Based on Debian GNU/Linux 10.4 “Buster”

As its version number suggests, SolydXK 10.4 is based on Debian GNU/Linux 10.4, which was released in early May 2020 with more than 50 security updates and over 100 bug fixes. The SolydXK team has worked hard over the past several months to bring you SolydXK 10.4, which includes the latest Linux 4.19 kernel and up-to-date packages from the Debian Buster repositories.

On top of that, the new release comes with some important under-the-hood changes. For example, the /usr directories have been merged and the /bin, /sbin and /lib directories have now become symbolic links to /usr/bin, /usr/sbin and /usr/lib.

Read more

Android Leftovers

today's leftovers

  • Upcoming SAVVY-V Open Source RISC-V Cluster Board Supports 10GbE via Microsemi PolarFire 64-bit RISC-V SoC

    RISC-V based PolarFire SoC FPGA by Microsemi may be coming up in the third quarter of this year, but Ali Uzel has been sharing a few details about SAVVY-V advanced open-source RISC-V cluster board made by FOSOH-V (Flexible Open SOurce Hardware for RISC-V) community of developers. It’s powered by Microsemi Polarfire RISC-V SoC MPFS250T with four 64-bit RISC-V cores, a smaller RV64IMAC monitor core, and FPGA fabric that allows 10GbE via SFP+ cages, and exposes six USB Type-C ports. The solution is called a cluster board since up to six SAVVY-V boards can be stacked via a PC/104+ connector and interfaced via the USB-C ports.

  • Some PSAs for NUC owners

    I’ve written before, in Contemplating the Cute Brick, that I’m a big fan of Intel’s NUC line of small-form-factor computers. Over the last week I’ve been having some unpleasant learning experiences around them. I’m still a fan, but I’m shipping this post where the search engines can see it in support of future NUC owners in trouble.

    Two years ago I bought an NUC for my wife Cathy to replace her last tower-case PC – the NUC8i3BEH1. This model was semi-obsolete even then, but I didn’t want one of the newer i5 or i7 NUCs because I didn’t think it would fit my wife’s needs as well. What my wife does with her computer doesn’t tax it much. Web browsing, office work, a bit of gaming that does not extend to recent AAA titles demanding the latest whizzy graphics card. I thought her needs would be best served by a small, quiet, low-power-consumption machine that was cheap enough to be considered readily disposable at the end of its service life. The exact opposite of my Great Beast…

    The NUC was an experiment that made Cathy and me happy. She especially likes the fact that it’s small and light enough to be mounted on the back of her monitor, so it effectively takes up no desk space or floor area in her rather crowded office. I like the NUC’s industrial design and engineering – lots of nice little details like the four case screws being captive to the baseplate so you cannot lose them during disassembly.

    Also. Dammit, NUCs are pretty. I say dammit because I feel like this shouldn’t matter to me and am a bit embarrassed to discover that it does. I like the color and shape and feel of these devices. Someone did an amazing job of making them unobtrusively attractive.

    [...]

    When I asked if Simply NUC knew of a source for a fan that would fit my 8i3BEH1 – a reasonable question, I think, to ask a company that loudly claims to be a one-stop shop for all NUC needs – the reply email told me I’d have to do “personal research” on that. It turns out that if the useless drone who was Simply NUC “service” had cared about doing his actual job, he could have read the fan’s model number off the image I had sent him into a search box and found multiple sources within seconds, because that’s what I then did. Of course this would have required caring that a customer was unhappy, which apparently they don’t do at Simply NUC. Third reason I know this: my request for a refund didn’t even get refused; it wasn’t even answered.

  • GNU Binutils 2.35 Preparing For Release

    Binutils 2.35 was branched this weekend as this important component of the open-source Linux ecosystem prepares for its next release. The branching means feature development is over for this next version of this collection of GNU tools. GNU Binutils 2.35 drops x86 Native Client (NaCl) support, with Google having deprecated it in favor of WebAssembly, adds new options for the readelf tool, and brings many bug fixes and an assortment of other changes, albeit mostly on the minor side.

  • Using CPU Subsets for Building Software

    NetBSD has a somewhat obscure tool named psrset that allows creating “sets” of cores and running tasks on one of those sets. Let’s try it: [...]

  • What a TLS self signed certificate is at a mechanical level

    To simplify a lot, a TLS certificate is a bundle of attributes wrapped around a public key. All TLS certificates are signed by someone; we call this the issuer. The issuer for a certificate is identified by their X.509 Subject Name, and also at least implicitly by the keypair used to sign the certificate (since only an issuer TLS certificate with the right public key can validate the signature).
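
    To make that concrete, here is a sketch of my own (not code from the post) using recent versions of the third-party Python `cryptography` package; the hostname is a placeholder. A self-signed certificate is simply one whose subject and issuer are the same name and whose signature is produced with its own private key:

```python
# Minimal sketch using the third-party `cryptography` package (recent
# versions): the certificate's subject and issuer are the same name, and
# the signature is made with the certificate's own private key, which is
# exactly what "self-signed" means. The hostname is a placeholder.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "example.test")])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)                 # subject ...
    .issuer_name(name)                  # ... and issuer are the same entity
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())         # signed with its own key: self-signed
)

print(cert.public_bytes(serialization.Encoding.PEM).decode())
```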

  • Security Researchers Attacked Google’s Mysterious Fuchsia OS: Here’s What They Found

    A couple of things that Computer Business Review has widely covered are important context for the security probe. (These won’t be much surprise to Fuchsia’s followers of the past two years.)

    i.e. Fuchsia OS is based on a tiny custom kernel from Google called Zircon, which has some elements written in C++ and some in Rust. Device drivers run in what’s called “user mode” or “user land”, meaning they’re not given fully elevated privileges. This means they can be isolated better.

    In user land, everything that a driver does has to go via the kernel first before hitting the actual computer’s resources. As Quarkslab found, this is a tidy way of reducing attack surface. But with some sustained attention, its researchers managed to get what they wanted: “We are able to gain kernel code execution from a regular userland process.”

  • What have you been playing on Linux? Come and have a chat

    Ah Sunday, that special day that's a calm before the storm of another week and a time for a community chat here on GOL. Today, it's our birthday! If you didn't see the post earlier this week, GamingOnLinux as of today has hit the big 11 years old! Oh how time sure flies by.

    Onto the subject of gaming on Linux: honestly, the majority of my personal game time has been taken up by Into the Breach. It's so gorgeously streamlined, accessible, fun and it's also ridiculously complex at the same time. Tiny maps that require a huge amount of forward thinking, as you weigh up each movement decision against any possible downsides. It's like playing chess, only with big mecha fighting off aliens trying to take down buildings.

    [...]

    I've also been quite disappointed in Crayta on Stadia, as it so far hasn't lived up to even my smallest expectations for the game maker. It just seems so half-baked, with poor/stiff animations and a lack of any meaningful content to start with. I'll be checking back on it in a few months but for now it's just not fun.

Programming Leftovers (LLVM Clang, R, Perl and Python)

  • Arm Cortex-A77 Support Upstreamed Finally To LLVM Clang 11

    While the Arm Cortex-A77 was announced last year and has already been succeeded by the Cortex-A78 announcement, support for the A77 has only now been upstreamed to the LLVM Clang compiler. Cortex-A77 support was added to the GCC compiler last year, while, seemingly as an oversight, the A77 support wasn't added to LLVM/Clang until this week.

  • Dirk Eddelbuettel: Rcpp now used by 2000 CRAN packages–and one in eight!

    As of yesterday, Rcpp stands at exactly 2000 reverse-dependencies on CRAN. The graph on the left depicts the growth of Rcpp usage (as measured by Depends, Imports and LinkingTo, but excluding Suggests) over time. Rcpp was first released in November 2008. It probably cleared 50 packages around three years later in December 2011, 100 packages in January 2013, 200 packages in April 2014, and 300 packages in November 2014. It passed 400 packages in June 2015 (when I tweeted about it), 500 packages in late October 2015, 600 packages in March 2016, 700 packages last July 2016, 800 packages last October 2016, 900 packages early January 2017, 1000 packages in April 2017, 1250 packages in November 2017, 1500 packages in November 2018 and then 1750 packages last August. The chart extends to the very beginning via manually compiled data from CRANberries and checked with crandb. The next part uses manually saved entries. The core (and by far largest) part of the data set was generated semi-automatically via a short script appending updates to a small file-based backend. A list of packages using Rcpp is available too.

  • YouTube: The [Perl] Weekly Challenge - 067
  • The [Perl] Weekly Challenge #067

    This week both tasks had one thing in common, i.e. pairing two or more lists. In the past, I have taken help from the CPAN module Algorithm::Combinatorics for such tasks.

  • Weekly Python StackOverflow Report: (ccxxxiv) stackoverflow python report
  • Flask project setup: TDD, Docker, Postgres and more - Part 1

    There are tons of tutorials on Internet that tech you how to use a web framework and how to create Web applications, and many of these cover Flask, first of all the impressive Flask Mega-Tutorial by Miguel Grinberg (thanks Miguel!). Why another tutorial, then? Recently I started working on a small personal project and decided that it was a good chance to refresh my knowledge of the framework. For this reason I temporarily dropped the clean architecture I often recommend, and started from scratch following some tutorials. My development environment quickly became very messy, and after a while I realised I was very unsatisfied by the global setup. So, I decided to start from scratch again, this time writing down some requirements I want from my development setup. I also know very well how complicated the deploy of an application in production can be, so I want my setup to be "deploy-friendly" as much as possible. Having seen too many project suffer from legacy setups, and knowing that many times such issues can be avoided with a minimum amount of planning, I thought this might be interesting for other developers as well. I consider this setup by no means better than others, it simply addresses different concerns.