Fedora: Belated Flock Coverage

Filed under
Red Hat
  • Sausage Factory: Modules – Fake it till you make it

    Last week during Flock to Fedora, we had a discussion about what is needed to build a module outside of the Fedora infrastructure (such as through COPR or OBS). I had some thoughts on this and so I decided to perform a few experiments to see if I could write up a set of instructions for building standalone modules.

    To be clear, the following is not a supported way to build modules, but it does work and covers most of the bases.

  • Fedora: Flock Budapest 2019

    Probably the best part of Flock was being able to record several members of our community, who kindly agreed to say their names, where they come from, and the languages they speak, so we could create a small video showing how diverse and inclusive Fedora is. Producing a short two-minute video on such a chaotic schedule is challenging enough; after three hours of recording and roughly two and a half hours of editing, I finished rendering the video just as I was plugging my laptop into the main stage… People usually don’t realize how long something like that takes, but I’m just glad everyone seemed to like it and that my laptop didn’t die in the process.

    While working on the video, I was able to have short interviews with several folks from Fedora and ask them how comfortable they felt in the community. It was gratifying to learn that the care we have taken to make minorities feel more included has worked. However, it was a bit sad to hear how hard it has been for our contributors to deal with burnout, how tired they are of putting out fires instead of working on new projects, and how many of them feel stuck in the same routine.

    As our team says, our job is not only to help with diversity efforts so that everyone feels comfortable; we also need to work on more effective ways to give people a sense of purpose, provide new challenges that put them on a fun path, and give them the recognition they deserve. Fedora has always put a lot of effort into bringing in new contributors, but I’ve seen long-time contributors pushed to the side because “everything is working”, and we need to take care of that. They need the same attention as new contributors (and I would dare say probably more). In the end, it is this amazing group of people who mentor new contributors. Feel free to reach out to me or any member of the Diversity and Inclusion Team if these words got your attention and you’re willing to share some thoughts. Anonymity is a top priority.

  • Flock to Fedora 2019 Trip Report

    I just flew back from Flock 2019 in Budapest, Hungary, and boy are my arms tired!

    Flock is the Fedora Project’s annual contributor-focused conference. This was my first time attending Flock, and I’ve only attended a handful of previous conferences in general, so I wasn’t sure what to expect. It was also my first-ever experience presenting at a conference, and I’m not a fan of long flights in cramped seats—so I arrived for the conference with a bit of anxiety in addition to jet lag. However, sampling the local food and beverage choices helped me adjust.

    I found the four days of events to be filled with interesting sessions that sometimes required difficult choices when deciding what to attend.

    Based on my impression of sessions I attended and discussions in which I participated or observed, here are several topics that seemed to be generating a lot of interest and activity within the Fedora community.

Announcing EPEL-8.0 Official Release

Filed under
Red Hat

The EPEL Steering Committee is pleased to announce that the initial EPEL-8 is ready for release. We would like to thank everyone in the community for helping us get the initial set of builds out to mirrors and to consumers worldwide. Special thanks go to Patrick Uiterwijk, Jeroen van Meeuwen, Robert Scheck, and many others in the community who helped in the last 6 months to get this release done.

EPEL-8.0 has packages for the x86_64, ppc64le, aarch64, and now the s390x platforms.

What is EPEL?

EPEL stands for Extra Packages for Enterprise Linux. It is a sub-community of the Fedora and CentOS projects aimed at making a subset of Fedora packages ready to be used and installed on the various Red Hat Enterprise Linux (RHEL) releases. It is not a complete rebuild of Fedora or even of previous EPEL releases. EPEL is also a community and not a product; as such, we need community members to help get packages into the repository, even more so than in Fedora.

Read more

Blankets give them enough warmth but not Education!

Filed under
GNU
Linux
Red Hat

Operating System?

Hanthana Linux, a Fedora remix bundled with a bunch of educational tools and the Sugar Desktop.

Software?

LibreOffice, Firefox, VLC, Educational Tools, Gnome/Sugar Desktop.

Read more

Taz Brown: How Do You Fedora?

Filed under
Red Hat

We recently interviewed Taz Brown on how she uses Fedora. This is part of a series on the Fedora Magazine. The series profiles Fedora users and how they use Fedora to get things done. Contact us on the feedback form to express your interest in becoming an interviewee.

Taz Brown is a seasoned IT professional with over 15 years of experience. “I have worked as a systems administrator, senior Linux administrator, DevOps engineer and I now work as a senior Ansible automation consultant at Red Hat with the Automation Practice Team.” Taz originally started out with Ubuntu, but moved to CentOS, Red Hat Enterprise Linux and Fedora as a Linux administrator in the IT industry.

Taz is relatively new to contributing to open source, but she found that code was not the only way to contribute. “I prefer to contribute through documentation as I am not a software developer or engineer. I found that there was more than one way to contribute to open source than just through code.”

Read more

More on Fedora (Flock), IBM/Red Hat and Servers/HPC

Filed under
Red Hat
Server
  • Stephen Gallagher: Flock 2019 Trip Report

    As usual, the conference began with Matthew Miller’s traditional “State of Fedora” address wherein he uses pretty graphs to confound and amaze us. Oh, and reminds us that we’ve come a long way in Fedora and we have much further to go together, still.

    Next was a keynote by Cate Huston of Automattic (now the proud owners of both WordPress and Tumblr, apparently!). She talked to us about the importance of understanding when a team has become dysfunctional and some techniques for getting back on track.

    After lunch, Adam Samalik gave his talk, “Modularity: to modularize or not to modularize?”, describing for the audience some of the cases where Fedora Modularity makes sense… and some cases where other packaging techniques are a better choice. This was one of the more useful sessions for me. Once Adam gave his prepared talk, the two of us took a series of great questions from the audience. I hope that we did a good job of disambiguating some things, but time will tell how that works out. We also got some suggestions for improvements we could make, which were translated into Modularity Team tickets: here and here.

  • IBM Cloud: No shift, Sherlock

    IBM’s cloud strategy has gone through a number of iterations as it attempts to offer a compelling hybrid cloud to shift its customers from traditional IT architectures to modern cloud computing.

    IBM is gambling that those customers who have yet to embrace the public cloud fully remain committed to private and hybrid cloud-based infrastructure, and that, if they do use public clouds, they want a cloud-agnostic approach to moving workloads. In July, IBM closed the $34bn purchase of Red Hat, an acquisition it hopes will finally enable it to deliver cloud-agnostic products and services.

    To tie in with the completion of the acquisition of Red Hat, IBM commissioned Forrester to look at the benefits to those organisations that are both Red Hat and IBM customers.

  • Red Hat Shares ― Not just open source, *enterprise* open source

    Open source software (OSS), by definition, has source code that’s available for anyone to see, learn from, use, modify, and distribute. It’s also the foundation for a model of collaborative invention that empowers communities of individuals and companies to innovate in a way that proprietary software doesn't allow.

    Enterprise open source software is OSS that’s supported and made more secure―by a company like Red Hat―for enterprise use. It plays a strategic role in many organizations and continues to gain popularity.

  • Taashee Linux Services Joins Bright Computing Partner Program

Linux Stressed in Fedora, Red Hat/IBM and Security

Filed under
Red Hat
Security
  • Fedora Developers Discuss Ways To Improve Linux Interactivity In Low-Memory Situations

    While hopefully the upstream Linux kernel code can be improved to benefit all distributions on low-memory desktops, Fedora developers are at least discussing their options for improving the experience in the near term. With various simple "tests", it's easy to illustrate just how poorly the Linux desktop responds under memory pressure. Besides desktop interactivity becoming awful under memory pressure, some argue that an unprivileged task shouldn't be able to degrade the system like that in the first place. (A rough sketch of such a memory-pressure test appears after this list.)

  • How open source can help banks combat fraud and money laundering

    Jump ahead a few years to the Fourth EU AML Directive, a regulation which required compliance by June 2017 and demands that enhanced Customer Due Diligence procedures be followed when cash transactions reach an aggregated amount of more than $11,000 U.S. dollars (USD). (The Fifth EU AML Directive is on the way, with a June 2020 deadline.) New Zealand’s Anti-Money Laundering and Countering Financing of Terrorism Amendment Act of 2017 states that banks and other financial entities must provide authorities with information about clients making cash transactions over $6,500 USD and international monetary wire transfers from New Zealand exceeding $650 USD. In 2018, the updated open banking European Directive on Payment Services (PSD2), which requires fraud monitoring, also went into effect. And the Monetary Authority of Singapore is developing regulations regarding the use of cryptocurrencies for terrorist funding and money laundering, too.

  • Automate security in increasingly complex hybrid environments

    As new technologies and infrastructure such as virtualization, cloud, and containers are introduced into enterprise networks to make them more efficient, these hybrid environments are becoming more complex—potentially adding risks and security vulnerabilities.

    According to the Information Security Forum’s Global Security Threat Outlook for 2019, one of the biggest IT trends to watch this year is the increasing sophistication of cybercrime and ransomware. And even as the volume of ransomware attacks is dropping, cybercriminals are finding new, more potent ways to be disruptive. An article in TechRepublic points to cryptojacking malware, which enables someone to hijack another's hardware without permission to mine cryptocurrency, as a growing threat for enterprise networks.

    To more effectively mitigate these risks, organizations could invest in automation as a component of their security plans. That’s because it takes time to investigate and resolve issues, in addition to applying controlled remediations across bare metal, virtualized systems, and cloud environments -- both private and public -- all while documenting changes.

  • Josh Bressers: Appsec isn’t people

    The best way to think about this is to ask a different but related question: why don’t we have training for developers to write code with fewer bugs? Even the suggestion of this would be ridiculed by every single person in the software world. I can only imagine the university course “CS 107: Error free development”. Everyone would fail the course. It would probably be a blast to teach: you could spend the whole semester yelling at the students for being stupid and not just writing code with fewer bugs. You wouldn’t even have to grade anything; just fail them all because you know the projects have bugs.

    Humans are never going to write bug-free code; this isn’t a controversial statement. Pretending we can somehow teach people to write bug-free code would be a monumental waste of time and energy, so we don’t even try.

    Now it’s time for a logic puzzle. We know that we can’t train humans to write bug-free code. All security vulnerabilities are bugs. So we know we can’t train humans to write vulnerability-free code. Well, we don’t really know it; we still think we can, even though history says otherwise. The last twenty years have seen an unhealthy obsession with getting humans to change their behaviors to be “more secure”. The only things that have come out of these efforts are 1) nobody likes security people anymore, 2) we had to create our own conferences and parties because we don’t get invited to theirs, 3) they probably never liked us in the first place.
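
As a rough illustration of the kind of low-memory "test" mentioned in the first item of this list, here is a minimal sketch (not taken from the article) that gradually allocates RAM until the machine comes under memory pressure. The chunk size and limit are arbitrary choices; run it only in a VM or container you can afford to freeze, since the desktop may become unresponsive and the OOM killer may step in.

    # memory_pressure.py -- crude memory-pressure demo (hypothetical example).
    # Gradually allocates RAM in 100 MiB chunks so you can watch desktop
    # responsiveness degrade. Use a throwaway VM or container.
    import time

    CHUNK_MIB = 100          # size of each allocation
    LIMIT_MIB = 16 * 1024    # stop after ~16 GiB so the loop eventually ends

    hog = []
    allocated = 0
    try:
        while allocated < LIMIT_MIB:
            # bytearray zero-fills its buffer, so the pages are actually touched
            hog.append(bytearray(CHUNK_MIB * 1024 * 1024))
            allocated += CHUNK_MIB
            print(f"allocated ~{allocated} MiB")
            time.sleep(0.5)  # give the desktop time to show the slowdown
    except MemoryError:
        print("MemoryError: the allocator finally gave up")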

IBM/Red Hat, Fedora and Servers

Filed under
Red Hat
  • Red Hat technologies make open hybrid cloud a reality

    It’s important to make the distinction between open hybrid cloud and multi-cloud environments. A hybrid cloud features coordination between the tasks running in the different environments. Multi-cloud, on the other hand, simply uses different clouds without coordinating or orchestrating tasks among them.

    Red Hat solutions are certified on all major cloud providers, including Alibaba Cloud, Amazon Web Services, the Google Cloud Platform, IBM Cloud, and Microsoft Azure. As you’re defining your hybrid cloud strategy, you can be confident that you won’t be going it alone as you work with a cloud provider. You won’t be the first person to try things on Cloud x; you’ll have the promise of a proven provider that works with your hybrid architecture.

  • Successful OpenShift 4.1 Disconnected install

    My new position has me working with Red Hat customers in the financial services industry. These customers have strict regulations for controlling access to machines. When it comes to installing OpenShift, we are often deploying into an environment that we call “Air Gapped.” What this means in practice is that all install media need to be present inside the data center and cannot be fetched online on demand. This approach is at odds with the conveniences of doing an on-demand repository pull of a container image. Most of the effort involves setting up internal registries and repositories, and getting X509 certificates properly created and deployed to make access to those repositories secure.

    The biggest thing we learned is that automation counts. When you need to modify a file, take the time to automate how you modify it. That way, when you need to do it again (which you will), you don’t make a mistake in the modification. In our case, we were following a step-by-step document that got us about halfway through before we realized we had made a mistake. Once we switched from manual edits to automation, we were far more willing to roll back to a VM snapshot and roll forward to make progress. At this point, things really started getting smoother.

  • NEST 2.18.0 (and 2.16.0) are ready for use on NeuroFedora

    After a bit of work and testing, NEST 2.18.0 and 2.16.0 are now both available for use on NeuroFedora.

  • Capture and playback UDP packets

    Generating some random statsd traffic is easy: it’s a text-based UDP protocol, and all you need is netcat. However, things change when the statsd server is integrated with a real application flooding it with thousands of packets with various attributes. (A minimal sketch of generating statsd-style packets appears after this list.)

  • Apache Hive vs. Apache HBase: Which is the query performance champion?

    It's super easy to get lost in the world of big data technologies. There are so many of them that it seems a day never passes without the advent of a new one. Still, such fast development is only half the trouble. The real problem is that it's difficult to understand the functionality and the intended use of the existing technologies.

    To find out what technology suits their needs, IT managers often contrast them. We've also conducted an academic study to make a clear distinction between Apache Hive and Apache HBase—two important technologies that are frequently used in Hadoop implementation projects.

  • Geeking outside the office

    Sysadmins have plush, easy desk jobs, right? We sit in a nice climate-controlled office and type away in our terminals, never really forced to exert ourselves. At least, it might look that way. As I write this during a heat wave here in my hometown, I'm certainly grateful for my air-conditioned office.

    Being a sysadmin, though, carries a lot of stress that people don't see. Most sysadmins have some level of on-call duty. In some places, it's a rotation. In others, it's 24/7. That's because some industries demand a quick response, and others maybe a little less. We're also expected to know everything and solve problems quickly. I could write a whole separate article on how keeping calm in an emergency is a pillar of a good sysadmin.

    The point I'm trying to make is that we are, in fact, under a lot of pressure, and we need to keep it together. While in some cases profit margins are at stake, in other cases lives could be. Let's face it, in this digital world almost everything depends on a sysadmin to keep the lights on. Maintaining all of this infrastructure pushes many sysadmins (and network admins, and especially information security professionals) to the brink of burnout.

    So, this article addresses how getting away from the day job can help you keep your sanity.

  • Rook v1.0 Adds Support for Ceph Nautilus, EdgeFS, and NFS Operator

    Rook, a storage orchestrator for Kubernetes, has released version 1.0 for production-ready workloads that use file, block, and object storage in containers. Highlights of Rook 1.0 include support for storage providers through operators like Ceph Nautilus, EdgeFS, and NFS. For instance, when a pod requests an NFS file system, Rook can provision it without any manual intervention.

    Rook was the first storage project accepted into the Cloud Native Computing Foundation (CNCF), and it helps storage administrators to automate everyday tasks like provisioning, configuration, disaster recovery, deployment, and upgrading storage providers. Rook turns a distributed file system into storage services that scale and heal automatically by leveraging the Kubernetes features with the operator pattern. When administrators use Rook with a storage provider like Ceph, they only have to worry about declaring the desired state of the cluster and the operator will be responsible for setting up and configuring the storage layer in the cluster.
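
Following up on the “Capture and playback UDP packets” item above: statsd speaks a simple text protocol over UDP ("name:value|type"), so generating test traffic needs nothing more than a socket. Below is a minimal sketch, assuming a statsd-compatible server is listening on 127.0.0.1:8125 (the conventional port); the metric names are made up for illustration.

    # send_statsd.py -- send a burst of simple statsd counter/timer packets.
    # Assumes something statsd-compatible is listening on 127.0.0.1:8125.
    import random
    import socket
    import time

    STATSD_ADDR = ("127.0.0.1", 8125)   # conventional statsd UDP port
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    for _ in range(10_000):
        # statsd wire format: "<metric>:<value>|<type>"
        counter = "demo.requests:1|c"
        timer = f"demo.latency_ms:{random.randint(1, 500)}|ms"
        sock.sendto(counter.encode("ascii"), STATSD_ADDR)
        sock.sendto(timer.encode("ascii"), STATSD_ADDR)
        time.sleep(0.001)               # roughly 2,000 packets per second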

Flathub, brought to you by…

Filed under
Red Hat

Mythic Beasts is a UK-based “no-nonsense” hosting provider who provide managed and un-managed co-location, dedicated servers, VPS and shared hosting. They are also conveniently based in Cambridge where I live, and very nice people to have a coffee or beer with, particularly if you enjoy talking about IPv6 and how many web services you can run on a rack full of Raspberry Pis. The “heart” of Flathub is a physical machine donated by them which originally ran everything in separate VMs – buildbot, frontend, repo master – and they have subsequently increased their donation with several VMs hosted elsewhere within their network. We also benefit from huge amounts of free bandwidth, backup/storage, monitoring, management and their expertise and advice at scaling up the service.

Starting with everything running on one box in 2017 we quickly ran into scaling bottlenecks as traffic started to pick up. With Mythic’s advice and a healthy donation of 100s of GB / month more of bandwidth, we set up two caching frontend servers running in virtual machines in two different London data centres to cache the commonly-accessed objects, shift the load away from the master server, and take advantage of the physical redundancy offered by the Mythic network.

As load increased and we brought a CDN online to bring the content closer to the user, we also moved the Buildbot (and its associated Postgres database) to a VM hosted at Mythic in order to offload as much IO bandwidth as possible from the repo server and keep up sustained HTTP throughput during update operations. This helped significantly, but we are in discussions with them about a yet larger box with a mixture of disks and SSDs to handle the concurrent read and write load that we need.

Even after all of these changes, we keep the repo master on one, big, physical machine with directly attached storage because repo update and delta computations are hugely IO intensive operations, and our OSTree repos contain over 9 million inodes which get accessed randomly during this process. We also have a physical HSM (a YubiKey) which stores the GPG repo signing key for Flathub, and it’s really hard to plug a USB key into a cloud instance, and know where it is and that it’s physically secure.

Read more

Fedora Project is Planning to Rebuild Fedora Packages Using Modern CPU Architecture

Filed under
Red Hat

There was an important discussion opened up in the Fedora developer mailing list on 22 July 2019 about x86-64 micro-architecture update.

Fedora currently uses the original K8 micro-architecture (without 3DNow! and other AMD-specific parts) as the baseline for its x86_64 architecture.

This baseline is well over a decade old; it was last updated in 2003. Because of this, Fedora's performance is not as good as it could be on current CPUs.

So they are planning to rebuild Fedora packages against a more modern CPU micro-architecture baseline.

The Fedora Project is planning to introduce this change starting with Fedora 32.

After preliminary discussions with CPU vendors, they came to the conclusion that AVX2 should be the new baseline. AVX2 support was introduced in CPUs released between 2013 and 2015.

Along with AVX2, it makes sense to enable certain other CPU features which are not strictly implied by AVX2, such as CMPXCHG16B, FMA, and earlier vector extensions such as SSE 4.2.
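
To see whether a given machine would meet the proposed baseline, you can check the CPU flags the kernel reports. The following is a small sketch (not part of the Fedora proposal) that looks for the relevant flags in /proc/cpuinfo; the flag names follow the kernel's spelling (avx2, fma, cx16 for CMPXCHG16B, sse4_2), and the script only works on Linux.

    # check_baseline.py -- report whether this CPU advertises the flags
    # mentioned in the proposed x86_64 baseline (AVX2 and friends).
    REQUIRED = {"avx2", "fma", "cx16", "sse4_2"}   # cx16 == CMPXCHG16B

    flags = set()
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                flags.update(line.split(":", 1)[1].split())
                break   # all cores report the same flags; one line is enough

    missing = REQUIRED - flags
    if missing:
        print("CPU does NOT meet the proposed baseline; missing:", ", ".join(sorted(missing)))
    else:
        print("CPU meets the proposed AVX2-era baseline")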

Read more

Red Hat/IBM Leftovers

Filed under
Red Hat
  • Red Hat Innovation Awards 2020 Now Open for Nominations

    The Red Hat Innovation Awards have been held annually since 2007, and nominations for the 2020 awards are now open. The Red Hat Innovation Awards recognize organizations for the transformative projects and outstanding results they have achieved with Red Hat’s open source solutions.

    Open source has helped transform technology from the datacenter to the cloud and the Red Hat Innovation Awards showcase its transformative impact in organizations around the world. Users should nominate organizations that showcase successful IT implementation and projects that made a difference using open source.

  • IBM offers explainable AI toolkit, but it’s open to interpretation

    Decades before today's deep learning neural networks compiled imponderable layers of statistics into working machines, researchers were trying to figure out how one explains statistical findings to a human.

    IBM this week offered up the latest effort in that long quest to interpret, explain, and justify machine learning, a set of open-source programming resources it calls "AI 360 Explainability."

  • SD Times Open-Source Project of the Week: AI Explainability 360

    The toolkit offers IBM explainability algorithms, demos, tutorials, guides and other resources to explain machine learning outcomes. IBM explained there are many ways to go about understanding the decisions made by algorithms.

    “It is precisely to tackle this diversity of explanations that we’ve created AI Explainability 360 with algorithms for case-based reasoning, directly interpretable rules, post hoc local explanations, post hoc global explanations, and more,” Aleksandra Mojsilovic, IBM Fellow at IBM Research wrote in a post.

    The company believes this work can benefit doctors who are comparing various cases to see whether they are similar, or an applicant whose loan was denied and who wants to see the main reason for the rejection.
