More in Tux Machines

The 10 Best Linux Guitar Tools: The Guitarist’s Essential Toolkit

Linux guitar tools have been helping guitarists for a long time. I always say that Linux is a great environment for music composers. Yet some people argue otherwise: in their view, Linux is not that useful for multimedia because it lacks some popular paid tools. That is only partly true, since there are still plenty of free Linux tools available for acoustics and mixing. The electric guitar relies entirely on electronic devices and software, and there are some great tuner and amp tools for acoustic guitars as well. As a music enthusiast, I love tinkering with these audio-related programs. Read more

today's leftovers

  • Google Summer of Code 2020: [Final Report] Enhancing Syzkaller support for NetBSD

    This report was written by Ayushu Sharma as part of Google Summer of Code 2020. This post is a follow-up to the first and second reports, and it summarizes the work done during the third and final coding period of the Google Summer of Code (GSoC '20) project: Enhance Syzkaller support for NetBSD.

  • libsecret is accepting Outreachy interns as well – Daiki Ueno

    libsecret is a library that allows applications to store and retrieve user secrets (typically passwords). While it usually works as a client against a separate D-Bus service, it can also use a local file as its database. The project is about refactoring the file database so it can easily gain more advanced features such as hardware-based security. That might sound intimidating since it touches cryptography, but don't worry and reach out to us if you are interested. (A rough client-side sketch appears after this list.)

  • TSDgeos' blog: Akademy-es call for papers expanded to October 27

    This year Akademy-es is a bit special since it is happening on the Internet, so you don't need to travel to Spain to participate.

  • Twitter and Facebook: unfck the algorithms

    Our socially distant reality is pretty damn weird, let’s be honest. Social networks shouldn’t make it any weirder — or more dangerous. And yet they are making it more dangerous while promising to “bring the world closer together.” Extremists are finding each other in Facebook groups to plan insurrections and other not-very-good-for-civic-life things. Facebook has to do better. Over on Twitter, bots and organized mobs have all-too-easily hijacked trends to spread dangerous misinformation and hate speech. Like this and this. Twitter too has to do better.

  • Mozilla Mornings on addressing online harms through advertising transparency

    On 29 October, Mozilla will host the next installment of Mozilla Mornings – our regular breakfast series that brings together policy experts, policymakers and practitioners for insight and discussion on the latest EU digital policy developments. A key focus of the upcoming Digital Services Act and European Democracy Action Plan initiatives is platform transparency – transparency about content curation, commercial practices, and data use to name a few. This installment of Mozilla Mornings will focus on transparency of online advertising, and in particular, how mechanisms for greater transparency of ad placement and ad targeting could mitigate the spread and impact of illegal and harmful content online.

  • Come on, Amazon: If you're going to copy open-source code for a new product, at least credit the creator

    It broke no law in doing so – the software is published under the permissive Apache License v2 – and developers expect such open-source projects will be copied and forked. But Amazon's move didn't win any fans for failing to publicly acknowledge the code's creator.

    There is a mention buried in the NOTICE.txt file bundled with the CloudWatch extension that credits Headless Recorder, under its previous name "puppeteer-recorder," as required by the license. But there is an expectation among open-source developers that a biz as big as AWS should show more courtesy.
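
The libsecret item above describes applications storing and retrieving user secrets through a D-Bus service or a local file. Purely as a client-side illustration (not libsecret's own C API), here is a hedged TypeScript sketch using the keytar npm module, which on Linux is backed by libsecret and the Secret Service; the service name, account, and password are invented for the example.

    import * as keytar from "keytar"; // on Linux this module is backed by libsecret

    async function demo(): Promise<void> {
      // Store a secret under an invented service/account pair.
      await keytar.setPassword("example-app", "alice", "s3cret");

      // Look it up again later; resolves to null if nothing is stored.
      const password = await keytar.getPassword("example-app", "alice");
      console.log(password); // "s3cret"

      // Remove the entry once it is no longer needed.
      await keytar.deletePassword("example-app", "alice");
    }

    demo().catch(console.error);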

Software:
  • Rdiff-backup - A Local and Remote Backup Tool for Linux

    The Rdiff-backup tool is a simple yet powerful backup tool that can be used to back up data either locally or remotely. It's a cross-platform tool written in Python that works on Linux, macOS, and even FreeBSD. Rdiff-backup, much like rsync, is a reverse incremental backup tool: it transfers only the differences from the previous backup, so the most recent backup is always a complete, up-to-date copy. Additionally, you can easily restore a backup and access your files. In this guide, you will learn how to install Rdiff-backup, a local and remote backup tool for Linux. Rdiff-backup uses the SSH protocol to back up directories over the network, which provides a safe and secure transfer of data. The remote system ends up with a replica of the source directory, and subsequent backups are synced incrementally. Without further ado, let's dive in and see how the tool is used (a brief invocation sketch also follows this list).

  • GNU recutils - News: GNU recutils is back to active development [Savannah]

    During the last few years I somehow stopped adding new features to GNU recutils, limiting its development to the resolution of important bugs and releasing every year or two. The reason for this was that I considered recutils to be, mostly, "finished". However, recently some projects have adopted recutils as part of their infrastructure (guix, GNUnet) and it seems that Fred's and George's favorite tools are getting popular in the internets... and what is more, people are sending patches! o_O So I have decided to put GNU recutils back under active development, for the immense joy of adults and children (and turtles).
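
The Rdiff-backup item above is a how-to. As a minimal sketch of the kind of invocation it describes, wrapped in TypeScript for consistency with the other examples on this page, the snippet below runs a backup over SSH and then restores the latest version; the paths and host are invented, and the command strings follow rdiff-backup's classic source/target syntax.

    import { execFileSync } from "child_process";

    // Invented source directory and remote target, for illustration only.
    const source = "/home/alice/documents";
    const target = "alice@backup.example.com::/backups/documents";

    // Incremental backup over SSH: the remote side keeps a mirror plus reverse increments.
    execFileSync("rdiff-backup", [source, target], { stdio: "inherit" });

    // Restore the most recent version into a separate directory.
    execFileSync(
      "rdiff-backup",
      ["--restore-as-of", "now", target, "/home/alice/documents-restored"],
      { stdio: "inherit" }
    );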

IBM/Red Hat and Oracle

  • Get started with Node.js 14 on Red Hat OpenShift - Red Hat Developer

    In April, the Node.js development team released Node.js 14. This major version release, code-named Fermium, will become a long-term support (LTS) release in October 2020. Node.js 14 incorporates improvements and new features from the V8 8.1 JavaScript engine. I'll introduce two of them: optional chaining and the nullish coalescing operator (a brief sketch of both appears after this list). I will also show you how to deploy Node.js 14 on Red Hat OpenShift. See the end of the article for a list of resources for learning more about improvements and new features in Node.js 14.

  • Unbreakable Linux Network (ULN) IP address changing on October 30, 2020

    Unbreakable Linux Network (ULN) will be undergoing planned maintenance on October 30th, 2020, starting at 6pm Pacific time. The maintenance is scheduled to be completed by 10pm Pacific time on the same date. During this planned maintenance event, the content delivery component of the Unbreakable Linux Network will move to a new IP address.

  • IBM, ServiceNow Join Hands For New Integrated Solution
  • Join IBM and Red Hat at NodeConf – IBM Developer

    NodeConf remote is coming November 2-6. While the conference will be a bit different this year with everyone remote, it will continue to be a premier showcase and reunion of the Node community. IBM is excited to return as a sponsor and to work with Red Hat as our partner in order to provide updates through speaking sessions and workshops. In this blog post, you will find a detailed list of sessions and workshops where you can learn from and interact with Node.js developers and community leaders from Red Hat and IBM. We also look forward to talking to you at the Red Hat and IBM booths which are a great opportunity to catch up on what our Node.js team is up to as well as how Red Hat and IBM can help you succeed in your Node.js deployments. Make sure to join our community members and leaders through these talks and workshops.
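
The Node.js 14 item above mentions optional chaining and the nullish coalescing operator. Here is a small TypeScript sketch of the two; the operators are standard language features, but the Config shape and values are invented for the example.

    // Hypothetical configuration shape, invented for the example.
    interface Config {
      server?: {
        port?: number;
      };
    }

    const config: Config = { server: {} };

    // Optional chaining (?.): evaluates to undefined instead of throwing a
    // TypeError when an intermediate property is null or undefined.
    const port = config.server?.port; // undefined, no exception

    // Nullish coalescing (??): falls back only on null/undefined, unlike ||,
    // which would also discard valid falsy values such as 0.
    const effectivePort = port ?? 8080; // 8080

    console.log(effectivePort);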

Debian and Ubuntu Leftovers

  • Steve Kemp: Offsite-monitoring, from my desktop.

    I've been hosting my services with Hetzner (cloud) recently, and their service is generally pretty good. Unfortunately I've started to see an increasing number of false alarms. I'd have a server in Germany, with the monitoring machine in Helsinki (coincidentally where I live!). For the past month I've been getting pinged with a failure every three or four days on average: "service down - dns failed", or "service down - timeout". When the notice woke me up I'd go check, and it would be fine; it was a very transient failure.

    To be honest, the reason for this is that my monitoring is just too damn aggressive. I like to be alerted immediately in case something is wrong, which means if a single test fails I get an alert, rather than only after something more reasonable like three or more consecutive failures.

    I'm experimenting with monitoring in a less aggressive fashion, from my home desktop. Since my monitoring tool is a single self-contained golang binary, and it is already packaged as a docker-based container, deployment was trivial. I did a little work writing an agent to receive failure notices and ping me via telegram - instead of the previous approach where I had an online status page which I could view via my mobile, and alerts via pushover. (A rough sketch of such a gateway appears after this list.) So far it looks good. I've tweaked the monitoring to set up a timeout of 15 seconds instead of 5, and I've configured it to only alert me if there is an outage which lasts for >= 2 consecutive failures.

    I guess the TLDR is I now do offsite monitoring .. from my house, rather than from a different region. The only real reason to write this post was mostly to say that the process of writing a trivial "notify me" gateway to interface with telegram was nice and straightforward, and to remind myself that transient failures are way more common than we expect.

  • Video Decoding « etbe - Russell Coker

    I’ve had a saga of getting 4K monitors to work well. My latest issue has been video playback: the dreaded mplayer error about the system being too slow. My previous post about 4K was about using DisplayPort to get more than a 30Hz scan rate at 4K [1]. I now have a nice 60Hz scan rate which makes WW2 documentaries display nicely, among other things. But when running a 4K monitor on a 3.3GHz i5-2500 quad-core CPU I can’t get a FullHD video to display properly: part of the process of decoding the video and scaling it to 4K resolution is too slow, so action scenes in movies lag. When running a 2560*1440 monitor on a 2.4GHz E5-2440 hex-core CPU with the mplayer option “-lavdopts threads=3” everything is great (but it fails if mplayer is run with no parameters). In tests of apparent performance it seemed that the E5-2440 CPU gains more from the threaded mplayer code than the i5-2500; maybe the E5-2440 is more designed for server use (it’s in a Dell PowerEdge T320 while the i5-2500 is in a random white-box system) or maybe it’s just because it’s newer. I haven’t tested whether the i5-2500 system could perform adequately at 2560*1440 resolution.

  • Ubuntu Weekly Newsletter Issue 653

    Welcome to the Ubuntu Weekly Newsletter, Issue 653 for the week of October 11 – 17, 2020. The full version of this issue is available here.
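
The offsite-monitoring item above mentions a small agent that receives failure notices and relays them to Telegram. The author's tool is written in Go; purely as an illustration of the idea, here is a TypeScript sketch that listens for plain-text notices over HTTP and forwards them with the Telegram Bot API's sendMessage method. The bot token, chat id, and port are placeholders.

    import * as http from "http";
    import * as https from "https";

    // Placeholder credentials: a real bot token comes from @BotFather and the
    // chat id from the Telegram API; both values are invented here.
    const BOT_TOKEN = "123456:EXAMPLE-TOKEN";
    const CHAT_ID = "111111111";

    // Relay a text message via the Telegram Bot API sendMessage method.
    function sendTelegram(text: string): void {
      const body = JSON.stringify({ chat_id: CHAT_ID, text });
      const req = https.request(
        {
          hostname: "api.telegram.org",
          path: `/bot${BOT_TOKEN}/sendMessage`,
          method: "POST",
          headers: {
            "Content-Type": "application/json",
            "Content-Length": Buffer.byteLength(body),
          },
        },
        (res) => res.resume() // drain the response; we only care that it was sent
      );
      req.on("error", console.error);
      req.end(body);
    }

    // Tiny HTTP listener: the monitoring tool POSTs a plain-text failure notice here.
    http
      .createServer((req, res) => {
        let notice = "";
        req.on("data", (chunk) => (notice += chunk));
        req.on("end", () => {
          sendTelegram(`monitoring alert: ${notice || "unknown failure"}`);
          res.writeHead(204);
          res.end();
        });
      })
      .listen(8080); // invented port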