
Linux Journal


Virtual Machine Startup Shells Closes the Digital Divide One Cloud Computer at a Time

Thursday 14th of January 2021 05:00:00 PM
The startup turns devices you probably already own - from smartphones and tablets to smart TVs and game consoles - into full-fledged computers.

Shells (shells.com), a new entrant in the virtual machine and cloud computing space, has launched a product that gives users the freedom to code and create on nearly any device with an internet connection. Flexibility, ease of use, and competitive pricing are the focus for Shells, which makes it easy for a user to start up their own virtual cloud computer in minutes. The company also offers multiple Linux distros (and continues to add more) to ensure users can have the computer they “want” and are most comfortable with.

The US-based startup Shells turns idle screens, including smart TVs, tablets, older or low-spec laptops, gaming consoles, smartphones, and more, into fully-functioning cloud computers. The company utilizes real computers, with Intel processors and top-of-the-line components, to send processing power into your device of choice. When a user accesses their Shell, they are essentially seeing the screen of the computer being hosted in the cloud - rather than relying on the processing power of the device they’re physically using.

Shells was designed to run seamlessly on a number of devices that most users likely already own, as long as it can open an internet browser or run one of Shells’ dedicated applications for iOS or Android. Shells are always on and always up to date, ensuring speed and security while avoiding the need to constantly upgrade or buy new hardware.

Shells offers four tiers (Lite, Basic, Plus, and Pro) catering to casual users and professionals alike. Shells Pro targets the latter, offering a quad-core virtual CPU, 8GB of RAM, 160GB of storage, and unlimited access and bandwidth, which makes it a great option for software engineers, music producers, video editors, and other digital creatives.

Using your Shell for testing eliminates the worry associated with tasks or software that could potentially break the development environment on your main computer or laptop. Because Shells are running round the clock, users can compile on any device without overheating - and allow large compile jobs to complete in the background or overnight. Shells also enables snapshots, so a user can revert their system to a previous date or time. In the event of a major error, simply reinstall your operating system in seconds.

“What Dropbox did for cloud storage, Shells endeavors to accomplish for cloud computing at large,” says CEO Alex Lee. “Shells offers developers a one-stop shop for testing and deployment, on any device that can connect to the web. With the ability to use different operating systems, both Windows and Linux, developers can utilize their favorite IDE on the operating system they need. We also offer the added advantage of being able to utilize just about any device for that preferred IDE, giving devs a level of flexibility previously not available.”

“Shells is hyper focused on closing the digital divide as it relates to fair and equal access to computers - an issue that has been unfortunately exacerbated by the ongoing pandemic,” Lee continues. “We see Shells as more than just a cloud computing solution - it’s leveling the playing field for anyone interested in coding, regardless of whether they have a high-end computer at home or not.”

Follow Shells for more information on service availability, new features, and the future of “bring your own device” cloud computing:

Website: https://www.shells.com

Twitter: @shellsdotcom

Facebook: https://www.facebook.com/shellsdotcom

Instagram: https://www.instagram.com/shellscom

#virtual-machine #cloud-computing #Shells

An Introduction to Linux Gaming thanks to ProtonDB

Wednesday 6th of January 2021 05:00:00 PM
by Zachary Renz

Video Games On Linux?

In this article, the newest compatibility feature for gaming will be introduced and explained for all you dedicated video game fanatics. 

Valve releases its new compatibility feature to innovate Linux gaming, included with its own community of play testers and reviewers.

In recent years we have made leaps and strides in making Linux and Unix systems more accessible for everyone. Now we come to a commonly asked question: can we play games on Linux? Of course! Well - almost. Let me explain.

Proton compatibility layer for Steam client 

With the rising popularity of Linux systems, Valve is going ahead of the crowd yet again with Proton for its Steam client (the program that runs the games you purchase from Steam). Proton is a compatibility layer based on Wine and DXVK that lets Windows games run on Linux operating systems. Proton is backed by Valve itself and can easily be enabled on any Steam account for Linux gaming, through an integration called "Steam Play."

Lately, there has been a lot of controversy around rumors that Microsoft may someday release its own app store and restrict downloading software from elsewhere. In response, many companies and software developers feel pressured to find a new "haven" for sharing content on the internet. Proton may be Valve's response to this, and the company is working to make more of its games accessible to Linux users.

Activating Proton with Steam Play 

Proton is integrated into the Steam client through "Steam Play." To activate Proton, open your Steam client and click on Steam in the upper left corner. Then click on Settings to open a new window.

Steam Client's settings window

 

From here, click on the Steam Play button at the bottom of the panel, then check "Enable Steam Play for Supported Titles." Steam will ask you to restart; click yes, and you are ready to play after the restart.

Your computer will now play all of Steam's whitelisted games seamlessly. If you would like to try other games that are not guaranteed to work on Linux, also check "Enable Steam Play for All Other Titles."

What Happens if a Game has Issues?

Don't worry - this can and will happen for games that are not in Steam's whitelist. But there is help for you online, both on Steam and in Proton's growing community. Be patient and don't give up! There will always be a solution out there.
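When digging into a failing title, one aid worth knowing about is Proton's built-in logging, set through the game's launch options (right-click the game, then Properties, then Launch Options). This is a sketch based on Proton's documented environment variables:

```shell
# Tell Proton to write a debug log (steam-<appid>.log in your home
# directory) each time the game launches; %command% is Steam's
# placeholder for the game's own command line.
PROTON_LOG=1 %command%
```

The resulting log is typically what community members on ProtonDB and the Steam forums will ask for first.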

Go to Full Article

How To Use GUI LVM Tools

Wednesday 23rd of December 2020 05:00:00 PM
by Ares Lee

LVM (Logical Volume Manager) is a powerful storage management subsystem now included in all major Linux distributions. It provides users with a variety of valuable features to fit different requirements. The management tools that come with LVM are based on the command line interface, which is very powerful and well suited to automated/batch operations, but LVM's operations and configuration remain complex. So several software companies, including Red Hat, have launched GUI-based LVM tools to help users manage LVM more easily. Let’s review them here to see the similarities and differences between the individual tools.

system-config-lvm (alternate name LVM GUI)

Provider: Red Hat

The system-config-lvm is the first GUI LVM tool, originally released as part of Red Hat Linux; because it was the first, it is also simply called "LVM GUI." Red Hat later created standalone installation packages for it, including RPM and DEB packages, so system-config-lvm can be used on other Linux distributions as well.

The main panel of system-config-lvm

The system-config-lvm supports only LVM-related operations. Its user interface is divided into three parts. The left part is a tree view of disk devices and LVM devices (VGs). The middle part is the main view, which shows VG usage divided into LV and PV columns; it has zoom in/zoom out buttons to control the display ratio, though these are not enough for displaying complex LVM information. The right part displays details of the selected object (PV/LV/VG).

The different versions of system-config-lvm are not completely consistent in how they organize devices. Some show both LVM devices and non-LVM devices (disks); others show LVM devices only. I have tried two versions: one shows only the LVM devices existing in the system (PV/VG/LV), while the other can also display non-LVM disks and allows a PV to be removed from the disk view.

The version which shows non-lvm disks

Supported operations

PV Operations

  • Delete PV
  • Migrate PV

VG Operations

  • Create VG
  • Append PV to VG/Remove PV from VG
  • Delete VG (Delete last PV in VG)
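The PV and VG operations above correspond directly to LVM's standard command-line tools. As a rough, illustrative sketch only - the device and group names below are placeholders, and every command requires root privileges on real block devices:

```shell
# Illustrative only: run as root against real (empty!) partitions.
pvcreate /dev/sdb1              # initialize a partition as a PV
vgcreate vg_data /dev/sdb1      # create a VG on that PV
vgextend vg_data /dev/sdc1      # append another PV to the VG
pvmove /dev/sdb1                # migrate data off a PV
vgreduce vg_data /dev/sdb1      # remove the now-empty PV from the VG
vgremove vg_data                # delete the VG
```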

LV Operations

Go to Full Article

Boost Up Productivity in Bash - Tips and Tricks

Wednesday 16th of December 2020 09:45:00 PM
by Antonio Riso

Introduction

When you spend most of your day in a bash shell, it is not uncommon to waste time typing the same commands over and over again. This is pretty close to the definition of insanity.

Luckily, bash gives us several ways to avoid repetition and increase productivity.

Today, we will explore the tools we can leverage to optimize what I love to call “shell time”.

Aliases

Bash aliases are one of the methods to define custom or override default commands.

You can consider an alias as a “shortcut” to your desired command with options included.

Many popular Linux distributions come with a set of predefined aliases.

Let’s see the default aliases of Ubuntu 20.04. To do so, simply type “alias” and press [ENTER].

By simply issuing the command “l”, behind the scenes, bash will execute “ls -CF”.

It's as simple as that.

This is definitely nice, but what if we could specify our own aliases for the most used commands?! The answer is, of course we can!

One of the commands I use extremely often is “cd ..” to change the working directory to the parent folder. I have spent so much time hitting the same keys…

One day I decided it was enough and I set up an alias!

To create a new alias, type “alias ” followed by the alias name (in my case I have chosen “..”), then “=”, and finally the command you want an alias for, enclosed in single quotes.

Here is an example below.
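Putting that together, here is a minimal runnable sketch (the /tmp paths are just for illustration):

```shell
#!/bin/bash
# Aliases only expand in non-interactive shells when this option is on;
# in an interactive terminal you can skip this line.
shopt -s expand_aliases

# alias <name>='<command>': ".." becomes a shortcut for "cd .."
alias ..='cd ..'

mkdir -p /tmp/demo/sub && cd /tmp/demo/sub
..      # expands to: cd ..
pwd     # prints /tmp/demo
```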

Functions

Sometimes you will have the need to automate a complex command, perhaps accept arguments as input. Under these constraints, aliases will not be enough to accomplish your goal, but no worries. There is always a way out!

Functions give you the ability to create complex custom commands which can be called directly from the terminal like any other command.

For instance, there are two consecutive actions I do all the time: creating a folder and then cd-ing into it. To avoid the hassle of typing “mkdir newfolder” and then “cd newfolder”, I have created a bash function called “mkcd” which takes the name of the folder to be created as an argument, creates the folder, and cds into it.

To declare a new function, we type the function name “mkcd” followed by “()” and our complex command enclosed in curly brackets: “{ mkdir -vp "$@" && cd "$@"; }”
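Put together, a runnable sketch of that function looks like this (the folder name below is just an example):

```shell
#!/bin/bash
# mkcd: create the given folder (verbosely, with parents) and cd into it.
mkcd () { mkdir -vp "$@" && cd "$@"; }

# Usage:
mkcd /tmp/newproject   # creates /tmp/newproject and leaves you inside it
pwd                    # prints /tmp/newproject
```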

Go to Full Article

Case Study: Success of Pardus GNU/Linux Migration

Tuesday 8th of December 2020 05:00:00 PM
by Huseyin GUC

Eyüpsultan Municipality decided to use an open source operating system in desktop computers in 2015.

The most important goal of the project was to ensure information security and reduce foreign dependency.

Based on the research and analyses carried out, a detailed migration plan was prepared.

As a first step, the licensed office software installed on all computers was removed. LibreOffice was installed instead.

Later, LibreOffice training was given to the municipal staff.

Meanwhile, preparations were made for the operating system migration.

Instead of the existing licensed operating system, it was decided to use Pardus, a GNU/Linux distribution developed in Turkey.

Applications on the Pardus GNU/Linux operating system were examined in detail and unnecessary applications were removed.

A new ISO file was then created with the applications used in Eyüpsultan municipality.

This process automated the setup steps and reduced setup time.

While the project continued at full speed, the staff were again trained on LibreOffice and Pardus GNU/Linux.

After their training, the users took the exam.

The Pardus GNU/Linux operating system was installed on the computers of those who passed.

Those who failed were retrained and took the exam again.

As of 2016, the operating system migration had been completed on 25% of the computers.

Migration Project Implementation Steps

Analysis

A detailed inventory of all software and hardware products used in the institution was created. The analysis should go down to the level of departments, units, and individual personnel.

It should be evaluated whether extra costs will arise in the migration project.

Planning

A migration plan should be prepared and migration targets determined.

The duration of the migration should be calculated and the team that will carry out the migration should be determined.

Production

You can use an existing Linux distribution.

Or you can customize the distribution you will use according to your own preferences.

Making a customized ISO file will give you speed and flexibility.

It also helps you compensate for the loss of time caused by incorrect entries.

Test

Start testing the ISO file you have prepared in a lab environment consisting of the same hardware you use in production.

Note any problems encountered during and after installation, and look for solutions.

Go to Full Article

BPF For Observability: Getting Started Quickly

Wednesday 2nd of December 2020 05:00:00 PM
by Kevin Dankwardt

How and Why for BPF

BPF is a powerful component in the Linux kernel and the tools that make use of it are vastly varied and numerous. In this article we examine the general usefulness of BPF and guide you on a path towards taking advantage of BPF’s utility and power. One aspect of BPF, like many technologies, is that at first blush it can appear overwhelming. We seek to remove that feeling and to get you started.

What is BPF?

BPF is the name, and no longer an acronym: it was originally Berkeley Packet Filter, then eBPF for Extended BPF, and now just BPF. BPF is a kernel and user-space observability scheme for Linux.

In short, BPF is a verified-to-be-safe, fast-to-switch-to mechanism for running code in Linux kernel space in reaction to events such as function calls, function returns, and tracepoints in kernel or user space.

To use BPF, one runs a program that is translated into instructions that will be run in kernel space. Those instructions may be interpreted or translated to native instructions; for most users, the exact mechanism doesn’t matter.

While in the kernel, the BPF code can perform actions for events - for example, create stack traces, count the events, or collect counts into buckets for histograms.

Through this, BPF programs provide a fast, immensely powerful, and flexible means for deep observability of what is going on in the Linux kernel or in user space. Observability into user space from kernel space is possible, of course, because the kernel can control and observe code executing in user mode.

Running BPF programs amounts to having a user program make BPF system calls which are checked for appropriate privileges and verified to execute within limits. For example, in the Linux kernel version 5.4.44, the BPF system call checks for privilege with:

if (sysctl_unprivileged_bpf_disabled && !capable(CAP_SYS_ADMIN))
        return -EPERM;

The BPF system call checks a sysctl-controlled value and a capability. The sysctl variable can be set to one with the command

sysctl kernel.unprivileged_bpf_disabled=1

but to set it back to zero you must reboot, making sure your system is not configured to set it to one at boot time.
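Checking the current value does not require root, since the knob is exposed under /proc. A small sketch (the fallback covers kernels that do not expose this sysctl):

```shell
#!/bin/bash
# Print the current setting; fall back to "unknown" rather than erroring
# out on kernels where the sysctl is absent.
val=$(cat /proc/sys/kernel/unprivileged_bpf_disabled 2>/dev/null || echo unknown)
echo "unprivileged_bpf_disabled=${val}"
```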

Because BPF does its work in kernel space, significant time and overhead are saved by avoiding context switches and by not transferring large amounts of data back to user space.

Not all kernel functions can be traced. For example, if you were to try funccount-bpfcc '*_copy_to_user' you might get output like:

cannot attach kprobe, Invalid argument
Failed to attach BPF program b'trace_count_3' to kprobe b'_copy_to_user'

This is kind of mysterious. If you check the output from dmesg you would see something like:

Go to Full Article

A Linux Survey For Beginners

Friday 27th of November 2020 05:00:00 PM
by John Duchek

So you have decided to give the Linux operating system a try. You have heard it is a good stable operating system with lots of free software and you are ready to give it a shot. It is downloadable for free, so you get on the net and search for a copy, and you are in for a shock: there isn’t one “Linux”, there are many. Now you feel like a deer in the headlights. You want to make a wise choice, but have no idea where to start. Unfortunately, this is where a lot of new Linux users give up. It is just too confusing.

The many versions of Linux are often referred to as “flavors” or distributions. Imagine yourself in an ice cream shop displaying 30+ flavors. They all look delicious, but it’s hard to pick one and try it. You may find yourself confused by the many choices but you can be sure you will leave with something delicious. Picking a Linux flavor should be viewed in the same way.

As with ice cream lovers, Linux users have their favorites, so you will hear people profess which is the “best”. Of course, the best is the one that you conclude will fit your needs. That might not be the first one you try. According to linuxquestions.org there are currently 481 distributions, but you don’t need to consider every one. The same source lists these distributions as “popular”: Ubuntu, Fedora, Linux Mint, OpenSUSE, PCLinuxOS, Debian, Mageia, Slackware, CentOS, Puppy, Arch. Personally I have only tried about five of these, and I have been a Linux user for more than 20 years. Today, I mostly use Fedora.

Many of these also have derivatives that are made for special purpose uses. For example, Fedora lists special releases for Astronomy, Comp Neuro, Design Suite, Games, Jam, Python Classroom, Security Lab, Robotics Suite. All of these are still Fedora, but the installation includes a large quantity of programs for the specific purpose. Often a particular set of uses can spawn a whole new distribution with a new name. If you have a special interest, you can still install the general one (Workstation) and update later.

Very likely one of these systems will suit you. Even within these there are subtypes and “window treatments” to customize your operating system. Gnome, Xfce, LXDE, and so on are different window treatments available in all of the Linux flavors. Some try to look like MS Windows, some try to look like a Mac. Some try to be original, lightweight, or graphically awesome. But that is best left for another article. You are running Linux no matter which of those you choose. If you don’t like the one you choose, you can try another without losing anything. You also need to know that some of these distributions are related, which can help simplify your choice.

 

Go to Full Article

Terminal Vitality

Tuesday 24th of November 2020 05:00:00 PM
by George F Rice

Ever since Douglas Engelbart flipped over a trackball and discovered a mouse, our interactions with computers have shifted from linguistics to hieroglyphics. That is, instead of typing commands at a prompt in what we now call a Command Line Interface (CLI), we click little icons and drag them to other little icons to guide our machines to perform the tasks we desire. 

Apple led the way to commercialization of this concept we now call the Graphical User Interface (GUI), replacing its pioneering and mostly keyboard-driven Apple // microcomputer with the original GUI-only Macintosh. After quickly responding with an almost unusable Windows 1.0 release, Microsoft piled on in later versions with the Start menu and push button toolbars that together solidified mouse-driven operating systems as the default interface for the rest of us. Linux, along with its inspiration Unix, had long championed many users running many programs simultaneously through an insanely powerful CLI. It thus joined the GUI party late with its likewise insanely powerful yet famously insecure X-Windows framework and the many GUIs such as KDE and Gnome that it eventually supported.

GUI Linux

But for many years the primary role for X-Windows on Linux was gratifyingly appropriate given its name - to manage a swarm of xterm windows, each running a CLI. It's not that Linux is in any way incompatible with the Windows / Icon / Mouse / Pointer style of program interaction - the acronym this time being left as an exercise for the discerning reader. It's that we like to get things done. And in many fields where the progeny of Charles Babbage's original Analytic Engine are useful, directing the tasks we desire is often much faster through linguistics than by clicking and dragging icons.

 

A tiling window manager makes xterm overload more manageable

 

A GUI certainly made organizing many terminal sessions more visual on Linux, although not necessarily more practical. During one stint of my lengthy engineering career, I was building much software using dozens of computers across a network, and discovered the charms and challenges of managing them all through Gnu's screen tool. Not only could a single terminal or xterm contain many command line sessions from many computers across the network, but I could also disconnect from them all as they went about their work, drive home, and reconnect to see how the work was progressing. This was quite remarkable in the early 1990s, when Windows 2 and Mac OS 6 ruled the world. It's rather remarkable even today.

Bashing GUIs

Go to Full Article

Building A Dashcam With The Raspberry Pi Zero W

Thursday 19th of November 2020 06:51:29 PM
by Ramon Persaud

I've been playing around with the Raspberry Pi Zero W lately and having so much fun on the command line. For those uninitiated, it's a tiny ARM computer running Raspbian, a derivative of Debian. It has a 1 GHz processor that can be overclocked and 512 MB of RAM, in addition to wireless g and Bluetooth.

A few weeks ago I built a garage door opener with video and accessible via the net. I wanted to do something a bit different and settled on a dashcam for my brother-in-law's SUV.

I wanted the camera and Pi Zero W mounted on the dashboard and easy to remove. On boot it should autostart the RamDashCam (RDC), and there should also be 4 desktop scripts: dashcam.sh, startdashcam.sh, stopdashcam.sh, and shutdown.sh. I also created a folder named video on the Desktop for the older video files. I needed a way to power the RDC when there is no power to the vehicle's USB ports. Lastly, I wanted its data accessible on the local LAN when the vehicle is at home.

Here is the parts list:

  1. Raspberry Pi Zero W kit (I got mine from Vilros.com)
  2. Raspberry Pi official camera
  3. Micro SD card, at least 32 gigs
  4. A 3D printed case from thingiverse.com
  5. Portable charger, usually used to charge cell phones and tablets on the go
  6. Command strips (like double-sided tape that's easy to remove) or velcro strips

 

First I flashed the SD card with Raspbian, powered it up and followed the setup menu. I also set a static IP address.

Now to the fun stuff. Let's create a service so we can start and stop RDC via systemd. Using your favorite editor, create "/etc/systemd/system/dashcam.service" and add the following:

[Unit]
Description=dashcam service
After=network.target
StartLimitIntervalSec=0

[Service]
Type=forking
Restart=on-failure
RestartSec=1
User=pi
WorkingDirectory=/home/pi/Desktop
ExecStart=/bin/bash /home/pi/Desktop/startdashcam.sh

[Install]
WantedBy=multi-user.target

 

Now that that's complete, let's enable the service by running the following: sudo systemctl enable dashcam

I added these scripts to start and stop RDC on the Desktop so my brother-in-law doesn't have to mess around in the menus or command line. Remember to run "chmod +x" on these 4 scripts.

 

startdashcam.sh

#!/bin/bash

# remove files older than 3 days
find /home/pi/Desktop/video -type f -iname '*.flv' -mtime +3 -exec rm {} \;

# start dashcam service
sudo systemctl start dashcam

 

stopdashcam.sh

Go to Full Article

SeaGL - Seattle GNU/Linux Conference Happening This Weekend!

Tuesday 10th of November 2020 09:56:38 PM
by Webmaster

This Friday, November 13th and Saturday, November 14th, from 9am to 4pm PST the 8th annual SeaGL will be held virtually. This year features four keynotes, and a mix of talks on FOSS tech, community and history. SeaGL is absolutely free to attend and is being run with free software!

Additionally, we are hosting a pre-event career expo on Thursday, November 12th from 1pm to 5pm. Counselors will be available for 30 minute video sessions to provide resume reviews and career guidance.

Mission

The Seattle GNU/Linux conference (SeaGL) is a free, as in freedom and tea, grassroots technical summit dedicated to spreading awareness and knowledge about free/libre/open source software, hardware, and culture.

SeaGL strives to be welcoming, enjoyable, and informative for professional technologists, newcomers, enthusiasts, and all other users of free software, regardless of their background knowledge; providing a space to bridge these experiences and strengthen the free software movement through mentorship, collaboration, and community.

Dates/Times
  • November 13th and 14th
  • Friday and Saturday
  • Main Event: 9am-4:30pm
  • TeaGL: 1-2:45pm, both days
  • Friday Social: 4:30-6pm
  • Saturday Party: 6-10pm
  • Pre-event Career Expo: 1-5pm, Thursday November 12th
  • All times in Pacific Timezone
Hashtags

- `#SeaGL2020`

- `#TeaGLtoasts`

Social Media Reference Links

Best contact: press@seagl.org

Go to Full Article

Hot Swappable Filesystems, as Smooth as Btrfs

Thursday 5th of November 2020 04:40:40 PM
by Tedley Meralus

Filesystems, like file cabinets or drawers, control how your operating system stores data. They also hold metadata like filetypes, what is attached to data, and who has access to that data.

Quite honestly, not enough people consider which file system to use for their computers.

Windows and macOS users have no valid reason to look into filesystems because they have one that’s been widely used since its inception. For Windows that’s NTFS and macOS that’s HFS+. For Linux users, there are plenty of different file system options to choose from. The current default in the Linux field is known as the Fourth Extended Filesystem or ext4.

Currently there is discussion about changes in the Linux filesystem space. Much like the switch to systemd as the default init system a few years ago, there has been a push to change the default Linux filesystem to Btrfs. No, I'm not using slang or trying to insult you: Btrfs stands for the B-tree filesystem. Many Linux users and sysadmins were not too happy with its initial changes. That could be because people are generally hesitant to change, or because the change may have been too abrupt. A friend once said, "I've learned that fear limits you and your vision. It serves as blinders to what may be just a few steps down the road for you." In this article I want to help ease the understanding of Btrfs and make the transition as smooth as butter. Let’s go over a few things first.

What do Filesystems do?

Just to be clear, we can summarize what filesystems do and what they are used for. As mentioned before, filesystems control how data is stored after a program is no longer using it, how that data is accessed, where it is located, and what is attached to it. As a sysadmin, one of your many tasks and responsibilities is to maintain backups and manage filesystems. Partitioning helps separate different areas in business environments and is common practice for data retention. An example would be taking a 3TB hard disk and partitioning 1TB for your production environment, 1TB for your development environment, and 1TB for company documents and files. When accidents happen to a specific partition, only the data stored in that partition is affected, instead of the entire 3TB drive. A fun example would be a user testing a script in a development application that begins filling up disk space in the dev partition. Accidentally filling up a filesystem - whether from an application, a user's script, or anything else on the system - could cause an entire system to stop functioning. If data is split across separate partitions, only the data in the affected partition will fill up, so the production and company data partitions are safe.

Go to Full Article

How to Try Linux Without a Classical Installation

Monday 2nd of November 2020 01:39:28 PM
by Antonio Riso

For many different reasons, you may not be able to install Linux on your computer.

Maybe you are not familiar with words like partitioning and bootloader, maybe you share the PC with your family, maybe you don’t feel comfortable to wipe out your hard drive and start over, or maybe you just want to see how it looks before proceeding with a full installation.

I know, it feels frustrating, but no worries, we have got you covered!

In this article, we will explore several ways to try Linux out without the hassle of a classical installation.

Choosing a distribution

In the Linux world, there are several distributions, which are quite different from one another.

Some are general purpose operating systems, some others are created with a specific use case in mind. That being said, I know how confusing this can be for a beginner.

If you are taking your first steps with Linux and are still not sure how and why to pick one distribution over another, there are several resources available online to help you.

A perfect example of these resources is the website https://distrochooser.de/ which will walk you through a questionnaire to understand your needs and advise you on which distribution could be a good fit for your use case.

Once you have chosen your distribution, there is a good chance it will have a live CD image available for testing before installation. If so, below you can find several ways to “boot” your live CD ISO image.

MobaLiveCD

MobaLiveCD is an amazing open source application which lets you run a live Linux on Windows with nearly zero effort.

Download the application from the official site download page available here and run it.

It will present a screen where you can choose either a Linux Live CD ISO file or a bootable USB drive.

Click on Run the LiveCD, select your ISO file, select no when asked if you want to create a hard disk.

Your Linux virtual machine will boot up “automagically”.
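Under the hood, MobaLiveCD is a frontend to the QEMU emulator, so on a machine that already has QEMU installed you can get the same result directly from a terminal. A minimal sketch (the ISO path below is a placeholder):

```shell
# Boot the ISO in a throwaway VM with 2 GB of RAM and no hard disk,
# so nothing on your real drives is touched. Add -enable-kvm on Linux
# hosts with KVM support for near-native speed.
qemu-system-x86_64 -m 2048 -cdrom ~/Downloads/distro.iso -boot d
```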

Go to Full Article

How to Create EC2 Duplicate Instance with Ansible

Wednesday 28th of October 2020 07:32:42 PM
by Tomasz Szandała

Many companies like mine use AWS infrastructure as a service (IaaS) heavily. Sometimes we want to perform a potentially risky operation on an EC2 instance, and as long as we do not work with immutable infrastructure, it is imperative to be prepared for an instant revert.

One of the solutions is to use a script that performs instance duplication, but in modern environments, where unification is essential, it is wiser to use commonly known software instead of making up a custom script.

Here comes the Ansible!

Ansible is a simple automation tool. It handles configuration management, application deployment, cloud provisioning, ad-hoc task execution, network automation, and multi-node orchestration. It is marketed as a tool for making complex changes like zero-downtime rolling patching; therefore, we have used it for this straightforward snapshotting task.

Requirements

For this example we only need Ansible itself; in my case it was version 2.9. Subsequent releases introduce a major change (collections), so let's stick with this version for simplicity.

Because we are working with AWS, we require a minimal set of permissions, which includes permissions to:

  • Create snapshots
  • Register images (AMIs)
  • Start and stop EC2 instances
Environment preparation

Since I am forced to work on Windows, I have used a Vagrant instance. The Vagrantfile content is shown below.

It launches a virtual machine with CentOS 7 and Ansible installed.

For security reasons, Ansible by default refuses to read its configuration from a mounted location, so we have to explicitly point it at /vagrant/ansible.cfg.

Listing 1. Vagrantfile for our research

Vagrant.configure("2") do |config|
  config.vm.box = "geerlingguy/centos7"
  config.vm.hostname = "awx"
  config.vm.provider "virtualbox" do |vb|
    vb.name = "AWX"
    vb.memory = "2048"
    vb.cpus = 3
  end
  config.vm.provision "shell", inline: "yum install -y git python3-pip"
  config.vm.provision "shell", inline: "pip3 install ansible==2.9.10"
  config.vm.provision "shell", inline: "echo 'export ANSIBLE_CONFIG=/vagrant/ansible.cfg' >> /home/vagrant/.bashrc"
end

First tasks

In the first lines of the playbook we specify a few meta values. Some of them, like name, hosts, and tasks, are mandatory; others provide auxiliary functions.

Listing 2. duplicate_ec2.yml playbook first lines

---
- name: yolo
  hosts: localhost
  connection: local
  gather_facts: false
  become: false
  vars:
    instance_id: i-deadbeef007

  tasks:

Go to Full Article

TCP Analysis with Wireshark

Tuesday 27th of October 2020 06:50:16 PM
by Jeffrey Stewart

Transmission Control is an essential aspect of network activity and governs the behavior of many services we take for granted. When sending email or just browsing the web, you are relying on TCP to send and receive your packets reliably. Thanks to two DARPA scientists, Vinton Cerf and Bob Kahn, who developed TCP/IP in the 1970s, we have a specific set of rules that define how we communicate over a network. When Vinton and Bob first conceptualized TCP/IP, they set up a basic network topology and a device that could interface between two other hosts.

In Figure 1 we have two networks connected by a single gateway. The gateway plays an essential role in the development of any network and bears the responsibility of routing data properly between these two networks.

Since the gateway must understand the addresses of each host on the network, it is necessary to have a standard format in every packet that arrives. Vince and Bob called this the internetwork header prefixed to the packet by the source host.

The source and destination entries, along with the IP address, uniquely identify every host on the network so that the gateway can accurately forward packets.

The sequence number and byte count identifies each packet sent from the source, and accounts for all of the text within the segment. The receiver can use this to determine if it has already seen the packet and discard if necessary.

The checksum is used to validate each packet being sent, to ensure error-free transmission. It is computed over a pseudo header together with the data of the original TCP header, such as the source/destination entries, header length, and byte count.

Go to Full Article

How to Add a Simple Progress Bar in Shell Script

Monday 26th of October 2020 06:40:25 PM
by Nawaz Abbasi

At times, we need to write shell scripts that are interactive, and the user executing them needs to monitor the progress. For such requirements, we can implement a simple progress bar that gives an idea of how much of its task the script has completed.

To implement it, we only need to use the “echo” command with the following options and a backslash-escaped character.

-n : do not append a newline
-e : enable interpretation of backslash escapes
\r : carriage return (go back to the beginning of the line without printing a newline)

For the sake of understanding, we will use the “sleep 2” command to represent an ongoing task or a step in our shell script. In a real scenario, this could be anything like downloading files, creating a backup, or validating user input. Also, for this example we assume only four steps in the script below, which is why we use 20, 40, 60, 80 (%) as progress indicators. This can be adjusted according to the number of steps in a script. For instance, a script with three steps can use 33, 66, 99 (%), and a script with ten steps can use 10-90 (%) as progress indicators.

The implementation looks like the following:

echo -ne '>>> [20%]\r'
# some task
sleep 2
echo -ne '>>>>>>> [40%]\r'
# some task
sleep 2
echo -ne '>>>>>>>>>>>>>> [60%]\r'
# some task
sleep 2
echo -ne '>>>>>>>>>>>>>>>>>>>>>>> [80%]\r'
# some task
sleep 2
echo -ne '>>>>>>>>>>>>>>>>>>>>>>>>>>[100%]\r'
echo -ne '\n'

In effect, every time the “echo” command executes, it replaces the output of the previous “echo” command in the terminal thus representing a simple progress bar. The last “echo” command simply enters a newline (\n) in the terminal to resume the prompt for the user.
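The four hard-coded echo lines can also be generalized: compute the percentage from a step counter and redraw the bar in a loop. Here is a minimal sketch under that idea; the progress helper name and the 25-column bar width are my own choices, not from the article:

```shell
#!/bin/sh
# progress STEP TOTAL: redraw a proportional bar on the current line.
progress() {
    pct=$(( $1 * 100 / $2 ))
    # one '>' per 4% keeps the bar within a 25-column field
    bars=$(printf '>%.0s' $(seq 1 $(( pct / 4 )) ))
    printf '%-25s [%3d%%]\r' "$bars" "$pct"
}

steps=4
for i in $(seq 1 "$steps"); do
    sleep 1              # stands in for a real task (download, backup, ...)
    progress "$i" "$steps"
done
printf '\n'              # resume the prompt on a fresh line
```

Changing the number of steps then requires editing only one variable instead of rewriting every echo line.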

The execution looks like the following:

Go to Full Article

Ubuntu 20.10 “Groovy Gorilla” Arrives With Linux 5.8, GNOME 3.38, Raspberry Pi 4 Support

Thursday 22nd of October 2020 06:21:51 PM

Just two days ago, Ubuntu marked the 16th anniversary of its first-ever release, Ubuntu 4.10 “Warty Warthog,” which showed Linux could be a more user-friendly operating system.

Back to the present: after a six-month development cycle following the release of the current long-term Ubuntu 20.04 “Focal Fossa,” Canonical has announced a new version called Ubuntu 20.10 “Groovy Gorilla” along with its seven official flavors: Kubuntu, Lubuntu, Ubuntu MATE, Ubuntu Kylin, Xubuntu, Ubuntu Budgie, and Ubuntu Studio.

Ubuntu 20.10 is a short-term, or non-LTS, release, which means it will be supported for 9 months, until July 2021. Though v20.10 may not seem like a major release, it does come with a lot of exciting new features. So, let’s see what Ubuntu 20.10 “Groovy Gorilla” has to offer:

New Features in Ubuntu 20.10 “Groovy Gorilla”

Ubuntu desktop for Raspberry Pi 4

Starting with one of the most important enhancements, Ubuntu 20.10 has become the first Ubuntu release to feature desktop images for the Raspberry Pi 4. Yes, you can now download and run Ubuntu 20.10 desktop on your Raspberry Pi models with at least 4GB of RAM.

Both Server and Desktop images also support the new Raspberry Pi Compute Module 4. The 20.10 images may still boot on earlier models, but the new Desktop images are built only for the arm64 architecture and officially support only the Pi 4 variants with 4GB or 8GB of RAM.

Linux Kernel 5.8

Replacing the previous Linux kernel 5.4, the latest Ubuntu 20.10 ships the new Linux kernel 5.8, dubbed “the biggest release of all time” by Linus Torvalds, as it contains a record number of commits (over 17,595).

So it’s obvious that Linux 5.8 brings numerous updates, new features, and hardware support. For instance: the kernel event notification mechanism, Thunderbolt support for Intel Tiger Lake and non-x86 systems, extended IPv6 Multi-Protocol Label Switching (MPLS) support, inline encryption hardware support, and initial support for booting POWER10 processors.

GNOME 3.38 Desktop Environment

Another key change that Ubuntu 20.10 includes is the latest version of GNOME desktop environment, which enhances the visual appearance, performance, and user experience of Ubuntu.

One of my favorite features that GNOME 3.38 introduces is a much-needed separate “Restart” button in the System menu.

Among other enhancements, GNOME 3.38 also includes:

  • Better multi-monitor support
  • Revamped GNOME Screenshot app
  • Customizable App Grid with no “Frequent Apps” tab
  • Battery percentage indicator
  • New Welcome Tour app written in Rust
  • Core GNOME apps improvements
Share Wi-Fi hotspot Via QR Code

If you’re the person who wants to share the system’s Internet with other devices wirelessly, this feature of sharing Wi-Fi hotspot through QR code will definitely please you.

Thanks to GNOME 3.38, you can now turn your Linux system into a portable Wi-Fi hotspot by sharing QR code with the devices like laptops, tablets, and mobiles.

Add events in GNOME Calendar app

Tend to forget events? The pre-installed GNOME Calendar app now lets you add new events (birthdays, meetings, reminders, releases), which are displayed in the message tray. Instead of adding new events manually, you can also sync your events from Google, Microsoft, or Nextcloud calendars after adding online accounts in the settings.

Active Directory Support

In the Ubiquity installer, Ubuntu 20.10 has also added an optional feature to enable Active Directory (AD) integration. If you check the option, you’ll be directed to configure the AD by giving information about the domain, administrator, and password.

Tools and Software upgrade

Ubuntu 20.10 also features the updated tools, software, and subsystems to their new versions. This includes:

  • glibc 2.32, GCC 10, LLVM 11
  • OpenJDK 11
  • rustc 1.41
  • Python 3.8.6, Ruby 2.7.0, PHP 7.4.9
  • perl 5.30
  • golang 1.13
  • Firefox 81
  • LibreOffice 7.0.2
  • Thunderbird 78.3.2
  • BlueZ 5.55
  • NetworkManager 1.26.2
Other enhancements to Ubuntu 20.10:
  • Nftables replaces iptables as default backend for the firewall
  • Better support for fingerprint login
  • Cloud images with KVM kernels boot without an initramfs by default
  • Snap pre-seeding optimizations for boot time improvements

The full release notes for Ubuntu 20.10 are also available to read.

How To Download Or Upgrade To Ubuntu 20.10

If you’re looking for a fresh installation of Ubuntu 20.10, download the ISO image available for several platforms such as Desktop, Server, Cloud, and IoT.

But if you’re already using a previous version of Ubuntu, you can also easily upgrade your system to Ubuntu 20.10. To upgrade, you must be running Ubuntu 20.04 LTS, as you cannot reach 20.10 directly from 19.10, 19.04, 18.10, 18.04, 17.04, or 16.04. You should first hop to v20.04 and then to the latest v20.10.

As Ubuntu 20.10 is a non-LTS version and, by design, Ubuntu only notifies you of new LTS releases, you need to upgrade manually, either with a GUI method using the built-in Software Updater tool or with a command-line method using the terminal.

For the command-line method, open a terminal and run the following commands:

sudo apt update && sudo apt upgrade

sudo do-release-upgrade -d -m desktop

Or else, if you’re not a terminal-centric person, here’s an official upgrade guide using a GUI Software Updater.

Enjoy Groovy Gorilla!


Btrfs on CentOS: Living with Loopback

Tuesday 20th of October 2020 03:24:25 PM
by Charles Fisher

Introduction

The btrfs filesystem has taunted the Linux community for years, offering a stunning array of features and capability, but never earning universal acclaim. Btrfs is perhaps more deserving of patience, as its promised capabilities dwarf all peers, earning it vocal proponents with great influence. Still, none can deny that btrfs is unfinished, many features are very new, and stability concerns remain for common functions.

Most of the intended goals of btrfs have been met. However, Red Hat famously cut continued btrfs support from their 7.4 release, and has allowed the code to stagnate in their backported kernel since that time. The Fedora project announced their intention to adopt btrfs as the default filesystem for variants of their distribution, in a seeming juxtaposition. SUSE has maintained btrfs support for their own distribution and the greater community for many years.

For users, the most desirable features of btrfs are transparent compression and snapshots; these features are stable, and relatively easy to add as a veneer to stock CentOS (and its peers). Administrators are further compelled by adjustable checksums, scrubs, and the ability to enlarge as well as (surprisingly) shrink filesystem images, while some advanced btrfs topics (e.g., deduplication, RAID, ext4 conversion) aren't really germane for minimal loopback usage. The systemd init package also has dependencies upon btrfs, among them machinectl and systemd-nspawn. Despite these features, there are many usage patterns that are not directly appropriate for use with btrfs. It is hostile to most databases and many other programs with incompatible I/O, and should be approached with some care.

Go to Full Article

How to Secure Your Website with OpenSSL and SSL Certificates

Friday 16th of October 2020 04:10:47 PM
by Tedley Meralus

The Internet has become the number one resource for news, information, events, and all things social. As most people know, there are many ways to create a website of your own and capture your own piece of the internet to share your stories, ideas, or things you like with others. When doing so, it is important to make sure you stay protected on the internet the same way you would in the real world. There are many steps to take in the real world to stay safe; in this article, however, we will be talking about staying secure on the web with an SSL certificate.

OpenSSL is a command-line tool we can use as a kind of "bodyguard" for our webservers and applications. It can be used for a variety of tasks related to HTTPS, such as generating private keys and CSRs (certificate signing requests). This article will break down what OpenSSL is, what it does, and give examples of how to use it to keep your website secure. Most online web/domain platforms provide SSL certificates for a fixed yearly price. This method, although it takes a bit of technical knowledge, can save you some money and keep you secure on the web.

* For example purposes we will use testmastersite.com for commands and examples

How this guide may help you:

  • Using OpenSSL to generate and configure CSRs
  • Understanding SSL certificates and their importance
  • Learn about certificate signing requests (CSRs)
  • Learn how to create your own CSR and private key
  • Learn about OpenSSL and its common use cases

Requirements

OpenSSL

The first thing to do is generate a 2048-bit RSA key pair on your machine; the pair I'm referring to consists of your private and public keys. You can use any of a number of tools to do so, but for this example we will be working with OpenSSL.
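Assuming OpenSSL is installed, generating the key and a CSR for the article's example domain might look like the following sketch (the file names and subject field values are illustrative, not prescribed):

```shell
# Generate a 2048-bit RSA private key.
openssl genrsa -out testmastersite.key 2048

# Create a certificate signing request (CSR) from that key; -subj fills
# in the subject prompts non-interactively (values here are made up).
openssl req -new -key testmastersite.key -out testmastersite.csr \
    -subj "/C=US/ST=New York/L=New York/O=Test Master Site/CN=testmastersite.com"

# Inspect and verify the CSR before sending it to a certificate authority.
openssl req -in testmastersite.csr -noout -subject -verify
```

The resulting .csr file is what you submit to a certificate authority; the .key file must stay private on your server.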

What are SSL certificates and who cares?

According to GlobalSign.com, an SSL certificate is a small data file that digitally binds a cryptographic key to an organization's details. When installed on a webserver, it activates the padlock and the https protocol and allows secure connections from a web server to a browser. Let me break that down for you. An SSL certificate is like a bodyguard for your website. To confirm that a site is using SSL, you can typically check that the site's URL begins with https rather than http; the "s" stands for Secure.

  • Example SECURE Site: https://www.testmastersite.com/

Go to Full Article

Pretty Good Privacy (PGP) and Digital Signatures

Wednesday 14th of October 2020 03:29:57 PM
by Ankur Kothiwal

If you have ever sent plaintext confidential emails to someone (most likely you have), have you ever asked yourself whether the mail could be tampered with or read by anyone in transit? If not, you should!

Any unencrypted email is like a postcard. It can be seen by anyone (crackers/security hackers, corporations, governments, or anyone with the required skills), during its transit.

In 1991 Phil Zimmermann, a free-speech activist and anti-nuclear pacifist, developed Pretty Good Privacy (PGP), the first software available to the general public that utilized RSA (a public-key cryptosystem, discussed later) for email encryption and signing. Zimmermann, after having had a friend post the program on the worldwide Usenet, was prosecuted by the U.S. government; later he was charged by the FBI with illegal weapon export, because encryption tools were considered as such (all charges were eventually dropped). Zimmermann later founded PGP Inc., which is now part of Symantec Corporation.

In 1997 PGP Inc. submitted a standardization proposal to the Internet Engineering Task Force. The standard was called OpenPGP and was defined in 1998 in the IETF document RFC 2440. The latest version of the OpenPGP standard is described in RFC 4880, published in 2007.

Nowadays there are many OpenPGP-compliant products: the most widespread is probably GnuPG (GNU Privacy Guard, or GPG for short) which has been developed since 1999 by Werner Koch. GnuPG is free, open-source, and available for several platforms. It is a command-line only tool.

PGP is used for digital signatures, encryption (and decryption, obviously; nobody would use software that only encrypts!), compression, and Radix-64 conversion.

In this article, we will explain encryption and digital signatures.

So what is encryption, how does it work, and how does it benefit us?

Encryption (Confidentiality)

Encryption is the process of converting information into ciphertext, an unreadable form. A very simple example of encrypting text:

Hello this is Knownymous and this is a ciphertext.

Uryyb guvf vf Xabjalzbhf naq guvf vf n pvcuregrkg.

If you read it carefully, you will notice that every letter of the English alphabet is converted to the letter 13 positions ahead of it in the alphabet, wrapping around at the end, so 13 is the key needed to decrypt it. This is known as a Caesar cipher (yes, the method is named after Julius Caesar).
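A 13-letter rotation like this is easy to reproduce with standard tools; for example, tr can apply the shift, and because 13 + 13 = 26, applying the same command twice restores the original text:

```shell
# tr maps each letter to the one 13 positions ahead, wrapping around
# (A-M becomes N-Z and N-Z becomes A-M, for both cases).
echo "Hello this is Knownymous and this is a ciphertext." \
    | tr 'A-Za-z' 'N-ZA-Mn-za-m'
# → Uryyb guvf vf Xabjalzbhf naq guvf vf n pvcuregrkg.
```

Piping the ciphertext back through the same tr invocation decrypts it, which is exactly why a fixed-shift cipher offers no real security.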

Since then, many encryption techniques (cryptography) have been developed, such as the Diffie-Hellman key exchange (DH) and RSA.

The techniques can be used in two ways:

Go to Full Article

Mark Text vs. Typora: Best Markdown Editor For Linux?

Tuesday 13th of October 2020 05:22:34 PM
by Sarvottam Kumar

Markdown is a widely used markup language, which is now not only used for creating documentation or notes but also for creating static websites (using Hugo or Jekyll). It is supported by major sites like GitHub, Bitbucket, GitLab, Stack Exchange, and Reddit.

Markdown follows a simple, easy-to-read and easy-to-write plain-text formatting syntax. By just using non-alphabetic characters like the asterisk (*), hash (#), backtick (`), or dash (-), you can format text as bold, italics, lists, headings, tables, and so on.
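For instance, a small, generic sample of that syntax (not specific to any editor) looks like this in plain text, and a Markdown editor renders it as formatted output:

```markdown
# A heading

**bold**, *italics*, and `inline code`

- a list item
- another item

| Editor    | Live preview |
|-----------|--------------|
| Mark Text | yes          |
| Typora    | yes          |
```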

Now, to write in Markdown, you can choose any of the Markdown applications available for Windows, macOS, and Linux desktops. You can even use web-based in-browser Markdown editors like StackEdit. But if you’re specifically looking for the best Markdown editor for the Linux desktop, I present two Markdown editors: Mark Text and Typora.

I’ve also tried other popular Markdown apps available for Linux platforms such as Joplin, Remarkable, ReText, and Mark My Words. But the reason I chose Mark Text and Typora is their seamless live preview and distraction-free user interface. Unlike other Markdown editors, these two do not have a dual-panel (writing and preview window) interface, which is what makes them stand out.

Before I start discussing the extensive dissimilarities between Typora and Mark Text, let me briefly tell you the common features that both of them offer.

Similarities Between Mark Text And Typora
  • Real time preview
  • Export to HTML and PDF
  • GitHub Flavored Markdown
  • Inline styles
  • Code and Math Blocks
  • Support for Flowchart, Sequence diagram
  • Light and Dark Themes
  • Source Code, Typewriter, and Focus mode
  • Auto save
  • Paste images directly from clipboard
  • Available for Linux, macOS, and Windows
Differences Between Mark Text And Typora

Installation

If you’re a beginner using a non-Debian Linux distribution, you may find it difficult to install Typora. This is because Typora is packaged and tested only on Ubuntu; hence, you can install it easily on Debian-based distros like Ubuntu and Linux Mint using commands or Debian packages, but not on other distros like Arch or Void, where you have to work from binary packages and no official command is available.

Go to Full Article

More in Tux Machines

Proprietary Software and Digital Restrictions (DRM)

  • GitHub still won’t explain if it fired someone for saying ‘Nazi,’ and employees are pissed

    The current conflict began the day of the riots in Washington, DC when a Jewish employee told co-workers: “stay safe homies, nazis are about.” Some colleagues took offense to the language, although neo-Nazi organizations were, in fact, present at the riots. One engineer responded: “This is untasteful conduct for workplace [in my opinion], people have the right to protest period.”

  • Amazon Web Services opens first office in Greece

    It said services covered areas from big data analytics and mobile, web and social media applications to enterprise business applications and the internet of things.

  • Critical Microsoft Defender Bug Actively Exploited; Patch Tuesday Offers 83 Fixes

    Researchers believe the vulnerability, tracked as CVE-2021-1647, has been exploited for the past three months and was leveraged by hackers as part of the massive SolarWinds attack. Last month, Microsoft said state-sponsored hackers had compromised its internal network and leveraged additional Microsoft products to conduct further attacks.

    Affected versions of Microsoft Malware Protection Engine range from 1.1.17600.5 to 1.1.17700.4 running on Windows 10, Windows 7 and 2004 Windows Server, according to the security bulletin.

  • Making Clouds Rain :: Remote Code Execution in Microsoft Office 365

    TL;DR; This post is a story on how I found and exploited CVE-2020-16875, a remote code execution vulnerability in Exchange Online and bypassed two different patches for the vulnerability. Exchange Online is part of the Office 365 suite that impacted multiple cloud servers operated by Microsoft that could have resulted in the access to millions of corporate email accounts.

  • Dropbox lays off 11% of its workforce as COO departs

    Dropbox in November provided revenue guidance of $497 million to $499 million for the fourth quarter. The company said at the time that it’s aiming to achieve margins of 28% to 30% in the long term.

  • Technical Error 'Saw 150,000 U.K. Police Records Wiped' From Databases

    Police have been asked to assess if there is a threat to public safety after it was revealed that thousands of police records were deleted in error, including data on fingerprints, DNA, and arrest histories.

    The error, first reported in the Times, saw 150,000 files lost, with fears it could mean offenders go free. A coding error is thought to have caused the earmarking of the files for deletion.

    The U.K. Home Office said the lost entries related to people who were arrested and then released without further action and no records of criminal or dangerous people had been deleted. Home secretary Priti Patel is now under pressure to explain the mistake, which the opposition Labour party said "presents huge dangers" for public safety.

  • January 2021 Linux Foundation Newsletter: Bootcamp Sale, SolarWinds Orion, New Kubernetes & WebAssembly Classes, LFX Webinar Series
  • How I hijacked the top-level domain of a sovereign state

    Note: This issue has been resolved and the .cd ccTLD no longer sends NS delegations to the compromised domain.

    TL;DR: Imagine what could happen if the country-code top-level domain (ccTLD) of a sovereign state fell into the wrong hands. Here’s how I (@Almroot) bought the domain name used in the NS delegations for the ccTLD of the Democratic Republic of Congo (.cd) and temporarily took over 50% of all DNS traffic for the TLD that could have been exploited for MITM or other abuse.

  • Apple begins blocking M1 Mac users from side loading iPhone and iPad applications

    As a refresher, Apple Silicon Macs allow users to run iOS and iPad applications on their Mac, but developers can opt out of allowing their apps to be installed on the Mac. This is the path that many developers have taken, making the necessary change in App Store Connect to remove their app from the Mac App Store.

    But with that being said, until today, you could manually install iOS apps like Netflix, Instagram, and Facebook on an M1 Mac by using their respective IPA files downloaded under a valid Apple ID. Many people were using tools such as iMazing to complete this process.

    9to5Mac has now confirmed that, starting today, this is no longer possible unless the application is available on the Mac App Store. Apple has flipped the necessary sever-side switch to block iPhone and iPad applications from being installed on Apple Silicon Macs.

  • Apple is blocking Apple Silicon Mac users from sideloading iPhone apps

    Apple has turned off users’ ability to unofficially install iOS apps onto their M1 Macs (via 9to5Mac). While iOS apps are still available in the Mac App Store, many apps, such as Dark Sky and Netflix, don’t have their developer’s approval to be run on macOS. Up until now, there was a workaround that allowed the use of third-party software to install the apps without having to use the Mac App Store, but it seems like Apple has remotely disabled it.

    When we tried to install an unsupported app on an M1 Mac running macOS 11.1, we got an error message saying that we couldn’t install it and should “try again later”. You can see a screenshot at the top of this article.

  • Apple TV Plus Free Subscriptions Extended Again, This Time Through July 2021

    The tech giant is extending the free-access period for Apple TV Plus customers who have signed up through its 12-month free subscription offer through July 2021. That’s after it had previously pushed that gratis period to February. So if you were among the first to take the one-year-free deal back in November 2019, that’s turned into 21 months free of Apple TV Plus.

  • Spotify Enters Settlement Talks With PRO Music Rights Founder Jake P. Noch

    But a new legal filing, shared with DMN this afternoon, reveals that Spotify and Noch have officially entered settlement talks. The involved parties “jointly” moved for a 60-day stay, “including discovery and all deadlines,” so that they can “attempt to negotiate a resolution of this matter,” the three-page-long document (dated January 13th, 2021) indicates.

    Furthermore, the filing specifies that Sosa Entertainment, Jake P. Noch, and Spotify “have recently made progress towards a potential resolution of the litigation.” The joint motion doesn’t elaborate upon the terms of this possible agreement – though Noch said in a statement that he’s eager to begin working towards an “excellent resolution” in earnest.

  • The FSF fights for your right to repair

    It is this example of automated vehicles that served as inspiration for the FSF's animated video Fight to Repair.

    However, any technology we use could potentially be co-opted by the proprietary, DRM-controlled subscription model Tesla and the tractor manufacturers are proposing. Imagine your "smart home" having a broken lock, or worse, being broken into, and not having the control, or the simple right to repair the bug. Countless other examples can be found showing us that the key to a free future is the right to repair. We need to fight for a future in which the software used is free in order to maintain ownership and control not only over our technology, but over our lives.

Debian Developers: Christian Kastner, Junichi Uekawa, and Michael Prokop

  • Christian Kastner: Keeping your Workstation Silent

    I've tried numerous coolers in the past, some of monstrous proportions (always thinking that more mass must be better, and reputable brands are equally good), but I was never really satisfied; hence, I was doubtful that trying yet another cooler would make a difference. I'm glad I tried the Noctua NH-D15 anyway. With some tweaking to the fan profile in the BIOS, it's totally inaudible at normal to medium workloads, and just a very gentle hum at full load—subtle enough to disappear in the background. For the past decade, I've also regularly purchased sound-proofed cases, but this habit appears anachronistic now. Years ago, sound-proofed cases helped contain the noise of a few HDDs. However, all of my boxes now contain NVMe drives (which, to me, are the biggest improvement to computing since CPUs going multi-core). On the other hand, some of my boxes now contain powerful GPUs used for GPGPU computing, and with the recent higher-end Nvidia and AMD cards all pulling in over 300W, there is a lot of heat to manage. The best way to quickly dump heat is with good airflow. Sound-proofing works against that. Its insulation restricts airflow, which ultimately causes even more noise, as the GPU's fans need to spin at very high RPMs. This is, of course, totally obvious in hindsight.

  • Junichi Uekawa: It's been 20 years since I became a Debian Developer.

    It's been 20 years since I became a Debian Developer. Lots of fun things happened, and I think fondly of the team. I am no longer active for the past 10 years due to family reasons, and it's surprising that I have been inactive for that long. I still use Debian, and I still participate in the local Debian meetings.

  • Michael Prokop: Revisiting 2020

    Mainly to recall what happened last year and to give thoughts and plan for the upcoming year(s) I’m once again revisiting my previous year (previous editions: 2019, 2018, 2017, 2016, 2015, 2014, 2013 + 2012). Due to the Coronavirus disease (COVID-19) pandemic, 2020 was special™ for several reasons, but overall I consider myself and my family privileged and am very grateful for that. In terms of IT events, I planned to attend Grazer Linuxdays and DebConf in Haifa/Israel. Sadly Grazer Linuxdays didn’t take place at all, and DebConf took place online instead (which I didn’t really participate in for several reasons). I took part in the well organized DENOG12 + ATNOG 2020/1 online meetings. I still organize our monthly Security Treff Graz (STG) meetups, and for half of the year, those meetings took place online (which worked OK-ish overall IMO). Only at the beginning of 2020, I managed to play Badminton (still playing in the highest available training class (in german: “Kader”) at the University of Graz / Universitäts-Sportinstitut, USI). For the rest of the year – except for ~2 weeks in October or so – the sessions couldn’t occur. Plenty of concerts I planned to attend were cancelled for obvious reasons, including the ones I would have played myself. But I managed to attend Jazz Redoute 2020 – Dom im Berg, Martin Grubinger in Musikverein Graz and Emiliano Sampaio’s Mega Mereneu Project at WIST Moserhofgasse (all before the corona situation kicked in). The concert from Tonč Feinig & RTV Slovenia Big Band occurred under strict regulations in Summer. At the beginning of 2020, I also visited Literaturshow “Roboter mit Senf” at Literaturhaus Graz.

Games: Familiars.io, Valve and Godot

  • Familiars.io is a MMO monster catching game where the creatures have permadeath

    Well this is quite unusual. You've played monster catching games before but not like this. Familiars.io put a fresh spin on it all and it's quite ingenious. Developed as a pixel-art retro-looking browser game, it's super accessible since you can play it on pretty much anything that can run some simple graphics in a browser window. It's an MMO too, so you can join up with others and chill out. When you want to, go off and catch some monsters, engage is some PvP and perhaps find a new favourite game waiting for you.

  • What we expect to come from Valve to help Linux gaming in 2021 | GamingOnLinux

    By now you've probably heard either through us in our previous article or elsewhere that Valve are cooking something up to help Linux gaming even further. We have an idea on what one part of it is. Valve already do quite a lot. There's the Steam Play Proton compatibility layer, the new container runtime feature to have Linux games both natively supported and Windows games in Proton run through a contained system to ensure compatibility, their work on Mesa drivers and much more. In Valve's review of Steam in 2020 that we covered in the link above, one thing caught our eye and has been gaining attention. Valve mentioned for 2021 they will be "putting together new ways for prospective users to get into Linux gaming and experience these improvements" so what exactly does that mean? Well, a big part of that might have already been suggested directly.

  • Godot Engine - Dev snapshot: Godot 3.2.4 beta 6

    While our main focus stays on the 4.0 branch, the current stable 3.2 branch is receiving a lot of great improvements, and the upcoming 3.2.4 release is going to be packed with many new features.

Zeroshell 3.9.5 Released

Zeroshell 3.9.5 is ready. In this release TLS 1.0 has been disabled and TLS 1.2 enabled for HTTPS. This improves security and compatibility with new browser releases. Read more