Fedora and Red Hat: Fedora's Modularity Initiative, Git, Servers, Buildah and Ansible

Filed under
Red Hat
  • Fedora's modularity mess

    Fedora's Modularity initiative has been no stranger to controversy since its inception in 2016. Among other things, there were enough problems with the original design that Modularity went back to the drawing board in early 2018. Modularity has since been integrated with both the Fedora and Red Hat Enterprise Linux (RHEL) distributions, but the controversy continues, with some developers asking whether it's time for yet another redesign — or to abandon the idea altogether. Over the last month or so, several lengthy, detailed, and heated threads have explored this issue; read on for your editor's attempt to integrate what was said.
    The core idea behind Modularity is to split the distribution into multiple "streams", each of which allows a user to follow a specific project (or set of projects) at a pace that suits them. A Fedora user might appreciate getting toolchain updates as soon as they are released upstream while sticking with a long-term stable release of LibreOffice, for example. By installing the appropriate streams, this sort of behavior should be achievable, allowing a fair degree of customization.

    Much of the impetus — and development resources — behind Modularity come from the RHEL side of Red Hat, which has integrated Modularity into the RHEL 8 release as "Application Streams". This feature makes some sense in that setting; RHEL is famously slow-moving, to the point that RHEL 7 did not even support useful features like Python 3. Application Streams allow Red Hat (or others) to make additional options available with support periods that differ from that of the underlying distribution, making RHEL a bit less musty and old, but only for the applications a specific user cares about.

    The use case for Modularity in Fedora is arguably less clear. A given Fedora release has a support lifetime of 13 months, so there are limits to the level of stability that it can provide.
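    As a hedged illustration of the stream idea (the module and stream names below are examples, not taken from the article), selecting a stream on a Modularity-enabled system looks roughly like this with dnf:

      # show the streams available for a module
      dnf module list nodejs
      # follow a specific stream instead of the distribution default, then install from it
      sudo dnf module enable nodejs:12
      sudo dnf module install nodejs:12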

  • Moving bugzilla overrides to dist-git

    A while ago Fedora had pkgdb to configure ACLs for each package repo and package-related admin actions. When we moved to 'pagure over dist-git', pagure already provided some of these capabilities. pkgdb would have needed a lot of effort to make it work with the modern package branching (modularity) [1], where each package has its own lifecycle unrelated to Fedora releases, and thus we decided to retire it and replace it with a different solution.

    One of the missing parts after retiring pkgdb was the ability to set different default bugzilla assignees for EPEL and Fedora. This was solved by creating a new repository called fedora-scm-requests [2]. A script would then parse the contents of the repository, merge that information with the main package admins and repo watchers from dist-git, and sync this information to bugzilla so that new bugs get assigned to the correct maintainers and all the interested parties get put on CC:. Each change required a pull request to this repo and someone from the infrastructure team to review and merge the patch. It is obvious that this doesn't scale with the huge number of packages that Fedora and EPEL have.

  • Red Hat customers want the hybrid cloud

    If you listen to some people, everyone and their corner office wants to move to the public cloud. Red Hat's global customers have a different take. Thirty-one percent of Red Hat's customers say "hybrid" describes their strategy best, 21% are leaning toward a private cloud approach, while only 4% see the public cloud as their first choice. There's only one little problem: Finding the staff with the right skills to make the jump from old-school IT to the cloud.

    Businesses prefer the hybrid cloud strategy for many different reasons -- but, overall, data security, cost benefits, and data integration led the pack. For years, the hybrid cloud wasn't that popular. With the rise of the Kubernetes-based hybrid cloud model and with Red Hat being one of the new-model hybrid cloud's leading proponents, customers are embracing the hybrid cloud.

  • Building with Buildah: Dockerfiles, command line, or scripts
  • How to write a multitask playbook in ansible
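The Buildah item above covers building images from Dockerfiles, the command line, or scripts; a minimal command-line sketch (the base image and package names are only illustrative) might look like:

  # start a working container from a base image, modify it, and commit the result
  ctr=$(buildah from fedora:31)
  buildah run "$ctr" -- dnf install -y nginx
  buildah commit "$ctr" my-nginx-image
  buildah rm "$ctr"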

VirtualBox 6.1 Officially Released with Linux Kernel 5.4 Support, Improvements

Filed under
Software

Oracle released today the final version of the VirtualBox 6.1 open-source and cross-platform virtualization software for GNU/Linux, macOS, and Windows operating systems.
VirtualBox 6.1 is the first major release in the VirtualBox 6 series of the popular virtualization platform and promises some exciting new features, such as support for the latest and greatest Linux 5.4 kernel series, the ability to import virtual machines from the Oracle Cloud Infrastructure, as well as enhanced support for nested virtualization.

"Support for nested virtualization enables you to install a hypervisor, such as Oracle VM VirtualBox or KVM, on an Oracle VM VirtualBox guest. You can then create and run virtual machines in the guest VM. Support for nested virtualization allows Oracle VM VirtualBox to create a more flexible and sophisticated development and testing environment," said Oracle.

Read more

Programming Leftovers

Filed under
Development
  • A static-analysis framework for GCC

    One of the features of the Clang/LLVM compiler that has been rather lacking for GCC may finally be getting filled in. In a mid-November post to the gcc-patches mailing list, David Malcolm described a new static-analysis framework for GCC that he wrote. It could be the starting point for a whole range of code analysis for the compiler.

    According to the lengthy cover letter for the patch series, the analysis runs as an interprocedural analysis (IPA) pass on the GIMPLE static single assignment (SSA) intermediate representation. State machines are used to represent the code parsed and the analysis looks for places where bad state transitions occur. Those state transitions represent constructs where warnings can be emitted to alert the user to potential problems in the code.

    There are two separate checkers that are included with the patch set: malloc() pointer tracking and checking for problems in using the FILE * API from stdio. There are also some other proof-of-concept state machines included: one to track sensitive data, such as passwords, that might be leaked into log files and another to follow potentially tainted input data that is being used for array indexes and the like.

    The malloc() state machine, found in sm-malloc.cc (which is added by this patch), looks for typical problems that can occur with pointers returned from malloc(): double free, null dereference, passing a non-heap pointer to free(), and so on. Similarly, one of the patches adds sm-file.c for the FILE * checking. It looks for double calls to fclose() and for the failure to close a file.
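    A quick way to see the kind of diagnostic described here, assuming the -fanalyzer option from the posted patch series, is to feed the analyzer an obvious double free (the file name and code are illustrative):

      # create a tiny translation unit with a double free, then run the analyzer over it
      printf '#include <stdlib.h>\nvoid f(void) { char *p = malloc(16); free(p); free(p); }\n' > double-free.c
      gcc -c -fanalyzer double-free.c   # expect a double-free warning from the new pass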

  • RUST howto getting started – hello world

    If one is viewing this site with Firefox or another Gecko-engine browser, one is already running Rust.

    At the beginning, one was a big fan of Java. Java was, and still is, all the rage: theoretically write once, run anywhere, on Linux, OS X and (thanks to Google) on mobile, and even on the closed-source OS whose name shall not be mentioned. But nobody knows what the Java Virtual Machine does besides running bytecode, and Java on slow ARM CPUs is kind of a burden.
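    For anyone who wants to follow along, the usual getting-started path is the one published by the Rust project (treat the bootstrap command as an assumption if your distribution packages Rust differently):

      # install the toolchain, then create and run a first project
      curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
      cargo new hello_world
      cd hello_world
      cargo run          # compiles and prints "Hello, world!"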

  • Async Interview #2: cramertj, part 3

    This blog post is continuing my conversation with cramertj. This will be the last post.

    In the first post, I covered what we said about Fuchsia, interoperability, and the organization of the futures crate.

    In the second post, I covered cramertj’s take on the Stream, AsyncRead, and AsyncWrite traits. We also discussed the idea of attached streams and the importance of GATs for modeling those.

  • Python 3.7.6rc1 and 3.6.10rc1 are now available for testing

    Python 3.7.6rc1 and 3.6.10rc1 are now available. 3.7.6rc1 is the release preview of the next maintenance release of Python 3.7;  3.6.10rc1 is the release preview of the next security-fix release of Python 3.6. Assuming no critical problems are found prior to 2019-12-18, no code changes are planned between these release candidates and the final releases. These release candidates are intended to give you the opportunity to test the new security and bug fixes in 3.7.6 and security fixes in 3.6.10. While we strive to not introduce any incompatibilities in new maintenance and security releases, we encourage you to test your projects and report issues found to bugs.python.org as soon as possible. Please keep in mind that these are preview releases and, thus, their use is not recommended for production environments.
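    To test a project against the release candidate without touching the system Python, one approach (the download path follows python.org's usual layout and, like the project path, is an assumption) is to build it in place:

      wget https://www.python.org/ftp/python/3.7.6/Python-3.7.6rc1.tgz
      tar xf Python-3.7.6rc1.tgz && cd Python-3.7.6rc1
      ./configure && make -j"$(nproc)"
      ./python -m test                                # CPython's own test suite
      ./python /path/to/your/project/run_tests.py     # hypothetical project test entry point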

  • Print all git repos from a user (only curl and grep)
  • Linux Fu: Debugging Bash Scripts

    A recent post about debugging constructs surprised me. There were quite a few comments about how you didn’t need a debugger, as long as you had printf. For that matter, we’ve all debugged systems where you had nothing but an LED to flash or otherwise turn on to communicate with the user. However, it is hard to deny that a debugger can help with complex code.

    To say you only need printf would be like saying you only need machine language. Technically accurate — you can do anything in machine language. But it sure makes things easier to have an assembler or some language to help you work out your problem. If you write a simple bash script, you can use the equivalent to printf — maybe that’s the echo command, although there is usually a printf command on a typical system, if you want to use it. However, there are other things you can do with bash including a pretty cool debugger if you know how to find it.

    I assume you already know how to use echo and printf, but let's dig into how to trace execution line by line without the need for echo statements on every other line. Along the way, you'll learn how to get started with the bash debugger.
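    A minimal sketch of line-by-line tracing with bash's built-in options (the script name is hypothetical; the bash debugger itself is a separate install):

      # trace every command in a script as it runs, with variables expanded
      bash -x ./myscript.sh

      # or trace only one section from inside the script
      PS4='+ line $LINENO: '         # make the trace show line numbers
      set -x                         # start tracing
      ls /tmp
      set +x                         # stop tracing

      # the DEBUG trap runs before each command and can report what is about to execute
      trap 'echo "about to run line $LINENO: $BASH_COMMAND"' DEBUG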

Kernel: LWN Articles and Radeon Linux 5.6 Changes

Filed under
Linux
  • Fixing SCHED_IDLE

    The scheduler implements many "scheduling classes", an extensible hierarchy of modules, and each class may further encapsulate "scheduling policies" that are handled by the scheduler core in a policy-independent way. The scheduling classes are described below in descending priority order; the Stop class has the highest priority and the Idle class has the lowest.

    The Stop scheduling class is a special class that is used internally by the kernel. It doesn't implement any scheduling policy and no user task ever gets scheduled with it. The Stop class is, instead, a mechanism to force a CPU to stop running everything else and perform a specific task. As this is the highest-priority class, it can preempt everything else and nothing ever preempts it. It is used by one CPU to stop another in order to run a specific function, so it is only available on SMP systems. The Stop class creates a single, per-CPU kernel thread (or kthread) named migration/N, where N is the CPU number. This class is used by the kernel for task migration, CPU hotplug, RCU, ftrace, clock events, and more.

    The Deadline scheduling class implements a single scheduling policy, SCHED_DEADLINE, and it handles the highest-priority user tasks in the system. It is used for tasks with hard deadlines, like video encoding and decoding. The task with the earliest deadline is served first under this policy. The policy of a task can be set to SCHED_DEADLINE using the sched_setattr() system call by passing three parameters: the run time, deadline, and period.

    To ensure deadline-scheduling guarantees, the kernel must prevent situations where the current set of SCHED_DEADLINE threads is not schedulable within the given constraints. The kernel thus performs an admittance test when setting or changing SCHED_DEADLINE policy and attributes. This admission test calculates whether the change can be successfully scheduled; if not, sched_setattr() fails with the error EBUSY.

    The POSIX realtime (or RT) scheduling class comes after the deadline class and is used for short, latency-sensitive tasks, like IRQ threads. This is a fixed-priority class that schedules higher-priority tasks before lower-priority tasks. It implements two scheduling policies: SCHED_FIFO and SCHED_RR. In SCHED_FIFO, a task runs until it relinquishes the CPU, either because it blocks for a resource or it has completed its execution. In SCHED_RR (round-robin), a task will run for the maximum time slice; if the task doesn't block before the end of its time slice, the scheduler will put it at the end of the round-robin queue of tasks with the same priority and select the next task to run. The priority of tasks under the realtime policies ranges from 1 (low) to 99 (high).
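    From user space these policies can be exercised without writing any code by using chrt from util-linux; a sketch (the program names are hypothetical and the SCHED_DEADLINE parameters are illustrative nanosecond values):

      # SCHED_FIFO at priority 80, SCHED_RR at priority 50
      sudo chrt -f 80 ./latency_sensitive_task
      sudo chrt -r 50 ./round_robin_task

      # SCHED_DEADLINE: runtime, deadline, and period in nanoseconds (the priority field must be 0);
      # if the admission test fails, the call is rejected with EBUSY
      sudo chrt -d --sched-runtime 5000000 --sched-deadline 10000000 \
                   --sched-period 16666666 0 ./deadline_task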

  • Virtio without the "virt"

    One might ask why it makes sense to implement virtio devices in hardware. After all, they were originally designed for hypervisors and have been optimized for software rather than hardware implementation. Now that virtio support is widespread, the network effects allow hardware implementations to reuse the guest drivers and infrastructure. The virtio 1.1 specification defines ten device types, among them a network interface, SCSI host bus adapter, and console. Implementing a standards-compliant device interface lets hardware implementers focus on delivering the best device instead of designing a new device interface and writing guest drivers from scratch. Moreover, existing guests will work with the device out of the box, and applications utilizing user-space drivers, such as the DPDK packet processing toolkit, do not need to be relinked with new drivers — this is especially helpful when static linking is utilized.

    Implementing virtio in hardware also makes it easy to switch between hardware and software implementations. A software device can be substituted without changing guest drivers if the hardware device is acting up. Similarly, if the driver is acting up, it is possible to substitute a software device to make debugging the driver easier. It is possible to assign hardware devices to performance-critical guests while assigning software devices to the other guests; this decision can be changed in the future to balance resource needs. Finally, implementing virtio in hardware makes it possible to live-migrate virtual machines more easily. The destination host can have either software or hardware virtio devices.

  • 5.5 Merge window, part 1

    The 5.5 merge window got underway immediately after the release of the 5.4 kernel on November 24. The first week has been quite busy despite the US Thanksgiving holiday landing in the middle of it. Read on for a summary of what the first 6,300 changesets brought for the next major kernel release.

  • Radeon Linux 5.6 Changes Begin Queuing - Better Power Management, Adds DMCUB Controller

    While the Linux 5.5 merge window closed less than one week ago, AMD has already submitted their first batch of feature updates to DRM-Next with new graphics driver material aiming for Linux 5.6 early next year.

Screencasts and Shows: Pisi Linux 2.1.2 Run Through, Linux Headlines, Going Linux, FLOSS Weekly and Selling Keynotes/Tweets at the Linux Foundation

Filed under
GNU
Linux

GNOME at the Back End and GNOME Shell 3.35.2

Filed under
GNOME
  • Molly de Blanc: Keeping the (server) lights on

    Building and maintaining infrastructure for the GNOME project is one of the many activities of the GNOME Foundation, and it's one of the most important. Building software like the GNOME desktop environment requires a lot of technical support, including managing servers and providing collaboration tools. Since GNOME is focused on being a self-sustaining community, we look as much as possible to manage our own services and software, and to make sure it is free and open source.

    The GNOME Infrastructure Team currently supports a total of 34 virtual machines hosted on a total of eight bare metal nodes. These virtual machines allow us to run services like the OpenShift Container Platform (OSCP), which provides self-service access for the community to run any of their workflows in an automated and containerized fashion.

    GNOME is built using self-hosted FOSS. We collaboratively build GNOME using a GitLab instance, which has a total of 15k accounts. We handle shared storage using Nextcloud. Community discussion is handled over Mailman, Discourse, and MoinMoin. We are currently using Indico and Connfa for our event planning and management.

  • GNOME Shell 3.35.2 Begins Launching Spawned Processes Within Systemd Scopes

    Out today is a new development release of GNOME Shell on the road to GNOME 3.36 in March.

    The changes in this new GNOME Shell snapshot include:

    - Spawned processes are now placed within systemd scopes in order to improve out-of-memory behavior for applications, provide an easy means of killing spawned processes when the shell is restarted, and support other use cases. Systemd scopes allow processes to be managed for organization and resource-management purposes.
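    The same mechanism can be tried by hand with systemd-run; a small sketch (the application is just an example):

      # launch a process in its own transient scope under the user manager
      systemd-run --user --scope gnome-calculator
      # list the transient scopes currently running for this user
      systemctl --user list-units --type=scope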

Security: Proprietary Software Holes and More

Filed under
Security
  • It's the end of the 20-teens, and your Windows PC can still be pwned by nothing more than a simple bad font

    With the year winding to a close and the holiday parties set to kick off, admins will want to check out the December Patch Tuesday load from Microsoft, Adobe, Intel, and SAP and get them installed before downing the first of many egg nogs.

    [...]

    Also of note is CVE-2019-1471, a critical hypervisor escape bug that would allow an attacker running on a guest VM to execute code on the host box.

    The bulk of this month's critical fixes were for a series of five remote code execution flaws in Git for Visual Studio. In each of the flaws, said to be caused by improper handling of command-line input, an attacker would launch the exploit by convincing the target to clone a malicious repo.

    The remaining critical patch is for CVE-2019-1468, a play on the tried-and-true font-parsing vulnerability. In the wild, an attacker would embed the poisoned font file in a webpage and attack any system that visits.

  • Exploring Legacy Unix Security Issues

    The operating system SGI IRIX 6.5.22 was declared end of life in 2003, so it has limited use as a production system. I decided I could relive the good old days by looking for new vulnerabilities on an old system in my spare time. It was also an excuse to write some C code, and refresh my memory.

    One of my favorite vulnerabilities is the Insecure Temporary File (CWE-377). This involves manipulating files created in /tmp in an insecure manner. A file is created in /tmp by a piece of software that doesn't check whether the file already exists before creating it, allowing a malicious local user to symlink that path to a critical system file, which is then overwritten with whatever the software writes to the file in /tmp.

    So I started looking under the /usr/sbin directory for binaries to target. I did a quick examination of the binaries and scripts there using the find command to give myself a starting point.
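    That kind of sweep can start with something as simple as the following (paths and predicates are illustrative, not the author's exact commands):

      # list setuid binaries and shell scripts under /usr/sbin as a starting point
      find /usr/sbin -type f -perm -4000 -ls
      find /usr/sbin -type f -exec file {} \; | grep -i 'shell script'
      # then watch /tmp while exercising a candidate to spot predictable file names
      ls -l /tmp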

  • Private Internet Access updates Linux desktop client to protect against [CVE-2019-14899]

    The Breakpointing Bad team at the University of New Mexico recently reported a VPN vulnerability that affects Linux, MacOS, iOS, Android, and more. The vulnerability allows malicious actors to not only see your VPN IP address, but also identify sites you are visiting and inject data into connections. The team consists of William J. Tolley, Beau Kujath, and Jedidiah R. Crandall and the public was notified on December 4th, 2019. Designated [CVE-2019-14899], the vulnerability shook the VPN industry due to the breadth of affected systems. [CVE-2019-14899] affects many different types of VPN protocols including OpenVPN, WireGuard, and IKEv2/IPSec.

    Private Internet Access has released an update to its Linux client that stops [CVE-2019-14899] from being used to infer any information about our users' VPN connections. To our knowledge, Private Internet Access is the first commercial VPN to release a new client that prevents this ongoing security vulnerability.

  • Chrome now warns you when your password has been stolen

    Google is rolling out Chrome version 79 today, and it includes a number of password protection improvements. The biggest addition is that Chrome will now warn you when your password has been stolen as part of a data breach. Google has been warning about reused passwords in a separate browser extension or in its password checkup tool, but the company is now baking this directly into Chrome to provide warnings as you log in to sites on the web.

8 of the worst open source innovations of the decade

Filed under
OSS

Over the years, Linux and open source have been a master class in slow-burn success. From out of nothing, Linux has become the champion of the cloud, IoT, and containers. And although it hasn't reached the "world domination" status it swore it would in the early 2000s, the Linux desktop is still very much alive and building momentum.

But that doesn't mean it's been all success; in fact, there have been a few stumbles along the way. Let's take a look at some of the worst open source failures of the decade.

Read more

9 of the biggest open source stories in 2019

Filed under
OSS

The year is 2019. Although cries of "world domination" still echo in the hallowed halls of Linux land, everyone knows this great event will have to wait for another year, but that doesn't mean all those who are invested in open source need to hang their heads in shame. Failure was never an option, and it wasn't an issue--not in the year of subtle takeover.

If I have to give 2019 a title for open source, it is just that--subtle takeover. Why? Because subtle things happened, many of which will have reverberations for years to come.

Let's take a look at some of the moments that defined the year for Linux and open source.

Read more

Heroku Review apps available for Treeherder

Filed under
Development
Moz/FF

In bug 1566207 I added support for Heroku Review Apps (link to official docs). This feature allows creating a full Treeherder deployment (backend, frontend and data ingestion pipeline) for a pull request. This gives Treeherder engineers the ability to have their own deployment without having to compete over the Treeherder prototype app (a shared deployment). This is important as the number of engineers and contributors increases.

Once created you get a complete Heroku environment with add-ons and workers configured and the deployment for it.

Looking back, there are a few new features that came out of the work; however, Heroku Review apps are not used as widely as I would have hoped.

Read more

Linux-driven RISC-V core to debut on an NXP i.MX SoC

Filed under
Linux

The OpenHW Group unveiled a Linux-driven "CORE-V Chassis" eval SoC due for tape-out in 2H 2020 based on an NXP i.MX SoC, but featuring its RISC-V- and PULP-based 64-bit, 1.5GHz CV64A CPU and 32-bit CV32E cores. Meanwhile, Think Silicon demonstrated a RISC-V-based NEOX|V GPU.

A not-for-profit, open source RISC-V initiative called the OpenHW Group that launched in June has announced that it plans to tape out a Linux-friendly CORE-V Chassis evaluation SoC in the second half of 2020 built around its 64-bit CV64A CPU core and 32-bit CV32E coprocessor. The RISC-V based cores will be integrated into an undefined, NXP i.MX heterogeneous, multi-core SoC design. The SoC was announced at this week’s RISC-V Summit in San Jose, Calif., where Think Silicon also demo’d an early version of a RISC-V-based NEOX|V GPU (see farther below).

The open source CV64A CPU core and 32-bit CV32E are based on RISC-V architecture PULP Platform cores developed at ETH Zurich. The 64-bit CV64A core is based on ETH Zurich's Ariane implementation of its RV64GC RISC-V core IP. RV64GC is also used by many other RISC-V projects, including SiFive's U54.

Read more

today's howtos and leftovers

Filed under
Misc
HowTos

Juju 2.7: Enhanced k8s experience, improved networking and more

Filed under
Ubuntu

Canonical is proud to announce the availability of Juju 2.7. This new release introduces a range of exciting features and several improvements which enhance Juju across various areas.

To learn more about Juju, visit our page.

Kubernetes extensions

Juju is becoming the simplest way to deploy and manage your container-centric workloads. This release was aimed at bringing more Juju features to k8s charms and more k8s features to Juju.

K8s charms can now define actions, introspect agents, and communicate back to Juju via the addition of juju-run within the pod's PATH environment variable. Experienced k8s operators will feel more at home with the ability to set secrets, administer service accounts, and use other k8s-native features directly from their charms.
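For a feel of what that looks like in practice, a hedged sketch (the charm, unit, and action names are hypothetical):

  # deploy an application into a Kubernetes model, then discover and run one of its actions
  juju deploy cs:~example/mariadb-k8s
  juju actions mariadb-k8s
  juju run-action mariadb-k8s/0 backup --wait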

Read more

Also: How using Charmed OSM helps telcos to accelerate their NFV transformation

Graphics: NVIDIA 440.44 Linux Driver, Microsoft Code, and WSL Performs Very Poorly

Filed under
Graphics/Benchmarks
  • NVIDIA 440.44 Linux Driver Brings Fixes, __GL_SYNC_DISPLAY_DEVICE Honored With Vulkan

    Out today is NVIDIA 440.44 as the latest stable Linux driver update in their new long-lived driver series. 

    Succeeding the 440.36 and 440.31 stable drivers, the 440.44 release isn't too exciting but at least NVIDIA should be introducing a new beta series shortly. 

  • Intel's OpenSWR OpenGL Software Rasterizer Pulls In Tessellator From Microsoft Direct3D Code

    OpenSWR is Intel's performance-minded software rasterizer aimed at uses like workstation visualization, where it outperforms the likes of LLVMpipe. This CPU-based OpenGL implementation can make use of not only AVX/AVX2 but also AVX-512 and other optimizations to support speedy CPU-based GL operations from laptops to Xeon Scalable hardware. Like LLVMpipe, OpenSWR does leverage LLVM in part. Those unfamiliar with this long-standing Intel open-source project can learn more at OpenSWR.org.

  • Windows Subsystem For Linux Performance At The End Of 2019

    Recently I wrapped up some benchmarks looking at the performance of Ubuntu on Microsoft's Windows Subsystem for Linux, comparing WSL on Windows 10 Build 18362 (May 2019 Update) with both WSL and WSL2 on the Windows 10 Build 19008 Insider's Preview (what will come as the Windows 10 20H1 update) to see where WSL performance is heading. I also looked at the bare-metal performance of Ubuntu 18.04 LTS, on which the WSL instances were based, plus Ubuntu 19.10, and, for the Windows-compatible tests, at how Windows itself performed outside of WSL/WSL2.

Testing IPFire 2.23 - Core Update 139 and Latest Security Patches

Filed under
Security
  • IPFire 2.23 - Core Update 139 is available for testing

    The last Core Update for this decade is finally available for testing! If you have a couple of hours free over the holidays, please help us out by installing it and sending us your feedback!

  • Security updates for Wednesday

    Security updates have been issued by Arch Linux (crypto++ and thunderbird), Debian (cacti, freeimage, git, and jackson-databind), Fedora (nss), openSUSE (clamav, dnsmasq, munge, opencv, permissions, and shadowsocks-libev), Red Hat (nss, nss-softokn, nss-util, rh-maven35-jackson-databind, and thunderbird), Scientific Linux (nss, nss-softokn, nss-util, nss-softokn, and thunderbird), SUSE (caasp-openstack-heat-templates, crowbar-core, crowbar-openstack, crowbar-ui, etcd, flannel, galera-3, mariadb, mariadb-connector-c, openstack-dashboard-theme-SUSE, openstack-heat-templates, openstack-neutron, openstack-nova, openstack-quickstart, patterns-cloud, python-oslo.messaging, python-oslo.utils, python-pysaml2, libssh, and strongswan), and Ubuntu (git, libpcap, libssh, and thunderbird). 

Mozilla and Beyond: Daniel Stenberg on BearSSL, Mozilla Root Store Policy, The Weak Notes, Wladimir Palant on Avira

Filed under
Moz/FF
Security
  • Daniel Stenberg: BearSSL is curl’s 14th TLS backend

    curl supports more TLS libraries than any other software I know of. The current count stops at 14 different ones that can be used to power curl’s TLS-based protocols (HTTPS primarily, but also FTPS, SMTPS, POP3S, IMAPS and so on).

    The beginning

    The very first curl release didn’t have any TLS support, but already in June 1998 we shipped the first version that supported HTTPS. Back in those days the protocol was still really SSL. The library we used then was called SSLeay. (No, I never understood how that’s supposed to be pronounced)

    The SSLeay library became OpenSSL very soon after but the API was brought along so curl supported it from the start.
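    Picking a TLS backend is a build-time decision for curl; a rough sketch of building against BearSSL (the exact configure switch is an assumption, so check ./configure --help in your tree):

      # assumes the BearSSL headers and libraries are already installed
      ./configure --without-ssl --with-bearssl
      make
      ./src/curl --version     # the output names the TLS library the binary was built with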

  • Announcing Version 2.7 of the Mozilla Root Store Policy

    After many months of discussion on the mozilla.dev.security.policy mailing list, our Root Store Policy governing Certificate Authorities (CAs) that are trusted in Mozilla products has been updated. Version 2.7 has an effective date of January 1st, 2020.

  • Week notes - 2019 w49 - worklog - The Weak Notes

    A week with a bad cold makes it more difficult to write week notes. So here are my weak notes. Everything seems heavier to type, to push.

    This last week-end I was at JSConf JP. I wrote down some notes about it.

    The week starts with two days of fulltime diagnosis (Monday, Tuesday). Let's get to it: 69 open bugs for Gecko. We try to distribute our work across the team so we are sure that at least someone is on duty for each day of the week. When we have finished our shift, we can add ourselves for more days. That doesn't prevent us from working on bugs the rest of the week. Some of the bugs take longer.

  • Problematic monetization in security products, Avira edition

    A while back we saw how Avast monetizes its users. Today we have a much smaller fish to fry, largely because the Avira extensions in question aren't installed by default and require explicit user action for the additional "protection." So these have far fewer users, currently 400 thousand on Firefox and slightly above a million on Chrome according to official add-on store numbers. It doesn't make their functionality any less problematic however.

    That's especially the case for the Avira Browser Safety extension that Avira offers for Firefox and Opera. While the vendor's homepage lists "Find the best deals on items you're shopping for" as the last feature in the list, the extension description in the add-on stores "forgets" to mention this monetization strategy. I'm not sure why the identical Chrome extension is called "Avira Safe Shopping," but at least here the users get some transparency.

    [...]

    The Avira Browser Safety extension is identical to Avira Safe Shopping and monetizes by offering “best shopping deals” to the users. This functionality is underdocumented, particularly in Avira’s privacy policy. It is also risky however, as Avira chose to implement it in such a way that it will execute JavaScript code from Avira’s servers on arbitrary websites as well as in the context of the extension itself. In theory, this allows Avira or anybody with control of this particular server to target individual users, spy on them or mess with their browsing experience in almost arbitrary ways.

    In addition to that, the security part of the extension is implemented in a suboptimal way and will upload the entire browsing history of the users to Avira’s servers without even removing potentially sensitive data first. Again, Avira’s privacy policy is severely lacking and won’t make any clear statements as to what happens with this data.

RISC-V based PolarFire SoC FPGA and Devkit Coming in Q3 2020

Filed under
Hardware
OSS

Microsemi unveiled the PolarFire FPGA + RISC-V SoC about one year ago, but at the time, development was done on a $3,000 platform combining a SiFive U54-powered HiFive Unleashed board with an FPGA...

Read more
