Planet Debian
Updated: 2 hours 36 min ago

Keith Packard: more-iterative-splines

Tuesday 18th of February 2020 07:41:24 AM
Slightly Better Iterative Spline Decomposition

My colleague Bart Massey (who is a CS professor at Portland State University) reviewed my iterative spline algorithm article and had an insightful comment — we don't just want any spline decomposition which is flat enough, what we really want is a decomposition for which every line segment is barely within the specified flatness value.

My initial approach was to keep halving the length of the spline segment until it was flat enough. This definitely generates a decomposition which is flat enough everywhere, but some of the segments will be shorter than they need to be, by as much as a factor of two.

As we'll be taking the resulting spline and doing a lot more computation with each segment, it makes sense to spend a bit more time finding a decomposition with fewer segments.

The Initial Search

Here's how the first post searched for a 'flat enough' spline section:

t = 1.0f;
/* Iterate until s1 is flat */
do {
    t = t/2.0f;
    _de_casteljau(s, s1, s2, t);
} while (!_is_flat(s1));

Bisection Method

What we want to do is find an approximate solution for the function:

flatness(t) = tolerance

We'll use the Bisection method to find the value of t for which the flatness is no larger than our target tolerance, but is at least as large as tolerance - ε, for some reasonably small ε.

float hi = 1.0f;
float lo = 0.0f;

/* Search for an initial section of the spline which
 * is flat, but not too flat
 */
for (;;) {
    /* Average the lo and hi values for our
     * next estimate
     */
    float t = (hi + lo) / 2.0f;

    /* Split the spline at the target location */
    _de_casteljau(s, s1, s2, t);

    /* Compute the flatness and see if s1 is flat
     * enough
     */
    float flat = _flatness(s1);

    if (flat <= SCALE_FLAT(SNEK_DRAW_TOLERANCE)) {

        /* Stop looking when s1 is close
         * enough to the target tolerance
         */
        if (flat >= SCALE_FLAT(SNEK_DRAW_TOLERANCE - SNEK_FLAT_TOLERANCE))
            break;

        /* Flat: t is the new lower interval bound */
        lo = t;
    } else {

        /* Not flat: t is the new upper interval bound */
        hi = t;
    }
}

This searches for a place to split the spline where the initial portion is flat but not too flat. I set SNEK_FLAT_TOLERANCE to 0.01, so we'll pick segments which have flatness between 0.49 and 0.50.

The benefit from the search is pretty easy to understand by looking at the number of points generated compared with the number of _de_casteljau and _flatness calls:

Search   Calls   Points
Simple   150     33
Bisect   229     25

And here's an image comparing the two:

A Closed Form Approach?

Bart also suggests attempting to find an analytical solution to decompose the spline. What we need to do is take the flatness function and find the split which makes it equal to the desired flatness. If the spline control points are a, b, c, and d, then the flatness function is:

ux = (3×b.x - 2×a.x - d.x)²
uy = (3×b.y - 2×a.y - d.y)²
vx = (3×c.x - 2×d.x - a.x)²
vy = (3×c.y - 2×d.y - a.y)²

flat = max(ux, vx) + max(uy, vy)

When the spline is split into two pieces, all of the control points for the new splines are determined by the original control points and the 't' value which sets where the split happens. What we want is to find the 't' value which makes the flat value equal to the desired tolerance. Given that the binary search runs De Casteljau and the flatness function almost 10 times for each generated point, there's a lot of opportunity to go faster with a closed form solution.

Current Implementation

/*
 * Copyright © 2020 Keith Packard <>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License along
 * with this program; if not, write to the Free Software Foundation, Inc.,
 * 51 Franklin St, Fifth Floor, Boston, MA 02110-1301, USA.
 */

#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <stdint.h>
#include <math.h>

typedef float point_t[2];
typedef point_t spline_t[4];

uint64_t num_flats;
uint64_t num_points;

#define SNEK_DRAW_TOLERANCE 0.5f
#define SNEK_FLAT_TOLERANCE 0.01f

/*
 * This actually returns flatness² * 16,
 * so we need to compare against scaled values
 * using the SCALE_FLAT macro
 */
static float
_flatness(spline_t spline)
{
    /*
     * This computes the maximum deviation of the spline from a
     * straight line between the end points.
     *
     * From
     */
    float ux = 3.0f * spline[1][0] - 2.0f * spline[0][0] - spline[3][0];
    float uy = 3.0f * spline[1][1] - 2.0f * spline[0][1] - spline[3][1];
    float vx = 3.0f * spline[2][0] - 2.0f * spline[3][0] - spline[0][0];
    float vy = 3.0f * spline[2][1] - 2.0f * spline[3][1] - spline[0][1];

    ux *= ux;
    uy *= uy;
    vx *= vx;
    vy *= vy;
    if (ux < vx)
        ux = vx;
    if (uy < vy)
        uy = vy;

    ++num_flats;

    /*
     * If we wanted to return the true flatness, we'd use:
     *
     * return sqrtf((ux + uy)/16.0f)
     */
    return ux + uy;
}

/* Convert constants to values usable with _flatness() */
#define SCALE_FLAT(f) ((f) * (f) * 16.0f)

/*
 * Linear interpolate from a to b using distance t (0 <= t <= 1)
 */
static void
_lerp (point_t a, point_t b, point_t r, float t)
{
    int i;

    for (i = 0; i < 2; i++)
        r[i] = a[i]*(1.0f - t) + b[i]*t;
}

/*
 * Split 's' into two splines at distance t (0 <= t <= 1)
 */
static void
_de_casteljau(spline_t s, spline_t s1, spline_t s2, float t)
{
    point_t first[3];
    point_t second[2];
    int i;

    for (i = 0; i < 3; i++)
        _lerp(s[i], s[i+1], first[i], t);
    for (i = 0; i < 2; i++)
        _lerp(first[i], first[i+1], second[i], t);
    _lerp(second[0], second[1], s1[3], t);

    for (i = 0; i < 2; i++) {
        s1[0][i] = s[0][i];
        s1[1][i] = first[0][i];
        s1[2][i] = second[0][i];
        s2[0][i] = s1[3][i];
        s2[1][i] = second[1][i];
        s2[2][i] = first[2][i];
        s2[3][i] = s[3][i];
    }
}

/*
 * Decompose 's' into straight lines which are
 * within SNEK_DRAW_TOLERANCE of the spline
 */
static void
_spline_decompose(void (*draw)(float x, float y), spline_t s)
{
    /* Start at the beginning of the spline. */
    (*draw)(s[0][0], s[0][1]);

    /* Split the spline until it is flat enough */
    while (_flatness(s) > SCALE_FLAT(SNEK_DRAW_TOLERANCE)) {
        spline_t s1, s2;
        float hi = 1.0f;
        float lo = 0.0f;

        /* Search for an initial section of the spline which
         * is flat, but not too flat
         */
        for (;;) {
            /* Average the lo and hi values for our
             * next estimate
             */
            float t = (hi + lo) / 2.0f;

            /* Split the spline at the target location */
            _de_casteljau(s, s1, s2, t);

            /* Compute the flatness and see if s1 is flat
             * enough
             */
            float flat = _flatness(s1);

            if (flat <= SCALE_FLAT(SNEK_DRAW_TOLERANCE)) {

                /* Stop looking when s1 is close
                 * enough to the target tolerance
                 */
                if (flat >= SCALE_FLAT(SNEK_DRAW_TOLERANCE - SNEK_FLAT_TOLERANCE))
                    break;

                /* Flat: t is the new lower interval bound */
                lo = t;
            } else {

                /* Not flat: t is the new upper interval bound */
                hi = t;
            }
        }

        /* Draw to the end of s1 */
        (*draw)(s1[3][0], s1[3][1]);

        /* Replace s with s2 */
        memcpy(&s[0], &s2[0], sizeof (spline_t));
    }

    /* S is now flat enough, so draw to the end */
    (*draw)(s[3][0], s[3][1]);
}

void
draw(float x, float y)
{
    ++num_points;
    printf("%8g, %8g\n", x, y);
}

int main(int argc, char **argv)
{
    spline_t spline = {
        {   0.0f,    0.0f },
        {   0.0f,  256.0f },
        { 256.0f, -256.0f },
        { 256.0f,    0.0f }
    };

    _spline_decompose(draw, spline);
    fprintf(stderr, "flats %lu points %lu\n", num_flats, num_points);
    return 0;
}

Holger Levsen: 20200217-SnowCamp

Monday 17th of February 2020 07:56:06 PM
SnowCamp 2020

This is just a late reminder that there are still some seats available for SnowCamp, taking place at the end of this week and during the whole weekend somewhere in the Italian mountains.

I believe it will be a really nice opportunity to hack on Debian things and thus I'd hope that there won't be empty seats, though at the moment there still are some.

The venue is reachable by train and Debian will be covering the cost of accommodation, so you just have to cover transportation and meals.

The event starts in three days, so hurry up and whatever your plans are, change them!

If you have any further questions, join #suncamp (yes!) on

Jonathan Dowland: Amiga floppy recovery project scope

Monday 17th of February 2020 04:05:28 PM

This is the eighth part in a series of blog posts. The previous post was First successful Amiga disk-dumping session. The whole series is available here: Amiga.

The main goal of my Amiga project is to read the data from my old floppy disks. After a bit of hiatus (and after some gentle encouragement from friends at FOSDEM) I'm nearly done, 150/200 disks attempted so far. Ultimately I intend to get rid of the disks to free up space in my house, and probably the Amiga, too. In the meantime, what could I do with it?

Gotek floppy emulator balanced on the Amiga

The most immediately obvious things are to improve the housing of the emulated floppy disk. My Gotek adaptor is unceremoniously balanced on top of the case. Housing it within the A500 would be much neater. I might try to follow this guide which requires no case modifications and no 3D printed brackets, but instead of soldering new push-buttons, add a separate OLED display and rotary encoder (knob) in a separate housing, such as this 3D-printed wedge-shaped mount on Thingiverse. I do wonder if some kind of side-mounted solution might be better, so the top casing could be removed without having to re-route the wires each time.

3D printed OLED mount, from Amibay

Next would be improving the video output. My A520 video modulator developed problems that are most likely caused by leaking or blown capacitors. At the moment, I have a choice of B&W RF out, or using a 30 year old Philips CRT monitor. The latter is too big to comfortably fit on my main desk, and the blue channel has started to fail. Learning the skills to fix the A520 could be useful as the same could happen to the Amiga itself. Alternatively replacements are very cheap on the second hand market. Or I could look at a 3rd-party equivalent like the RGB4ALL. I have tried a direct, passive socket adaptor on the off-chance my LCD TV supported 15kHz, but alas, it appears it doesn't. This list of monitors known to support 15kHz is very short, so sourcing one is not likely to be easy or cheap. It's possible to buy sophisticated "Flicker Fixers/Scan Doublers" that enable the use of any external display, but they're neither cheap nor common.

My original "tank" Amiga mouse (pictured above) is developing problems with the left mouse button. Replacing the switch looks simple (in this YouTube video) but will require me to invest in a soldering iron, multimeter and related equipment (not necessarily a bad thing). It might be easier to buy a different, more comfortable old serial mouse.

Once those are out of the way, It might be interesting to explore aspects of the system that I didn't touch on as a child: how do you program the thing? I don't remember ever writing any Amiga BASIC, although I had several doomed attempts to use "game makers" like AMOS or SEUCK. What programming language were the commercial games written in? Pure assembly? The 68k is supposed to have a pleasant instruction set for this. Was there ever a practically useful C compiler for the Amiga? I never networked my Amiga. I never played around with music sampling or trackers.

There's something oddly satisfying about the idea of taking a 30 year old computer and making it into a useful machine in the modern era. I could consider more involved hardware upgrades. The Amiga enthusiast community is old and the fans are very passionate. I've discovered a lot of incredible enhancements that fans have built to enhance their machines, right up to FPGA-powered CPU replacements that can run several times faster than the fastest original m68ks, and also offer digital video out, hundreds of MB of RAM, modern storage options, etc. To give an idea, check out Epsilon's Amiga Blog, which outlines some of the improvements they've made to their fleet of machines.

This is a deep rabbit hole, and I'm not sure I can afford the time (or the money!) to explore it at the moment. It will certainly not rise above my more pressing responsibilities. But we'll see how things go.

Enrico Zini: AI and privacy links

Sunday 16th of February 2020 11:00:00 PM
Norman by MIT Media Lab (ai) 2020-02-17
Norman: World's first psychopath AI.

Machine Learning Captcha (ai, comics) 2020-02-17

Amazon's Rekognition shows its true colors (ai, consent, privacy) 2020-02-17
Mix together a bit of freely accessible facial recognition software and a free live stream of the public space, and what do you get? A powerful stalker tool.

Self Driving (ai, comics) 2020-02-17
So much of "AI" is just figuring out ways to offload work onto random strangers.

Information flow reveals prediction limits in online social activity (privacy) 2020-02-17
Bagrow et al., arXiv 2017. If I know your friends, then I know a lot about you! Suppose you don’t personally use a given app/serv…

The NSA’s SKYNET program may be killing thousands of innocent people (ai, politics) 2020-02-17
«In 2014, the former director of both the CIA and NSA proclaimed that "we kill people based on metadata." Now, a new examination of previously published Snowden documents suggests that many of those people may have been innocent.»

What reporter Will Ockenden's metadata reveals about his life (privacy) 2020-02-17
We published ABC reporter Will Ockenden's metadata in full and asked you to analyse it. Here's what you got right - and wrong.

Behind the One-Way Mirror: A Deep Dive Into the Technology of Corporate Surveillance (privacy) 2020-02-17
It's time to shed light on the technical methods and business practices behind third-party tracking. For journalists, policy makers, and concerned consumers, this paper will demystify the fundamentals of third-party tracking, explain the scope of the problem, and suggest ways for users and legislation to fight back against the status quo.

Ben Armstrong: Introducing Dronefly, a Discord bot for naturalists

Sunday 16th of February 2020 04:51:27 PM

In the past few years, since first leaving Debian as a free software developer in 2016, I’ve taken up some new hobbies, or more accurately, renewed my interest in some old ones.

Screenshot from Dronefly bot tutorial

During that hiatus, I also quietly un-retired from Debian, anticipating there would be some way to contribute to the project in these new areas of interest. That’s still an idea looking for the right opportunity to present itself, not to mention the available time to get involved again.

With age comes an increasing clamor of complaints from your body when you have a sedentary job in front of a screen, and hobbies that rarely take you away from it. You can’t just plunk down in front of a screen and do computer stuff non-stop & just bounce back again at the start of each new day. So in the past several years, getting outside more started to improve my well-being and address those complaints. That revived an old interest in me: nature photography. That, in turn, landed me at iNaturalist, re-ignited my childhood love of learning about the natural world, & hooked me on a regular habit of making observations & uploading them to iNat ever since.

Second, back in the late nineties, I wrote a little library loans renewal reminder project in Python. Python was a pleasure to work with, but that project never took off and soon was forgotten. Now once again, decades later, Python is a delight to be writing in, with its focus on writing readable code & backed by a strong culture of education.

Where Python came to bear on this new hobby was when the naturalists on the iNaturalist Discord server became a part of my life. Last spring, I stumbled upon this group & started hanging out. On this platform, we share what we are finding, we talk about those findings, and we challenge each other to get better at it. It wasn’t long before the idea to write some code to access the iNaturalist platform directly from our conversations started to take shape.

Now, ideally, what happened next would have been for an open platform, but this is where the community is. In many ways, too, other chat platforms (like IRC) are not as capable as Discord at supporting the image-rich chat experience we enjoy. Thus, it seemed that’s where the code had to be. Dronefly, an open source Python bot for naturalists built on the Red DiscordBot framework, was born in the summer of 2019.

Dronefly is still alpha stage software, but in the short space of six months, has grown to roughly 3k lines of code and is used by hundreds of users across 9 different Discord servers. It includes some innovative features requested by our users like the related command to discover the nearest common ancestor of one or more named taxa, and the map command to easily access a range map on the platform for all the named taxa. So far as I know, no equivalent features exist yet on the iNat website or apps for mobile. Commands like these put iNat data directly at users’ fingertips in chat, improving understanding of the material with minimal interruption to the flow of conversation.

This tutorial gives an overview of Dronefly’s features. If you’re intrigued, please look me up on the iNaturalist Discord server following the invite from the tutorial. You can try out the bot there, and I’d be happy to talk to you about our work. Even if this is not your thing, do have a look at iNaturalist itself. Perhaps, like me, you’ll find in this platform a fun, rewarding, & socially significant outlet that gets you outside more, with all the benefits that go along with that.

That’s what has been keeping me busy lately. I hope all my Debian friends are well & finding joy in what you’re doing. Keep up the good work!

Russell Coker: DisplayPort and 4K

Saturday 15th of February 2020 11:00:07 PM
The Problem

Video playback looks better with a higher scan rate. A lot of content that was designed for TV (EG almost all historical documentaries) is going to be 25Hz interlaced (UK and Australia) or 30Hz interlaced (US). If you view that on a low refresh rate progressive scan display (EG a modern display at 30Hz) then my observation is that it looks a bit strange. Things that move seem to jump a bit and it’s distracting.

Getting HDMI to work with 4K resolution at a refresh rate higher than 30Hz seems difficult.

What HDMI Can Do

According to the HDMI Wikipedia page [1], HDMI 1.3–1.4b (introduced in June 2006) supports 30Hz refresh at 4K resolution, and if you use 4:2:0 chroma subsampling (see the Chroma Subsampling Wikipedia page [2]) you can do 60Hz or 75Hz on HDMI 1.3–1.4b. Basically for colour 4:2:0 means half the horizontal and half the vertical resolution while giving the same resolution for monochrome. For video that apparently works well (4:2:0 is standard for Blu-ray) and for games it might be OK, but for text (my primary use of computers) it would suck.

So I need support for HDMI 2.0 (introduced in September 2013) on the video card and monitor to do 4K at 60Hz. Apparently none of the combinations of video card and HDMI cable I use for Linux support that.

HDMI Cables

The Wikipedia page alleges that you need either a “Premium High Speed HDMI Cable” or a “Ultra High Speed HDMI Cable” for 4K resolution at 60Hz refresh rate. My problems probably aren’t related to the cable as my testing has shown that a cheap “High Speed HDMI Cable” can work at 60Hz with 4K resolution with the right combination of video card, monitor, and drivers. A Windows 10 system I maintain has a Samsung 4K monitor and a NVidia GT630 video card running 4K resolution at 60Hz (according to Windows). The NVidia GT630 card is one that I tried on two Linux systems at 4K resolution and causes random system crashes on both, it seems like a nice card for Windows but not for Linux.

Apparently the HDMI devices test the cable quality and use whatever speed seems to work (the cable isn’t identified to the devices). The prices at a local store are $3.98 for “high speed”, $19.88 for “premium high speed”, and $39.78 for “ultra high speed”. It seems that trying a “high speed” cable first before buying an expensive cable would make sense, especially for short cables which are likely to be less susceptible to noise.

What DisplayPort Can Do

According to the DisplayPort Wikipedia page [3] versions 1.2–1.2a (introduced in January 2010) support HBR2 which on a “Standard DisplayPort Cable” (which probably means almost all DisplayPort cables that are in use nowadays) allows 60Hz and 75Hz 4K resolution.

Comparing HDMI and DisplayPort

In summary to get 4K at 60Hz you need 2010 era DisplayPort or 2013 era HDMI. Apparently some video cards that I currently run for 4K (which were all bought new within the last 2 years) are somewhere between a 2010 and 2013 level of technology.

Also my testing (and reading review sites) shows that it’s common for video cards sold in the last 5 years or so to not support HDMI resolutions above FullHD, that means they would be HDMI version 1.1 at the greatest. HDMI 1.2 was introduced in August 2005 and supports 1440p at 30Hz. PCIe was introduced in 2003 so there really shouldn’t be many PCIe video cards that don’t support HDMI 1.2. I have about 8 different PCIe video cards in my spare parts pile that don’t support HDMI resolutions higher than FullHD so it seems that such a limitation is common.

The End Result

For my own workstation I plugged a DisplayPort cable between the monitor and video card and a Linux window appeared (from KDE I think) offering me some choices about what to do. I chose to switch to the “new monitor” on DisplayPort, which defaulted to 60Hz. After that change TV shows on NetFlix and Amazon Prime both look better. So it’s a good result.

As an aside DisplayPort cables are easier to scrounge as the HDMI cables get taken by non-computer people for use with their TV.


Keith Packard: iterative-splines

Saturday 15th of February 2020 05:55:57 AM
Decomposing Splines Without Recursion

To make graphics usable in Snek, I need to avoid using a lot of memory, especially on the stack as there's no stack overflow checking on most embedded systems. Today, I worked on how to draw splines with a reasonable number of line segments without requiring any intermediate storage. Here's the results from this work:

The Usual Method

The usual method I've used to convert a spline into a sequence of line segments is to split the spline in half using De Casteljau's algorithm recursively until the spline can be approximated by a straight line within a defined tolerance.

Here's an example from twin:

static void
_twin_spline_decompose (twin_path_t    *path,
                        twin_spline_t  *spline,
                        twin_dfixed_t  tolerance_squared)
{
    if (_twin_spline_error_squared (spline) <= tolerance_squared) {
        _twin_path_sdraw (path, spline->a.x, spline->a.y);
    } else {
        twin_spline_t s1, s2;

        _de_casteljau (spline, &s1, &s2);
        _twin_spline_decompose (path, &s1, tolerance_squared);
        _twin_spline_decompose (path, &s2, tolerance_squared);
    }
}

The _de_casteljau function splits the spline at the midpoint:

static void
_lerp_half (twin_spoint_t *a, twin_spoint_t *b, twin_spoint_t *result)
{
    result->x = a->x + ((b->x - a->x) >> 1);
    result->y = a->y + ((b->y - a->y) >> 1);
}

static void
_de_casteljau (twin_spline_t *spline, twin_spline_t *s1, twin_spline_t *s2)
{
    twin_spoint_t ab, bc, cd;
    twin_spoint_t abbc, bccd;
    twin_spoint_t final;

    _lerp_half (&spline->a, &spline->b, &ab);
    _lerp_half (&spline->b, &spline->c, &bc);
    _lerp_half (&spline->c, &spline->d, &cd);
    _lerp_half (&ab, &bc, &abbc);
    _lerp_half (&bc, &cd, &bccd);
    _lerp_half (&abbc, &bccd, &final);

    s1->a = spline->a;
    s1->b = ab;
    s1->c = abbc;
    s1->d = final;

    s2->a = final;
    s2->b = bccd;
    s2->c = cd;
    s2->d = spline->d;
}

This is certainly straightforward, but suffers from an obvious flaw — there's unbounded recursion. With two splines in each stack frame, each containing eight coordinates, the stack will grow rapidly; four levels of recursion will consume space for more than 64 coordinates. This can easily overflow the stack of a tiny machine.

De Casteljau Splits At Any Point

De Casteljau's algorithm is not limited to splitting splines at the midpoint. You can supply an arbitrary position t, 0 < t < 1, and you will end up with two splines which, drawn together, exactly match the original spline. I use 1/2 in the above version because it provides a reasonable guess as to how an arbitrary spline might be decomposed efficiently. You can use any value and the decomposition will still work, it will just change the recursion depth along various portions of the spline.

Iterative Left-most Spline Decomposition

What our binary decomposition does is to pick points t₀ … tₙ such that the splines covering t₀..t₁ through tₙ₋₁..tₙ are all 'flat'. It does this by recursively bisecting the spline, storing two intermediate splines on the stack at each level. If we look at just how the first, or 'left-most', spline is generated, that can be represented as an iterative process. At each step in the iteration, we split the spline in half:

S' = _de_casteljau(s, 1/2)

We can re-write this using the broader capabilities of the De Casteljau algorithm by splitting the original spline at decreasing points along it:

S[n] = _de_casteljau(s0, (1/2)ⁿ)

Now recall that the De Casteljau algorithm generates two splines, not just one. One describes the spline from 0..(1/2)ⁿ, the second the spline from (1/2)ⁿ..1. This gives us an iterative approach to generating a sequence of 'flat' splines for the whole original spline:

while S is not flat:
    n = 1
    do
        Sleft, Sright = _de_casteljau(S, (1/2)ⁿ)
        n = n + 1
    until Sleft is flat
    result ← Sleft
    S = Sright
result ← S

We've added an inner loop that wasn't needed in the original algorithm, and we're introducing some cumulative errors as we step around the spline, but we don't use any additional memory at all.

Final Code

Here's the full implementation:

/*
 * Copyright © 2020 Keith Packard <>
 *
 * This program is free software; you can redistribute it and/or modify
 * it under the terms of the GNU General Public License as published by
 * the Free Software Foundation, either version 3 of the License, or
 * (at your option) any later version.
 *
 * This program is distributed in the hope that it will be useful, but
 * WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * General Public License for more details.
 *
 * You should have received a copy of the GNU General Public License along
 * with this program; if not, write to the Free Software Foundation, Inc.,
 * 51 Franklin St, Fifth Floor, Boston, MA 02110-1301, USA.
 */

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

typedef float point_t[2];
typedef point_t spline_t[4];

#define SNEK_DRAW_TOLERANCE 0.5f

/* Is this spline flat within the defined tolerance */
static bool
_is_flat(spline_t spline)
{
    /*
     * This computes the maximum deviation of the spline from a
     * straight line between the end points.
     *
     * From
     */
    float ux = 3.0f * spline[1][0] - 2.0f * spline[0][0] - spline[3][0];
    float uy = 3.0f * spline[1][1] - 2.0f * spline[0][1] - spline[3][1];
    float vx = 3.0f * spline[2][0] - 2.0f * spline[3][0] - spline[0][0];
    float vy = 3.0f * spline[2][1] - 2.0f * spline[3][1] - spline[0][1];

    ux *= ux;
    uy *= uy;
    vx *= vx;
    vy *= vy;
    if (ux < vx)
        ux = vx;
    if (uy < vy)
        uy = vy;
    return (ux + uy <= 16.0f * SNEK_DRAW_TOLERANCE * SNEK_DRAW_TOLERANCE);
}

static void
_lerp (point_t a, point_t b, point_t r, float t)
{
    int i;

    for (i = 0; i < 2; i++)
        r[i] = a[i]*(1.0f - t) + b[i]*t;
}

static void
_de_casteljau(spline_t s, spline_t s1, spline_t s2, float t)
{
    point_t first[3];
    point_t second[2];
    int i;

    for (i = 0; i < 3; i++)
        _lerp(s[i], s[i+1], first[i], t);
    for (i = 0; i < 2; i++)
        _lerp(first[i], first[i+1], second[i], t);
    _lerp(second[0], second[1], s1[3], t);

    for (i = 0; i < 2; i++) {
        s1[0][i] = s[0][i];
        s1[1][i] = first[0][i];
        s1[2][i] = second[0][i];
        s2[0][i] = s1[3][i];
        s2[1][i] = second[1][i];
        s2[2][i] = first[2][i];
        s2[3][i] = s[3][i];
    }
}

static void
_spline_decompose(void (*draw)(float x, float y), spline_t s)
{
    float t;
    spline_t s1, s2;

    (*draw)(s[0][0], s[0][1]);

    /* If s is flat, we're done */
    while (!_is_flat(s)) {
        t = 1.0f;

        /* Iterate until s1 is flat */
        do {
            t = t/2.0f;
            _de_casteljau(s, s1, s2, t);
        } while (!_is_flat(s1));

        /* Draw to the end of s1 */
        (*draw)(s1[3][0], s1[3][1]);

        /* Replace s with s2 */
        memcpy(&s[0], &s2[0], sizeof (spline_t));
    }
    (*draw)(s[3][0], s[3][1]);
}

void
draw(float x, float y)
{
    printf("%8g, %8g\n", x, y);
}

int main(int argc, char **argv)
{
    spline_t spline = {
        {   0.0f,    0.0f },
        {   0.0f,  256.0f },
        { 256.0f, -256.0f },
        { 256.0f,    0.0f }
    };

    _spline_decompose(draw, spline);
    return 0;
}

Russell Coker: Self Assessment

Saturday 15th of February 2020 03:57:00 AM
Background Knowledge

The Dunning Kruger Effect [1] is something everyone should read about. It’s the effect where people who are bad at something rate themselves higher than they deserve because their inability to notice their own mistakes prevents improvement, while people who are good at something rate themselves lower than they deserve because noticing all their mistakes is what allows them to improve.

Noticing all your mistakes all the time isn’t great (see Impostor Syndrome [2] for where this leads).

Erik Dietrich wrote an insightful article “How Developers Stop Learning: Rise of the Expert Beginner” [3] which I recommend that everyone reads. It is about how some people get stuck at a medium level of proficiency and find it impossible to unlearn bad practices which prevent them from achieving higher levels of skill.

What I’m Concerned About

A significant problem in large parts of the computer industry is that it’s not easy to compare various skills. In the sport of bowling (which Erik uses as an example) it’s easy to compare your score against people anywhere in the world, if you score 250 and people in another city score 280 then they are more skilled than you. If I design an IT project that’s 2 months late on delivery and someone else designs a project that’s only 1 month late are they more skilled than me? That isn’t enough information to know. I’m using the number of months late as an arbitrary metric of assessing projects, IT projects tend to run late and while delivery time might not be the best metric it’s something that can be measured (note that I am slightly joking about measuring IT projects by how late they are).

If the last project I personally controlled was 2 months late and I’m about to finish a project 1 month late does that mean I’ve increased my skills? I probably can’t assess this accurately as there are so many variables. The Impostor Syndrome factor might lead me to think that the second project was easier, or I might get egotistical and think I’m really great, or maybe both at the same time.

This is one of many resources recommending timely feedback for education [4], it says “Feedback needs to be timely” and “It needs to be given while there is still time for the learners to act on it and to monitor and adjust their own learning”. For basic programming tasks such as debugging a crashing program the feedback is reasonably quick. For longer term tasks like assessing whether the choice of technologies for a project was good the feedback cycle is almost impossibly long. If I used product A for a year long project does it seem easier than product B because it is easier or because I’ve just got used to its quirks? Did I make a mistake at the start of a year long project and if so do I remember why I made that choice I now regret?

Skills that Should be Easy to Compare

One would imagine that martial arts is a field where people have very realistic understanding of their own skills, a few minutes of contest in a ring, octagon, or dojo should show how your skills compare to others. But a YouTube search for “no touch knockout” or “chi” shows that there are more than a few “martial artists” who think that they can knock someone out without physical contact – with just telepathy or something. George Dillman [5] is one example of someone who had some real fighting skills until he convinced himself that he could use mental powers to knock people out. From watching YouTube videos it appears that such people convince the members of their dojo of their powers, and those people then faint on demand “proving” their mental powers.

The process of converting an entire dojo into believers in chi seems similar to the process of converting a software development team into “expert beginners”, except that martial art skills should be much easier to assess.

Is it ever possible to assess any skills if people trying to compare martial art skills often do it so badly?


It seems that any situation where one person is the undisputed expert has a risk of the “chi” problem if the expert doesn’t regularly meet peers to learn new techniques. If someone like George Dillman, or one of the “expert beginners” that Erik Dietrich refers to, were to regularly meet other people with similar skills and accept feedback from them, they would be much less likely to become a “chi” master or “expert beginner”. For the computer industry, meetups seem the best solution to this: whatever your IT skills are, you can find a meetup where you can meet people with more skills than you in some area.

Here’s one of many guides to overcoming Impostor Syndrome [5]. Actually succeeding in following the advice of such web pages is not going to be easy.

I wonder if getting a realistic appraisal of your own skills is even generally useful. Maybe the best thing is to just recognise enough things that you are doing wrong to be able to improve and to recognise enough things that you do well to have the confidence to do things without hesitation.

Related posts:

  1. Load Average Monitoring For my ETBE-Mon [1] monitoring system I recently added a...
  2. university degrees Recently someone asked me for advice on what they can...
  3. priorities for heartbeat services Currently I am considering the priority scheme to use for...

Anisa Kuci: Outreachy post 4 - Career opportunities

Friday 14th of February 2020 12:21:15 PM

As mentioned in my last blog posts, Outreachy is very interesting and I got to learn a lot already. Two months have already passed by quickly and there is still one month left for me to continue working and learning.

As I imagine all the other interns are thinking now, I am also thinking about what is going to be the next step for me. After such an interesting experience as this internship, thinking about the next steps is not that simple.

I have been contributing to Free Software projects for quite some years now. I have been part of the only FLOSS community in my country for many years and I grew up together with the community, advocating free software in and around Albania.

I have contributed to many projects, including Mozilla, OpenStreetMap, Debian, GNOME, Wikimedia projects etc. So, I am sure, the FLOSS world is definitely the right place for me to be. I have helped communities grow and I am very enthusiastic about it.

I have grown and evolved as a person through contributing to all the projects I have mentioned above. I have gained knowledge that I would not have had a chance to acquire if it were not for the “sharing knowledge” ideology that is so strong in the FLOSS environment.

Through organizing events big and small, from 300-person conferences to 30-person bug squashing parties to 5-person strategy workshops, I have been able to develop skills, because the community trusted me with responsibility in event organizing even before I was able to prove myself. I have been supported by great mentors who helped me learn on the job and left me with practical knowledge that I am happy to continue applying in the FLOSS community. I am thinking about formalizing my education in the marketing or communication areas, to gain some academic background and further strengthen my practical skills.

During Outreachy I have learned to use the bash command line much better. I have learned LaTeX, as it was one of the tools I needed to work on the fundraising materials. I have also improved a lot at using git and feel much more confident now. I have worked a lot on fundraising while also learning Python very intensively, and programming is definitely a skill that I would love to deepen.

I know that foreign languages are something that I enjoy, as I speak English, Italian, Greek and of course my native language Albanian, but lately I learned that programming languages can be as much fun as the natural languages and I am keen on learning more of both.

I love working with people, so I hope in the future I will be able to continue working in environments where you interact with a diverse set of people.

Dirk Eddelbuettel: RcppSimdJson 0.0.1 now on CRAN!

Friday 14th of February 2020 03:00:00 AM

A fun weekend-morning project, namely wrapping the outstanding simdjson library by Daniel Lemire (with contributions by Geoff Langdale, John Keiser and many others) into something callable from R via a new package RcppSimdJson, led to a first tweet on January 20, a reference to the brand new github repo, and CRAN upload a few days later—and then two weeks of nothingness.

Well, a little more than nothing as Daniel is an excellent “upstream” to work with who promptly incorporated two changes that arose from preparing the CRAN upload. So we did that. But CRAN being as busy and swamped as they are we needed to wait. The ten days one is warned about. And then some more. So yesterday I did a cheeky bit of “bartering” as Kurt wanted a favour with an updated digest version so I hinted that some reciprocity would be appreciated. And lo and behold he admitted RcppSimdJson to CRAN. So there it is now!

We have some upstream changes already in git, but I will wait a few days to let a week pass before uploading the now synced upstream code. Anybody who wants it sooner knows where to get it on GitHub.

simdjson is a gem. Via some very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it manages to parse gigabytes of JSON per second, which is quite mind-boggling. I highly recommend the video of the recent talk by Daniel Lemire at QCon (which was also voted best talk).

The NEWS entry (from a since-added NEWS file) for the initial RcppSimdJson upload follows.

Changes in version 0.0.1 (2020-01-24)
  • Initial CRAN upload of first version

  • Comment-out use of stdout (now updated upstream)

  • Deactivate use of computed GOTOs for compiler compliance and CRAN Policy via #define

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Jonathan Carter: Initial experiments with the Loongson Pi 2K

Thursday 13th of February 2020 08:29:16 PM

Recently, Loongson made some Pi 2K boards available to Debian developers and Aron Xu was kind enough to bring me one to FOSDEM earlier this month. It’s a MIPS64 based board with 2GB RAM, 2 gigabit ethernet cards, an m.2 (SATA) disk slot and a whole bunch more I/O. More details about the board itself are available on the Debian wiki; here is a quick board tour from there:

On my previous blog post I still had the protective wrapping on the acrylic case. Here it is all peeled off and polished after Holger pointed that out to me on IRC. I’ll admit I kind of liked the earthy feel that the protective covers had, but this is nice too.

The reason why I wanted this board is that I don’t have access to any MIPS64 hardware whatsoever, and it can be really useful for getting Calamares to run properly on MIPS64 on Debian. Calamares itself builds fine on this platform, but calamares-settings-debian will only work on amd64 and i386 right now (where it will either install grub-efi or grub-pc depending on which mode you booted in; otherwise it will crash during installation). I already have lots of plans for the Bullseye release cycle (and even for Calamares specifically), so I’m not sure if I’ll get there, but I’d like to get support for mips64 and arm64 into calamares-settings-debian for the bullseye release. I think it’s mostly just a case of detecting the platforms properly and installing/configuring the right bootloaders. Hopefully it’s that simple.

In the meantime, I decided to get to know this machine a bit better. I’m curious how it could be useful to me otherwise. All its expansion ports definitely seems interesting. First I plugged it into my power meter to check what power consumption looks like. According to this, it typically uses between 7.5W and 9W and about 8.5W on average.

I initially tried it out on an old Sun monitor that I salvaged from a recycling heap. It wasn’t working anymore, but my anonymous friend replaced its power supply and its CFL backlight with an LED backlight, and now it’s a really nice 4:3 monitor for my vintage computers. On a side note, if you’re into electronics, follow his YouTube channel where you can see him repair things. Unfortunately the board doesn’t like this screen by default (just a black screen when xorg started). I didn’t check whether it was just an xorg configuration issue or a hardware limitation, but I moved it to an old 720P TV that I usually use for my mini collection and it displayed fine there. I thought I’d mention it in case someone tries this board and wonders why they just see a black screen after it boots.

I was curious whether these Ethernet ports could realistically do anything more than 100mbps (sometimes they go on a bus that maxes out way before gigabit does), so I installed iperf3 and gave it a shot. This went through 2 switches that have some existing traffic on them, but the ~85MB/s I got on my first test completely satisfied me that these ports are plenty fast enough.

Since I first saw the board, I was curious about the PCIe slot. I attached an older NVidia card (one that still runs fine with the free Nouveau driver), attached some external power to the card, and booted it all up…

The card powers on and the fan enthusiastically spins up, but sadly the card is not detected on the Loongson board. I think you need some PC BIOS-equivalent stuff to poke the card in the right places so that it boots up properly.

Disk performance is great, as can be expected with the SSD it has on board. It’s significantly better than the extremely slow flash you typically get on development boards.

I was starting to get curious about whether Calamares would run on this. So I went ahead and installed it along with calamares-settings-debian. I wasn’t even sure it would start up, but lo and behold, it did. This is quite possibly the first time Calamares has ever started up on a MIPS64 machine. It started up in Chinese since I haven’t changed the language settings yet in Xfce.

I was curious whether Calamares would start up on the framebuffer. Linux framebuffer support can be really flaky on platforms with weird/incomplete Linux drivers. I ran ‘calamares -platform linuxfb’ from a virtual terminal and it just worked.

This is all very promising and makes me a lot more eager to get it all working properly and get a nice image generated that you can use with Calamares to install Debian on a MIPS64 board. Unfortunately, at least for now, this board still needs its own kernel, so it would need its own unique installation image. Hopefully all the special bits will make it into the mainline Linux kernel before too long. Graphics performance wasn’t good, but I noticed that they do have some drivers on GitHub that I haven’t tried yet; that’s an experiment for another evening.


  • Price: A few people asked about the price, so I asked Aron if he could share some pricing information. I got this one for free; it’s an unreleased demo model. At least two models might be released based on this: a smaller board with fewer pinouts for about €100, and the current demo version at about $200 (CNY 1399), so the final version might cost somewhere in that ballpark too. These aren’t any kind of final prices, and I don’t represent Loongson in any capacity, but at least this should give you some idea of what it would cost.
  • More boards: Not all Debian Developers who requested a board have received one; Aron said that more boards should become available by March/April.

Romain Perier: Meetup Debian Toulouse

Thursday 13th of February 2020 06:50:20 PM
Hi there !

My company, Viveris, is opening its office to host a Debian Meetup in Toulouse this summer (June 5th or June 12th).

Everyone is welcome at this event. We're currently looking for volunteers to present demos, lightning talks or full talks (following the talks, any kind of hacking session is possible, like bug triaging, coding sprints, etc.).

Any kind of topic is welcome.

See the announcement (in French) for more details.

Dirk Eddelbuettel: digest 0.6.24: Some more refinements

Wednesday 12th of February 2020 11:17:00 PM

Another new version of digest arrived on CRAN (and also on Debian) earlier today.

digest creates hash digests of arbitrary R objects (using the md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, and spookyhash algorithms) permitting easy comparison of R language objects. It is a fairly widely-used package (currently listed at 889k monthly downloads with 255 direct reverse dependencies and 7340 indirect reverse dependencies) as many tasks may involve caching of objects for which it provides convenient general-purpose hash key generation.

This release comes a few months after the previous release. It contains a few contributed fixes, some of which prepare for R 4.0.0, currently in development. This includes a testing change for the matrix/array class, and corrects the registration of the PMurHash routine as pointed out by Tomas Kalibera and Kurt Hornik (who also kindly reminded me to finally upload this, as I had made the fix already in December). Moreover, Will Landau sped up one operation affecting his popular drake pipeline toolkit. Lastly, Thierry Onkelinx corrected one more aspect related to sha1.

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Paulo Henrique de Lima Santana: Bits from MiniDebCamp Brussels and FOSDEM 2020

Wednesday 12th of February 2020 10:00:00 AM
Bits from MiniDebCamp Brussels and FOSDEM 2020

I traveled to Brussels from January 28th to February 6th to join MiniDebCamp and FOSDEM 2020. It was my second trip to Brussels; I was there in 2019 for the Video Team Sprint and FOSDEM.

MiniDebCamp took place at Hackerspace Brussels (HSBXL) for 3 days (January 29-31). My initial idea was to travel on the 27th and arrive in Brussels on the 28th so I could rest and attend MiniDebCamp from the first day, but I bought a ticket leaving Brazil on the 28th because it was cheaper.

Trip from Curitiba to Brussels

I left Curitiba on the 28th at 13:20 and arrived in São Paulo at 14:30. The flight from São Paulo to Munich departed at 18h, and after 12 hours I arrived there at 10h (local time). The flight was 30 minutes late because we had to wait for airport staff to remove ice from the ground. I was worried because my flight to Brussels would depart at 10:25 and I still had to get through immigration.

After walking a lot, I arrived at the immigration desk (there was no line), got my passport stamped, walked a lot again, took a train, and arrived at my gate, where that flight was late too. So everything was going well. I departed Munich at 10:40 and arrived in Brussels on the 29th at 12h.

I went from the airport to the Hostel Galia by bus, train, and another bus to check in and leave my luggage. On the way I had lunch at “Station Brussel Noord” because I was really hungry, and I arrived at the hostel at 15h.

My reservation was in a shared dorm, and when I arrived I met Marcos, a Brazilian guy from Brasília who was there for an international Magic card competition. He was in Brussels for the first time and was a little lost about what to do in the city. I invited him to go downtown to look for a cellphone store because we needed to buy SIM cards. I wanted to buy one from Base, and the hostel front desk people told us to go to the store on Rue Neuve. I showed the Grand-Place to Marcos, and after we bought SIM cards we went to Primark because he needed to buy a towel. It was night, and we decided to buy food and have dinner at the hostel. I gave up on going to HSBXL because I was tired and thought it was not a good idea to go there for the first time at night.

MiniDebCamp day 1

On Thursday (30th) morning I went to HSBXL. I walked from the hostel to “Gare du Midi”, and after walking from one side to the other, I finally found the bus stop. I got off the bus at the fourth stop, in front of the hackerspace building. It was a little hard to find the right entrance, but I managed. I arrived at the HSBXL room, talked to the other DDs there, and found an empty table for my laptop. Other DDs kept arriving throughout the day.

I read and answered e-mails and went out for a walk around Anderlecht to get to know the city and look for a place to have lunch, because I didn’t want to eat a sandwich at the restaurant in the building. I stopped at the Lidl and Aldi stores to buy some food to eat later, and stopped at a Turkish restaurant for lunch; the food was very good. After that, I decided to walk a little more and visit the Jean-Claude Van Damme statue to take some photos :-)

Back at HSBXL, my main interest at MiniDebCamp was joining the DebConf Video Team sprint to learn how to set up a voctomix and gateway machine to be used at MiniDebConf Maceió 2020. I asked Nicolas some questions about that, and he suggested I do a new installation on the Video Team machine using Buster.

Using a USB installer and ansible playbooks, I installed Buster and set the machine up as voctotest. I had already done this setup at home using a simple machine without a Blackmagic card or a camera. From that point I didn’t know what to do, so Nicolas came and set the machine up first as voctomix, and then as gateway, while I watched and learned. After a while, everything worked perfectly with a camera.

It was night, and the group ordered some pizzas to eat with beers sold by HSBXL. I was celebrating too, because during the day I had received messages and a call from Rentcars: I was hired by them! Before traveling I had gone to an interview at Rentcars, and I got the positive answer while in Brussels.

Before I left the hackerspace, I received the door codes to open HSBXL early the next day. Some days before MiniDebCamp, Holger had asked if someone could open the room on Friday morning, and I had answered that I could. I left at 22h and went back to the hostel to sleep.

MiniDebCamp day 2

On Friday I arrived at HSBXL at 9h, opened the room, and took some photos of the empty space. It is amazing that we can use spaces like this in Europe. Last year I was at MiniDebConf Hamburg at Dock Europe. I miss this kind of building and hackerspace in Curitiba.

I installed and set up the Video Team machine again, but this time alone, following what Nicolas had done before. Everything worked perfectly again. Nicolas asked me to create a new ansible playbook combining voctomix and gateway to make installation easier, submit it as an MR, and test it.

I went out to have lunch at the same restaurant as the day before, and I discovered there was a Leonidas factory outlet in front of HSBXL, meaning I could buy Belgian chocolates cheaper. I went there and bought a box with 1.5kg of chocolates.

When I came back to HSBXL, I started to test the new ansible playbook. The test took longer than I expected, and at the end of the day Nicolas needed to take the equipment away. It was really great to get hands-on experience with the real equipment used by the Video Team. I learned a lot!

To celebrate the end of MiniDebCamp, we had free sponsored beer! I have to say I drank too much, and getting back to the hostel that night was complicated :-)

A complete report from DebConf Video Team can be read here.

Many thanks to Nicolas Dandrimont for teaching me Video Team stuff, to Kyle Robbertze for setting up the Video Sprint, to Holger Levsen for organizing MiniDebCamp, and to HSBXL people for receiving us there.

FOSDEM day 1

FOSDEM 2020 took place at ULB on February 1st and 2nd. On the first day I took a train and heard a group of Brazilians speaking Portuguese; they were going to FOSDEM too. I arrived there around 9:30 and went to the Debian booth, because I had volunteered to help and had brought t-shirts from Brazil to sell. It was madness, with people buying Debian stuff.

After a while I had to leave the booth because I had volunteered to film the talks in the Janson auditorium from 11h to 13h. I had done this job last year and decided to do it again because it is a way to help the event, and they gave me a t-shirt and a free meal ticket that I exchanged for two sandwiches :-)

After lunch, I walked around the booths, got some stickers, talked with people, and drank some beers at the OpenSUSE booth until the end of the day. I left FOSDEM, went to the hostel to leave my bag, and then went to the Debian dinner organized by Marco d’Itri at Chezleon.

The dinner was great, with 25 very nice Debian people. Afterwards we ate waffles, and some of the group went to Delirium, but I decided to go to the hostel to sleep.

FOSDEM day 2

On the second and last day I arrived around 9h, spent some time at the Debian booth, and went to the Janson auditorium to help again from 10h to 13h.

I got the free meal ticket, and after lunch I walked around, visited booths, and went to the Community devroom to watch talks. The first was “Recognising Burnout” by Andrew Hutchings; listening to him, I believe I had burnout symptoms while organizing DebConf19. The second was “How Does Innersource Impact on the Future of Upstream Contributions?” by Bradley Kuhn. Both talks were great.

After FOSDEM ended, a group of us went to have dinner at a restaurant near ULB. We had a great time together. After the dinner we took the same train and took a group photo.

Two days to join Brussels

With MiniDebCamp and FOSDEM over, I had Monday and Tuesday free before returning to Brazil on Wednesday. I wanted to join Config Management Camp in Ghent, but I decided to stay in Brussels and visit some places. I visited:

  • Carrefour - to buy beers to bring to Brazil :-)

Last day and returning to Brazil

On Wednesday (5th) I woke up early to finish packing and check out. I left the hostel and took a bus, a train, and another bus to Brussels Airport. My flight departed at 15:05 for Frankfurt, arriving there at 15:55. I thought about visiting the city, since I had to wait for 6 hours and had read it was possible to look around in that time, but I was very tired and decided to stay at the airport.

I walked to my gate, went through immigration to get my passport stamped, and waited until 22:05, when my flight departed for São Paulo. After 12 hours flying, I arrived in São Paulo at 6h (local time). In São Paulo, when arriving from an international flight, we must collect all our luggage and go through customs. After I dropped my luggage off with the domestic airline, I went to the gate to wait for my flight to Curitiba.

The flight was due to depart at 8:30 but was 20 minutes late. So I arrived in Curitiba at 10h, took an Uber, and finally I was home.

Last words

I wrote a diary (in Portuguese) telling about each of my days in Brussels. It can be read starting here.

All my photos are here

Many thanks to Debian for sponsoring my trip to Brussels, and to DPL Sam Hartman for approving it. It was a unique opportunity to go to Europe to meet and work with a lot of DDs, and to participate in a very important worldwide free software event.

Louis-Philippe Véronneau: Announcing miniDebConf Montreal 2020 -- August 6th to August 9th 2020

Wednesday 12th of February 2020 05:00:00 AM

This is a guest post by the miniDebConf Montreal 2020 orga team on pollo's blog.

Dear Debianites,

We are happy to announce miniDebConf Montreal 2020! The event will take place in Montreal, at Concordia University's John Molson School of Business from August 6th to August 9th 2020. Anybody interested in Debian development is welcome.

Following the announcement of the DebConf20 location, our desire to participate became incompatible with our commitment toward the Boycott, Divestment and Sanctions (BDS) campaign launched by Palestinian civil society in 2005. Hence, many active Montreal-based Debian developers, along with a number of other Debian developers, have decided not to travel to Israel in August 2020 for DebConf20.

Nevertheless, recognizing the importance of DebConf for the health of both the developer community and the project as a whole, we decided to organize a miniDebConf just prior to DebConf20, in the hope that fellow developers who might otherwise have skipped DebConf entirely this year will join us instead. Fellow developers who decide to travel to both events are of course most welcome.

Registration is open

Registration is open now, and free, so go add your name and details on the Debian wiki.

We'll accept registrations until July 25th, but don't wait too long before making your travel plans! Finding reasonable accommodation in Montreal during the summer can be hard if you don't plan in advance.

We have you covered with lots of attendee information already.

Sponsors wanted

We're looking for sponsors willing to help making this event possible. Information on sponsorship tiers can be found here.

Get in touch

We gather in the #debian-quebec IRC channel and on the mailing list.

Norbert Preining: MuPDF, QPDFView and other Debian updates

Wednesday 12th of February 2020 03:03:33 AM

For those interested, I have updated mupdf (1.16.1), pymupdf (1.16.10), and qpdfview (current bzr sources) to the latest versions and added to my local Debian apt repository:

deb unstable main
deb-src unstable main

QPDFView now has the Fitz (MuPDF) backend available.

At the same time I have updated Elixir to 1.10.1. All packages are available as source and amd64 binaries. Information on the other apt repositories available here can be found in this post.


Sean Whitton: Traditional Perl 5 classes and objects

Tuesday 11th of February 2020 04:23:38 PM

Last summer I read chromatic’s Modern Perl, and was recommended to default to using Moo or Moose to define classes, rather than writing code to bless things into objecthood myself. At the time the project I was working on needed to avoid any dependencies outside of the Perl core, so I made a mental note of the advice, but didn’t learn how to use Moo or Moose. I do remember feeling like I was typing out a lot of boilerplate, and wishing I could use Moo or Moose to reduce that.

In recent weeks I’ve been working on a Perl distribution which can freely use non-core dependencies from CPAN, and so right from the start I used Moo to define my classes. It seemed like a no-brainer because it’s more declarative; it didn’t seem like there could be any disadvantages.

At one point, when writing a new class, I got stuck. I needed to call one of the object’s methods immediately after instantiation of the object. BUILDARGS is, roughly, the constructor for Moo/Moose classes, so I started there, but you don’t have access to the new object during BUILDARGS, so you can’t simply call its methods on it. So what I needed to do was change my design around so as to be more conformant to the Moo/Moose view of the world, such that the work of the method call could get done at the right time. I mustn’t have been in a frame of mind for that sort of thinking at the time because what I ended up doing was dropping Moo from the package and writing a constructor which called the method on the new object, after blessing the hash, but before returning a hashref to the caller.

This was my first experience of having the call to bless() not be the last line of my constructor, and I believe that this simple dislocation significantly improved my grip on core Perl 5 classes and objects: the point is that they’re not declarative—they’re collections of functionality to operate on encapsulated data, where the instantiation of that data, too, is a piece of functionality. I had been thinking about classes too declaratively, and this is why writing out constructors and accessors felt like boilerplate. Now writing those out feels like carefully setting down precisely what functionality for operating on the encapsulated data I want to expose. I also find core Perl 5 OO quite elegant (in fact I find pretty much everything about Perl 5 highly elegant, except of course for its dereferencing syntax; not sure why this opinion is so unpopular).
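To make the pattern concrete, here is a minimal sketch of a traditional core Perl 5 class in the spirit described above. The Counter class, its single attribute, and the method called during construction are all invented for illustration; the point is only the shape: bless() need not be the last line of the constructor, so the object's own methods can do part of the instantiation work.

```perl
package Counter;
use strict;
use warnings;
use Carp qw(croak);

# A plain hand-written constructor: we bless the hash, then call a
# method on the new object before handing it back to the caller.
sub new {
    my ($class, %args) = @_;
    croak "start must be an integer"
        unless defined $args{start} && $args{start} =~ /\A-?\d+\z/;
    my $self = bless { count => $args{start} }, $class;
    $self->increment;    # instantiation work done via a method call
    return $self;
}

# A hand-written accessor: we expose exactly the data we choose to.
sub count { $_[0]->{count} }

sub increment {
    my ($self) = @_;
    $self->{count}++;
    return $self;
}

package main;

my $c = Counter->new(start => 5);
print "count is ", $c->count, "\n";
```

The croak() call is the kind of plain Perlish argument checking discussed later in the post: throw an exception if the types aren't right, then get on with the business logic.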

I then came across the Cor proposal and followed a link to this recent talk criticising Moo/Moose. The speaker, Tadeusz Sośnierz, argues that Moo/Moose implicitly encourages you to have an accessor for each and every piece of the encapsulated data in your class, which is bad OO. Sośnierz pointed out that if you take care to avoid generating all these accessors, while still having Moo/Moose store the arguments to the constructor provided by the user in the right places, you end up back with a new kind of boilerplate, which is Moo/Moose-specific, and arguably worse than what’s involved in defining core Perl 5 classes. So, he asks, if we are going to take care to avoid generating too many accessors, and thereby end up with boilerplate, what are we getting out of using Moo/Moose over just core Perl 5 OO? There is some functionality for typechecking and method signatures, and we have the ability to use roles instead of multiple-inheritance.

After watching Sośnierz talk, I have been rethinking about whether I should follow Modern Perl’s advice to default to using Moo/Moose to define new classes, because I want to avoid the problem of too many accessors. Considering the advantages of Moo/Moose Sośnierz ends up with at the end of his talk: I find the way that Perl provides parameters to subroutines and methods intuitive and flexible, and don’t see the need to build typechecking into that process—just throw some exceptions with croak() if the types aren’t right, before getting on with the business logic of the subroutine or method. Roles are a different matter. These are certainly an improvement on multiple inheritance. But there is Role::Tiny that you can use instead of Moo/Moose.

So for the time being it seems I should go back to blessing hashes, and that I should also get to grips with Role::Tiny. I don’t have a lot of experience with OO design, so can certainly imagine changing my mind about things like Perlish typechecking and subroutine signatures (I also don’t understand, yet, why some people find the convention of prefixing private methods and attributes with an underscore not to be sufficient—Cor wants to add attribute and method privacy to Perl). However, it seems sensible to avoid using things like Moo/Moose until I can be very clear in my own mind about what advantages using them is getting me. Bad OO with Moo/Moose seems worse than occasionally simplistic, occasionally tedious, but correct OO with the Perl 5 core.

Paulo Henrique de Lima Santana: My free software activities in January 2020

Tuesday 11th of February 2020 10:00:00 AM

Hello, this is my first monthly report about my activities in Debian and Free Software in general.

Since the end of DebConf19 in July 2019 I had been avoiding working on Debian stuff because the event was too stressful for me. For months I felt discouraged from contributing to the project, until December.


In December I watched two new video tutorial series from João Eriberto about:

  • Debian Packaging - using git and gbp, parts 1, 2, 3, 4, 5 and 6
  • Debian Packaging with docker, parts 1 and 2

Since then, I decided to update my packages using gbp and docker, and it has been great. In December and January I worked on the following packages.

I did QA Uploads of:

I adopted and packaged new release of:

  • ddir 2019.0505 closing bugs #903093 and #920066.

I packaged new releases of:

I packaged new upstream versions of:

I backported to buster-backports:

I packaged:

MiniDebConf Maceió 2020

I helped to edit the MiniDebConf Maceió 2020 website.

I wrote the sponsorship brochure and sent it to some Brazilian companies.

I sent a message with a call for activities to national and international mailing lists.

I sent a post to Debian Micronews.


I sent a message to the UFPR Education Director asking if we could use the Campus Rebouças auditorium to organize FLISOL there in April, but he declined. We are still looking for a venue for FLISOL.


I started to study DevOps culture and, to that end, I watched a lot of videos from LINUXtips.

And I read the book “Docker para desenvolvedores” written by Rafael Gomes.

MiniDebCamp in Brussels

I traveled to Brussels to join MiniDebCamp on January 29-31 and FOSDEM on February 1-2.

At MiniDebCamp my main interest was joining the DebConf Video Team sprint to learn how to set up a voctomix and gateway machine to be used at MiniDebConf Maceió 2020. I was able to set up the Video Team machine, installing Buster and using ansible playbooks. It was a very nice opportunity to learn how to do that.

A complete report from DebConf Video Team can be read here.

I wrote a diary (in Portuguese) covering each of my days in Brussels. It can be read starting here. I intend to write more in English about MiniDebCamp and FOSDEM in a separate post.

Many thanks to Debian for sponsoring my trip to Brussels. It was a unique opportunity to go to Europe and to meet and work with a lot of DDs.


I submitted an MR to the DebConf20 website fixing some texts.

I joined the WordPress Meetup.

I joined a live stream from Comunidade Debian Brasil to talk about MiniDebConf Maceió 2020.

I watched an interesting video, “Who is afraid of Debian Sid”, from the debxp channel.

I deleted the Agenda de eventos de Software Livre e Código Aberto because I wasn’t receiving events to add there and didn’t have free time to publicize it.

I started to write the list of FLOSS events for 2020 that I have kept on my website for many years.

Finally, I have been watching videos from DebConf19. So far, I have seen these great talks:

  • Bastidores Debian - entenda como a distribuição funciona
  • Benefícios de uma comunidade local de contribuidores FLOSS
  • Caninos Loucos: a plataforma nacional de Single Board Computers para IoT
  • Como obter ajuda de forma eficiente sobre Debian
  • Comunidades: o bom o ruim e o maravilhoso
  • O Projeto Debian quer você!
  • A newbie’s perspective towards Debian
  • Bits from the DPL
  • I’m (a bit) sick of maintaining (mostly) alone, please help

That’s all folks!

Markus Koschany: My Free Software Activities in January 2020

Monday 10th of February 2020 10:57:58 PM

Here is my monthly report (plus the first week of February) that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games
  • Once again Reiner Herrmann did a very good job updating some of the most famous FOSS games in Debian. I reviewed and sponsored supertux, supertuxkart 1.1 and love 11.3, as well as several updates to fix build failures with the latest version of scons in Debian. Reiner Herrmann, Moritz Mühlenhoff and Phil Wyett contributed patches to fix release critical bugs in netpanzer, boswars, btanks, and xboxdrv.
  • I packaged new upstream versions of minetest 5.1.1, empire 1.15 and bullet 2.89.
  • I backported freeciv 2.6.1 to buster-backports and
  • applied a patch by Asher Gordon to fix a teleporter bug in berusky2. He also submitted another patch to address even more bugs and I hope to review and upload a new revision soon.
Debian Java / Misc
  • As the maintainer I requested the removal of pyblosxom, a web blog engine written in Python 2. Unfortunately pyblosxom is no longer actively maintained and the port to Python 3 has never been finished. I thought it would be better to remove the package now since we have a couple of good alternatives like Hugo or Jekyll.
  • I packaged new upstream versions of wabt and privacybadger.
Debian LTS

This was my 47th month as a paid contributor and I have been paid to work 15 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • DLA-2065-1. Issued a security update for apache-log4j1.2 fixing 1 CVE.
  • DLA-2077-1. Issued a security update for tomcat7 fixing 2 CVE.
  • DLA-2078-1. Issued a security update for libxmlrpc3-java fixing 1 CVE.
  • DLA-2097-1. Issued a security update for ppp fixing 1 CVE.
  • DLA-2098-1. Issued a security update for ipmitool fixing 1 CVE.
  • DLA-2099-1. Issued a security update for checkstyle fixing 1 CVE.


Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 „Wheezy“. This was my twentieth month and I have been paid to work 10 hours on ELTS.

  • ELA-208-1. Issued a security update for tomcat7 fixing 2 CVE.
  • ELA-209-1. Issued a security update for linux fixing 41 CVE.
  • Investigated CVE-2019-17023 in nss which is needed to build and run OpenJDK 7. I found that the vulnerability did not affect this version of nss because of the incomplete and experimental support for TLS 1.3.

Thanks for reading and see you next time.

Ruby Team: Ruby Team Sprint 2020 in Paris - Day Five - We’ve brok^done it

Monday 10th of February 2020 10:45:59 PM

On our last day we met like every day before, working on our packages, fixing and uploading them. The transitions went on. Antonio, Utkarsh, Lucas, Deivid, and Cédric took some time to examine the gem2deb bug reports. We uploaded the last missing Kali Ruby package. And we had our last discussion, covering the future of the team and an evaluation of the sprint:

Last discussion round of the Ruby Team Sprint 2020 in Paris

As a result:

  • We will examine ways to find leaf packages.
  • We plan to organize another sprint next year right before the release freeze, probably again about FOSDEM time. We tend to have it in Berlin but will explore the locations available and the costs.
  • We will have monthly IRC meetings.

We think the sprint was a success. Some stuff got (intentionally and less intentionally) broken on the way. And also a lot of stuff got fixed. Eventually we made a step towards a successful Ruby 2.7 transition.

So we want to thank

  • the Debian project and our DPL Sam for sponsoring the event,
  • Offensive Security for sponsoring the event too,
  • Sorbonne Université and LPSM for hosting us,
  • Cédric Boutillier for organizing the sprint and kindly hosting us,
  • and really everyone who attended, making this a success: Antonio, Abhijith, Georg, Utkarsh, Balu, Praveen, Sruthi, Marc, Lucas, Cédric, Sebastien, Deivid, Daniel.
Group photo; from the left in the Back: Antonio, Abhijith, Georg, Utkarsh, Balu, Praveen, Sruthi, Josy. And in the Front: Marc, Lucas, Cédric, Sebastien, Deivid, Daniel.

In the evening we finally closed the venue which hosted us for 5 days, cleaned up, and went for a last beer together (at least for now). Some of us will stay in Paris a few days longer and finally get to see the city.

Eiffel Tower Paris (February 2020)

Goodbye Paris and safe travels to everyone. It was a pleasure.

More in Tux Machines

Fedora and Red Hat: Test Day This Thursday, Report on State of Enterprise Open Source 2020 and More

  • Fedora 32 Gnome 3.36 Test Day 2020-02-20

    Thursday, 2020-02-20 is the Fedora 32 Gnome Test Day! As part of the Gnome 3.36 changes in Fedora 32, we need your help to test that everything runs smoothly!

  • The State of Enterprise Open Source 2020: Enterprise open source use rises, proprietary software declines

    Last year we set out to determine how IT leaders think about open source, why they choose it and what they intend to do with it in the future. The result was The 2019 State of Enterprise Open Source: A Red Hat Report, and the findings were clear and confirmed what we see happening in the industry. Enterprise open source has become a default choice of IT departments around the world and organizations are using open source in categories that have historically been more associated with proprietary technology. Headed into the second year of the survey, we had a new directive in mind. We wanted to dive deeper into how IT leaders’ intentions and usage have changed. We surveyed 950 IT leaders in four regions. Respondents had to have some familiarity with enterprise open source and have at least 1% Linux installed at their organization. Respondents were not necessarily Red Hat customers and were unaware that Red Hat was the sponsor of this survey. This allowed us to get a more honest and broad view of the true state of enterprise open source.

  • Manage application programming interfaces to drive new revenue for service providers

    Telecommunications service providers have valuable assets that can be exposed, secured, and monetized via API-centric agile integration. They can derive additional value from new assets, developed internally or through partners and third parties and integrated in a similar way with OSS and BSS systems. Service providers can open new revenue paths if they enhance the value they deliver to customers and to their partner- and developer-ecosystems. APIs can help them accomplish this goal. Services that providers can potentially offer with APIs include direct carrier billing, mobile health services, augmented reality, geofencing, IoT applications, and more. Mobile connectivity, for example, is key to powering IoT applications and devices, giving service providers an inside track to provide APIs to access network information for IoT services. In mobile health, APIs can serve as the link between the customer and healthcare partners through the user’s smartphone. Embracing this API-centric approach, service providers can realize increased agility by treating OSS/BSS building blocks as components that can be reused again and again. They may also innovate faster by giving partners controlled access to data and services, expand their ecosystem by improving partner and third-party collaboration, and generate more revenue through new direct and indirect channels.

today's howtos

  • Autostart Tmux Session On Remote System When Logging In Via SSH

    It is always a good practice to run a long-running process inside a Tmux session when working with remote systems via SSH, because it prevents you from losing control of the running process when the network connection suddenly drops. If the network connection is dropped for any reason, the processes inside the Tmux session will keep running on the remote system, so you can re-attach to the Tmux session using the “tmux attach” command once the network connection is back online. What if you forgot to start the Tmux session in the first place? No matter how careful you are, sometimes you may forget to start a Tmux session. Here is a simple way to avoid this problem: you can autostart a Tmux session on the remote systems when logging in via SSH. This is especially helpful if you lose the network connection while upgrading a remote Linux server via SSH from your local system.
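    One way to sketch this autostart idea is a small fragment for the remote host’s shell startup file (this is an assumption: it presumes bash as the login shell, tmux installed on the remote host, and the session name “ssh” is just an example):

```shell
# Candidate fragment for the remote host's ~/.bashrc (assumes bash and tmux).
# Run only for SSH logins, and never nest inside an existing tmux session.
if command -v tmux >/dev/null 2>&1 && [ -n "$SSH_CONNECTION" ] && [ -z "$TMUX" ]; then
    # Re-attach to the "ssh" session after a dropped connection,
    # or start a fresh one if it does not exist yet.
    tmux attach-session -t ssh 2>/dev/null || tmux new-session -s ssh
fi
```

    With something like this in place, reconnecting after a network drop lands you back in the same session instead of a fresh shell.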

  • Setup Static IP on Ubuntu 18.04 LTS Desktop and Server Operating System

    In this article, I am going to show you how to configure a static IP on Ubuntu 18.04 LTS server and desktop operating systems. So, let’s get started.

  • Amiga floppy recovery project scope

    The main goal of my Amiga project is to read the data from my old floppy disks. After a bit of hiatus (and after some gentle encouragement from friends at FOSDEM) I'm nearly done, 150/200 disks attempted so far. Ultimately I intend to get rid of the disks to free up space in my house, and probably the Amiga, too. In the meantime, what could I do with it?

  • Part 1: How to Enable Hardware Accelerators on OpenShift

    Managing hardware accelerator cards like GPUs or high-performance NICs in Kubernetes is hard. The special payload (driver, device-plugin, monitoring stack deployment and advanced feature discovery), updates and upgrades, are tedious and error-prone tasks, and often third-party vendor knowledge is needed to accomplish these steps. The Special Resource Operator (SRO) is a template for exposing and managing accelerator cards in a Kubernetes cluster. It handles the hardware seamlessly, fully managed from bootstrapping through updates and upgrades. The first part describes the SRO in general, while the second part describes the building blocks in SRO and how to enable a different hardware accelerator step by step.

  • Everything you need to know about tmux – Windows

    What are tmux windows? A tmux window is the entity that holds panes and resides within the tmux session. Think of a window in tmux as a tab in your notebook. Tabs (windows) help organize your work and group your individual pages (panes) based on some topic of your choice. By default, when tmux starts, a session is initialized. Within this session, tmux initializes a single window (by default), which occupies the entire area of the terminal. This window will contain a single pane (by default).
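    The session → window → pane hierarchy described above maps onto a handful of commands (the session and window names here are just examples):

```shell
# Start a detached session named "demo" whose first window is "editor"
tmux new-session -d -s demo -n editor

# Add a second window ("logs") to the same session -- like opening a new tab
tmux new-window -t demo -n logs

# Split the "logs" window into two side-by-side panes
tmux split-window -h -t demo:logs

# List the windows in the session, then clean up
tmux list-windows -t demo
tmux kill-session -t demo
```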

Screencasts/Audiocasts/Shows: MX Linux 19.1 Run Through, Late Night Linux, Linux Headlines and More

  • MX Linux 19.1 Run Through

    In this video, we are looking at MX Linux 19.1.

  • Late Night Linux – Episode 83

    Joe has been playing with a PinePhone for a week and gives an honest appraisal. Plus Will’s simple solution to his Mac woes, switching to Linux and a community crowdfunder in the news, and a packed KDE Korner.

  • 2020-02-17 | Linux Headlines

    Two separate VPN companies have recently open-sourced client software, and updates to some beloved projects.

  • Change Desktop Environments on Linux

    Let's go over what it takes to switch your desktop environment on Linux, changing it from KDE, GNOME, XFCE, MATE, Cinnamon, LXQt, etc.

Second Shortwave Beta

Today I can finally announce the second Shortwave Beta release! I planned to release it earlier, but unfortunately the last few weeks were a bit busy for me. Read more