Linux.com

News For Open Source Professionals

New Open Source Projects to Confront Racial Justice

Friday 19th of February 2021 08:00:00 AM

Today the Linux Foundation announced that it would be hosting seven projects that originated at Call for Code for Racial Justice, an initiative driven by IBM to urge the global developer ecosystem and open source community to contribute to solutions that confront racial inequalities.

Launched by IBM in October 2020, Call for Code for Racial Justice facilitates the adoption and innovation of open source projects by developers, ecosystem partners, and communities across the world to promote racial justice across three distinct focus areas: Police & Judicial Reform and Accountability; Diverse Representation; and Policy & Legislation Reform.

The initiative builds upon Call for Code, which IBM created in 2018 and which has since grown to over 400,000 developers and problem solvers in 179 countries.

As part of today’s announcement, the Linux Foundation and IBM unveiled two new solution starters, Fair Change and TakeTwo:

Fair Change is a platform to help record, catalog, and access evidence of potentially racially charged incidents to enable transparency, reeducation, and reform as a matter of public interest and safety. For example, real-world video footage related to routine traffic stops, stop and search, or other scenarios may be recorded and accessed by the involved parties and authorities to determine whether an incident was handled in a biased manner. Fair Change consists of a mobile application for iOS and Android built using React Native and an API for capturing data from various sources built using Node.js. It also includes a website with a geospatial map view of incidents built using Google Maps and React. Data can be stored in a cloud-hosted database and object store. Visit the tutorial or project page to learn more.

TakeTwo aims to help mitigate digital content bias, whether overt or subtle, focusing on text across news articles, headlines, web pages, blogs, and even code. The solution is designed to leverage directories of inclusive terms compiled by trusted sources like the Inclusive Naming Initiative, which the Linux Foundation and CNCF co-founded. The terminology is categorized to train an AI model to enhance its accuracy over time. TakeTwo is built using open source technologies, including Python, FastAPI, and Docker. The API can be run locally with a CouchDB backend database or IBM Cloudant database. IBM has already deployed TakeTwo within its existing IBM Developer tools that are used to publish new content produced by hundreds of IBMers each week. IBM is trialing TakeTwo for IBM Developer website content. Visit the tutorial or project page to learn more.
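The core idea behind a tool like TakeTwo can be sketched in a few lines of Python. The snippet below is an illustrative toy, not the actual TakeTwo API: it scans text against a small, hypothetical directory of flagged terms (the real project loads categorized term lists from a CouchDB or IBM Cloudant database and refines an AI model over time).

```python
import re

# Hypothetical directory of flagged terms, modeled on the categorized term
# lists TakeTwo draws from; the real lists come from sources such as the
# Inclusive Naming Initiative.
TERM_DIRECTORY = {
    "whitelist": {"category": "racially-charged", "suggestion": "allowlist"},
    "blacklist": {"category": "racially-charged", "suggestion": "denylist"},
    "master":    {"category": "racially-charged", "suggestion": "primary"},
}

def scan_text(text):
    """Return flagged terms with their offsets and suggested replacements."""
    findings = []
    for term, meta in TERM_DIRECTORY.items():
        for match in re.finditer(r"\b%s\b" % re.escape(term), text, re.IGNORECASE):
            findings.append({
                "term": match.group(0),
                "offset": match.start(),
                "category": meta["category"],
                "suggestion": meta["suggestion"],
            })
    return sorted(findings, key=lambda f: f["offset"])

findings = scan_text("Add the host to the whitelist on the master branch.")
print([f["suggestion"] for f in findings])  # ['allowlist', 'primary']
```

A production version would expose this as an API endpoint (TakeTwo uses FastAPI) and would score matches with a trained model rather than plain pattern matching.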

In addition to the two new solution starters, the Linux Foundation will now host five existing and evolving open source projects from Call for Code for Racial Justice:

  • Five-Fifths Voter: This web app empowers minorities to exercise their right to vote and ensures their voice is heard by determining optimal voting strategies and limiting suppression issues.
  • Legit-Info: Local legislation can significantly impact areas as far-reaching as jobs, the environment, and safety. Legit-Info helps individuals understand the legislation that shapes their lives.
  • Incident Accuracy Reporting System: This platform allows witnesses and victims to corroborate evidence or provide additional information from multiple sources against an official police report.
  • Open Sentencing: To help public defenders better serve their clients and make a stronger case, Open Sentencing shows racial bias in data such as demographics.
  • Truth Loop: This app helps communities simply understand the policies, regulations, and legislation that will impact them the most.

These projects were built using open source technologies that include Red Hat OpenShift, IBM Cloud, IBM Watson, blockchain ledger technology, Node.js, Vue.js, Docker, Kubernetes, and Tekton. The Linux Foundation and IBM ask developers and ecosystem partners to contribute to these solutions by testing, extending, and implementing them, and by adding their own diverse perspectives and expertise to make them even stronger.

For more information and to begin contributing, please visit:

https://developer.ibm.com/callforcode/racial-justice/get-started/

https://developer.ibm.com/callforcode/racial-justice/projects/

https://www.linuxfoundation.org/projects/call-for-code/

https://github.com/Call-for-Code-for-Racial-Justice/

The post New Open Source Projects to Confront Racial Justice appeared first on Linux Foundation.


Interview with KubeCF project leaders Dieu Cao and Paul Warren

Thursday 18th of February 2021 04:26:28 PM

KubeCF is a distribution of Cloud Foundry Application Runtime (CFAR) for Kubernetes. Originated at SUSE, the project is a bridge between Cloud Foundry and Kubernetes. KubeCF provides developers the productivity they love from Cloud Foundry and allows platform operators to manage the infrastructure abstraction with Kubernetes tools and APIs. To learn more about the project we hosted a discussion with Dieu Cao, CF Open Source Product Lead at VMware, and Paul Warren, Product Manager cf-for-k8s at VMware.



The post Interview with KubeCF project leaders Dieu Cao and Paul Warren appeared first on Linux.com.

Free Introduction to Node.js Online Training Now Available

Thursday 18th of February 2021 08:00:23 AM

Node.js is the extremely popular open source JavaScript runtime, used by some of the biggest names in technology, including Bloomberg, LinkedIn, Netflix, NASA, and more. Node.js is prized for its speed, lightweight footprint, and ability to easily scale, making it a top choice for microservices architectures. With no sign of Node.js use and uptake slowing, there is a continual need for more individuals with knowledge and skills in using this technology.

For those wanting to start learning Node.js, the path has not always been clear. While there are many free resources and forums available to help, they require individual planning, research, and organization, which can make it difficult for some to learn these skills. That’s why The Linux Foundation and OpenJS Foundation have released a new, free, online training course, Introduction to Node.js. This course is designed for frontend or backend developers who would like to become more familiar with the fundamentals of Node.js and its most common use cases. Topics covered include how to rapidly build command line tools, mock RESTful JSON APIs, and prototype real-time services. You will also discover and use various ecosystem and Node core libraries, and come away understanding common use cases for Node.js.

By immersing yourself in a full-stack development experience, this course helps bring context to Node.js as it relates to the web platform, while providing a pragmatic foundation in building various types of real-world Node.js applications. At the same time, the general principles and key understandings introduced by this course can prepare you for further study towards the OpenJS Node.js Application Developer (JSNAD) and OpenJS Node.js Services Developer (JSNSD) certifications.

Introduction to Node.js was developed by David Mark Clements, Principal Architect, technical author, public speaker and OSS creator specializing in Node.js and browser JavaScript. David has been writing JavaScript since 1996 and has been working with, speaking and writing about Node.js since Node 0.4 (2011), including authoring the first three editions of “Node Cookbook”. He is the author of various open source projects including Pino, the fastest Node.js JSON logger available and 0x, a powerful profiling tool for Node.js. David also is the technical lead and primary author of the JSNAD and JSNSD certification exams, as well as the Node.js Application Development (LFW211) and Node.js Services Development (LFW212) courses.

Enrollment is now open for Introduction to Node.js. Auditing the course through edX is free for seven weeks, or you can opt for a paid verified certificate of completion, which provides ongoing access.

The post Free Introduction to Node.js Online Training Now Available appeared first on Linux Foundation – Training.


The Linux Foundation Announces the Election of Renesas’ Hisao Munakata and GitLab’s Eric Johnson to the Board of Directors

Wednesday 17th of February 2021 04:57:19 AM

Today, the Linux Foundation announced that Renesas’ Hisao Munakata has been re-elected to its board, representing the Gold Member community. GitLab’s Eric Johnson has been elected to represent the Silver Member community. Linux Foundation elected board directors serve 2-year terms.

Directors elected to the Linux Foundation’s board are committed to building sustainable ecosystems around open collaboration to accelerate technology development and industry adoption. The Linux Foundation expands the open collaboration communities it supports with community efforts focused on building open standards, open hardware, and open data. It is dedicated to improving diversity in open source communities and working on processes, tools, and best security practices in open development communities. 

Hisao Munakata, Renesas (Gold Member)

Renesas is a global semiconductor manufacturer that provides cutting-edge SoC (system-on-chip) devices for the automotive, industrial, and infrastructure markets. As open source support became essential for the company, Munakata-san encouraged Renesas developers to follow an “upstream-first” approach to minimize gaps from the mainline community codebase. The industry has now accepted this as standard practice, following Renesas’ direction and pioneering work.

Hisao Munakata

Munakata-san has served as an LF board director since 2019 and has represented the voice of the embedded industry.

Renesas, which joined the Linux Foundation in 2011, has ranked among the top twelve kernel development contributor firms over the past 14 years. Munakata-san serves pivotal roles in various LF projects such as the AGL (Automotive Grade Linux) Advisory Board, Yocto Project Advisory Board, Core Embedded Linux Project, and OpenSSF. In these roles, he has helped many industry participants in these projects work in harmony.

As cloud-native trends break barriers between enterprise and embedded systems, Munakata-san seeks to improve close collaboration across the industry and increase contribution from participants in the embedded systems space, focusing on safety in a post-COVID world.

Eric Johnson, GitLab (Silver Member)

Eric Johnson is the Chief Technology Officer at GitLab, Inc., the first single application for the DevSecOps lifecycle. GitLab is free, open-core software used by more than 30 million registered users to collaborate on, author, test, secure, and release software quickly and efficiently.

Eric Johnson

At GitLab, Eric is responsible for the organization that integrates the work of over a hundred external open source contributors into GitLab’s codebase every month. During his tenure, Eric has contributed to a 10x+ increase in annual recurring revenue and has scaled Engineering from 100 to more than 550 people while dramatically increasing team diversity in gender, ethnicity, and country of residence. He has also helped turn GitLab, Inc. into one of the most productive engineering organizations in the world, as evidenced by their substantial monthly on-premises releases.

Eric is also a veteran of four previous enterprise technology startups in fields as varied as marketing technology, localization software, streaming video, and commercial drone hardware and software. He currently advises two startups in the medical trial software and recycling robotics industries.

Eric brings his open source and Linux background to the Foundation. In his professional work, he has spent 17 years hands-on or managing teams that develop software running on Linux systems, administering server clusters, orchestrating containers, open-sourcing privately built software, and contributing back to open source projects. Personally, he has also administered a Linux home server for the past ten years.

As a Linux Foundation board member, Eric looks forward to using his execution-focused executive experience to turn ideas into results. Collaboration with the Linux Foundation has already begun with Distributed Developer ID and Digital Bill of Materials (DBoM). As a remote work expert with years of experience developing best practices, Eric will use his expertise to help the board, the Foundation, and its partners become more efficient in a remote, asynchronous, and geographically distributed way.

The post The Linux Foundation Announces the Election of Renesas’ Hisao Munakata and GitLab’s Eric Johnson to the Board of Directors appeared first on Linux Foundation.


Review of Five Popular Hyperledger DLTs: Fabric, Besu, Sawtooth, Iroha, and Indy

Monday 15th of February 2021 11:00:39 PM

by Matt Zand

As companies catch up in adopting blockchain technology, the choice of a private blockchain platform becomes vital. Hyperledger, whose open source projects support more enterprise blockchain use cases than any other, is currently leading the race in private Distributed Ledger Technology (DLT) implementations. Working from the assumption that you know how blockchain works and what the design philosophy behind the Hyperledger ecosystem is, in this article we will briefly review five active Hyperledger DLTs. Beyond the DLTs discussed here, the Hyperledger ecosystem includes further supporting tools and libraries that I will cover in more detail in future articles.

This article mainly targets those who are relatively new to Hyperledger, and is a great resource for anyone interested in providing blockchain solution architecture services or doing blockchain enterprise consulting and development. It will help you understand the Hyperledger DLTs as a whole and use this high-level overview as a guideline for making the best of each Hyperledger project.

Since Hyperledger is supported by a robust open source community, new projects are added to the Hyperledger ecosystem regularly. At the time of this writing (February 2021), it consists of six active projects and ten others at the incubation stage. Each project has unique features and advantages.

1- Hyperledger Fabric

Hyperledger Fabric is the most popular Hyperledger framework. Smart contracts (also known as chaincode) are written in Golang or JavaScript and run in Docker containers. Fabric is known for its extensibility and allows enterprises to build distributed ledger networks on top of an established and successful architecture. A permissioned blockchain, initially contributed by IBM and Digital Asset, Fabric is designed to be a foundation for developing applications or solutions with a modular architecture. It accepts pluggable components for providing functionality such as consensus and membership services. Like Ethereum, Hyperledger Fabric can host and execute smart contracts, which are named chaincode. A Fabric network consists of peer nodes, which execute smart contracts (chaincode), query ledger data, validate transactions, and interact with applications. User-submitted transactions are channeled to an ordering service component, which serves as the consensus mechanism for Hyperledger Fabric. Special nodes called orderer nodes validate the transactions, ensure the consistency of the blockchain, and send the validated transactions to the peers of the network as well as to membership service provider (MSP) services.

Two major highlights of Hyperledger Fabric versus Ethereum are:

  • Multi-ledger: Each node on Ethereum has a replica of a single ledger in the network. Fabric nodes, however, can carry multiple ledgers on each node, which is a great feature for enterprise applications.
  • Private Data: In addition to its private channel feature, and unlike Ethereum, Fabric lets members within a consortium exchange private data among themselves without disseminating it through a Fabric channel, which is very useful for enterprise applications.

Here is a good article reviewing all of the Hyperledger Fabric components, like peers, channels, and chaincode, that are essential for building blockchain applications. In short, a thorough understanding of all Hyperledger Fabric components is highly recommended for building, deploying, and managing enterprise-level Hyperledger Fabric applications.

2- Hyperledger Besu

Hyperledger Besu is an open source Ethereum client developed under the Apache 2.0 license and written in Java. It can be run on the Ethereum public network or on private permissioned networks, as well as on test networks such as Rinkeby, Ropsten, and Görli. Hyperledger Besu supports several consensus algorithms including PoW, PoA, and IBFT, and has comprehensive permissioning schemes designed specifically for use in a consortium environment.

Hyperledger Besu implements the Enterprise Ethereum Alliance (EEA) specification. The EEA specification was established to create common interfaces amongst the various open and closed source projects within Ethereum, to ensure users do not have vendor lock-in, and to create standard interfaces for teams building applications. Besu implements enterprise features in alignment with the EEA client specification.

As a basic Ethereum Client, Besu has the following features:

  • It connects to the blockchain network to synchronize blockchain transaction data or emit events to the network.
  • It processes transactions through smart contracts in an Ethereum Virtual Machine (EVM) environment.
  • It maintains local storage of blockchain data (blocks).
  • It publishes client API interfaces for developers to interact with the blockchain network.
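Those client APIs are the standard Ethereum JSON-RPC interface. The sketch below builds an `eth_blockNumber` request and decodes the hex-encoded result; the `latest_block` helper assumes a Besu node listening on `localhost:8545` (Besu's default JSON-RPC port), which is why the final line only exercises the offline decoding step.

```python
import json
import urllib.request

def block_number_request(request_id=1):
    """Build a standard Ethereum JSON-RPC payload for eth_blockNumber."""
    return {"jsonrpc": "2.0", "method": "eth_blockNumber", "params": [], "id": request_id}

def decode_quantity(result):
    """JSON-RPC quantities come back as 0x-prefixed hex strings."""
    return int(result, 16)

def latest_block(url="http://localhost:8545"):
    """Query a running Besu node (assumes the default JSON-RPC endpoint)."""
    data = json.dumps(block_number_request()).encode()
    req = urllib.request.Request(url, data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return decode_quantity(json.load(resp)["result"])

# Against a live node you would call: print(latest_block())
print(decode_quantity("0x4b7"))  # 1207
```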

Besu implements Proof of Work (PoW) and Proof of Authority (PoA) consensus mechanisms. Further, Hyperledger Besu implements several PoA protocols, including Clique and IBFT 2.0.

Clique is a proof-of-authority blockchain consensus protocol. A blockchain running the Clique protocol maintains a list of authorized signers. These approved signers seal all blocks directly, without proof-of-work mining, so the sealing task is computationally light. When creating a block, a signer collects and executes transactions, updates the network state with the calculated hash of the block, and signs the block using their private key. Because blocks are created at a defined time interval, Clique can limit the number of processed transactions.
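To make the mechanism concrete, here is a toy model of Clique-style sealing, not Besu's implementation: a fixed list of authorized signers takes turns sealing blocks in round-robin fashion, and a SHA-256 digest stands in for the real cryptographic signature.

```python
import hashlib
import json

AUTHORIZED_SIGNERS = ["alice", "bob", "carol"]  # illustrative signer list
BLOCK_PERIOD = 5  # seconds between blocks (Clique uses a fixed block period)

def seal_block(height, transactions, parent_hash):
    """Toy Clique-style sealing: the in-turn signer seals the block; no PoW."""
    signer = AUTHORIZED_SIGNERS[height % len(AUTHORIZED_SIGNERS)]  # round-robin
    header = {
        "height": height,
        "parent": parent_hash,
        "txs": transactions,
        "signer": signer,
    }
    # A real signer would sign the header with their private key; we hash instead.
    header["hash"] = hashlib.sha256(
        json.dumps(header, sort_keys=True).encode()).hexdigest()
    return header

chain = [{"hash": "genesis"}]
for h in range(1, 4):
    chain.append(seal_block(h, [f"tx{h}"], chain[-1]["hash"]))

print([b.get("signer") for b in chain[1:]])  # ['bob', 'carol', 'alice']
```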

IBFT 2.0 (Istanbul BFT 2.0) is a PoA Byzantine-Fault-Tolerant (BFT) blockchain consensus protocol. Transactions and blocks in the network are validated by authorized accounts, known as validators. Validators collect, validate, and execute transactions and create the next block. Existing validators can propose and vote to add or remove validators, maintaining a dynamic validator set. The consensus ensures immediate finality: as the name suggests, IBFT 2.0 builds upon the IBFT blockchain consensus protocol with improved safety and liveness. In an IBFT 2.0 blockchain, all valid blocks are added directly to the main chain and there are no forks.

3- Hyperledger Sawtooth

Sawtooth is the second Hyperledger project to reach 1.0 release maturity. Sawtooth-core is written in Python, while Sawtooth Raft and Sawtooth Sabre are written in Rust; it also has JavaScript and Golang components. Sawtooth supports both permissioned and permissionless deployments, and supports the EVM through a collaboration with Hyperledger Burrow. By design, Hyperledger Sawtooth is created to address issues of performance. As such, one of its distinct features compared to other Hyperledger DLTs is that each node in Sawtooth can act as an orderer by validating and approving transactions. Other notable features are:

  • Parallel Transaction Execution: While many blockchains use serial transaction execution to ensure consistent ordering at every node on the network, Sawtooth uses an advanced parallel scheduler that classifies transactions into parallel flows, which boosts transaction processing performance.
  • Separation of Application from Core: Sawtooth simplifies the development and deployment of an application by separating the application level from the core system level. It offers smart contract abstraction to allow developers to create contract logic in the programming language of their choice.
  • Custom Transaction Processors: In Sawtooth, each application can define custom transaction processors to meet its unique requirements. It provides transaction families as an approach to low-level functions, like storing on-chain permissions and managing chain-wide settings, and for particular applications such as saving block information and performance analysis.
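The parallel-flow idea can be illustrated with a small sketch (this is a toy, not Sawtooth's actual scheduler): Sawtooth transactions declare the state addresses they read (inputs) and write (outputs), so transactions touching disjoint address sets can run concurrently, while transactions sharing an address must be serialized into the same flow.

```python
def schedule_parallel_flows(transactions):
    """Toy scheduler: group conflicting transactions into one serialized flow;
    flows with disjoint address sets can execute in parallel."""
    flows = []  # each flow: {"addresses": set of touched addresses, "txs": [...]}
    for tx in transactions:
        touched = set(tx["inputs"]) | set(tx["outputs"])
        conflicting = [f for f in flows if not f["addresses"].isdisjoint(touched)]
        if conflicting:
            # Merge every conflicting flow plus this transaction into one flow,
            # preserving the order in which transactions arrived.
            merged = {"addresses": set(touched), "txs": []}
            for f in conflicting:
                merged["addresses"] |= f["addresses"]
                merged["txs"] += f["txs"]
                flows.remove(f)
            merged["txs"].append(tx["id"])
            flows.append(merged)
        else:
            flows.append({"addresses": touched, "txs": [tx["id"]]})
    return [f["txs"] for f in flows]

txs = [
    {"id": "t1", "inputs": ["addr-a"], "outputs": ["addr-a"]},
    {"id": "t2", "inputs": ["addr-b"], "outputs": ["addr-b"]},  # disjoint from t1
    {"id": "t3", "inputs": ["addr-a"], "outputs": ["addr-c"]},  # conflicts with t1
]
print(schedule_parallel_flows(txs))  # [['t2'], ['t1', 't3']]
```

Here `t1` and `t3` share `addr-a`, so they land in one serialized flow, while `t2` can execute on a separate worker in parallel.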

4- Hyperledger Iroha

Hyperledger Iroha is designed to target the creation and management of complex digital assets and identities. It is written in C++ and is user friendly. Iroha has a powerful role-based model for access control and supports complex analytics. When using Iroha for identity management, queries and commands are limited to participants who have access to the Iroha network. A robust permissions system ensures that all transactions are secure and controlled. Some of its highlights are:

  • Ease of use: You can easily create and manage simple, as well as complex, digital assets (e.g., cryptocurrency or personal medical data).
  • Built-in Smart Contracts: You can easily integrate blockchain into a business process using built-in smart contracts called “commands.” Developers need not write complicated smart contracts, because the commands are already available.
  • BFT: Iroha uses a BFT consensus algorithm, which makes it suitable for businesses that require verifiable data consistency at a low cost.

5- Hyperledger Indy

As a self-sovereign identity management platform, Hyperledger Indy is built explicitly for decentralized identity management. The server portion, Indy node, is built in Python, while the Indy SDK is written in Rust. It offers tools and reusable components to manage digital identities on blockchains or other distributed ledgers. Hyperledger Indy architecture is well-suited for every application that requires heavy work on identity management since Indy is easily interpretable across multiple domains, organization silos and applications. As such, identities are securely stored and shared with all parties involved. Some notable highlights of Hyperledger Indy are:

●        Identity Correlation-resistant: According to the Hyperledger Indy documentation, Indy is completely identity correlation-resistant, so you do not need to worry about one ID being connected with or mixed into another. That means you cannot connect two IDs, or find two similar IDs, in the ledger.

●        Decentralized Identifiers (DIDs): According to the Hyperledger Indy documentation, all decentralized identifiers are globally resolvable and unique without needing any central party in the mix. That means every decentralized identity on the Indy platform has a unique identifier that belongs solely to its owner. As a result, no one can claim or use your identity on your behalf, which eliminates the chance of identity theft.

●        Zero-Knowledge Proofs: With help from zero-knowledge proofs, you can disclose only the information necessary and nothing else. When you have to prove your credentials, you can choose to release only the information needed by the party requesting it. For instance, you may share your date of birth with one party while releasing your driver’s license and financial documents to another. In short, Indy gives users great flexibility in sharing their private data whenever and wherever needed.
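The selective-disclosure idea can be sketched in plain Python. This toy is not Indy's anoncreds protocol: real Indy credentials use cryptographic signatures and zero-knowledge proofs, so a holder can even prove a predicate such as "over 18" without revealing the date of birth itself. The sketch only shows the access pattern: the holder reveals exactly the attributes a verifier requests.

```python
credential = {
    "name": "Jane Doe",
    "date_of_birth": "1990-04-01",
    "drivers_license": "D123-4567",
    "bank_balance": "12,000",
}

def present(credential, requested_attrs):
    """Toy selective disclosure: reveal only the attributes a verifier asks for.
    (Indy's anoncreds go further, proving facts about attributes in
    zero knowledge instead of revealing the underlying values.)"""
    missing = set(requested_attrs) - set(credential)
    if missing:
        raise KeyError(f"credential lacks attributes: {sorted(missing)}")
    return {attr: credential[attr] for attr in requested_attrs}

# A bar only needs the date of birth; a lender asks for more.
print(present(credential, ["date_of_birth"]))
print(present(credential, ["drivers_license", "bank_balance"]))
```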

Summary

In this article, we briefly reviewed five popular Hyperledger DLTs. We started off by going over Hyperledger Fabric and its main components, as well as some of its highlights compared to public blockchain platforms like Ethereum. Even though Fabric is currently used heavily for supply chain management, if you are doing a lot of work specific to the supply chain domain, you should explore Hyperledger Grid too. Then we moved on to how Hyperledger Besu can be used for building public or consortium blockchain applications, along with its support for multiple consensus algorithms and the EVM. Next, we covered some highlights of Hyperledger Sawtooth, such as how it is designed for high performance; for instance, we learned how a single node in Sawtooth can act as an orderer by approving and validating transactions in the network. The last two DLTs (Hyperledger Iroha and Indy) are specifically geared toward digital asset and identity management, so if you are working on a project that heavily uses identity management, you should explore either Iroha or Indy instead of Fabric.

I have included reference and resource links for those interested in exploring topics discussed in this article in depth.

For more references on all Hyperledger projects, libraries and tools, visit the below documentation links:

  1. Hyperledger Indy Project
  2. Hyperledger Fabric Project
  3. Hyperledger Aries Library
  4. Hyperledger Iroha Project
  5. Hyperledger Sawtooth Project
  6. Hyperledger Besu Project
  7. Hyperledger Quilt Library
  8. Hyperledger Ursa Library
  9. Hyperledger Transact Library
  10. Hyperledger Cactus Project
  11. Hyperledger Caliper Tool
  12. Hyperledger Cello Tool
  13. Hyperledger Explorer Tool
  14. Hyperledger Grid (Domain Specific)
  15. Hyperledger Burrow Project
  16. Hyperledger Avalon Tool

Resources

About the Author

Matt Zand is a serial entrepreneur and the founder of three tech startups: DC Web Makers, Coding Bootcamps, and High School Technology Services. He is the lead author of the book Hands-On Smart Contract Development with Hyperledger Fabric from O’Reilly Media. He has written more than 100 technical articles and tutorials on blockchain development for the Hyperledger, Ethereum, and R3 Corda platforms. At DC Web Makers, he leads a team of blockchain experts for consulting on and deploying enterprise decentralized applications. As chief architect, he has designed and developed blockchain courses and training programs for Coding Bootcamps. He has a master’s degree in business management from the University of Maryland. Prior to blockchain development and consulting, he worked as a senior web and mobile app developer and consultant, angel investor, and business advisor for a few startup companies. You can connect with him on LinkedIn: https://www.linkedin.com/in/matt-zand-64047871

The post Review of Five popular Hyperledger DLTs- Fabric, Besu, Sawtooth, Iroha and Indy appeared first on Linux Foundation – Training.


How to create a TLS/SSL certificate with a Cert-Manager Operator on OpenShift

Friday 12th of February 2021 12:10:23 AM


Use cert-manager to deploy certificates to your OpenShift or Kubernetes environment.
Bryant Son
Thu, 2/11/2021 at 4:10pm

Photo by Tea Oebel from Pexels

cert-manager builds on top of Kubernetes, introducing certificate authorities and certificates as first-class resource types in the Kubernetes API. This feature makes it possible to provide Certificates as a Service to developers working within your Kubernetes cluster.

cert-manager is an open source project from Jetstack, licensed under Apache License 2.0, and developed in the open on its own GitHub page.
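As a quick illustration of what "certificates as first-class resources" means, a minimal Certificate manifest might look like the following (the resource names, namespace, and the `letsencrypt-staging` ClusterIssuer are illustrative and assume that issuer already exists in the cluster):

```yaml
# Minimal cert-manager Certificate; cert-manager watches this resource and
# stores the signed certificate and key in the named Secret.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: myapp-tls
  namespace: myapp
spec:
  secretName: myapp-tls        # Secret where the signed cert/key pair lands
  dnsNames:
    - myapp.example.com
  issuerRef:
    name: letsencrypt-staging  # assumed pre-existing issuer
    kind: ClusterIssuer
```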

Read More at Enable Sysadmin

The post How to create a TLS/SSL certificate with a Cert-Manager Operator on OpenShift appeared first on Linux.com.

Unikraft: Pushing Unikernels into the Mainstream

Thursday 11th of February 2021 06:00:15 PM

Unikernels have been around for many years and are famous for providing excellent performance in boot times, throughput, and memory consumption, to name a few metrics [1]. Despite their apparent potential, unikernels have not yet seen a broad level of deployment due to three main drawbacks:

  • Hard to build: Putting a unikernel image together typically requires expert, manual work that needs redoing for each application. Also, many unikernel projects are not, and don’t aim to be, POSIX compliant, and so significant porting effort is required to have standard applications and frameworks run on them.
  • Hard to extract high performance: Unikernel projects don’t typically expose high-performance APIs; extracting high performance often requires expert knowledge and modifications to the code.
  • Little or no tool ecosystem: Assuming you have an image to run, deploying it and managing it is often a manual operation. There is little integration with major DevOps or orchestration frameworks.

While not all unikernel projects suffer from all of these issues (e.g., some provide some level of POSIX compliance but the performance is lacking, others target a single programming language and so are relatively easy to build but their applicability is limited), we argue that no single project has been able to successfully address all of them, hindering any significant level of deployment. For the past three years, Unikraft (www.unikraft.org), a Linux Foundation project under the Xen Project’s auspices, has had the explicit aim to change this state of affairs to bring unikernels into the mainstream. 

If you’re interested, read on.

High Performance

To provide developers with the ability to obtain high performance easily, Unikraft exposes a set of composable, performance-oriented APIs. The figure below shows Unikraft’s architecture: all components are libraries with their own Makefile and Kconfig configuration files, and so can be added to the unikernel build independently of each other.

Figure 1. Unikraft’s fully modular architecture showing high-performance APIs

APIs are also micro-libraries that can be easily enabled or disabled via a Kconfig menu; Unikraft unikernels can compose APIs to best cater to an application’s needs. For example, an RPC-style application might turn off the uksched API (➃ in the figure) to implement a high-performance, run-to-completion event loop; similarly, an application developer can easily select an appropriate memory allocator (➅) to obtain maximum performance, or use multiple different allocators within the same unikernel (e.g., a simple, fast memory allocator for the boot code and a standard one for the application itself).
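Composing a unikernel thus reduces to toggling Kconfig options. The fragment below is an illustrative sketch of such a configuration; the option names are assumptions modeled on Unikraft’s library naming and may differ across versions, and in practice the file is generated via `make menuconfig` rather than written by hand:

```
# Illustrative Unikraft .config fragment -- option names are assumptions
# and vary by Unikraft version; generate the real file with `make menuconfig`.
CONFIG_LIBUKALLOC=y          # enable the memory-allocation API
CONFIG_LIBUKALLOCBBUDDY=y    # choose a buddy-allocator implementation
# CONFIG_LIBUKSCHED is not set  -- drop the scheduler for run-to-completion
CONFIG_LIBUKNETDEV=y         # enable the network-device API
```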

Figure 2. Unikraft memory consumption vs. other unikernel projects and Linux

Figure 3. Unikraft NGINX throughput versus other unikernels, Docker, and Linux/KVM

These APIs, coupled with the fact that all of Unikraft’s components are fully modular, result in high performance. Figure 2, for instance, shows Unikraft having lower memory consumption than other unikernel projects (HermiTux, Rump, OSv) and Linux (Alpine), and Figure 3 shows that Unikraft outperforms them in terms of NGINX requests per second, reaching 90K on a single CPU core.

Further, we are working on (1) a performance profiler tool to be able to quickly identify potential bottlenecks in Unikraft images and (2) a performance test tool that can automatically run a large set of performance experiments, varying different configuration options to figure out optimal configurations.

Ease of Use, No Porting Required

Forcing users to port applications to a unikernel to obtain high performance is a showstopper; arguably, a system is only as good as the applications (or programming languages, frameworks, etc.) it can run. Unikraft aims for good POSIX compatibility; one way of achieving this is supporting a libc (e.g., musl) along with a large set of Linux syscalls.

Figure 4. Only a certain percentage of syscalls are needed to support a wide range of applications

While Linux has over 300 syscalls, many of them are not needed to run a large set of applications, as shown in Figure 4 (taken from [5]). Supporting around 145 syscalls, for instance, is enough to run 50% of all libraries and applications in an Ubuntu distribution (many of which are irrelevant to unikernels, such as desktop applications). As of this writing, Unikraft supports over 130 syscalls and a number of mainstream applications (e.g., SQLite, NGINX, Redis), programming languages and runtime environments such as C/C++, Go, Python, Ruby, WebAssembly, and Lua, not to mention several different hypervisors (KVM, Xen, and Solo5) and ARM64 bare-metal support.
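As an aside, you can estimate an application’s own syscall footprint on a standard Linux box with strace’s counting mode. The session below is purely illustrative (the server binary and its path will vary on your system), not something Unikraft itself requires:

```shell
# Illustrative: list the distinct syscalls a program actually makes.
# -f follows child processes; -c prints a per-syscall summary on exit.
strace -f -c -o syscalls.txt /usr/sbin/nginx -g 'daemon off;' &
# ...exercise the server, then stop it; syscalls.txt will typically show
# only a few dozen distinct syscalls out of the 300+ that Linux defines.
```

A summary like this is a quick way to judge how close an application is to running on a unikernel with partial syscall support.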

Ecosystem and DevOps

Another apparent downside of unikernel projects is their almost total lack of integration with existing, major DevOps and orchestration frameworks. Working towards this goal, in the past year we created the kraft tool, which lets users simply choose an application and a target platform (e.g., KVM on x86_64) and takes care of building the image and running it.
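For a flavor of the workflow, the 2021-era (pykraft) CLI looks roughly like the session below. Subcommand names and flags are a sketch from the tool of that period and may differ in current releases, so treat the current Unikraft documentation as authoritative:

```shell
kraft list                        # browse available libraries and apps
kraft init -t helloworld my-app   # scaffold a unikernel from a template
kraft configure                   # pick platform/arch via a Kconfig menu
kraft build                       # compile the unikernel image
kraft run -p kvm -m x86_64        # boot it, e.g. on KVM/x86_64
```

The point is that a user never has to hand-assemble micro-libraries; kraft drives the Kconfig/Makefile machinery described above.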

Beyond this, we have several sub-projects ongoing, with support expected to land in the coming months:

  • Kubernetes: If you’re already using Kubernetes in your deployments, this work will allow you to deploy much leaner, faster Unikraft images transparently.
  • Cloud Foundry: Similarly, users relying on Cloud Foundry will be able to generate Unikraft images through it, once again transparently.
  • Prometheus: Unikernels are also notorious for having very primitive or no means for monitoring running instances. Unikraft is targeting Prometheus support to provide a wide range of monitoring capabilities. 

In all, we believe Unikraft is getting closer to bridging the gap between unikernel promise and actual deployment. We are very excited about this year’s upcoming features and developments, so please feel free to drop us a line if you have any comments, questions, or suggestions at info@unikraft.io.

About the author: Dr. Felipe Huici is Chief Researcher, Systems and Machine Learning Group, NEC Laboratories Europe GmbH

References

[1] Unikernels: Rethinking Cloud Infrastructure. http://unikernel.org/

[2] Is the Time Ripe for Unikernels to Become Mainstream with Unikraft? FOSDEM 2021 Microkernel developer room. https://fosdem.org/2021/schedule/event/microkernel_unikraft/

[3] Severely Debloating Cloud Images with Unikraft. FOSDEM 2021 Virtualization and IaaS developer room. https://fosdem.org/2021/schedule/event/vai_cloud_images_unikraft/

[4] Welcome to the Unikraft Stand! https://stands.fosdem.org/stands/unikraft/

[5] A study of modern Linux API usage and compatibility: what to support when you’re supporting. Eurosys 2016. https://dl.acm.org/doi/10.1145/2901318.2901341

The post Unikraft: Pushing Unikernels into the Mainstream appeared first on Linux.com.

Getting to Know the Cryptocurrency Open Patent Alliance (COPA)

Thursday 11th of February 2021 02:00:50 PM
Why is there a need for a patent protection alliance for cryptocurrency technologies?

With the recent surge in popularity of cryptocurrencies and related technologies, Square felt an industry group was needed to protect against litigation and other threats against core cryptocurrency technology and ensure the ecosystem remains vibrant and open for developers and companies.

The same way Open Invention Network (OIN) and LOT Network add a layer of patent protection to inter-company collaboration on open source technologies, COPA aims to protect open source cryptocurrency technology. Feeling safe from the threat of lawsuits is a precursor to good collaboration.

  • Locking up foundational cryptocurrency technologies in patents stifles innovation and adoption of cryptocurrency in novel and useful applications.
  • The offensive use of patents threatens the growth and free availability of cryptocurrency technologies. Many smaller companies and developers do not own patents and cannot deter or defend threats adequately.

By joining COPA, a member can feel secure that it can innovate in the cryptocurrency space without fear of litigation from other members. 

What is Square’s involvement in COPA?

Square’s core purpose is economic empowerment, and they see cryptocurrency as a core technological pillar. Square helped start and fund COPA with the hope that by encouraging innovation in the cryptocurrency space, more useful ideas and products would get created. COPA management has now diversified to an independent board of technology and regulatory experts, and Square maintains a minority presence.

Do we need cryptocurrency patents to join COPA? 

No! Anyone can join and benefit from being a member of COPA, regardless of whether they have patents or not. There is no barrier to entry – members can be individuals, start-ups, small companies, or large corporations. Here is how COPA works:

  • First, COPA members pledge never to use their crypto-technology patents against anyone, except for defensive reasons, effectively making their patents freely available for all.
  • Second, members pool all of their crypto-technology patents together to form a shared patent library, which provides a forum to allow members to reasonably negotiate lending patents to one another for defensive purposes.
  • The patent pledge and the shared patent library work in tandem to help drive down the incidence and threat of patent litigation, benefiting the cryptocurrency community as a whole. 
  • Additionally, COPA monitors core technologies and entities that support cryptocurrency and does its best to research and help address litigation threats against community members.

What types of companies should join COPA?

  • Financial services companies and technology companies working in regulated industries that use distributed ledger or cryptocurrency technology
  • Companies or individuals who are interested in collaborating on developing cryptocurrency products or who hold substantial investments in cryptocurrency

What companies have joined COPA so far?

  • Square, Inc.
  • Blockchain Commons
  • Carnes Validadas
  • Request Network
  • Foundation Devices
  • ARK
  • SatoshiLabs
  • Transparent Systems
  • Horizontal Systems
  • VerifyChain
  • Blockstack
  • Protocol Labs
  • Cloudeya Ltd.
  • Mercury Cash
  • Bithyve
  • Coinbase
  • Blockstream
  • Stakenet

How to join

Please express interest and get access to our membership agreement here: https://opencrypto.org/joining-copa/

The post Getting to Know the Cryptocurrency Open Patent Alliance (COPA) appeared first on Linux.com.

Understanding Open Governance Networks

Thursday 11th of February 2021 08:00:00 AM

Throughout the modern business era, industries and commercial operations have shifted substantially to digital processes. Whether you look at EDI as a means to exchange invoices or cloud-based billing and payment solutions today, businesses have steadily been moving towards increasing digital operations. In the last few years, we’ve seen the promises of digital transformation come alive, particularly in industries that have shifted to software-defined models. The next step of this journey will involve enabling digital transactions through decentralized networks.

A fundamental adoption issue will be figuring out who controls and decides how a decentralized network is governed. It may seem oxymoronic at first, but decentralized networks still need governance. The future may hold autonomously self-governing decentralized networks, but that model is not accepted in industries today. The governance challenge with decentralized network technology lies in who will establish and maintain policies, and how: network operations, on/offboarding of participants, fees, configurations, and software changes are among the issues that must be decided for a network to succeed. No company wants to participate in, or take a dependency on, a network that is controlled or run by a competitor, a potential competitor, or for that matter any single stakeholder at all.

Earlier this year, we presented a solution for Open Governance Networks that enable an industry or ecosystem to govern itself in an open, inclusive, neutral, and participatory model. You may be surprised to learn that it’s based on best practices in open governance we’ve developed over decades of facilitating the world’s most successful and competitive open source projects.

The Challenge

For the last few years, a running technology joke has been: “describe your problem, and someone will tell you blockchain is the solution.” Many concerns have been raised and much confusion created as overnight headlines hyped cryptocurrency schemes. Despite all this, behind the scenes, sophisticated companies understood all along that distributed ledger technology could be a powerful enabler for tackling complex challenges in an industry, or even a segment of one.

At the Linux Foundation, we focused on enabling those organizations to collaborate on open source enterprise blockchain technologies within our Hyperledger community. That community has driven collaboration on every aspect of enterprise blockchain technology, including identity, security, and transparency. Like other Linux Foundation projects, these enterprise blockchain communities are open, collaborative efforts. We have had many vertical industry participants, from retail, automotive, aerospace, banking, and other sectors, engage with real industry challenges they needed to solve. And in that subset of cases, enterprise blockchain is the answer.

The technology is ready. Enterprise blockchain has been through many proof-of-concept implementations, and we’ve already seen that many organizations have shifted to production deployments. A few notable examples are:

  • Trust Your Supplier: a network of 25 major corporate members, from Anheuser-Busch InBev to UPS. In production since September 2019.
  • Food Trust: launched August 2017 with ten members; now being used by all major retailers.
  • Honeywell GoDirect Trade: 50 vendors with storefronts in the new marketplace. In its first year, GoDirect Trade processed more than $5 million in online transactions.

However, just because we have the technology doesn’t mean we have the appropriate conditions to solve adoption challenges. A set of challenges around network governance has become a “last mile” problem for industry adoption. While there are already many examples of successful production deployments and multi-stakeholder engagements for commercial enterprise blockchains, specific adoption scenarios have been halted by uncertainty, or mistrust, over who will govern a blockchain network and how.

To state the issue precisely: in many situations, company A does not want to depend on, or trust, company B to control a network. For solutions that require broad industry participation to succeed, name any industry and there will be a company A and a company B.

We think the solution to this challenge will be Open Governance Networks.

The Linux Foundation vision of the Open Governance Network

An Open Governance Network is a distributed ledger service, composed of nodes, operated under the policies and directions of an inclusive set of industry stakeholders.

Open Governance Networks will set the policies and rules for participation in a decentralized ledger network that acts as an industry utility for transactions and data sharing among participants that have permissions on the network. The Open Governance Network model allows any organization to participate. Those organizations that want to be active in sharing the operational costs will benefit from having a representative say in the policies and rules for the network itself. The software underlying the Open Governance Network will be open source software, including the configurations and build tools so that anyone can validate whether a network node complies with the appropriate policies.

Many who have worked with the Linux Foundation will realize an open, neutral, and participatory governance model under a nonprofit structure that has already been thriving for decades in successful open source software communities. All we’re doing here is taking the same core principles of what makes open governance work for open source software, open standards, and open collaboration and applying those principles to managing a distributed ledger. This is a model that the Linux Foundation has used successfully in other communities, such as the Let’s Encrypt certificate authority.

Our ecosystem members trust the Linux Foundation to help solve this last mile problem using open governance under a neutral nonprofit entity. This is one solution to the concerns about neutrality and distributed control. In pan-industry use cases, it is generally not acceptable for one participant in the network to have power in any way that could be used as an advantage over someone else in the industry.  The control of a ledger is a valuable asset, and competitive organizations generally have concerns in allowing one entity to control this asset. If not hosted in a neutral environment for the community’s benefit, network control can become a leverage point over network users.

We see this neutrality of control challenge as the primary reason why some privately held networks have struggled to gain widespread adoption. In order to encourage participation, industry leaders are looking for a neutral governance structure, and the Linux Foundation has proven the open governance models accomplish that exceptionally well.

This neutrality of control issue is very similar to the rationale for public utilities. Because the economic model mirrors a public utility, we debated calling these “industry utility networks.” In our conversations, we have learned industry participants are open to sharing the cost burden to stand up and maintain a utility. Still, they want a low-cost, not profit-maximizing model. That is why our nonprofit model makes the most sense.

It’s also not a public utility in that each network we foresee today would be restricted in participation to those who have a stake in the network, not any random person in the world. There’s a layer of human trust that our communities have been enabling on top of distributed networks, which started with the Trust over IP Foundation.

Unlike public cryptocurrency networks where anyone can view the ledger or submit proposed transactions, industries have a natural need to limit access to legitimate parties in their industry. With minor adjustments to address the need for policies for transactions on the network, we believe a similar governance model applied to distributed ledger ecosystems can resolve concerns about the neutrality of control.

Understanding LF Open Governance Networks

Open Governance Networks can be reduced to the following building block components:

  • Business Governance: Networks need a decision-making body to establish core policies, make funding and budget decisions, contract with a network manager, and handle other business matters necessary for the network’s success. The Linux Foundation establishes a governing board to manage this business governance.
  • Technical Governance: Networks will require software. A technical open source community will openly maintain the software, specifications, or configuration decisions implemented by the network nodes. The Linux Foundation establishes a technical steering committee to oversee technical projects, configurations, working groups, etc.
  • Transaction Entity: Networks will require a transaction entity that will a) act as counterparty to agreements with parties transacting on the network, b) collect fees from participants, and c) execute contracts for operational support (e.g., hiring a network manager).

Of these building blocks, the Linux Foundation already offers its communities the business and technical governance needed for Open Governance Networks. The final component is new: LF Open Governance Networks.

LF Open Governance Networks will enable our communities to establish their own Open Governance Network and have an entity to process agreements and collect transaction fees. This new entity is a Delaware nonprofit, nonstock corporation that will maximize utility rather than profit. Through agreements with the Linux Foundation, LF Open Governance Networks will be available to Open Governance Networks hosted at the Linux Foundation.

If you’re interested in learning more about hosting an Open Governance Network at the Linux Foundation, please contact us at governancenetworks@linuxfoundation.org


The post Understanding Open Governance Networks appeared first on Linux.com.

What’s the next Linux workload that you plan to containerize?

Wednesday 10th of February 2021 04:18:28 AM

What’s the next Linux workload that you plan to containerize?

You’re convinced that containers are a good thing, but what’s the next workload to push to a container?
khess
Tue, 2/9/2021 at 8:18pm


Image by Gerd Altmann from Pixabay

I’m sure many of my fellow sysadmins have been tasked with cutting costs, making infrastructure more usable, making services more accessible, enhancing security, and enabling developers to be more autonomous when working with their test, development, and staging environments. You might have started your containerization efforts by moving some websites to containers. You might also have moved an application or two. This poll focuses on the next workload that you plan to containerize.

Topics: Linux, Linux Administration, Containers
Read More at Enable Sysadmin

The post What’s the next Linux workload that you plan to containerize? appeared first on Linux.com.

How to manage Linux container registries

Tuesday 9th of February 2021 11:09:02 PM

There are many options to manage Linux container registries using the registries.conf file.
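As a sketch of what that file contains, a minimal registries.conf (the TOML file consulted by Podman, Buildah, and CRI-O, typically at /etc/containers/registries.conf) might look like the file written below. The mirror hostname is hypothetical, and a real deployment would edit the system file rather than a local copy:

```shell
# Write a minimal registries.conf sketch (hostnames are examples only).
cat > registries.conf <<'EOF'
# Registries searched for short image names like "nginx"
unqualified-search-registries = ["registry.fedoraproject.org", "docker.io"]

[[registry]]
# Redirect pulls of docker.io images through a local mirror (hypothetical host)
prefix = "docker.io"
location = "mirror.example.internal:5000"
EOF
grep -c '^\[\[registry\]\]' registries.conf   # -> 1 registry block defined
```

Each additional `[[registry]]` table scopes a policy (mirroring, blocking, insecure transport) to one registry prefix.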
Read More at Enable Sysadmin

The post How to manage Linux container registries appeared first on Linux.com.

Linux is the Most in Demand Skill Amongst Hiring Managers – Here’s How You Can Take Advantage

Tuesday 9th of February 2021 08:00:37 AM

Linux powers modern technologies, from the internet and cloud to supercomputers and mobile phones. That’s why the 2020 Open Source Jobs Report found that 74% of hiring managers are looking for Linux talent, more than any other skill. If you want to work on today’s hottest technologies, you need to have a solid understanding of Linux.

At the same time, 93% of hiring managers reported they are having trouble finding qualified talent for these positions, the highest level in the history of this report. That means now is the time to jumpstart your IT career with Linux training and certification!

To help more folks earn a qualification to start or advance their IT career with Linux, we are offering substantial discounts for one week only!

If you’re totally new to the IT industry and are looking for a way to break in, our new Linux Foundation Certified IT Associate (LFCA) is the place to start. This multiple choice certification exam tests your knowledge of entry level IT concepts including Linux, system administration, cloud, security and more. We offer a variety of free courses and resources to help you prepare, which you can see on our LFCA resources page. This certification provides confidence to potential employers that you have the ability to carry out the duties of an entry level IT administrator.

Through February 16, 2021, you can purchase the LFCA exam for only $125 (regularly $200) using code ITSTART at checkout.

If you’d like to take that a step further, you can also add our Essentials of Linux System Administration (LFS201) training course and related Linux Foundation Certified System Administrator (LFCS) exam to the LFCA. The LFCS is an intermediate certification that helps demonstrate your skills in Linux system administration, and provides you with a credential demonstrating your ability to start work as a system administrator.

The bundle of LFCA + LFS201 + LFCS is only $249 (a $799 value) through February 16, 2021 with code ITCAREER. Enroll here.

Finally, if you feel ready to commit to a comprehensive course of study, we’re offering a power bundle to build a broad, deep set of skills. Progress from entry level (LFCA) to intermediate administrator (LFCS) to full engineer (LFCE), with the training you need to succeed. For only $279 (a $1397 value), you will get:

You will have one year from date of purchase to complete the four courses and three exams, but when you come out the other side you will have three highly respected industry certifications that will demonstrate that you have the knowledge, skills and tenacity to step into a Linux engineering role.

Use code ITPOWER to take advantage of this offer through February 16, 2021.

You can read more about all the special offers this month and take advantage of them here.


The post Linux is the Most in Demand Skill Amongst Hiring Managers – Here’s How You Can Take Advantage appeared first on Linux.com.

So, you are a Linux kernel programmer and you want to do some automated testing…

Tuesday 9th of February 2021 08:00:00 AM

So, you are a Linux kernel programmer and are looking for ways to do some automated testing.

ktest can build a kernel on a host system, boot it on a target machine, and run a script on the target. It’s up to you how far in the process ktest will go, so it’s possible to build, or build and boot, or do all three steps.
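A minimal ktest configuration sketch is below. The option names follow tools/testing/ktest/sample.conf in the kernel tree, but the paths and hostname are placeholders, so consult that file for the authoritative option reference:

```
# ktest.conf sketch: build on this host, boot and test on ${MACHINE}.
# Paths and hostname are placeholders; see tools/testing/ktest/sample.conf.
MACHINE = testbox                  # target machine to boot the kernel on
SSH_USER = root
BUILD_DIR = /home/me/linux         # kernel source tree
OUTPUT_DIR = /home/me/build        # out-of-tree build output
BUILD_TYPE = defconfig
TEST_TYPE = test                   # build, boot, then run TEST
TEST = ssh ${SSH_USER}@${MACHINE} /usr/local/bin/run-tests.sh
```

Changing TEST_TYPE to build or boot stops the run after the corresponding step, matching the build / build-and-boot / all-three workflow described above.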

Click to Read More at Oracle Linux Kernel Development

The post So, you are a Linux kernel programmer and you want to do some automated testing… appeared first on Linux.com.

Which workload did you first use Linux containers for?

Tuesday 9th of February 2021 05:24:10 AM

Which workload did you first use Linux containers for?

Linux containers are handy and efficient to use for certain workloads. Which workload got you started using containers?
khess
Mon, 2/8/2021 at 9:24pm


Containerization is not really a new technology, but it endures because of its efficiency, ease of use, security, and rapid deployment capability. Containers are perfect for isolating applications from one another on a single system. You can containerize just about any service, including web, database, application, storage, communication, and so on.
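To make “containerizing a service” concrete, a static web workload often needs little more than a short Containerfile. The sketch below uses placeholder paths and the stock nginx image; the build/run commands in the comment are the usual Podman invocations:

```shell
# Sketch a minimal Containerfile for a static web site (paths are placeholders).
cat > Containerfile <<'EOF'
FROM docker.io/library/nginx:alpine
# Ship the site content into the image; ./site/ is a placeholder path
COPY site/ /usr/share/nginx/html/
EXPOSE 80
EOF
# Building and running it would then be, e.g.:
#   podman build -t mysite . && podman run -d -p 8080:80 mysite
grep -c '^FROM' Containerfile   # -> 1 base image
```

Database or application workloads follow the same pattern, just with volumes and environment configuration added.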

Topics: Linux, Linux Administration, Containers
Read More at Enable Sysadmin

The post Which workload did you first use Linux containers for? appeared first on Linux.com.

Sysadmin careers: How long do you typically stay in a job?

Tuesday 9th of February 2021 04:47:51 AM

Some sysadmins change jobs often, while some of us stay too long in one place. Where do you fall on the job change continuum?
Read More at Enable Sysadmin

The post Sysadmin careers: How long do you typically stay in a job? appeared first on Linux.com.

How to get started with an Open Source Program Office

Monday 8th of February 2021 04:40:45 PM

Today, every company has to be a software company to function, and open source has become the preferred model for software development. However, many companies still don’t know how to properly engage with open source communities and code bases. Lacking a strategy for open source not only keeps companies from taking full advantage of it, but also exposes their own IP and code base to many risks, including open source license violations. Every company that deals with open source should have an Open Source Program Office, yet there is no playbook for creating one. Linux Foundation Training & Certification has released a new seven-course training series entitled “Open Source Management & Strategy,” authored by seasoned open source leader Guy Martin, Executive Director of OASIS Open, an internationally recognized standards development and open source projects consortium.



The post How to get started with an Open Source Program Office appeared first on Linux.com.

Linux Foundation Certifications: A Primer

Monday 8th of February 2021 08:00:26 AM

We frequently receive questions at Linux Foundation Training & Certification about which certification is the best fit for a given individual. You may be unsure if a given certification will advance your career, help you break into a new one, or even if you have the skills needed to be successful on the exam. This article aims to provide a primer, giving an overview of each exam, who it is for, what topics are covered, how to prepare and what it demonstrates. 


Linux Foundation Certified IT Associate (LFCA)

LFCA is the first entry-level IT certification from The Linux Foundation. Unlike most entry-level certifications on the market, it includes elements of modern IT infrastructures such as cloud computing, which is essential in most IT roles today. The 2020 Open Source Jobs Report found that the top three skills sought by employers are Linux, cloud and security, all of which are covered here. Additionally, the most in demand job role is DevOps, skills for which are also tested on the exam, making this certification the ideal way to demonstrate you have the skills to hit the ground running in a new IT career.

About this certification

The Certified IT Associate credential confirms early proficiency and aptitude in the IT field. The exam is intended to integrate with other qualifications and provides a stepping stone to more advanced credentials.

Who is it for?

The LFCA is a pre-professional certification intended for those new to the industry or considering starting an IT career as an administrator or engineer. This certification is ideal for users interested in advancing to the professional level through a demonstrated understanding of critical concepts for modern IT systems including cloud computing. This is a beginner-level certification and requires no prior experience.

Job titles for those holding this certification could include:

  • Technical Support Specialist
  • Junior System Administrator
  • Junior System Analyst

What does it demonstrate?

LFCA will test candidates’ knowledge of fundamental IT concepts including operating systems, software application installation and management, hardware installation, use of the command line and basic programming, basic networking functions, security best practices, and other related topics to validate their capability and preparedness for an entry-level IT position.

What is covered on the exam?

  • The exam is delivered online and consists of 60 multiple choice questions.
  • Candidates have 90 minutes to complete the LFCA exam.
  • The exam is proctored remotely via streaming audio, video, and screen sharing feeds.
  • Results will be emailed 36 hours from the time that the exam is completed.

Topics covered in the LFCA exam and their weights include: 

  • Linux Fundamentals – 20%
  • System Administration Fundamentals – 20%
  • Cloud Computing Fundamentals – 20%
  • Security Fundamentals – 16%
  • DevOps Fundamentals – 16%
  • Supporting Applications and Developers – 8%

View the full LFCA Domains & Competencies.

How should I prepare?

Unlike many Linux Foundation certifications, LFCA does not have a single course to cover all aspects of the exam. This is by design as the exam covers a broad range of topics necessary to be successful as an IT administrator today. Many of the free Linux Foundation Training courses do explore these skills, which means it’s possible to prepare for the exam without paying for a course.

Among the free courses that help prepare you for LFCA are:

View our LFCA resources page for a complete list.

Note that you can audit each of these courses for free for seven weeks; it is therefore recommended to enroll in and complete them one at a time, so you do not run out of time on a later course before finishing your current one.

Outside of Linux Foundation resources, there are many third party training providers offering paid courses to prepare you for the LFCA (Linux Foundation Training & Certification cannot endorse or verify the validity of information provided in any third-party courses). 

Linux Foundation Certified System Administrator (LFCS)

LFCS is ideal for candidates looking to validate their Linux system administration skill set. It is an intermediate-level certification, and a good starting point for those wishing to work as a Linux sysadmin. This exam can also be a useful certification to hold if you plan to move into work in cloud administration, as almost all cloud instances run on Linux, and to be effective in such a role you need a strong foundation in Linux. 

About this certification

LFCS was developed by The Linux Foundation to help meet the high demand for Linux administration talent. The exam consists of performance-based items that simulate on-the-job tasks and scenarios faced by sysadmins in the real world, conducted in the command line. Candidates can select either Ubuntu 18 or CentOS 7, so it is best to practice with these distributions prior to sitting for the exam.

Who is it for?

LFCS is ideal for candidates early in their Linux system administration or open source career. Candidates should have solid experience with, or have completed training in, Linux system administration before attempting this exam.

Job titles for those holding this certification could include:

  • System Administrator
  • Linux Administrator
  • System Analyst
  • Database Administrator
  • DevOps Engineer
  • IT Technician
  • Network Technician

What does it demonstrate?

Certified Linux systems administrators can work proficiently to design, install, configure, and manage a system installation. They will have an understanding of key concepts such as networking, storage, security, maintenance, logging and monitoring, application lifecycle, troubleshooting, API object primitives and the ability to establish basic use-cases for end users.

What is covered on the exam?

  • The exam is delivered online and consists of performance-based tasks (problems) to be solved on the command line running Linux.
  • The exam consists of 20-25 performance-based tasks.
  • The exam is expected to take 2 hours to complete.
  • The exam is proctored remotely via streaming audio, video, and screen sharing feeds.

Topics covered in the LFCS exam and their weights include: 

  • Essential Commands – 25%
  • Operation of Running Systems – 20%
  • User and Group Management – 10%
  • Networking – 12%
  • Service Configuration – 20%
  • Storage Management – 13%

View the full LFCS Domains & Competencies.

How should I prepare?

The most direct way to prepare for LFCS is to take the Essentials of Linux System Administration (LFS201) training course. This course covers the topics and skills necessary to pass this exam, and also prepares you to be successful in a career as a Linux sysadmin. The course was developed, and is maintained by, the same team that created the exam at The Linux Foundation, so you can be assured it is relevant and up to date. 

Some free courses that help prepare you for LFCS are:

Note that you can audit each of these courses for free for seven weeks; it is therefore recommended to enroll in and complete them one at a time, so you do not run out of time on a later course before finishing your current one.

Outside of Linux Foundation resources, there are many third party training providers offering courses to prepare you for the LFCS (Linux Foundation Training & Certification cannot endorse or verify the validity of information provided in any third-party courses).

Linux Foundation Certified Engineer (LFCE)

LFCE is designed for the Linux engineer looking to demonstrate a more advanced level of Linux administration and engineering skill. It can be a great step towards becoming a kernel developer or maintainer as well.

About this certification

LFCE was developed by The Linux Foundation to help meet the high demand for Linux engineering talent. The exam is performance-based, delivered on the command line, and includes items simulating on-the-job scenarios. Candidates can select either Ubuntu 18.04 or CentOS 7, so it is best to practice with these distributions before sitting for the exam.

Who is it for?

LFCE is the ideal certification for the Linux engineer with at least three to five years of Linux experience. It is designed for the engineer looking to demonstrate a higher level of skill set to help qualify for a promotion or land a new, more advanced job.

Job titles for those holding this certification could include:

  • Site Reliability Engineer
  • Senior System Administrator
  • Senior System Analyst
  • Systems Engineer
  • Senior DevOps Engineer
  • Network Engineer
  • Linux Engineer
  • Linux Developer

What does it demonstrate?

Holding an LFCE demonstrates that the certificant is able to deploy and configure the Linux operating system at enterprise scale. It shows they possess all the necessary skills to work as a Linux engineer. Passing a performance-based exam demonstrates the candidate’s ability to perform challenging real-world tasks under time constraints.

What is covered on the exam?

  • The exam is delivered online and consists of performance-based tasks (problems) to be solved on the command line running Linux.
  • The exam consists of 20-25 performance-based tasks.
  • The exam is expected to take 2 hours to complete.
  • The exam is proctored remotely via streaming audio, video, and screen sharing feeds.

Topics covered in the LFCE exam and their weights include: 

  • Essential Commands – 25%
  • Operation of Running Systems – 20%
  • User and Group Management – 10%
  • Networking – 12%
  • Service Configuration – 20%
  • Storage Management – 13%

View the full LFCE Domains & Competencies.

How should I prepare?

The most direct way to prepare for LFCE is to take the Linux Networking and Administration (LFS211) training course. This course covers the topics and skills necessary to pass the exam, and also prepares you to be successful in a career as a Linux engineer. The course was developed and is maintained by the same team at The Linux Foundation that created the exam, so you can be assured it is relevant and up to date.

Among the free courses that help prepare you for LFCE are:

Note that you can audit each of these courses for free for seven weeks, so it is recommended you enroll in and complete them one at a time to avoid running out of audit time on a later course before finishing your current one.

Outside of Linux Foundation resources, there are many third party training providers offering courses to prepare you for the LFCE (Linux Foundation Training & Certification cannot endorse or verify the validity of information provided in any third-party courses).

Certified Kubernetes Administrator (CKA)

The CKA exam, launched in 2017, has quickly risen to become one of the most in-demand cloud certifications globally. With the rapid adoption of Kubernetes by organizations of all sizes, the need for more cloud administrators and engineers with Kubernetes skills and knowledge has been of paramount importance. In fact, the 2020 Open Source Jobs Report found that knowledge of cloud and containers has the biggest impact on hiring decisions. CKA provides assurance that individuals have the skills, knowledge, and competency to perform the responsibilities of Kubernetes administrators.

About this certification

CKA was created by The Linux Foundation and the Cloud Native Computing Foundation (CNCF) as a part of their ongoing effort to help develop the Kubernetes ecosystem. The exam is an online, proctored, performance-based test that requires solving multiple tasks from a command line running Kubernetes.

Who is it for?

This certification is for Kubernetes administrators, cloud administrators and other IT professionals who manage Kubernetes instances. This is an intermediate level exam and experience and/or professional training is recommended before pursuing it.

Job titles for those holding this certification could include:

  • Cloud Administrator
  • Kubernetes Administrator
  • Kubernetes Engineer
  • Cloud Architect
  • Cloud Engineer
  • Cloud Network Administrator
  • Cloud Support Specialist
  • Cloud Computing Specialist
  • DevOps Engineer

What does it demonstrate?

A certified K8s administrator has demonstrated the ability to perform basic installations as well as configure and manage production-grade Kubernetes clusters. They will have an understanding of key concepts such as Kubernetes networking, storage, security, maintenance, logging and monitoring, application lifecycle, troubleshooting, and API object primitives, and the ability to establish basic use-cases for end users.

What is covered on the exam?

  • The exam is delivered online and consists of performance-based tasks (problems) to be solved on the command line running Linux.
  • The exam consists of 15-20 performance-based tasks.
  • Candidates have 2 hours to complete the exam.
  • The exam is proctored remotely via streaming audio, video, and screen sharing feeds.
  • Results will be emailed 36 hours from the time that the exam is completed.

Topics covered in the CKA exam and their weights include: 

  • Storage – 10%
  • Troubleshooting – 30%
  • Workloads & Scheduling – 15%
  • Cluster Architecture, Installation & Configuration – 25%
  • Services & Networking – 20%

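To give a flavor of the Workloads & Scheduling domain, the sketch below shows a minimal Deployment manifest of the sort exam tasks revolve around (the name, labels, and image tag here are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # hypothetical name
spec:
  replicas: 3              # desired number of Pods
  selector:
    matchLabels:
      app: web             # must match the Pod template's labels
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.21  # hypothetical image tag
        ports:
        - containerPort: 80
```

Applying a manifest like this with kubectl apply -f, then scaling, updating, or troubleshooting the resulting Pods from the command line, is representative of the exam's task style.
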
View the full CKA Domains & Competencies.

How should I prepare?

Due to its popularity, there are a wealth of materials available to prepare for the CKA exam. The most directly relevant structured training course is Kubernetes Fundamentals (LFS258) which was developed by CNCF and The Linux Foundation – the same folks who created the CKA exam – and covers the same subject areas tested on the exam; this course can be purchased as a bundle with the CKA at a discounted rate. Since we develop both the exam and course, we guarantee the latest version of the course is always designed to prepare you for the latest version of the exam. If you are a complete beginner, you may also want to consider the Cloud Engineer Bootcamp which provides foundational Linux and cloud knowledge, culminating with the CKA as a “final exam”. 

Among the free courses that help prepare you for CKA are:

Note that you can audit each of these courses for free for seven weeks, so it is recommended you enroll in and complete them one at a time to avoid running out of audit time on a later course before finishing your current one.

Outside of Linux Foundation and CNCF resources, there are many third party training providers offering courses to prepare you for the CKA (Linux Foundation Training & Certification cannot endorse or verify the validity of information provided in any third-party courses).

Certified Kubernetes Application Developer (CKAD)

Modern applications today are built as cloud native by default. This requires knowledge of cloud tools including Kubernetes, which is why The Linux Foundation and CNCF developed the CKAD exam which certifies that users can design, build, configure, and expose cloud native applications for Kubernetes. 

About this certification

CKAD has been developed by The Linux Foundation and the Cloud Native Computing Foundation (CNCF), to help expand the Kubernetes ecosystem through standardized training and certification. This exam is an online, proctored, performance-based test that consists of a set of performance-based tasks (problems) to be solved in a command line.

Who is it for?

This certification is for Kubernetes engineers, cloud engineers and other IT professionals responsible for building, deploying, and configuring cloud native applications with Kubernetes. This is an intermediate-level certification and experience and/or professional training is recommended before pursuing it.

Job titles for those holding this certification could include:

  • Kubernetes Engineer
  • Kubernetes Developer
  • Cloud Engineer
  • Cloud Network Engineer
  • Cloud Architect
  • Cloud Systems Engineer
  • Cloud Developer
  • Cloud Applications Engineer
  • Cloud Applications Developer
  • DevOps Cloud Architect

What does it demonstrate?

The Certified Kubernetes Application Developer can design, build, configure, and expose cloud native applications for Kubernetes. A CKAD can define application resources and use core primitives to build, monitor, and troubleshoot scalable applications and tools in Kubernetes.

The exam assumes knowledge of, but does not test for, container runtimes and microservice architecture.

The successful candidate will be comfortable using:

– An OCI-compliant container runtime, such as Docker or rkt.

– Cloud native application concepts and architectures.

– A programming language, such as Python, Node.js, Go, or Java.

What is covered on the exam?

  • The exam is delivered online and consists of performance-based tasks (problems) to be solved on the command line running Linux.
  • The exam consists of 15-20 performance-based tasks.
  • Candidates have 2 hours to complete the exam.
  • The exam is proctored remotely via streaming audio, video, and screen sharing feeds.
  • Results will be emailed 36 hours from the time that the exam is completed.

Topics covered in the CKAD exam and their weights include: 

  • Core Concepts – 13%
  • Configuration – 18%
  • Multi-Container Pods – 10%
  • Observability – 18%
  • Pod Design – 20%
  • Services & Networking – 13%
  • State Persistence – 8%

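As an illustration of the Multi-Container Pods domain, here is a sketch of a Pod in which a sidecar container tails a log file written by the main container via a shared emptyDir volume (all names and images are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar      # hypothetical name
spec:
  volumes:
  - name: shared-logs
    emptyDir: {}              # scratch volume shared by both containers
  containers:
  - name: app
    image: busybox:1.35       # hypothetical image
    command: ['sh', '-c', 'while true; do date >> /logs/app.log; sleep 5; done']
    volumeMounts:
    - name: shared-logs
      mountPath: /logs
  - name: log-reader          # sidecar: reads what the main container writes
    image: busybox:1.35
    command: ['sh', '-c', 'tail -f /logs/app.log']
    volumeMounts:
    - name: shared-logs
      mountPath: /logs
```
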
View the full CKAD Domains & Competencies.

How should I prepare?

The Linux Foundation and CNCF offer an online training course that helps prepare you for the CKAD exam. Kubernetes for Developers (LFD259) comes directly from the same organizations that created and maintain CKAD, so you can be assured that the training will always cover the topics most relevant to the exam. 

If you do not already have experience with developing cloud applications, it is important that you gain foundational knowledge and skills with the cloud before pursuing this certification, or even the related training. We offer a variety of cloud and containers training courses, including many free ones such as Introduction to Cloud Infrastructure Technologies (LFS151) and Introduction to Kubernetes (LFS158) that provide much of this knowledge.

Among the other free courses that help prepare you for CKAD are:

Note that you can audit each of these courses for free for seven weeks, so it is recommended you enroll in and complete them one at a time to avoid running out of audit time on a later course before finishing your current one.

Outside of Linux Foundation and CNCF resources, there are many third party training providers offering courses to prepare you for the CKAD (Linux Foundation Training & Certification cannot endorse or verify the validity of information provided in any third-party courses).

Certified Kubernetes Security Specialist (CKS)

As production environments become more decoupled and agile, keeping the entire environment secure has become more complex. This challenge will only become more acute as cloud adoption accelerates. Additionally, the 2020 Open Source Jobs Report found that cloud and security skills have the biggest and third-biggest impact on hiring decisions, respectively, further highlighting the talent gap for these skills.

About this certification

CKS is a performance-based certification exam that tests candidates’ knowledge of Kubernetes and cloud security in a simulated, real world environment. Candidates must have taken and passed the CKA exam prior to attempting the CKS exam. 

Who is it for?

Accomplished Kubernetes practitioners (must be CKA certified) who have demonstrated competence on a broad range of best practices for securing container-based applications and Kubernetes platforms during build, deployment and runtime are the primary audience for CKS. This is an intermediate-level certification and experience and/or professional training is recommended before pursuing it.

Job titles for those holding this certification could include:

  • Cloud Security Specialist
  • Kubernetes Engineer
  • Cloud Engineer
  • Cloud Network Administrator
  • Cloud Architect
  • Cloud Systems Engineer
  • Cloud Security Consultant
  • Cybersecurity Specialist
  • Cybersecurity Administrator
  • Cybersecurity Engineer

What does it demonstrate?

Obtaining a CKS demonstrates a candidate possesses the requisite abilities to secure container-based applications and Kubernetes platforms during build, deployment and runtime, and is qualified to perform these tasks in a professional setting.

What is covered on the exam?

  • The exam is delivered online and consists of performance-based tasks (problems) to be solved on the command line running Linux.
  • The exam consists of 15-20 performance-based tasks.
  • Candidates have 2 hours to complete the CKS exam.
  • The exam is proctored remotely via streaming audio, video, and screen sharing feeds.
  • Results will be emailed 36 hours from the time that the exam is completed.

Topics covered in the CKS exam and their weights include: 

  • Cluster Setup – 10%
  • Cluster Hardening – 15%
  • System Hardening – 15%
  • Minimize Microservice Vulnerabilities – 20%
  • Supply Chain Security – 20%
  • Monitoring, Logging and Runtime Security – 20%

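The Cluster Setup domain includes network security primitives such as NetworkPolicies. As an illustrative sketch, a default-deny ingress policy (the name and namespace are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress   # hypothetical name
  namespace: production        # hypothetical namespace
spec:
  podSelector: {}              # empty selector: applies to every Pod in the namespace
  policyTypes:
  - Ingress                    # no ingress rules listed, so all inbound traffic is denied
```

Because the empty podSelector matches every Pod in the namespace and no ingress rules are listed, all inbound traffic is blocked until more specific allow policies are added.
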
View the full CKS Domains & Competencies.

How should I prepare?

The first thing to keep in mind if your goal is to obtain a CKS is that CKA is a prerequisite; you will not be permitted to sit for the CKS exam before first achieving that, so if you have not already done so, refer back to the CKA section. Assuming you have already earned a CKA, you already have the cloud, container, and Kubernetes knowledge needed for this exam, but will likely need training on security topics. The Kubernetes Security Essentials (LFS260) course from The Linux Foundation and CNCF provides the knowledge you need to be successful in securing cloud native applications, and covers the topics tested in the CKS exam.

Among the free courses that help prepare you for CKS are:

Note that you can audit each of these courses for free for seven weeks, so it is recommended you enroll in and complete them one at a time to avoid running out of audit time on a later course before finishing your current one.

Outside of Linux Foundation and CNCF resources, there are many third party training providers offering courses to prepare you for the CKS (Linux Foundation Training & Certification cannot endorse or verify the validity of information provided in any third-party courses).

FinOps Certified Practitioner (FOCP)

FinOps is a rapidly growing practice of bringing financial accountability to the variable spend model of cloud, enabling distributed teams to make business trade-offs between speed, cost, and quality. With the rapid adoption of cloud by organizations of all sizes, it is essential that these topics be explored to ensure a positive cost-benefit is achieved by use of cloud technology. FOCP is ideal for individuals who want to validate and showcase their cloud financial management and cost optimization skills regardless of the cloud platform they use.

About this certification

FOCP is an online non-proctored exam that can take up to 60 minutes to complete. It includes 50 multiple choice questions, some with multiple selections as indicated in the question text, and tests your foundational knowledge of FinOps and its practice. 

Who is it for?

The certification is designed for senior professionals who want to demonstrate a basic understanding of FinOps and how it is applied to enhance business value from cloud spend. Organizations pursuing a public cloud-first strategy or in the midst of a public cloud migration will also benefit greatly.

Job titles for those holding this certification could include:

  • FinOps Practitioner
  • Director/Manager of Cloud Optimization
  • Principal Systems Engineer
  • Director/Manager of Engineering
  • Cloud Architect
  • Head of IT Finance
  • Director/Manager of Finance
  • Technology Procurement Manager

What does it demonstrate?

Those holding an FOCP will bring a strong understanding of FinOps, including its principles and capabilities, and will know how to support and manage the FinOps lifecycle to control cloud cost and usage in their organization.

What is covered on the exam?

  • The exam is delivered online and consists of 50 multiple choice questions.
  • Candidates have 60 minutes to complete the FOCP exam.
  • Results will be emailed 36 hours from the time that the exam is completed.

Topics covered in the FOCP exam and their weights include: 

  • Challenge of Cloud – 8%
  • What is FinOps & FinOps Principles – 12%
  • FinOps Teams & Motivation – 12%
  • FinOps Capabilities – 28%
  • FinOps Lifecycle – 30%
  • Terminology & the Cloud Bill – 10%

View the full FOCP Domains & Competencies.

How should I prepare?

The FinOps Foundation offers both self-paced and virtual instructor-led training options for those who need help preparing for the FOCP exam. Before enrolling, you should understand the basics of how cloud computing works, know the key services on your cloud providers, including their common use cases, and have a basic understanding of billing and pricing models. You should already be able to describe the basic value proposition of running in the cloud and understand the core concept of using a pay-as-you-go consumption model. We also encourage those who are new to FinOps to first complete the free Introduction to FinOps (LFS175) before pursuing further study. It provides many of the fundamentals of FinOps while the self-paced and instructor-led courses are more in-depth and prepare you for the FOCP exam.

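The pay-as-you-go consumption model mentioned above can be made concrete with a back-of-the-envelope calculation; the rates below are hypothetical and do not reflect any provider's actual pricing:

```shell
# Back-of-the-envelope pay-as-you-go cost (all rates hypothetical)
HOURLY_RATE=0.096      # e.g. a mid-size VM at $0.096/hour
HOURS_PER_MONTH=730    # roughly 365 * 24 / 12

# Running the VM around the clock for a month
awk -v r="$HOURLY_RATE" -v h="$HOURS_PER_MONTH" \
    'BEGIN { printf "full month on-demand: $%.2f\n", r * h }'

# The same VM used 8 hours a weekday (~160 h/month) costs far less,
# which is the trade-off FinOps practitioners quantify
awk -v r="$HOURLY_RATE" 'BEGIN { printf "160 h/month: $%.2f\n", r * 160 }'
```
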
The Linux Foundation’s free Introduction to Cloud Infrastructure Technologies (LFS151) is a good starting place. You’ll also need a base level of knowledge of at least one of the three main public cloud providers (AWS, Azure, Google Cloud). For AWS, we recommend AWS Business Professional training or, even better, the AWS Cloud Practitioner certification. For Google, check out the Google Cloud Platform Fundamentals course. For Azure, try the Azure Fundamentals learning path.

Certified Hyperledger Fabric Administrator (CHFA)

Enterprise blockchain is one of the fastest-growing areas of technology, with LinkedIn even naming it the most in-demand hard skill of 2020. Hyperledger Fabric is a distributed ledger technology intended as a foundation for developing applications or solutions with a modular architecture. CHFA allows candidates to demonstrate their competence in deploying and operating a Hyperledger Fabric network through the command line.

About this certification

This two-hour Hyperledger Fabric certification exam is an online, proctored, performance-based test that consists of a set of performance-based tasks (problems) to be solved in a command line.

Who is it for?

CHFA certification is for sysadmins or developers who want to demonstrate their ability to effectively build a secure Hyperledger Fabric network for commercial deployment.

Job titles for those holding this certification could include:

  • Blockchain Specialist
  • Blockchain Administrator
  • Blockchain Engineer
  • System Administrator

What does it demonstrate?

The CHFA will be able to effectively build a secure Hyperledger Fabric network for commercial deployment. Additionally, the CHFA will be able to install, configure, operate, manage, and troubleshoot the nodes on that network. Passing a performance-based exam demonstrates the candidate’s ability to perform challenging real-world tasks under time constraints.

What is covered on the exam?

  • The exam is delivered online and consists of performance-based tasks (problems) to be solved on the command line running Linux.
  • The exam consists of 16-26 performance-based tasks.
  • Candidates have 2 hours to complete the exam.
  • The exam is proctored remotely via streaming audio, video, and screen sharing feeds.
  • Results will be emailed 36 hours from the time that the exam is completed.

Topics covered in the CHFA exam and their weights include: 

  • Application Lifecycle Management – 20%
  • Install and Configure Network – 25%
  • Diagnostics and Troubleshooting – 15%
  • Membership Service Provider – 20%
  • Network Maintenance and Operations – 20%

View the full CHFA Domains & Competencies.

How should I prepare?

The Linux Foundation offers a companion course, Hyperledger Fabric Administration (LFS272), which provides a good understanding of the Hyperledger Fabric network topology, chaincode operations, administration of identities and permissions, how and where to configure component logging, and much more. The topics covered in the course align with the CHFA exam domains and will increase your chances of passing.

Before getting to that stage, you should be familiar with a number of concepts including:

  • Knowledge of basic Linux system administration commands and navigation
  • Knowledge of bash basics
  • Strong knowledge of containerization and Docker
  • Familiarity with NoSQL databases and general understanding of CouchDB
  • Ability to read the JavaScript, TypeScript, and Go programming languages
  • Strong familiarity with YAML

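The "bash basics" prerequisite above refers to constructs such as variables, loops, conditionals, and arithmetic. A minimal, self-contained sketch (the peer names are hypothetical):

```shell
#!/bin/sh
# A few shell constructs the "bash basics" prerequisite refers to (illustrative only)

PEERS="peer0 peer1 peer2"        # hypothetical node names

count=0
for p in $PEERS; do              # iterate over the whitespace-separated list
  echo "checking $p"
  count=$((count + 1))           # arithmetic expansion
done

if [ "$count" -gt 2 ]; then      # numeric comparison
  echo "quorum possible with $count peers"
fi
```
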
For those who need to brush up on these concepts, as well as blockchain basics, before pursuing this certification, Linux Foundation Training & Certification offers a number of free courses that are relevant:

Note that you can audit each of these courses for free for seven weeks, so it is recommended you enroll in and complete them one at a time to avoid running out of audit time on a later course before finishing your current one.

Hyperledger also offers a variety of resources including tutorials, how-to videos, and webinars that can help you learn about Fabric administration.

Outside of Linux Foundation and Hyperledger resources, there are many third party training providers offering courses to prepare you for the CHFA (Linux Foundation Training & Certification cannot endorse or verify the validity of information provided in any third-party courses).

Certified Hyperledger Fabric Developer (CHFD)

Note: We have temporarily suspended scheduling of new attempts for the CHFD exam to conduct an upgrade to Fabric v2.2 and address issues we have experienced in correctly scoring some exam reservations. Our target to recommence exam scheduling is end of Q1 2021.

Enterprise blockchain is one of the fastest-growing areas of technology, with LinkedIn even naming it the most in-demand hard skill of 2020. Hyperledger Fabric is a distributed ledger technology intended as a foundation for developing applications or solutions with a modular architecture. CHFD allows candidates to demonstrate the knowledge to develop and maintain client applications and smart contracts using the latest Fabric programming model.

About this certification

This two-hour exam is an online, proctored, performance-based test that consists of a set of performance-based tasks (problems) to be solved in a Web IDE and the command line.

Who is it for?

CHFD is for developers who want to demonstrate their ability to package and deploy Fabric applications and smart contracts, perform end-to-end Fabric application life-cycle and smart contract management, program in Java or Node.js (or Go for smart contracts) and more.

Job titles for those holding this certification could include:

  • Blockchain Specialist
  • Blockchain Developer
  • Blockchain Engineer
  • Developer
  • Software Architect

What does it demonstrate?

A CHFD should demonstrate the knowledge to develop and maintain client applications and smart contracts using the latest Fabric programming model.

Such a developer must also be able to:

– package and deploy Fabric applications and smart contracts, perform end-to-end Fabric application life-cycle and smart contract management

– program in Java or Node.js (or Go for smart contracts)

Passing a performance-based exam demonstrates the candidate’s ability to perform challenging real-world tasks under time constraints.

What is covered on the exam?

  • The exam is delivered online and consists of performance-based tasks (problems) to be solved on the command line running Linux.
  • The exam consists of 16-26 performance-based tasks.
  • Candidates have 2 hours to complete the exam.
  • The exam is proctored remotely via streaming audio, video, and screen sharing feeds.
  • Results will be emailed 36 hours from the time that the exam is completed.

Topics covered in the CHFD exam and their weights include: 

  • Identity Management – 7%
  • Network Configuration – 8%
  • Smart Contract Development – 40%
  • Smart Contract Invocation – 25%
  • Maintenance and Testing – 20%

View the full CHFD Domains & Competencies.

How should I prepare?

The Linux Foundation offers a companion course, Hyperledger Fabric for Developers (LFD272), which provides a good understanding of how to implement and test chaincode in Golang for any use case, manage the chaincode life cycle, create Node.js client applications that interact with Hyperledger Fabric networks, control access to information based on a user identity, set up and use private data collections, and much more. The topics covered in the course align with the CHFD exam domains and will increase your chances of passing.

Before getting to that stage, you should be familiar with a number of concepts including:

  • Understanding of Hyperledger Fabric architecture and components: Ledger, Channel, Chaincode, types of network nodes (Endorser, Committer, Orderer, etc.), transaction flow, Certificate Authority (CA)
  • Experience with GoLang and NodeJS:
    • Ability to install GoLang, run go commands from the cli; knowledge of basic language constructions
    • Ability to install NodeJS, run applications from the cli; knowledge of basic language constructions; familiarity with package management
  • Knowledge of Docker basics:
    • Ability to install docker daemon, run docker containers locally, understand and use basic commands
  • Experience with the command line/shell of a Linux operating system
  • Familiarity with NoSQL databases and general understanding of CouchDB

For those who need to brush up on these concepts, as well as blockchain basics, before pursuing this certification, Linux Foundation Training & Certification offers a number of free courses that are relevant:

Note that you can audit each of these courses for free for seven weeks, so it is recommended you enroll in and complete them one at a time to avoid running out of audit time on a later course before finishing your current one.

Hyperledger also offers a variety of resources including tutorials, how-to videos, and webinars that can help you learn about Fabric development.

Outside of Linux Foundation and Hyperledger resources, there are many third party training providers offering courses to prepare you for the CHFD (Linux Foundation Training & Certification cannot endorse or verify the validity of information provided in any third-party courses).

OpenJS Node.js Application Developer (JSNAD)

Node.js is the extremely popular open source JavaScript runtime, used by some of the biggest names in technology, including Bloomberg, LinkedIn, Netflix, NASA, and more. JSNAD tests and verifies candidates’ skills in using Node.js to create web-based applications.

About this certification

The two-hour exam tests your skills from debugging Node.js to managing asynchronous operations to controlling processes. It tests knowledge and skills that an experienced Node.js application developer would be expected to possess. This exam is an online, proctored, performance-based test that requires implementing multiple solutions within a Remote Desktop Linux environment. Visual Studio Code, Vim and Webstorm (kindly sponsored by JetBrains) are included as editors in this environment. The exam includes tasks simulating on-the-job scenarios.

Who is it for?

JSNAD certification is ideal for the Node.js developer with at least two years of experience working with Node.js. It is designed for anyone looking to demonstrate competence with Node.js to create applications of any kind, with a focus on knowledge of Node.js core APIs.

Job titles for those holding this certification could include:

  • Application Developer
  • Developer
  • Web Developer
  • Web Architect
  • Web Engineer
  • Node.js Specialist
  • Node.js Developer
  • Node.js Architect
  • Node.js Engineer
  • Full Stack Developer

What does it demonstrate?

JSNAD certification demonstrates the ability to perform tasks in real-world environments, giving employers confidence that the certificant possesses a broad range of skills around JavaScript and related technologies. Passing a performance-based exam demonstrates the candidate’s ability to perform challenging real-world tasks under time constraints.

What is covered on the exam?

  • This exam is an online, proctored, performance-based test that requires implementing multiple solutions within a Remote Desktop Linux environment.
  • The exam consists of 20-25 performance-based tasks.
  • The exam is expected to take 2 hours to complete.
  • The exam is proctored remotely via streaming audio, video, and screen sharing feeds.

Topics covered in the JSNAD exam and their weights include: 

  • Buffer and Streams – 11%
  • Control flow – 12%
  • Child Processes – 8%
  • Diagnostics – 6%
  • Error Handling – 8%
  • Node.js CLI – 4%
  • Events – 11%
  • File System – 8%
  • JavaScript Prerequisites – 7%
  • Module system – 7%
  • Process/Operating System – 6%
  • Package.json – 6%
  • Unit Testing – 6%

View the full JSNAD Domains & Competencies.

How should I prepare?

The Linux Foundation offers a companion training course, Node.js Application Development (LFW211), which covers a broad set of use cases using Node.js core APIs with selected ecosystem libraries, and fully prepares you for the JSNAD. We also encourage those who are new to Node.js to first complete the free Introduction to Node.js (LFW111, launching mid-February 2021) before pursuing further study.

The Node.js community also offers a website with a variety of free learning resources and guides.

Outside of Linux Foundation and OpenJS Foundation resources, there are many third party training providers offering courses to prepare you for the JSNAD (Linux Foundation Training & Certification cannot endorse or verify the validity of information provided in any third-party courses).

OpenJS Node.js Services Developer (JSNSD)

Node.js is the extremely popular open source JavaScript runtime, used by some of the biggest names in technology, including Bloomberg, LinkedIn, Netflix, NASA, and more. JSNSD tests and verifies candidates’ skills in creating RESTful Node.js servers and services (or microservices) with a particular emphasis on security practices.

About this certification

The two-hour exam tests your skills in the areas of services, servers and security. Specific knowledge and skills tested are those an experienced Node.js developer would be expected to have. The exam is performance-based and includes items simulating on-the-job scenarios. It is an online, proctored, performance-based test that requires implementing multiple solutions within a Remote Desktop Linux environment. Visual Studio Code, Vim and Webstorm (kindly sponsored by JetBrains) are included as editors in this environment.

Who is it for?

JSNSD is for the Node.js developer with at least two years of experience creating RESTful servers and services with Node.js. It is designed for anyone looking to demonstrate competence in creating RESTful Node.js servers and services (or microservices) with a particular emphasis on security practices.

Job titles for those holding this certification could include:

  • Services Developer
  • Developer
  • Web Developer
  • Web Architect
  • Web Engineer
  • Node.js Specialist
  • Node.js Developer
  • Node.js Architect
  • Node.js Engineer
  • Full Stack Developer

What does it demonstrate?

JSNSD certification demonstrates the ability to perform tasks in a real-world-style environment, giving employers confidence that the certificant possesses a broad range of skills around JavaScript and related technologies. Passing a performance-based exam demonstrates the candidate’s ability to perform challenging real-world tasks under time constraints.

What is covered on the exam?

  • This exam is an online, proctored, performance-based test that requires implementing multiple solutions within a Remote Desktop Linux environment.
  • The exam consists of 5 to 10 performance-based tasks.
  • The exam is expected to take 2 hours to complete.
  • The exam is proctored remotely via streaming audio, video, and screen sharing feeds.

Topics covered in the JSNSD exam and their weights include: 

  • Servers and Services – 70%
  • Security – 30%

View the full JSNSD Domains & Competencies.

How should I prepare?

The Linux Foundation offers a companion training course, Node.js Services Development (LFW212), which provides a deep dive into Node core HTTP clients and servers, web servers, RESTful services and web security essentials, and prepares you for the JSNSD. We also encourage those who are new to Node.js to first complete the free Introduction to Node.js (LFW111, launching mid-February 2021) before pursuing further study. The Node.js community also offers a website with a variety of free learning resources and guides.

Outside of Linux Foundation and OpenJS Foundation resources, there are many third party training providers offering courses to prepare you for the JSNSD (Linux Foundation Training & Certification cannot endorse or verify the validity of information provided in any third-party courses).

Cloud Foundry Certified Developer (CFCD)

Cloud Foundry provides a highly efficient, modern model for cloud native application delivery on top of Kubernetes. The platform is built for developers by developers at the largest technology companies in the world, including IBM, SAP, SUSE, and VMware. CFCD is ideal for candidates who want to validate their skill set using the Cloud Foundry platform to deploy and manage applications.

About this certification

This is an online, proctored exam that can take up to three hours to complete. The exam includes performance-based tasks and multiple-choice questions to test individual developers on their practical and conceptual knowledge of Cloud Foundry and general cloud-native architectural principles.

Who is it for?

This certification is for experienced Cloud Foundry developers responsible for deploying and managing applications with Cloud Foundry.

Job titles for those holding this certification could include:

  • Software Developer
  • Software Engineer
  • Software Architect
  • Cloud Foundry Administrator
  • Cloud Foundry Developer
  • Cloud Foundry Engineer
  • Cloud Foundry Architect
  • Cloud Engineer
  • Cloud Architect
  • Cloud Systems Engineer
  • Cloud Developer
  • Cloud Support Specialist
  • Cloud Applications Engineer
  • Cloud Computing Specialist
  • DevOps Cloud Architect

What does it demonstrate?

A Certified Cloud Foundry developer will competently use Cloud Foundry to deploy and manage applications.

What is covered on the exam?

  • The exam is delivered online and consists of 10 performance-based tasks, where you will use the CLI to interact with a Cloud Foundry environment, and 10 multiple choice questions.
  • Candidates have 3 hours to complete the CFCD exam.
  • The exam requires use of the Cloud Foundry CLI in the provided browser-based terminal; no Cloud Foundry UI may be used to complete the exam.
  • The exam is proctored remotely via streaming audio, video, and screen sharing feeds.
  • Results will be emailed within 36 hours of completing the exam.
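As a rough illustration of the kind of CLI-driven tasks involved, the following sketch strings together common cf commands; the API endpoint, credentials, app, and service names are placeholders, not exam content:

```shell
# Sketch of a typical Cloud Foundry CLI workflow (all names are placeholders).
cfcd_workflow() {
  cf login -a https://api.example.com -u dev -p secret   # authenticate against the platform
  cf push myapp -m 256M -i 2                             # deploy with memory and instance limits
  cf apps                                                # list deployed applications
  cf logs myapp --recent                                 # inspect recent application logs
  cf create-service p-mysql small mydb                   # provision a service instance
  cf bind-service myapp mydb && cf restage myapp         # bind the service and restage the app
}
```

Practicing this loop — push, inspect, bind, restage — against a trial Cloud Foundry environment is good preparation for the performance-based portion of the exam.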

Topics covered in the CFCD exam include: 

  • Application Lifecycle
  • Application Management
  • Basics
  • Platform Security
  • Routing
  • Services
  • Troubleshooting

View the full CFCD Domains & Competencies.

How should I prepare?

The Linux Foundation offers a companion training course, Cloud Foundry for Developers (LFD232), which covers how to use Cloud Foundry to build, deploy and manage a cloud native microservice solution. The course has extensive labs so developers can learn by doing, and completing it will greatly increase your chances of passing the CFCD exam. 

Before pursuing this certification, you should be an active developer, comfortable using command line tools and familiar with basic cloud computing concepts. Being familiar with Java/Spring, Node.js and/or Ruby is a plus. Our free Introduction to Cloud Foundry and Cloud Native Software Architecture (LFS132) course is useful preparation for this course, as are Introduction to Node.js (LFW111, launching mid-February 2021) and Introduction to Cloud Infrastructure Technologies (LFS151).

Note that you can audit each of these courses for free for seven weeks, so it is recommended to enroll in and complete them one at a time so you do not run out of time on a later course before finishing your current one.

Cloud Foundry offers a variety of documentation which can also help familiarize you with the platform.

Outside of Linux Foundation and Cloud Foundry resources, there are many third party training providers offering courses to prepare you for the CFCD (Linux Foundation Training & Certification cannot endorse or verify the validity of information provided in any third-party courses).

Certified ONAP Professional (COP)

Open Network Automation Platform (ONAP) is a comprehensive platform for orchestration, management, and automation of network and edge computing services for network operators, cloud providers, and enterprises. It is leveraged by several of the world’s largest telecommunications companies to manage the huge growth in data usage, network automation, and the rollout of 5G and edge computing. A Certified ONAP Professional (COP) designs, tests, and runs network functions and services using ONAP.

About this certification

This is an online proctored exam that can take up to three hours to complete. The exam is performance-based and includes items simulating on-the-job scenarios.

Who is it for?

This certification is for engineers at service providers and enterprises who develop, deploy, and scale their networks and next-generation services, especially in light of the growth in 5G and edge computing.

Job titles for those holding this certification could include:

  • Network Engineer
  • Network Architect
  • Telecommunications Architect
  • Telecommunications Engineer
  • Full Stack Software Developer
  • Developer
  • 5G Architect
  • 5G Engineer

What does it demonstrate?

A successful COP candidate will demonstrate the ability to onboard Virtual Network Functions (VNFs), design and deploy network services, and configure VNFs. Additionally, candidates must have a baseline understanding of closed-loop automation in terms of closed-loop design and runtime behavior. A candidate will also possess elementary troubleshooting capabilities to find issues with network services and closed loops.

What is covered on the exam?

  • This exam is an online, proctored, performance-based test that requires implementing multiple solutions within a Remote Desktop Linux environment.
  • The exam consists of 15 to 20 performance-based tasks.
  • Candidates have 3 hours to complete the exam.
  • The exam is proctored remotely via streaming audio, video, and screen sharing feeds.
  • Results will be emailed within 36 hours of completing the exam.

Topics covered in the COP exam and their weights include: 

  • Service Design – 20%
  • Service Deployment – 25%
  • Service LCM – 15%
  • Troubleshooting – 20%
  • Closed Loop Automation – 20%

View the full COP Domains & Competencies.

How should I prepare?

The Linux Foundation offers a companion course, ONAP Fundamentals (LFS263), which provides conceptual and hands-on skills around ONAP, including the basics of Network Function Virtualization (NFV); an overview of ONAP architecture, subprojects, and use case blueprints; ONAP modeling overview; interfacing with ONAP; network service design, orchestration, and lifecycle management; the ONAP Policy framework; closed control loop automation; and troubleshooting. These topics directly align with the knowledge domains tested by the COP and will substantially increase students’ ability to become certified.

The exam additionally expects a baseline understanding of the underlying cloud platform (e.g. Kubernetes and OpenStack) and minimal familiarity with modeling languages (e.g. Heat and TOSCA). Some free resources are available to help you get up to speed on the fundamentals before digging into the ONAP Fundamentals course.

Note that you can audit each of these courses for free for seven weeks, so it is recommended to enroll in and complete them one at a time so you do not run out of time on a later course before finishing your current one.

Additionally, the ONAP project offers a wiki with many more resources to learn. You can also visit the LF Networking and ONAP websites for project information. 

The post Linux Foundation Certifications: A Primer appeared first on Linux Foundation – Training.


It’s the season for sysadmin reading

Thursday 4th of February 2021 01:26:54 AM

Our top 10 articles from the past month covered Ansible, working with containers, and all sorts of Linux goodness. Which was your favorite?
Read More at Enable Sysadmin

The post It’s the season for sysadmin reading appeared first on Linux.com.

Best practices for adapting Phoronix Test Suite to benchmark Linux performance

Thursday 4th of February 2021 12:00:00 AM

Detecting small changes in performance can be somewhat difficult, due to the inconsistency of data produced. This blog describes changes to three Phoronix Test Suite workloads that help to improve the consistency of data produced by reducing run to run variability.
Click to Read More at Oracle Linux Kernel Development

The post Best practices for adapting Phoronix Test Suite to benchmark Linux performance appeared first on Linux.com.

More in Tux Machines

today's howtos

  • Encryption at Rest in MariaDB – Linux Hint

    Encryption at rest prevents an attacker from accessing encrypted data stored on the disk even if they have access to the system. The open-source databases MySQL and MariaDB now support an encryption-at-rest feature that meets the demands of new EU data protection legislation. Encryption at rest in MySQL is slightly different from MariaDB, as MySQL only provides encryption for InnoDB tables, whereas MariaDB also provides options to encrypt files such as redo logs, slow logs, audit logs, and error logs. However, neither can encrypt data in RAM or protect it from a malicious root user. In this article, we will learn to configure database-level encryption for MariaDB.
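    As a minimal sketch of what such a setup can look like, the following enables MariaDB's file_key_management plugin; the key file path, config file location, and thread count are illustrative assumptions, not a vetted production configuration:

```shell
# Sketch: enable MariaDB encryption-at-rest via the file_key_management plugin.
# Paths and values are illustrative; protect the key file carefully in real deployments.
setup_mariadb_encryption() {
  # Generate one 32-byte key with id 1 (key file format: <id>;<hex key>)
  echo "1;$(openssl rand -hex 32)" | sudo tee /etc/mysql/encryption/keyfile >/dev/null
  sudo chmod 600 /etc/mysql/encryption/keyfile
  # Write the encryption options into a dedicated config fragment
  sudo tee /etc/mysql/mariadb.conf.d/60-encryption.cnf >/dev/null <<'EOF'
[mariadb]
plugin_load_add = file_key_management
file_key_management_filename = /etc/mysql/encryption/keyfile
innodb_encrypt_tables = ON
innodb_encrypt_log = ON
innodb_encryption_threads = 4
EOF
  sudo systemctl restart mariadb   # apply the new settings
}
```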

  • How To Install ERPNext on CentOS | RoseHosting Blog

    ERPNext is a completely robust ERP framework intended for small and medium-sized businesses. It covers an extensive variety of features, including accounting, CRM, inventory, selling, purchasing, manufacturing, projects, HR and payroll, website, e-commerce, and more – all of which make it profoundly adaptable and extendable. ERPNext is developed in Python and depends on the Frappe Framework. It utilizes Node.js for the front end, Nginx for the web server, Redis for caching, and MariaDB for the database.

  • How To Find Out Which Groups A User Belongs To In Linux

    A Linux group is a collection of one or more users with identical permission requirements on files and directories. A user can be a member of more than one group at a time. In Linux, group information is stored in the "/etc/group" file. In this tutorial, we will see all the possible ways to easily find out which groups a user belongs to in Linux and Unix-like operating systems. Finding out the groups to which a user account belongs is helpful on many occasions. For instance, the other day I was installing Dropbox on my Ubuntu server. When configuring Dropbox, I had to enter my current user name and the group name. You could also be in a situation where you need to identify the groups a user belongs to. If so, use any one of the following methods to find out what group a user is in.
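    For example, any of the following commands will show group membership (shown here for the current user; substitute any username):

```shell
# Show the groups the current user belongs to; pass any username instead.
id -Gn "$(whoami)"          # group names only
id -G "$(whoami)"           # numeric group IDs
groups "$(whoami)"          # same information via coreutils
getent group root           # reverse view: members of a given group
grep '^root:' /etc/group    # raw record straight from /etc/group
```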

  • How Do I Perform a Traceroute on Linux Mint 20? – Linux Hint

    Traceroute is a very useful utility that is used to track the path that a packet takes to reach a destination within a network. It can also act as a tool to report network congestion. In today’s article, we will discuss different examples that will demonstrate the usage of Traceroute on Linux Mint 20.
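    A few common invocations are sketched below; the target host is just an example, and on Linux Mint the traceroute package may first need installing:

```shell
# Sketch: common traceroute invocations (host is a placeholder).
# Install first if needed: sudo apt install traceroute
trace_examples() {
  traceroute linuxmint.com             # default UDP probes, up to 30 hops
  traceroute -n linuxmint.com          # numeric output, skip reverse DNS lookups
  traceroute -m 10 -w 2 linuxmint.com  # cap at 10 hops, 2s wait per probe
  traceroute -I linuxmint.com          # ICMP echo probes (usually needs root)
}
```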

  • How do I Completely Remove a Package in Linux Mint 20? – Linux Hint

    The task of removing an installed package from any operating system can surely be a hassle if handled carelessly. It is because whenever you attempt to remove a package, you expect it not to leave any of its traces behind. In other words, you want a clean removal of the desired package. However, such a complete removal cannot be achieved without taking certain measures. That is why today’s article will be focused on the method of completely removing a package in Linux. Note: The method that we have attempted and shared with you in this article has been performed on a Linux Mint 20 system. However, the very same steps can also be performed on Ubuntu 20.04 and Debian 10.
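    On apt-based systems such as Mint, Ubuntu, and Debian, a clean removal can be sketched as follows ("package-name" is a placeholder):

```shell
# Sketch: cleanly remove a package and its leftovers on an apt-based system.
purge_package() {
  sudo apt purge -y package-name   # remove the package AND its config files
  sudo apt autoremove -y           # drop dependencies nothing else needs
  sudo apt clean                   # clear the local .deb package cache
}
```

The key difference: `apt remove` leaves configuration files behind, while `apt purge` deletes them too.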

  • How to Install Spotify in Fedora Linux – Linux Hint

    Spotify is a popular audio and video streaming service used by millions of people. Spotify is available for download on smartphones, tablets, and desktops for Windows, Mac, and Linux. Though Spotify works in Linux, this application is not as actively supported as it is on Windows and Mac. You can also enjoy Spotify on wearable gadgets. For example, if you have a Samsung smartwatch, you can listen to and control Spotify using the watch only. You need only install the app on your smartphone from the Play Store to start listening to tracks on Spotify. The free version of the application provides access to limited audio streaming services with advertisements. The premium service offers many features, including the ability to download media, ad-free browsing, better sound quality, and more. There are also other plans offered to specific individuals and groups. Spotify also supports various devices, such as Wireless Speakers, Wearables, Smart TVs, and Streamers.

  • How to Install Official Wallpaper Packs on Fedora? – Linux Hint

    Wallpapers are great for improving the user experience of any operating system. In the case of Fedora, one of its iconic features is the wallpapers it comes with. Every single Fedora release gets its own set of wallpaper, and these are some of the most anticipated components of any of its releases. In this guide, check out how to install official wallpaper packs on Fedora.

  • How to Reset Your Gnome Desktop to Default Settings

    Linux is a very versatile platform for not only power users, but also tweakers and tinkerers. With the rise of Linux desktop distros has come a whole new level of options for these users. Gnome is one of the most popular desktop environments on Linux. Ubuntu, the most popular desktop Linux distro, now comes with Gnome out of the box following the shelving of Ubuntu’s Unity desktop environment. It therefore follows that there are countless ways to tweak your Gnome and make it truly yours.

  • How to Find Files Based on Timestamp in Linux

    The find command in Linux is used to search for files and folders based on different parameters. These parameters can be the filename, size, type of file, etc.
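    The time-based tests are the ones this article focuses on; for instance (the path is just an example):

```shell
# Find files by modification time under a directory (path is an example).
find /var/log -type f -mtime -7               # modified within the last 7 days
find /var/log -type f -mtime +30              # modified more than 30 days ago
find /var/log -type f -mmin -60               # modified within the last hour
find /var/log -type f -newermt '2021-01-01'   # modified since a date (GNU find)
```

With `-mtime`, `-N` means "less than N days old", `+N` means "more than N days old", and a bare `N` means "exactly N days old".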

  • How to Delete Files Older Than Specified Days in Linux

    As you might already know, we use the rm command in Linux to delete files and folders. The filenames to be deleted have to be passed as arguments to rm. However, rm does not offer other options by itself, like deleting files based on timestamps. That’s the reason, we use the find command in Linux, which is used to search for files and folders based on different parameters. It is a complex command which can be used to search with parameters like the filename, size, type of file, etc. There is an option in the find command to search for files based on how old they are and today we will see how to use find and rm together to delete files older than the specified number of days.
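    Combining the two, deleting files older than 30 days can be sketched like this (the directory is a demo path; adapt it to your own):

```shell
# Delete files older than 30 days (demo directory; adapt the path and age).
mkdir -p /tmp/backups
find /tmp/backups -type f -mtime +30 -print             # dry run: list candidates first
find /tmp/backups -type f -mtime +30 -delete            # GNU find built-in delete
find /tmp/backups -type f -mtime +30 -exec rm -f {} +   # portable alternative via rm
```

Always run the `-print` dry run before the destructive variants, since `find -delete` offers no undo.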

  • How Can I Sudo Another User Without A Password? – Linux Hint

    In Linux platforms, a sudo user is a tool that implies “superuser do” to run various systems’ commands. A sudo user is typically a root user or any other user who has some privileges. To delegate important tasks like server rebooting or restarting the Apache server, or even to create a backup using the sudo command, you can use the sudo without having to enter the password again and again. By default, sudo user needs to provide some user authentication. At times, user requirements are to run a command with these root privileges, but they do not desire to type a password multiple times, especially while scripting. This is easily doable in Linux systems. In this article, we will check the method to sudo another user without entering their password.
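    The usual mechanism is a NOPASSWD entry in a sudoers drop-in file, sketched below for a hypothetical user "alice" and example commands; always validate sudoers changes before relying on them:

```shell
# Sketch: let user "alice" (a placeholder) run specific commands without a password.
grant_nopasswd() {
  echo 'alice ALL=(ALL) NOPASSWD: /usr/sbin/reboot' \
    | sudo tee /etc/sudoers.d/alice-nopasswd >/dev/null
  sudo chmod 440 /etc/sudoers.d/alice-nopasswd
  sudo visudo -cf /etc/sudoers.d/alice-nopasswd   # syntax-check the new file
}
```

Once granted, `sudo reboot` runs for alice without a password prompt; restricting NOPASSWD to specific commands, rather than ALL, limits the damage if the account is compromised.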

  • How to configure Route53 with our DomainName to access a static website from S3 on AWS

    This article will help you with the steps to host a static website on S3 and redirect traffic from your subdomain to the static website on the S3 bucket. For this, you will need a domain purchased on AWS. Once you have the domain on AWS, you can create a subdomain and redirect requests from it to the S3 bucket.
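    The steps can be sketched with the AWS CLI; the bucket, domain, and hosted-zone ID below are placeholders (note the bucket name must match the subdomain for website redirection to work):

```shell
# Sketch: host a static site on S3 and point a Route 53 record at it.
# Bucket name, domain, and hosted-zone ID are placeholders.
publish_static_site() {
  aws s3 mb s3://www.example.com                       # create the bucket
  aws s3 website s3://www.example.com \
    --index-document index.html --error-document error.html
  aws s3 sync ./site s3://www.example.com --acl public-read   # upload the site
  aws route53 change-resource-record-sets \
    --hosted-zone-id Z0000000EXAMPLE \
    --change-batch file://alias-record.json   # alias www.example.com -> S3 website endpoint
}
```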

  • How to install Zoom on Ubuntu, Lubuntu (latest version) using terminal

    What is Zoom? Zoom is the leader in modern enterprise video communications, with an easy, reliable cloud platform for video and audio conferencing, chat, and webinars. Both free and paid versions are available.
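    A terminal install can be sketched in two commands; the .deb URL below is the pattern Zoom publishes for 64-bit clients, but check the official download page for your architecture:

```shell
# Sketch: install Zoom from the official .deb on Ubuntu/Lubuntu (64-bit URL assumed).
install_zoom() {
  wget -O /tmp/zoom_amd64.deb https://zoom.us/client/latest/zoom_amd64.deb
  sudo apt install -y /tmp/zoom_amd64.deb   # apt resolves the .deb's dependencies
}
```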

  • How to install multiple Ubuntu VMs using Multipass on Ubuntu 20.04 - Linux Shout

    Multipass is a platform developed by Canonical to launch and run Ubuntu virtual machines, offering users the ability to configure them with cloud-init as on a public cloud. Here we learn how to install Multipass on Ubuntu 20.04 Linux and use it to launch a virtual machine instance. When it comes to launching lightweight pre-built virtual machine images with a single command, Docker comes to mind; however, Multipass is another option for those who love to work with Ubuntu Server. Yes, if you want to launch Ubuntu command-line server VMs instantly on Windows, Linux, or macOS, then the cross-platform Multipass is a good option to consider.
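    The basic lifecycle can be sketched as follows; the VM name and resource sizes are examples:

```shell
# Sketch: install Multipass and manage an Ubuntu VM (name and sizes are examples).
multipass_demo() {
  sudo snap install multipass
  multipass launch --name dev-vm --cpus 2 --mem 2G --disk 10G  # create and start a VM
  multipass list                      # show running instances
  multipass shell dev-vm              # open an interactive shell inside the VM
  multipass exec dev-vm -- uname -a   # run a single command in the VM
  multipass stop dev-vm && multipass delete dev-vm && multipass purge
}
```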

  • How to use the sipcalc Linux command line tool | Enable Sysadmin

    The only network numbers I can keep in my head are now and always have been a Class C network with a 24-bit netmask, such as 192.168.1.0/24. I know there are 254 usable host addresses available with a broadcast address of 192.168.1.255, a gateway/router address of 192.168.1.1 or 192.168.1.254 (depending on who's running the network), and a human-readable netmask of 255.255.255.0. That's my standard network. After all, 254 hosts are enough for any subnet, right? Wrong. A few years back, I had to step outside of my standard 254 hosts per subnet scenario when I decided to use a 22-bit netmask (255.255.252.0) to get a 1022 usable address space. I knew little about this address space, and it was frustrating to try to search for the simple information that I needed without scrolling through forums with all the idle chatter and off-topic rhetoric. I guess some people just need a space in which to air their grievances about everything. I digress.
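    sipcalc answers exactly this kind of question; for the 22-bit network above, something like the following does the job (install with `sudo apt install sipcalc` if needed), and shell arithmetic confirms the 1022-host figure:

```shell
# Inspect the article's 22-bit subnet with sipcalc (addresses are examples).
subnet_examples() {
  sipcalc 192.168.0.0/22         # network, broadcast, usable range, host count
  sipcalc -s 24 192.168.0.0/22   # split the /22 into its /24 subnets
}
# Sanity check of the arithmetic: a /22 netmask leaves 10 host bits.
echo $(( (1 << (32 - 22)) - 2 ))   # usable hosts: 2^10 minus network and broadcast
```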

GhostBSD Review: Simple and Lightweight

Because there are so many different options out there for your free and open-source operating system, it can be hard to figure out what the best option is for you. Sifting between Linux distros is difficult – Debian and its derivatives, Ubuntu and its derivatives, Fedora, Arch, openSUSE, the list goes on. However, what if the best choice for you isn’t actually technically Linux? Here we review GhostBSD, a FreeBSD-based Unix OS designed for a simple desktop experience, to see if it’s the right fit for you. [...] The applications that are installed are all necessary. It’s exactly what you might expect to find in your typical lean open-source desktop OS configuration, with no frills and just the essential applications. There is not much to remark on with the user experience – it is a very simple and friendly version of the MATE desktop that’s designed to be light on system resources and simple to use. Overall, I think there is no way you could go wrong. Read more

Games: Predictions, Free Software, and Titles Developed on GNU/Linux

  • Thrilling Linux Gaming Predictions for 2021 - Boiling Steam

    Last week we reached out to the community at large with a simple question: What do you predict will happen in the world of Linux Gaming by the end of 2021? To make things a little more fun, we asked everyone to limit their Linux Gaming predictions to 5 items, and be as specific as possible as to what they expect to occur. We also asked everyone to work on their predictions individually to avoid any potential bias. Now, we are sharing with you all the predictions we received, from quite a few places across the world as you can see from the below map. The Linux Gaming Community knows no frontiers.

  • Team Cherry upgrade the excellent Hollow Knight with Vulkan for Linux | GamingOnLinux

    Team Cherry have given their excellent action-platformer metroidvania Hollow Knight a bit of an upgrade, which you can test out on Steam in a fresh Beta test. Not played it before? You're missing out. Hollow Knight is a classically styled 2D action adventure across a vast interconnected world. Explore twisting caverns, ancient cities and deadly wastes; battle tainted creatures and befriend bizarre bugs; and solve ancient mysteries at the kingdom's heart.

  • OpenLoco is a free and open source re-implementation of Chris Sawyer's Locomotion | GamingOnLinux

    Just like there's the awesome OpenTTD for fans of Transport Tycoon Deluxe, there's also OpenLoco for players who want to play through the classic Locomotion. Not a project we've covered here before it seems, so we're making that right today. Originally released back in 2004, it's actually a spiritual successor to Transport Tycoon but it was not as loved due to various problems with the original release. Perhaps though it can have a new life thanks to OpenLoco.

  • VRWorkout is a free and open source VR fitness rhythm game

    Well, that's certainly one way to get a bit more exercise in. Whatever helps right? No judgement here, I could probably do with a little more myself… It's built with the free and open source game engine Godot Engine, so not only is the source code open for the game itself it's properly open for anyone to put it together from the source and will remain so. Speaking about VRWorkout to us on Twitter, the developer mentioned they actually do develop for it on Linux but they use a Quest headset not supported on Linux so they have to work with that on Windows. Perhaps though, in time, Monado might break down that barrier.

  • Free and open source voxel game engine Minetest 5.4 is out, makes mods easier for users | GamingOnLinux

    Minetest, the Minecraft-like voxel game engine (and a basic game that comes with it) has a big new release out with Minetest 5.4.0 and it's worth trying again. As we covered before during the Release Candidate stage, one of the big features for users in this release is vastly easier modding with both small mod packs and entire games. Minetest had a way to browse and download them all directly in the game for a while, but now it will also actually download all the dependencies mods need - making it vastly easier to get what you want and then into a game. No more downloading one mod, then finding all the individual bits it needs.

GNOME 40 Beta Released for Public Testing, Here’s What’s New

As you already know, GNOME 40 will introduce a new Activities Overview design that promises better overview spatial organization, improved touchpad navigation using gestures, more engaging app browsing and launching, as well as better boot performance. But the GNOME 40 beta release is packed with many other goodies, including the ability to switch workspaces with Super+scroll on Wayland, the implementation of a Welcome dialog after major updates, improved fingerprint login support, better handling of a large number of window previews, on-screen keyboard improvements, support for handling monitor changes during screencasts, as well as integration of the clipboard with remote desktop sessions. Read more