Linux.com

News For Open Source Professionals

CNCF: Fostering the Evolution of TiKV

Wednesday 25th of November 2020 02:57:16 PM

PingCAP had high hopes that its TiKV project would develop into a building block for the next generation of distributed systems by providing a reliable, high-quality, and practical storage foundation. To accomplish that, it decided to contribute TiKV to the Cloud Native Computing Foundation (CNCF) to make it vendor-neutral and widely used across organizations. It seems headed in that direction, especially now that the project has graduated, further demonstrating its maturity and sustainability. On behalf of the Linux Foundation, Swapnil Bhartiya, founder and host of TFiR, sat down with two members of the TiKV project, Siddon Tang and Calvin Weng, to learn more about the project’s evolution.

Here is a transcript of the discussion:

Swapnil Bhartiya: What is the TiKV project, and what problem are you trying to solve?
Siddon Tang: TiKV is an open source, distributed, transactional key-value database. TiKV is inspired by Google Spanner and HBase, but the design is simpler and more practical. Why did we develop TiKV at PingCAP? We wanted to build a distributed database with SQL compatibility. We built the SQL layer, and then we wanted to build a distributed key-value storage layer to support our database. At first, we tried to use HBase, but its performance was not what we expected, so we decided to build our own distributed key-value database. That’s how TiKV started.

Calvin Weng: It was originally created to complement TiDB, but we soon realized that the TiKV project could be decoupled from TiDB and serve as a unified distributed storage layer that supported distributed transactions, horizontal scalability, and cloud-native architecture.

We also realized that with the amount of data we generate, there could be a demand for such a solution in the cloud-native communities. So, we contributed it to the CNCF to develop it as a building block for the next generation of distributed systems by providing a reliable, high-quality, and practical storage foundation.

Swapnil Bhartiya: How is CNCF helping the TiKV project and the community?
Calvin Weng: Thanks for the question. I am a liaison between the CNCF and the TiKV project. The CNCF has been immensely helpful in shaping TiKV into what it is today in terms of both the project and the community. There are a few things that I would like to elaborate on and the first is neutrality. CNCF provides a neutral home to projects like ours, so that developers from different organizations are willing to collaborate, contribute and eventually become the leaders in the project. This is very important for the broader community to perceive TiKV as a vendor-neutral and universal project that belongs to the community instead of a single company like PingCAP. People will feel comfortable adopting it or developing their own apps on TiKV.

Another important aspect is exposure, which includes publicity and marketing support that we get from CNCF so that we are known by the broader community. More people and more companies could get involved, which also means more adoption.

Last but not least is diversity in the maintainer and the contributor structure. This is a very important criterion for CNCF graduation.

Swapnil Bhartiya: Since you mentioned graduation, can you talk about what it means for a project like TiKV to become a graduated project? How does it affect the project and what does it mean for its users?
Calvin Weng: TiKV has seen a lot of adoption. There are more than 1,000 deployments in production. It is battle-tested. Moving from incubation to graduation is a very solid and convincing validation of the technology, its open governance, its vision, its maturity, and its sustainability.

From a user’s perspective, graduation speaks to the credibility and reliability of the project. It means that TiKV is mature enough for cloud-native architecture. It also means that the TiKV community is an active and healthy one. It boosts the confidence of users.

Swapnil Bhartiya: One last question before we wrap this up: can you talk about the roadmap of the TiKV project?
Siddon Tang: Our focus is on making it faster, easier to use, and cost-effective. We just released version 4.0, and in the next major release, 5.0, we want it to be more cloud-friendly and able to run smoothly on AWS S3, AWS EBS cloud disks, or any other cloud storage. We are also working on adding support for other database engines so TiKV can handle different workloads. The long-term goal is to introduce AI so it can use different engines to suit different workloads.

The post CNCF: Fostering the Evolution of TiKV appeared first on Linux.com.

Communication by example: Which methods do high-performing open source communities use?

Tuesday 24th of November 2020 02:43:53 PM
“Good words are worth much, and cost little.” (George Herbert)

Effective communication is an essential life skill, and it is also the most critical element in any business [2]. A lack of accurate communication is a common cause of organizational problems, creating conflicts and damaging client relationships, team effectiveness, and profitability [2]. According to the Project Management Institute (PMI), ineffective communication is the main contributor to project failure one-third of the time, and it has a negative impact on project success more than half of the time [1].

In open source projects, with their diverse, globally distributed communities, effective communication is key to a project’s success, and using the right technology is crucial for that. So, which tools do open source communities use for communication?

Open source community communication by example

Ubuntu

The Ubuntu community uses mailing lists for development and team coordination. The mailing lists are split into announcements and news, support, development, testing and quality assurance, and general topics (such as translation, marketing, and documentation) [3]. Besides the mailing lists, IRC (Internet Relay Chat) channels are used for informal daily chats and short-term coordination tasks [3]. If someone wants to know what is going on in Ubuntu but doesn’t want to subscribe to the high-traffic mailing lists, the web forum can be used to get support and discuss the future of Ubuntu. Finally, Ask Ubuntu can be used to ask technical questions.

Linux Kernel

Mailing lists are the main communication channels in the Linux kernel community. For newcomers who would like to learn more about Linux kernel development, there is the kernelnewbies resource and the #kernelnewbies IRC channel on OFTC. The online resource provides information on basic kernel development questions, while the IRC channel lets contributors ask questions in real time and get help from experts in the kernel community. The Linux Kernel Mailing List (LKML) is where most development discussions and announcements take place. Kernel developers send patches to the mailing lists as outlined in “Submitting patches: the essential guide to getting your code into the kernel.” The archives of each mailing list can be found at https://lore.kernel.org/lists.html.

Shuah Khan, a Linux Fellow, mentioned in an interview [4] that before contributing to the Linux Kernel, it’s important to subscribe to the kernel-related mailing lists “to understand the dynamics.” Khan said, “The process works like this: you walk into a room. People are gathering in small groups and are talking to each other. You have to break into one of these conversations. That is the process of watching the mailing lists, watching the interaction, and learning from that before you start sending out a patch.”

OpenStack

OpenStack has many communication channels, such as IRC channels for both public meetings and projects, as well as mailing lists. The mailing lists are used for asynchronous communication, information sharing, team communication, and cross-project communication. Additionally, mailing lists in OpenStack are used to communicate with non-developer community members [5].

GNOME

IRC channels are one of the most important communication methods in GNOME. They are a good place to find out what the community is talking about and to ask for help. There are also many channels on Discourse, including discussions about GNOME’s sub-projects, community-related topics, internationalization, etc. As in other communities, mailing lists can be used for discussing specific topics. Finally, Planet GNOME and GNOME News can be used to follow the latest news about the project.

So, where does communication occur in open source projects?

As observed in our previous discussion, mailing lists seem to be the most used communication method. Previous work has also found that “mailing lists are the bread and butter of project communications” [11] and that “the developer mailing list is the primary communication channel for an OSS project” [12]. However, as we have previously mentioned, mailing lists are not the only communication channel used in OSS. Other channels (such as IRC channels and forums) also play an important role.

Guzzi et al. [10] mention that when more than one communication repository exists, the policy of most OSS projects is to transfer all official decisions and useful discussions to the mailing lists so that they can later be retrieved. Thus, traceability and transparency of information are important matters here.

The benefit of mailing lists is that they are an asynchronous form of communication and an easy way to share information with the entire community. Additionally, mailing lists allow people in different time zones to engage, and people with varying levels of English proficiency may find written messages easier to manage [5].

However, mailing lists also have their disadvantages. Previous work [10] found that developers have problems maintaining awareness of each other’s work when discussing on mailing lists. Additionally, recovering traceability links among different communication repositories could help researchers and community members gain a more complete picture of the development process.

What are the common DOs and DON’Ts when using OSS mailing lists?

Given that mailing lists are one of the common ways to communicate in open source projects, it is worth knowing how to communicate in mailing lists. Although each project has its own set of rules, certain conventions should be followed.

DOs

Subject

      • Prefix the subject with topic tags in square brackets. This makes it easier for readers to categorize email threads and quickly decide what to read. For example, OpenStack has documentation [13] establishing how to prefix the subject, i.e., community members should use [docs] for any cross-project documentation discussions, and so on.
      • Sometimes it’s appropriate to change the subject rather than start a new thread.
        • Note: Linux Kernel mailing lists use a “bottom post” protocol (writing the message below the quoted text) rather than “top post” (writing the message above the original text of an email, which is what most mail clients are set to do by default).
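The tagging convention above is simple enough to sketch in code. The helper below is illustrative only (no mailing-list tooling requires it, and `tagged_subject` is an invented name):

```python
def tagged_subject(tags, subject):
    """Prefix a mailing-list subject with bracketed topic tags.

    For example, OpenStack-style tags such as ["docs"] turn
    "Fix broken link" into "[docs] Fix broken link", letting readers
    filter threads at a glance.
    """
    prefix = "".join(f"[{tag}]" for tag in tags)
    return f"{prefix} {subject}" if prefix else subject
```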

Formatting

      • Plain text: Send your email as plain text only! Please, don’t send HTML emails.
      • Line wrapping: Lines should be wrapped at 72 characters or fewer.
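The 72-character convention can also be applied automatically before sending. As a minimal sketch using Python’s standard `textwrap` module (`wrap_body` is an invented helper name):

```python
import textwrap


def wrap_body(body, width=72):
    """Re-wrap each paragraph of a plain-text email at 72 columns,
    the conventional limit for mailing-list mail."""
    paragraphs = body.split("\n\n")
    return "\n\n".join(textwrap.fill(p, width=width) for p in paragraphs)
```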

Replies

      • Always use inline replies, i.e., break the original message by replying to each specific part of the message.
      • When replying to long discussions, trim your message and leave only the relevant parts to the reply.
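Inline replying with trimming can likewise be sketched in a few lines: quoting prefixes each original line with “> ”, and the reply interleaves only the relevant excerpts with responses (a toy illustration, not any mail client’s actual behavior):

```python
def quote(text):
    """Quote message text for a reply by prefixing each line with '> '."""
    return "\n".join("> " + line for line in text.splitlines())


def inline_reply(pairs):
    """Build an inline reply from (excerpt, response) pairs, keeping only
    the quoted excerpts that are relevant to each response."""
    return "\n\n".join(f"{quote(excerpt)}\n{response}"
                       for excerpt, response in pairs)
```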

DON’Ts

      • Avoid cross-posting, i.e., posting the same message to many mailing lists at the same time.
        • Exception: The Linux Kernel maintains mailing lists for each subsystem, and patches are often sent to multiple mailing lists for review and discussion. However, avoid “top posting” on a Linux Kernel mailing list.
      • Avoid sending a message to the wrong mailing list. Make sure your topic matches the topic of the mailing list.

Setting up your email client

The Linux kernel documentation has a great guide on configuring various email clients according to the rules mentioned above.

How to minimize the harm caused by conflicts?

Even if the code of conduct is applied, conflicts might exist. Many actions can be taken in case of dispute, and here are some examples:

Gather information about the situation

If someone has violated the code of conduct, you should carefully analyze the situation, taking into account your experience working with that person [6]. It is essential to read that person’s past comments and interactions to form an unbiased perspective on what happened. Stephanie Zvan [7] has mentioned that the best way to avoid a conflict is not to get pulled into an argument. It is important to focus on what you need to do instead of getting sidetracked into dealing with others’ behaviors.

Take appropriate actions

Two ways for a community moderator to respond to a code of conduct violation are to (i) thoughtfully explain in public how the person’s behavior affected the community, or (ii) privately reach out to the person and explain how that behavior was negative [6].

“A code of conduct that isn’t (or can’t be) enforced is worse than no code of conduct at all: it sends the message that the values in the code of conduct aren’t actually important or respected in your community.” (Ada Initiative)

General tips
  • Open source projects are, in large part, successful due to the collaborative nature of projects. Thus, start conversations that lead to collaboration. That means, give feedback, support each other’s communication, and share your ideas.
  • There is no additional cost to being transparent and authentic with your community. In that way, it is easy to keep your team informed, empowered, and focused on one specific goal or task.

About the author: 

Isabella Ferreira is an Advocate at TARS Foundation, a cloud-native open-source microservice foundation under the Linux Foundation.

References:

[1] https://www.pmi.org/-/media/pmi/documents/public/pdf/learning/thought-leadership/pulse/the-essential-role-of-communications.pdf

[2] https://www.orangescrum.org/articles/communication-challenges-in-project-management-how-to-overcome.html

[3] https://wiki.ubuntu.com/ContributeToUbuntu#Community_Communication

[4] https://thenewstack.io/how-to-begin-your-journey-as-a-contributor-to-the-linux-kernel/

[5] https://docs.openstack.org/project-team-guide/open-community.html

[6] https://opensource.guide/code-of-conduct/#:~:text=A%20code%20of%20conduct%20is,just%20your%20participants%2C%20but%20yourself.

[7] https://the-orbit.net/almostdiamonds/2014/04/10/so-youve-got-yourself-a-policy-now-what/

[9] https://www.forbes.com/sites/forbescommunicationscouncil/2019/11/22/open-source-software-a-model-for-transparent-organizational-communication/#1b834e0d32c4

[10] Guzzi, Anja, et al. “Communication in open source software development mailing lists.” 2013 10th Working Conference on Mining Software Repositories (MSR). IEEE, 2013.

[11] Fogel, Karl. Producing open source software: How to run a successful free software project. ” O’Reilly Media, Inc.”, 2005.

[12] Gutwin, Carl, Reagan Penner, and Kevin Schneider. “Group awareness in distributed software development.” Proceedings of the 2004 ACM conference on Computer supported cooperative work. 2004.

[13] https://docs.openstack.org/project-team-guide/open-community.html#mailing-lists

This Linux Foundation Platinum Sponsor content was contributed by Tencent.

The post Communication by example: Which methods do high-performing open source communities use? appeared first on Linux.com.

Consolidation of AI, ML and Data Projects at The Linux Foundation

Thursday 19th of November 2020 09:42:47 PM

The Linux Foundation consolidated its projects around AI, ML & Data by bringing them under the umbrella of the LF AI & Data Foundation. Swapnil Bhartiya, founder and host at TFiR.io, sat down with Ibrahim Haddad, Executive Director of LF AI & Data to discuss this consolidation.

Transcript of the discussion:

Swapnil Bhartiya: A lot of consolidation is happening within the Linux Foundation around AI/ML projects. Can you talk about what AI/ML & data projects are there under the Linux Foundation umbrella right now?

Ibrahim Haddad: So, if you think of Linux Foundation, it is kind of a foundation of foundations. There are multiple umbrella foundations. There’s the CNCF (Cloud Native Computing Foundation), there’s LF Edge, there’s the Hyperledger project, automotive, et cetera. And LF AI & Data is one of these umbrella foundations. We share the same goal, which is to accelerate the development of open-source projects and innovation. However, we each do it in our specific domains.

We’re focused on AI, machine learning, deep learning, and the data aspects of AI. The LF AI & Data Foundation was initially kicked off as LF Deep Learning in March of 2018. We grew a bit, and we started to host projects in other subdomains within the AI umbrella. And then we rebranded again to LF AI & Data to reflect the additional growth in our portfolio.

As of today, we host 22 projects across multiple domains of machine learning, deep learning, data, models, and trusted AI. We have, I believe, 36 member companies that are involved in our foundation.

Swapnil Bhartiya: Within the Linux Foundation, there are a lot of projects that at times overlap, and then there are gaps as well. So, within the AI/ML space, where do you still see gaps that need to be bridged and overlaps that need consolidation?

Ibrahim Haddad: When a project is contributed to the foundation, we see under which umbrella it fits, however it’s the decision of the project where they want to go, we only offer guidance. If projects do overlap under the same umbrella, it’s their call to make. In terms of consolidation, we’re actually in the process of doing this at least in the AI space. We recently announced the formation of LF AI & Data, which consolidates two projects – LF AI Foundation and ODPi.

Swapnil Bhartiya: Can you also talk about what are the new goals or new areas that the Foundation is focusing on after this consolidation and merger?

Ibrahim Haddad: The first one is increasing the collaboration between the projects that are on the data side and the traditional open-source AI projects that we host. We host about seven projects that focus on the data and 15 projects in the general AI domain. One of the activities we launched, which we are going to accelerate in 2021, is creating integration across different projects so that companies see a tighter integration within projects inside the foundation.

The second area is trusted AI: building trustworthy and responsible AI systems, which is a hot topic across industry verticals, including governments, NGOs, and companies. They are all putting emphasis on building fair systems: systems that don’t create bias, that are transparent, and that are robust. Building trust with the consumers of these systems is very critical. So trusted and responsible AI will be a key area, in addition to the integration work and growing the data/AI collaborations.

The post Consolidation of AI, ML and Data Projects at The Linux Foundation appeared first on Linux.com.

Open Source Web Engine Servo to be Hosted at Linux Foundation

Tuesday 17th of November 2020 08:22:14 PM

KubeCon, November 17, 2020 – The Linux Foundation, the nonprofit organization enabling mass innovation through open source, today announced it will host the Servo web engine. Servo is an open source, high-performance browser engine designed for both application and embedded use and is written in the Rust programming language, bringing lightning-fast performance and memory safety to browser internals. Industry support for this move is coming from Futurewei, Let’s Encrypt, Mozilla, Samsung, and Three.js, among others.

“The Linux Foundation’s track record for hosting and supporting the world’s most ubiquitous open source technologies makes it the natural home for growing the Servo community and increasing its platform support,” said Alan Jeffrey, Technical Chair of the Servo project. “There’s a lot of development work and opportunities for our Servo Technical Steering Committee to consider, and we know this cross-industry open source collaboration model will enable us to accelerate the highest priorities for web developers.”

Read more at The Linux Foundation and Read more at the Mozilla Foundation

The post Open Source Web Engine Servo to be Hosted at Linux Foundation appeared first on Linux.com.

Linux Foundation Discounts Instructor-Led Courses

Tuesday 17th of November 2020 04:12:22 PM

The Linux Foundation is home to many of the world’s most important open source projects, and also home to many of the top open source experts. Our instructor-led training courses are taught by hands-on practitioners who have used, built, and contributed to these projects for years. Instructor-led courses work differently from eLearning courses in that they take place at a specific time and are led by a teacher in real time. The courses typically involve 3-4 full, consecutive days of instructional and lab time, meaning you can complete the training quickly and in a highly structured format. Having a live instructor also means you have the opportunity to ask questions and interact in real time.

To increase access to this training, through November 24 all instructor-led training courses are discounted by 30-50%!

Read more: Linux Foundation Training

The post Linux Foundation Discounts Instructor-Led Courses appeared first on Linux.com.

New CNCF Kubernetes Security Specialist Certification Now Available

Tuesday 17th of November 2020 04:08:12 PM

The Linux Foundation, the nonprofit organization enabling mass innovation through open source, and Cloud Native Computing Foundation® (CNCF®), which builds sustainable ecosystems for cloud-native software, today announced the Certified Kubernetes Security Specialist (CKS), previously announced to be in development in July, is now generally available.

CKS is a two-hour, performance-based certification exam that provides assurance that a certificant has the skills, knowledge, and competence on a broad range of best practices for securing container-based applications and Kubernetes platforms during build, deployment, and runtime. The exam is taken remotely with a live proctor monitoring via webcam and screen sharing. Candidates for CKS must hold a current Certified Kubernetes Administrator (CKA) certification to demonstrate they possess sufficient Kubernetes expertise before sitting for the CKS. The certification remains valid for two years from the date it is awarded.

Read more: Linux Foundation Training

The post New CNCF Kubernetes Security Specialist Certification Now Available appeared first on Linux.com.

The state of the art of microservices in 2020

Friday 13th of November 2020 04:22:53 PM
“The microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.” (James Lewis and Martin Fowler, 2014 [6])

Introduction

It is expected that in 2020, the global cloud microservices market will grow at a rate of 22.5%, with the US market projected to maintain a growth rate of 27.4% [5]. The tendency is for developers to move away from locally hosted applications and shift into the cloud. Consequently, this will help businesses minimize downtime, optimize resources, and reduce infrastructure costs. Experts also predict that by 2022, 90% of all applications will be developed using a microservices architecture [5]. This article will help you learn what microservices are and how companies are using them today.

What are microservices?

Microservices are widely used around the world. But what are they? Microservices are an architectural pattern in which an application is composed of many small, interconnected services. They are based on the single responsibility principle, which, according to Robert C. Martin, means “gathering things that change for the same reason, and separating those things that change for different reasons” [2]. In a microservices architecture, these loosely coupled services can be developed, deployed, and maintained independently [2].
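As a toy illustration of the single responsibility principle applied at the service level, the sketch below models two services as in-process classes; real microservices would run as separate deployables and communicate over a network API, and the class and method names here are invented for the example:

```python
class InventoryService:
    """Owns stock data and nothing else; billing changes never touch it."""

    def __init__(self):
        self._stock = {"widget": 3}

    def reserve(self, item):
        """Reserve one unit if available; return whether it succeeded."""
        if self._stock.get(item, 0) > 0:
            self._stock[item] -= 1
            return True
        return False


class BillingService:
    """Owns payment logic only, and talks to inventory exclusively
    through its public interface -- the services stay loosely coupled."""

    def __init__(self, inventory):
        self._inventory = inventory

    def charge(self, item, price):
        if not self._inventory.reserve(item):
            return "out of stock"
        return f"charged {price} for {item}"
```

Because BillingService never reaches into InventoryService’s internal state, either class can be rewritten without touching the other; the microservices style generalizes that property across process and network boundaries.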

Moving away from monolithic architectures

Microservices are often compared to the traditional monolithic software architecture. In a monolithic architecture, software is designed to be self-contained, i.e., the program’s components are interconnected and interdependent rather than loosely coupled. In such a tightly coupled architecture, each component and its associated components must be present in order for the code to be executed or compiled [7]. Additionally, if any component needs to be updated, the whole application needs to be rebuilt and redeployed.

That’s not the case for applications using a microservices architecture. Since each module is independent, it can be changed without affecting other parts of the program, reducing the risk that a change made to one component will create unanticipated changes in other components.

Companies can run into trouble if they cannot scale their monolithic architecture, if the architecture is difficult to upgrade, or if its maintenance is too complex and costly [4]. Breaking down a complex task into smaller components that work independently of each other is the solution to this problem.

Monolithic vs. microservices architecture. Image extracted from [3].

How developers around the world build their microservices

Microservices are well known for improving scalability and performance. However, are those the main reasons developers around the world build their microservices? The State of Microservices 2020 research project [1] found out how developers worldwide build their microservices and what they think about them. The report was created with the help of 660 microservice experts from Europe, North America, Central and South America, the Middle East, Southeast Asia, Australia, and New Zealand. The table below presents the average ratings on questions related to the maturity of microservices [1].

Category                      Average rating (out of 5)
Setting up a new project      3.8
Maintenance and debugging     3.4
Efficiency of work            3.9
Solving scalability issues    4.3
Solving performance issues    3.9
Teamwork                      3.9

As observed in the table, most experts are happy with microservices for solving scalability issues. On the other hand, maintenance and debugging seem to be a challenge for them.

As for the leading technologies in their architectures, most experts reported that they use JavaScript/TypeScript (almost ⅔ of microservices are built with those languages), followed by Java in second place.

Although there are plenty of deployment options for microservices, most experts use Amazon Web Services (49%), followed by their own servers. Additionally, 62% prefer AWS Lambda as a serverless solution.

Most of the experts’ microservices communicate over HTTP, followed by events and gRPC. Additionally, most experts use RabbitMQ as a message broker, followed by Kafka and Redis.
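To make the HTTP option concrete, here is a minimal self-contained service built with only Python’s standard library. This is a sketch for illustration, not a production pattern (real microservices typically use a proper framework), and the /price route and PRICES data are invented:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class PriceHandler(BaseHTTPRequestHandler):
    """A toy single-purpose microservice: it only answers /price/<item>."""

    PRICES = {"widget": 9.99}

    def do_GET(self):
        # Take the last path segment as the item name.
        item = self.path.rstrip("/").rsplit("/", 1)[-1]
        body = json.dumps({"item": item, "price": self.PRICES.get(item)})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, fmt, *args):
        pass  # keep the example quiet


def start_service(port):
    """Run the service in a background thread and return the server handle."""
    server = HTTPServer(("127.0.0.1", port), PriceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Another service would then consume it over plain HTTP, e.g. urllib.request.urlopen("http://127.0.0.1:8001/price/widget"), with no shared code or database between the two.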

Also, most experts use continuous integration (CI) with their microservices: in the report, 87% of the respondents use CI solutions such as GitLab CI, Jenkins, or GitHub Actions.

The most popular debugging solution, used by 86% of the respondents, was logging; 27% of the respondents use only logs.

Finally, most respondents think that the microservice architecture will become a standard either for more complex systems or for backend development.

Successful use cases of Microservices

Many companies have moved from a monolithic architecture to microservices.

Amazon

In 2001, development delays, coding challenges, and service interdependencies prevented Amazon from addressing the scalability requirements of its growing user base. Needing to refactor its monolithic architecture from scratch, Amazon broke its monolithic applications into small, independent, service-specific applications [3][9].

Amazon’s 2001 decision to change to microservices came years before the term came into fashion. The change led Amazon to develop several solutions to support microservices architectures, such as Amazon Web Services (AWS). With this rapid growth and adoption of microservices, Amazon became the most valuable company in the world, valued by market cap at $1.433 trillion on July 1st, 2020 [8].

Netflix

Netflix started its movie-streaming service in 2007, and by 2008 it was suffering scaling challenges. The company experienced a major database corruption, and for three days it could not ship DVDs to its members [10]. This was the point at which Netflix realized the need to move away from single points of failure (e.g., relational databases) towards a more scalable and reliable distributed system in the cloud. In 2009, Netflix started to refactor its monolithic architecture into microservices, beginning by migrating its non-customer-facing movie-encoding platform to run in the cloud as independent microservices [11]. Changing to microservices allowed Netflix to overcome its scaling challenges and service outages. Moreover, it allowed the company to reduce costs by paying cloud costs per stream instead of maintaining a data center [10]. Today, Netflix streams approximately 250 million hours of content daily to over 139 million subscribers in 190 countries [11].

Uber

After launching, Uber struggled to develop and launch new features, fix bugs, and rapidly integrate new changes. Thus, the company decided to break its application into cloud-based microservices, creating one microservice for each function, such as passenger management and trip management. Moving to microservices brought Uber many benefits, such as a clear idea of each service’s ownership. This boosted speed and quality, facilitated fast scaling by allowing teams to focus only on the services they needed to scale, made it possible to update individual services without disrupting others, and achieved more reliable fault tolerance [11].

It’s all about scalability!

A good example of how to provide scalability is China. With its vast number of inhabitants, China has had to adapt by creating and testing new solutions to solve new challenges at scale. Statistics show that China now serves roughly 900 million Internet users [14]. During the 2019 Singles’ Day (the equivalent of Black Friday in China), the peak transaction rate on Alibaba’s various shopping platforms was 544,000 transactions per second, and the total amount of data processed on Alibaba Cloud was around 970 petabytes [15]. So, what do these numbers of users imply for technology?

Many technologies have emerged from the need to address scalability. For example, Tars was created in 2008 by Tencent and contributed to the Linux Foundation in 2018. It has been used at scale and enhanced for ten years [12]. Tars is open source, and many organizations are contributing significantly to and extending the framework’s features and value [12]. Tars supports multiple programming languages, including C++, Golang, Java, Node.js, PHP, and Python, and it can quickly build systems and automatically generate code, allowing developers to focus on business logic and effectively improving operational efficiency. Tars has been widely used in Tencent’s QQ and WeChat social networks, financial services, edge computing, automotive, video, online games, maps, application market, security, and many other core businesses. In March of 2020, the Tars project transitioned into the TARS Foundation, an open source microservice foundation to support the rapid growth of contributions and membership for a community focused on building an open microservices platform [12].

Be sure to check out the Linux Foundation’s new free training course, Building Microservice Platforms with TARS

About the authors: 

Isabella Ferreira is an Advocate at TARS Foundation, a cloud-native open-source microservice foundation under the Linux Foundation

Mark Shan is Chairman at Tencent Open Source Alliance and also Board Chair at TARS Foundation. 

References:

[1] https://tsh.io/state-of-microservices/#ebook

[2] https://medium.com/hashmapinc/the-what-why-and-how-of-a-microservices-architecture-4179579423a9

[3] https://www.plutora.com/blog/understanding-microservices

[4] https://www.leanix.net/en/blog/a-brief-history-of-microservices

[5] https://www.charterglobal.com/five-microservices-trends-in-2020/

[6] https://martinfowler.com/articles/microservices.html#footnote-etymology

[7] https://whatis.techtarget.com/definition/monolithic-architecture

[8] https://ycharts.com/companies/AMZN/market_cap

[9] https://thenewstack.io/led-amazon-microservices-architecture/

[10] https://media.netflix.com/en/company-blog/completing-the-netflix-cloud-migration

[11] https://blog.dreamfactory.com/microservices-examples/

[12] https://www.linuxfoundation.org/blog/2020/03/the-tars-foundation-the-formation-of-a-microservices-ecosystem/

[13] https://medium.com/microservices-architecture/top-10-microservices-framework-for-2020-eefb5e66d1a2

[14] https://www.statista.com/statistics/265140/number-of-internet-users-in-china/

[15] https://interconnected.blog/china-scale-technology-sandbox/

This Linux Foundation Platinum Sponsor content was contributed by Tencent.

The post The state of the art of microservices in 2020 appeared first on Linux.com.

Building a healthy relationship between security and sysadmins

Friday 13th of November 2020 12:32:46 PM

Learn how to bridge the gap between operations/development and security.
Read More at Enable Sysadmin

The post Building a healthy relationship between security and sysadmins appeared first on Linux.com.

How to report security vulnerabilities to the Linux Foundation

Friday 13th of November 2020 06:22:59 AM

We at The Linux Foundation (LF) work to develop secure software in our foundations and projects, and we also work to secure the infrastructure we use. But we’re all human, and mistakes can happen.

So if you discover a security vulnerability in something we do, please tell us!

If you find a security vulnerability in the software developed by one of our foundations or projects, please report the vulnerability directly to that foundation or project. For example, Linux kernel security vulnerabilities should be reported to <security@kernel.org> as described in security bugs. If the foundation/project doesn’t state how to report vulnerabilities, please ask them to do so. In many cases, one way to report vulnerabilities is to send an email to <security@DOMAIN>.

If you find a security vulnerability in the Linux Foundation’s infrastructure as a whole, please report it to <security@linuxfoundation.org>, as noted on our contact page.

For example, security researcher Hanno Böck recently alerted us that some of the retired linuxfoundation.org service subdomains were left delegated to some cloud services, making them potentially vulnerable to a subdomain takeover. Once we were alerted to that, the LF IT Ops Team quickly worked to eliminate the problem and will also be working on a way to monitor and alert about such problems in the future. We thank Hanno for alerting us!

We’re also working to make open source software (OSS) more secure in general. The Open Source Security Foundation (OpenSSF) is a broad initiative to secure the OSS that we all depend on. Please check out the OpenSSF if you’re interested in learning more.

David A. Wheeler

Director, Open Source Supply Chain Security, The Linux Foundation

The post How to report security vulnerabilities to the Linux Foundation appeared first on The Linux Foundation.

The post How to report security vulnerabilities to the Linux Foundation appeared first on Linux.com.


How to handle a Linux kernel panic

Wednesday 11th of November 2020 12:26:18 PM

How to handle a Linux kernel panic

Here is a collection of resources to help you deal with kernel panic events.
Peter Gervase
Wed, 11/11/2020 at 4:26am

Image

A kernel panic often lives up to its name, causing panic for the admin. But the good news is that all is not lost; there are steps you can take.

So, first off, what is a kernel panic? As defined in the Computer Security Resource Center (CSRC) Glossary, a kernel panic is “a system error that cannot be recovered from, and requires the system to be restarted.” As we all know, a forced restart is never good.

Topics:  
Linux  
Linux Administration  
Read More at Enable Sysadmin

The post How to handle a Linux kernel panic appeared first on Linux.com.

CNCF Releases Free Training Course Covering Basics of Service Mesh with Linkerd

Tuesday 10th of November 2020 03:13:04 PM

Introduction to Service Mesh with Linkerd is the newest training course from CNCF and The Linux Foundation. This course, offered on the non-profit edX learning platform, can be audited by anyone at no cost. The course is designed for site reliability engineers, DevOps professionals, cluster administrators, and developers who want to learn more about service mesh and Linkerd, the open source service mesh hosted by CNCF and focused on simplicity, speed, and low resource usage.

Read more: Linux Foundation Training

The post CNCF Releases Free Training Course Covering Basics of Service Mesh with Linkerd appeared first on Linux.com.

Renewing my thrill at work with Ansible

Tuesday 10th of November 2020 05:08:30 AM

Renewing my thrill at work with Ansible

Ansible empowered me to utilize my own technical strengths and passion to improve processes and enjoy my time.
Joseph Tejal
Mon, 11/9/2020 at 9:08pm

Image

Image by Michal Jarmoluk from Pixabay

Sitting on my work-from-home desk, sipping black coffee, and watching the cool demos at AnsibleFest 2020 on demand—it all flashed back to me: The challenges of a few years ago when I was a Linux systems admin at another company. Back then, you strove to reduce the number of incidents, stabilized customer systems, put standard maintenance procedures in place, scripted the mundane tasks, documented everything well, and finally, ensured others could do your job, etc.

Topics:  
Linux  
Automation  
Ansible  
Read More at Enable Sysadmin

The post Renewing my thrill at work with Ansible appeared first on Linux.com.

DevOps Replaces Developers As Most Sought After Skill Set

Tuesday 10th of November 2020 01:03:51 AM

The 2020 Open Source Jobs Report just came out so we took the opportunity to speak with Clyde Seepersad, Senior Vice President and General Manager of Training and Certification at the Linux Foundation, about the significance of the report and the insights it provides on the current open source landscape. He touched on the effects of COVID-19 on hiring trends, the open source skills that are in high demand, and how the Foundation is helping organizations meet this demand through high-quality intensive training. Bottom line, he says “We still don’t have enough open source talent. The urgency of finding new ways to bring talent into the market continues to be something that should be front and center for all of us.”

Swapnil Bhartiya: What is the importance of this report? Not only for the open source ecosystem, but companies outside of the open source ecosystem, because today almost everybody’s leveraging open source in one capacity or another.

Clyde Seepersad: One of the things that we didn’t realize several years ago is that there is a lot of data around general employment reports and a few around IT and technology in general, but there was really this gap when it comes to what’s happening on open source talent, and we kept hearing anecdotally that people can’t hire or can’t find enough talent.

And so what we wanted to do was put a really clear spotlight on what’s going on specifically when it comes to the talent pool around open source, to be able to share with the market a sort of non-anecdotal state of the world, but also to be able to inform our own strategy and our own mission, which is to try to ensure not just that there is fantastic code coming out of open source projects, but also that there is enough talent to implement and use it as a tool.

Swapnil Bhartiya: What are some of the key highlights of this report?

Clyde Seepersad: A couple of things. One is the rise of DevOps skills. I think everybody knows cloud is hot. It’s been that way for a while, but the companion piece to that around DevOps and the importance of understanding CI/CD pipelines and also the cultural difference of working in that sort of continuous delivery. The rise of that, I think, is something that maybe most people are not quite as aware of.

The second thing I would highlight is that there were a lot of questions about what’s happening to tech hiring in response to the COVID pandemic. We have some answers for that, that says that although hiring slowed down, it did not slow down nearly as much as people might have worried at the outset. In fact, it’s now accelerating.

The top-level thing, which is continuing to be the case, is we still don’t have enough open-source talent. The urgency of finding new ways to bring talent into the market continues to be something that should be front and center for all of us.

Swapnil Bhartiya: So if we look at this report, what are the skills that are not only most in demand, but also hardest to find? That is a chicken-and-egg situation, right?

Clyde Seepersad: Yeah. Obviously, it’s the cloud skills, right? A lot of the smaller companies, the more conservative companies, have been pushed to be much more active on the cloud. What that’s done is raise the stakes in terms of people who are familiar with cloud-native development, cloud-native architecture, Kubernetes orchestration, and then what CI/CD pipelines look like in a cloud world, because obviously there are some changes there when you’re running that sort of infrastructure. So those interwoven skillsets, right?

Of course, sitting underneath all of that is the question of what operating system the cloud runs on. I think we all know now that the vast majority of instances, some 98%, are running on Linux. So you have this tiered approach where basic Linux competence is the baseline, and then you’re building on top of that, looking for cloud-native development, cloud-native orchestration, and then what the CI/CD pipelines look like to bring that to life.

Swapnil Bhartiya: So when we look at this shortage of talent and, at the same time, the demand for talent, in addition to just coming out with this report, do you have any kind of advice or suggestion to the hiring managers? What can they do to attract top developers or talent to their organizations because there is heavy demand and everybody wants them?

Clyde Seepersad: Right. Well, some of the things actually have happened in response to the pandemic, right? One of the trends we saw last year was people wanting the flexibility to be able to work from home. Of course, now we all work from home so that helps. But what came out in the report that was really interesting is that more and more talent managers are realizing that you don’t just have to go externally for talent, that you can, in fact, upskill people who are currently in your organization.

The data suggests that a lot more people are waking up and realizing that trolling LinkedIn for your next hire is a zero-sum game because other people are doing the same. They’re starting to invest more in training, especially online training. They’re starting to invest more in certifications for their employees. And just in general, they’re starting to be much more proactive in looking at investing into their talent pool and finding ways to provide new opportunities for development. Of course, that also comes with new job opportunities for the existing employee base.

Swapnil Bhartiya: I just want to talk a little bit more about COVID-19. A couple of things are happening with COVID-19: a lot of companies that are scaling down. They’re cutting budgets and everything. At the same time, since people are able to work remotely, you don’t have to relocate yourself or you don’t have to find talent in the same area. You have access to almost everybody wherever they are. So how has COVID-19 affected the hiring process itself in terms of while they do have to scale down to some extent, the beauty is, I should not say that, the world that we are living in is all powered by cloud and technology. All the purchases that I was making even in my Indian grocery, they now have a website. I can just go and place an order. It was not the case earlier. So, cloud actually enabled companies to stay in business. That also means that you do need developers and all those talents to keep those businesses running. At the same time, you have the advantage of not having to relocate. So talk a bit about it.

Clyde Seepersad: Yeah, that’s true. I do think those are tied together, right? As people have been forced to use the cloud more, I had the same experience you did. My local Chinese restaurant suddenly developed a website, and they have an ordering business that they did not previously have. It’s true that every business is now an e-commerce business, right? So there’s this broader footprint.

On the flip side of it, you also have people who are now having to work from home, where they maybe didn’t use to, either for practical or cultural reasons within the company. That also intersects with the cultural change and the cultural norms of CI/CD and DevOps, right? This idea that you have to be in person together, versus this idea that you have a well-documented pipeline where everybody can contribute, do their commits, and do their code. That whole tooling ecosystem of cloud native and DevOps has actually made it easier, and I would argue possible, to do what we’ve seen over the past several months: people being productive, working from home, working with people they haven’t worked with before, onboarding new team members, and being able to get them provisioned with the right access and upskilled on the right systems. It’s all really come together. In my view, we have been lucky that we’ve got the technology infrastructure that we have today, because I don’t know that we would have been able to stay as productive and focused in a sudden shift to remote work if we were trying to do this even five years ago.

Swapnil Bhartiya: I’m a good example of that because I have been working from home ever since I moved out of India. What I realized was that I work when I feel that I’m most productive instead of hey, I have to clock in at 9:00 AM and I have to clock out at 5:00 PM. I have to sit there and do something. It doesn’t matter how I feel. And then sometimes, there are personal issues. Somebody is sick in the family and your mind is there, but you have to come to the office. I think remote working offers the best balance between work and life. Of course, it is actually more challenging because you may end up working all the time, but still, it offers a better balance. Earlier you were talking about how you don’t have to go out to hire people; you can also internally train people. So when we look at organizations and they look at all these new cloud-native technologies and they want to retain or prepare their own workforce, what resources are available there, especially from the Linux Foundation so that they can better equip their own workforce when there is already a shortage of a lot of talent?

Clyde Seepersad: Yeah, it’s a good question, Swapnil. From a practical perspective, the portfolio that we have provided, which is very heavily focused on self-paced e-learning that you take online, but at the same time, very skills-based, very lab-intensive online training, because ultimately, what do you care about as a colleague or as a hiring manager? It’s not whether they check the box and they have a certificate saying they completed a course. What you care about is the skills, right? Did they actually develop those skills? So, we’ve got a pretty big portfolio of very hands-on, self-paced e-learning programs to help people develop the skills. And then we’ve continued to build our portfolio of performance-based certification exams. So this is not your grandad’s pick an answer out of a lineup, right? These are live systems with variable questions, and you have to demonstrate your skills under the pressure of time, under the pressure of being proctored by an independent person. I think it’s that one-two punch of really focusing on skills.

I joke with people all the time. We get feedback sometimes that our courses don’t have enough video. And I say, “Well, true, because we’re not trying to entertain you. We’re trying to develop skills, and the way you develop skills is not by staring at a screen and listening to a video. The way you develop skills is by doing a lab.” So we’ve got a very lab-centric mindset on the training side, and that carries on into the certification side, where it’s all about performance. Show that you can do the work, and take the time to develop the skills, because that’s what your colleagues are going to be looking for. That’s what your employers are going to be looking for. That’s what’s going to benefit you personally, as an individual: to be able to have that broader skill set, and to be able to do that in a remote way without having to rely on a senior trainer coming on site and working with you. I think that’s going to be the new normal.

Swapnil Bhartiya: The advantage of this crisis is that people are realizing that they don’t have to move. Actually, they can move to the ideal place they wanted to live. It could be a big ranch, it could be a beach, and they can work for companies operating in Silicon Valley, which also means you can cross national boundaries. The whole promise of open source is the best and the brightest people from around the globe. So how do you enable these people? People come from different cultural backgrounds, different education backgrounds, and different languages. Do you also help them irrespective of where they’re coming from, whether that means internationalizing or supporting different languages so people can get training?

Clyde Seepersad: Yeah, with our LF training, we do that. Obviously, the online format helps because it’s truly available 24/7 globally, nights and weekends. So that really has expanded the footprint of what we’re able to do and who we are able to reach. We’ve also done some translations, particularly of the certification exams, to make those available to folks who might not otherwise be comfortable with them, in Japan and China, for instance.

What we’re trying to do is mirror what we’re seeing in the workforce. The shift toward more remote work has actually opened up the pipeline. When you think about hiring and talent management, if you are in the US or in Western Europe, your pool is not as limited. You really can reach out to a global pool of talent in non-traditional markets. We’ve seen sectors get hot. Obviously, India has a lot of workshops today. There’s a ton of stuff happening in Eastern Europe now, but it really is global, right? We’ve got folks on our team in South America being super productive in this new remote way of working. I think that’s becoming more and more typical. Because of the rise of cloud native, because of the rise of this sort of collaborative DevOps mindset, we are able to collaborate across regions, across countries, across time zones much more effectively than ever before.

Swapnil Bhartiya: Awesome. Clyde, thank you so much for talking to me today about not only this report, but also how hiring managers can not only attract more talent, but also retrain their own employees. I look forward to talking to you again. Thank you.

Clyde Seepersad: Hey, it’s always a pleasure to be with you, Swap. Thank you.

 

The post DevOps Replaces Developers As Most Sought After Skill Set appeared first on Linux.com.

Linux for beginners: 10 more commands for manipulating files

Friday 6th of November 2020 04:27:46 AM

Check out these ten additional commands from a sysadmin to help you learn Linux at the command line.
Read More at Enable Sysadmin

The post Linux for beginners: 10 more commands for manipulating files appeared first on Linux.com.

How to read and correct SELinux denial messages

Wednesday 4th of November 2020 02:49:42 AM

A look at SELinux denial messages, where they’re logged, and how to parse them.
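As a taste of what parsing an SELinux denial involves, here is a minimal sketch (the sample log line and field selection are illustrative, not a complete audit-record grammar) that pulls the key fields out of an AVC denial record with regular expressions:

```python
import re

# The denied permission(s) appear between braces after "avc: denied".
AVC_RE = re.compile(r'avc:\s+denied\s+\{ (?P<perms>[^}]+)\}')
# Most of the rest of the record is key=value pairs, some quoted.
FIELD_RE = re.compile(r'(\w+)=("[^"]*"|\S+)')

def parse_avc(line):
    """Return the denied permissions and key fields from one AVC denial line."""
    m = AVC_RE.search(line)
    if not m:
        return None
    fields = {k: v.strip('"') for k, v in FIELD_RE.findall(line)}
    return {
        "permissions": m.group("perms").split(),
        "comm": fields.get("comm"),
        "scontext": fields.get("scontext"),
        "tcontext": fields.get("tcontext"),
        "tclass": fields.get("tclass"),
    }

# An illustrative denial, shaped like what auditd logs to /var/log/audit/audit.log.
sample = ('type=AVC msg=audit(1605000000.123:456): avc: denied { read } '
          'for pid=1234 comm="httpd" name="index.html" dev="dm-0" ino=98765 '
          'scontext=system_u:system_r:httpd_t:s0 '
          'tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file')

print(parse_avc(sample))
```

Tools like `ausearch` and `sealert` do this (and much more) for you; the sketch just shows the structure those tools are reading.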
Read More at Enable Sysadmin

The post How to read and correct SELinux denial messages appeared first on Linux.com.

The value of sysadmin to sysadmin mentorship

Tuesday 3rd of November 2020 02:18:58 PM

Mentors might expose your weaknesses, but they will also provide you with opportunities to improve.
Read More at Enable Sysadmin

The post The value of sysadmin to sysadmin mentorship appeared first on Linux.com.

October 2020 top 10 sysadmin how-tos and tutorials

Tuesday 3rd of November 2020 01:54:25 AM

October 2020 top 10 sysadmin how-tos and tutorials

Take a look back at our spookiest month yet.
tcarriga
Mon, 11/2/2020 at 5:54pm

Image

Photo by Ylanite Koppens from Pexels

October 2020 was a colossal month here at Enable Sysadmin. We smashed every record previously set with some very impressive numbers. We published 36 articles from 22 different authors, earning north of 429k pageviews and 312k unique visitors.

We covered a vast array of technologies and interest areas: from command line tips and tricks, YAML, systemctl, and ssh, to Linux/Windows collaborations and sysadmin career advice. We are confident that you will find something of interest to you.
Read More at Enable Sysadmin

The post October 2020 top 10 sysadmin how-tos and tutorials appeared first on Linux.com.

An open guide to evaluating software composition analysis tools

Monday 2nd of November 2020 10:07:50 PM
Overview

With the help of software composition analysis (SCA) tools, software development teams can track and analyze any open source code brought into a project from a licensing-compliance and security-vulnerability perspective. Such tools discover open source code (at various levels of detail and capability), its direct and indirect dependencies, the licenses in effect, and the presence of any known security vulnerabilities and potential exploits. Several companies provide SCA suites and related services, and there are also open source tools driven as community projects. The question of which tool is most suitable for a specific usage model and environment always comes up, and it is difficult to answer given the lack of a standard method for comparing and evaluating such tools.

The goal of this paper is to recommend a series of comparative metrics when evaluating multiple SCA tools.
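One concrete metric of the kind the paper recommends is detection accuracy: compare the set of components a tool reports against a known ground-truth bill of materials and compute precision and recall. The function and component names below are hypothetical, purely to illustrate the idea:

```python
def detection_scores(reported, ground_truth):
    """Precision/recall of an SCA tool's reported components vs. a known bill of materials."""
    reported, ground_truth = set(reported), set(ground_truth)
    true_pos = reported & ground_truth  # components the tool found correctly
    precision = len(true_pos) / len(reported) if reported else 0.0
    recall = len(true_pos) / len(ground_truth) if ground_truth else 0.0
    return precision, recall

# Hypothetical example: a project known to contain three open source components.
truth = {"openssl-1.1.1", "zlib-1.2.11", "libcurl-7.68"}
tool_a = {"openssl-1.1.1", "zlib-1.2.11", "left-pad-1.3.0"}  # one false positive, one miss

precision, recall = detection_scores(tool_a, truth)
print(f"precision={precision:.2f} recall={recall:.2f}")  # → precision=0.67 recall=0.67
```

Running the same ground-truth corpus through several tools yields directly comparable numbers, which is exactly the kind of apples-to-apples comparison the paper argues is currently missing.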

Download Whitepaper

The post An open guide to evaluating software composition analysis tools appeared first on The Linux Foundation.

The post An open guide to evaluating software composition analysis tools appeared first on Linux.com.

More in Tux Machines

Devices: Allwinner, Yocto, Arduino

  • Allwinner H6 SBC offers dual Ethernet, four display outputs, M.2 expansion

While the processor was introduced in 2017, there are only a few Allwinner H6 SBCs on the market, for instance the Orange Pi 3 or Pine H64 boards, and the chip never became as popular as solutions based on the Allwinner H3 processor. But Boardcon has now launched its own Allwinner H6 SBC targeting professionals: the Boardcon EMH6, which combines a carrier board and a computer-on-module that can be integrated into products.

  • Automotive Grade Linux Releases UCB 10 Software Platform with Yocto Long Term Support

    Automotive Grade Linux (AGL), an open source project developing a shared software platform for in-vehicle technology, today announced the latest code release of the AGL platform, UCB 10, also known under the codename "Jumping Jellyfish." Developed through a joint effort by dozens of member companies, the AGL Unified Code Base (UCB) is an open source software platform that can serve as the de facto industry standard for infotainment, telematics and instrument cluster applications.

  • Arduino Blog » These cornhole boards react to your bean bag tosses

    The lawn game of cornhole has seen a surge in popularity over the last couple of decades. But if you’ve ever thought about raising its cool factor, then YouTuber Hardware Unknown has just what you’ve been waiting for: light and audio effects that react to your throws. Hardware Unknown’s foldable boards each feature an Arduino Nano for control. A vibration sensor is used to tell when a bean bag hits the board, and an IR break-beam setup senses when one goes into the hole.

The Best 21 Open-source Headless CMS for 2020

A headless CMS (content management system) is a backend system that makes content available through an API (RESTful or GraphQL). It's built to give developers the freedom to create what they want. The API-driven headless approach is trending right now, especially among enterprise users and developers. Headless CMS programs can be used as a backend for mobile apps and for statically generated websites built with frameworks like Next, Nuxt, Gridsome, and Hugo, which also support server-side rendering. They can also be used to manage IoT (Internet of Things) applications. Read more Also: 17 Best Open-source Self-hosted Commenting Systems
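In practice, consuming a headless CMS means fetching JSON over HTTP and rendering it in whatever frontend you choose. The payload shape below is hypothetical (every CMS has its own schema), but it shows the typical pattern of extracting content items from an API response:

```python
import json

def extract_posts(payload):
    """Pull (title, slug) pairs out of a hypothetical headless-CMS JSON response."""
    doc = json.loads(payload)
    return [(item["title"], item["slug"]) for item in doc.get("data", [])]

# A response body as it might arrive from a hypothetical GET /api/posts endpoint.
response_body = '''
{
  "data": [
    {"title": "Hello, World", "slug": "hello-world"},
    {"title": "Static Sites with Hugo", "slug": "static-sites-hugo"}
  ]
}
'''

for title, slug in extract_posts(response_body):
    print(f"/{slug}: {title}")
```

A static site generator would run a loop like this at build time, turning each content item into a rendered page.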

Security Leftovers

  • Security updates for Wednesday

    Security updates have been issued by Debian (spip and webkit2gtk), Fedora (kernel and libexif), openSUSE (chromium and rclone), Slackware (mutt), SUSE (kernel, mariadb, and slurm), and Ubuntu (igraph).

  • Top Tips to Protect Your Linux System

Linux-based operating systems have a reputation for their high level of security. That's one of the reasons why the market share for Linux has been growing. The most commonly used operating systems, such as Windows, are often affected by targeted attacks in the form of ransomware infections, spyware, worms, and malware. As a result, many personal as well as enterprise users are turning to Linux-based operating systems such as Ubuntu for security purposes. While Linux-based systems are not targeted as frequently as other popular operating systems, they are not completely foolproof. There are plenty of risks and vulnerabilities for all types of Linux devices which put your privacy as well as your identity at risk.

  • Building a healthy relationship between security and sysadmins | Enable Sysadmin

    Learn how to bridge the gap between operations/development and security.

today's howtos